[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529275#comment-14529275 ]

Colin Patrick McCabe commented on HDFS-8157:

Thanks for this, [~arpitagarwal]. I don't think we should add {{DataNode#skipNativeIoCheckForTesting}}. To simulate locking memory without adding a dependency on NativeIO, just create a custom cache manipulator that always returns true for {{verifyCanMlock}}. Some other unit tests already do this.

{code}
public void releaseReservedSpace(long bytesToRelease, boolean releaseLockedMemory);
{code}

I would rather have a separate function for releasing the memory than overload the meaning of this one.

Maybe I am missing something, but I don't understand the purpose behind {{releaseRoundDown}}. Why would we round down to a page size when allocating or releasing memory?

> Writes to RAM DISK reserve locked memory for block files
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will
> reserve locked memory via the FsDatasetCache.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
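The test pattern Colin describes (swap in a permissive cache manipulator instead of adding a skip-check flag to DataNode) can be sketched as below. This is a minimal, self-contained illustration; the interface is a simplified stand-in for Hadoop's {{NativeIO.POSIX.CacheManipulator}} hook, and the class names here are hypothetical, not actual Hadoop code.

```java
// Sketch of the "custom cache manipulator" test pattern. In real Hadoop the
// hook is roughly NativeIO.POSIX.setCacheManipulator(...); the types below are
// simplified stand-ins for illustration only.
interface CacheManipulator {
  // Production implementations probe the OS and may fail when mlock is
  // unavailable (no native library, insufficient ulimit, etc.).
  boolean verifyCanMlock();
}

// Test-only manipulator: pretend mlock is always available so code paths that
// reserve locked memory can be exercised without native libraries.
class NoMlockCheckManipulator implements CacheManipulator {
  @Override
  public boolean verifyCanMlock() {
    return true;
  }
}

// Consumer of the hook, standing in for the cache manager: tests inject the
// permissive manipulator, production injects the real one.
class LockedMemoryReserver {
  private final CacheManipulator manipulator;

  LockedMemoryReserver(CacheManipulator manipulator) {
    this.manipulator = manipulator;
  }

  boolean canReserveLockedMemory() {
    return manipulator.verifyCanMlock();
  }
}
```

With this shape the production code never needs a testing-only escape hatch; the test simply constructs the reserver with {{NoMlockCheckManipulator}}.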
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531057#comment-14531057 ]

Arpit Agarwal commented on HDFS-8157:

Thanks for the review. This was a very preliminary patch. I'll post an updated patch when I get some more time to work on this. I might just post a consolidated patch for this and HDFS-8192.

bq. Maybe I am missing something, but I don't understand the purpose behind releaseRoundDown. Why would we round down to a page size when allocating or releasing memory?

For the common case when the finalized length is not a multiple of the page size. e.g. initial reservation = 16KB, page size = 4KB. The block is finalized at 11KB. We want to release round_down(16 - 11) and not round up.
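The arithmetic in Arpit's example can be sketched as follows. The helper names are illustrative, not the actual FsDatasetCache code: with a 16 KB reservation, a 4 KB page size, and a block finalized at 11 KB, rounding the excess down releases 4 KB and leaves 12 KB reserved, which matches the page-rounded footprint of the 11 KB block.

```java
// Illustrative sketch of the release-on-finalize arithmetic (hypothetical
// helpers, not the FsDatasetCache implementation).
class ReservationMath {
  // Largest multiple of pageSize that is <= value.
  static long roundDown(long value, long pageSize) {
    return (value / pageSize) * pageSize;
  }

  // Smallest multiple of pageSize that is >= value.
  static long roundUp(long value, long pageSize) {
    return ((value + pageSize - 1) / pageSize) * pageSize;
  }

  // Excess to give back when a block finalizes short of its full reservation:
  // round DOWN so the remaining reservation still covers the page-rounded
  // footprint of the finalized block.
  static long bytesToRelease(long reserved, long finalizedLen, long pageSize) {
    return roundDown(reserved - finalizedLen, pageSize);
  }
}
```

For the example above: reserved = 16384, finalized = 11264, page = 4096, so roundDown(5120) = 4096 bytes are released and 12288 = roundUp(11264) stay reserved.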
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541146#comment-14541146 ]

Hadoop QA commented on HDFS-8157:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 15m 22s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 8 new or modified test files. |
| {color:green}+1{color} | javac | 7m 50s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc | 9m 58s | The applied patch generated 1 additional warning messages. |
| {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 2m 15s | The applied patch generated 11 new checkstyle issues (total was 276, now 283). |
| {color:red}-1{color} | whitespace | 0m 6s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 3m 3s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native | 3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 180m 17s | Tests failed in hadoop-hdfs. |
| | | | 224m 51s | |

|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
| | hadoop.tracing.TestTraceAdmin |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| Timed out tests | org.apache.hadoop.hdfs.util.TestByteArrayManager |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12732368/HDFS-8157.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f24452d |
| javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/10936/artifact/patchprocess/diffJavadocWarnings.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/10936/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/10936/artifact/patchprocess/whitespace.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/10936/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/10936/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/10936/console |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542776#comment-14542776 ]

Xiaoyu Yao commented on HDFS-8157:

Thanks Arpit for working on this. Patch v2 looks good to me.

{code}
-  if (v.isTransientStorage()) {
-    cacheManager.releaseRoundDown(replicaInfo.getOriginalBytesReserved() - replicaInfo.getNumBytes());
+  if (v.isTransientStorage()) {
+    releaseLockedMemory(replicaInfo.getOriginalBytesReserved() - replicaInfo.getNumBytes(), false);
     ramDiskReplicaTracker.addReplica(bpid, replicaInfo.getBlockId(), v, replicaInfo.getNumBytes());
     datanode.getMetrics().addRamDiskBytesWrite(replicaInfo.getNumBytes());
   }
{code}

One question: this patch caps the maximum memory usage of the HDFS cache, but it does not prevent the same replica block from being cached by both CCM (mmap) and the write cache (ramdisk). For example, based on the block id, we may not want to mmap a block that has just been written to the ramdisk.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542816#comment-14542816 ]

Arpit Agarwal commented on HDFS-8157:

Thanks for the review [~xyao]. Good point; the following check in {{FsDatasetImpl#cacheBlock}} should guard against that.

{code}
    if (volume.isTransientStorage()) {
      LOG.warn("Caching not supported on block with id " + blockId +
          " since the volume is backed by RAM.");
      return;
    }
{code}
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14543200#comment-14543200 ]

Xiaoyu Yao commented on HDFS-8157:

Thanks [~arpitagarwal] for pointing that out. +1 once you update the patch to fix the issue below in FsDatasetImpl.java and the Jenkins issues.

{code}
-  if (v.isTransientStorage()) {
-    cacheManager.releaseRoundDown(replicaInfo.getOriginalBytesReserved() - replicaInfo.getNumBytes());
+  if (v.isTransientStorage()) {
+    releaseLockedMemory(replicaInfo.getOriginalBytesReserved() - replicaInfo.getNumBytes(), false);
     ramDiskReplicaTracker.addReplica(bpid, replicaInfo.getBlockId(), v, replicaInfo.getNumBytes());
     datanode.getMetrics().addRamDiskBytesWrite(replicaInfo.getNumBytes());
   }
{code}
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544212#comment-14544212 ]

Colin Patrick McCabe commented on HDFS-8157:

I still don't understand why we would ever round down. If a block contains 11 KB, 12 KB of RAM will be used to cache it, so why would we ever round down? This is the reason why HDFS read caching (HDFS-4949) always rounds up to the nearest 4 KB (or whatever the OS page size is). Perhaps I am misunderstanding a detail here.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544252#comment-14544252 ]

Arpit Agarwal commented on HDFS-8157:

It's doing just what you would expect: {{16 - round_up(11) == round_down(16 - 11)}}.
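The identity Arpit cites can be checked mechanically. Note it relies on the original reservation being page-aligned (full block lengths are, in this scheme); the sketch below uses hypothetical helpers, not Hadoop code, and verifies the identity exhaustively for one reservation size plus a non-aligned counterexample.

```java
// Check: when the reservation R is a multiple of the page size p,
// R - roundUp(n, p) == roundDown(R - n, p) for every 0 <= n <= R.
// Helper names are illustrative, not Hadoop code.
class RoundIdentity {
  static long roundUp(long v, long p) {
    return ((v + p - 1) / p) * p;
  }

  static long roundDown(long v, long p) {
    return (v / p) * p;
  }

  // Exhaustively test the identity for all finalized lengths n in [0, reserved].
  static boolean holdsForAll(long reserved, long pageSize) {
    for (long n = 0; n <= reserved; n++) {
      if (reserved - roundUp(n, pageSize) != roundDown(reserved - n, pageSize)) {
        return false;
      }
    }
    return true;
  }
}
```

So rounding the released amount down is equivalent to reserving the rounded-up finalized length; the identity fails only if the original reservation itself is not page-aligned (e.g. R = 10000 with 4 KB pages).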
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544253#comment-14544253 ]

Arpit Agarwal commented on HDFS-8157:

I think the part you might be missing is that we are not calculating how much to reserve but how much to release. We reserved the full block length, and now we are giving back the excess since the block was finalized short of its full length.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544331#comment-14544331 ]

Hadoop QA commented on HDFS-8157:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 59s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 8 new or modified test files. |
| {color:green}+1{color} | javac | 7m 33s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 39s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 2m 12s | The applied patch generated 3 new checkstyle issues (total was 276, now 275). |
| {color:red}-1{color} | whitespace | 0m 6s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 3m 4s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native | 3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 26s | Tests failed in hadoop-hdfs. |
| | | | 210m 48s | |

|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12732899/HDFS-8157.03.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 05ff54c |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/10982/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/10982/artifact/patchprocess/whitespace.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/10982/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/10982/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/10982/console |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544369#comment-14544369 ]

Arpit Agarwal commented on HDFS-8157:

A couple of the failed tests need fixing. Will post a new patch later today.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544742#comment-14544742 ]

Colin Patrick McCabe commented on HDFS-8157:

Thanks for the clarification. If the intention is to release {{original_reservation_length - round_up(new_reservation_length, page_size)}}, wouldn't it be more straightforward to just do that? I don't think the overhead of the extra subtraction is significant, and it would be a lot easier to follow. We would also be able to skip writing all the {{roundDown}} code.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544794#comment-14544794 ]

Hadoop QA commented on HDFS-8157:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 36s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 9 new or modified test files. |
| {color:green}+1{color} | javac | 7m 28s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 33s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 2m 14s | The applied patch generated 2 new checkstyle issues (total was 275, now 273). |
| {color:green}+1{color} | whitespace | 0m 6s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 3m 7s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native | 3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m 6s | Tests failed in hadoop-hdfs. |
| | | | 210m 57s | |

|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12732968/HDFS-8157.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 09fe16f |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/10990/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/10990/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/10990/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/10990/console |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544956#comment-14544956 ]

Arpit Agarwal commented on HDFS-8157:

bq. If the intention is to release original_reservation_length - round_up(new_reservation_length, page_size), wouldn't it be more straightforward to just do that? I don't think the overhead of the extra subtraction is significant, and it would be a lot easier to follow.

Agreed the extra arithmetic is a non-issue. I coded it up both ways and found round down simpler. The other way requires either moving the rounding logic out or making two calls to the cache manager; neither approach felt cleaner.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545953#comment-14545953 ]

Arpit Agarwal commented on HDFS-8157:

The checkstyle issues were not introduced by the patch. The remaining test failure looks unrelated.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546008#comment-14546008 ]

Xiaoyu Yao commented on HDFS-8157:

+1 for the v4 patch.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546190#comment-14546190 ]

Arpit Agarwal commented on HDFS-8157:

Will hold off committing immediately in case Colin has additional comments.
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546840#comment-14546840 ]

Hudson commented on HDFS-8157:

FAILURE: Integrated in Hadoop-trunk-Commit #7846 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7846/])

HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)

* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistLockedMemory.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetAsyncDiskService.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java

> Writes to RAM DISK reserve locked memory for block files
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, HDFS-8157.03.patch, HDFS-8157.04.patch
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will
> reserve locked memory via the FsDatasetCache.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547142#comment-14547142 ]

Hudson commented on HDFS-8157:

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #199 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/199/])

HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)

* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyPersistTestCase.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetAsyncDiskService.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistLockedMemory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547144#comment-14547144 ] Hudson commented on HDFS-8157:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #930 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/930/])
HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)
(changed files identical to the Hadoop-Yarn-trunk-Java8 #199 build above)
[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547184#comment-14547184 ] Hudson commented on HDFS-8157:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2128 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2128/])
HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)
(changed files identical to the Hadoop-Yarn-trunk-Java8 #199 build above)
[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547186#comment-14547186 ] Hudson commented on HDFS-8157:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #188 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/188/])
HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)
(changed files identical to the Hadoop-Yarn-trunk-Java8 #199 build above)
[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547201#comment-14547201 ] Hudson commented on HDFS-8157:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #198 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/198/])
HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)
(changed files identical to the Hadoop-Yarn-trunk-Java8 #199 build above)
[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547204#comment-14547204 ] Hudson commented on HDFS-8157:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2146 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2146/])
HDFS-8157. Writes to RAM DISK reserve locked memory for block files. (Arpit Agarwal) (arp: rev e453989a5722e653bd97e3e54f9bbdffc9454fba)
(changed files identical to the Hadoop-Yarn-trunk-Java8 #199 build above)
[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
[ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504338#comment-14504338 ] Hadoop QA commented on HDFS-8157:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12726739/HDFS-8157.01.patch
against trunk revision 44872b7.

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 8 new or modified test files.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. See https://builds.apache.org/job/PreCommit-HDFS-Build/10326//artifact/patchprocess/diffJavadocWarnings.txt for details.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
  org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
  org.apache.hadoop.hdfs.server.balancer.TestBalancer

The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:
  org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10326//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10326//console
This message is automatically generated.