[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768519#comment-16768519 ]

Hudson commented on HDFS-13209:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15956 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15956/])
HDFS-13209. DistributedFileSystem.create should allow an option to (surendralilhore: rev 0d7a5ac5f526801367a9ec963e6d72783b637d55)
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockPlacementPolicyRackFaultTolerant.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java

> DistributedFileSystem.create should allow an option to provide StoragePolicy
>
>                 Key: HDFS-13209
>                 URL: https://issues.apache.org/jira/browse/HDFS-13209
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>    Affects Versions: 3.0.0
>            Reporter: Jean-Marc Spaggiari
>            Assignee: Ayush Saxena
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HDFS-13209-01.patch, HDFS-13209-02.patch, HDFS-13209-03.patch, HDFS-13209-04.patch, HDFS-13209-05.patch, HDFS-13209-06.patch
>
> DistributedFileSystem.create allows the caller to obtain an FSDataOutputStream. The stored file and its blocks will use the directory-based StoragePolicy.
>
> However, sometimes we might need to keep all files in the same directory (a consistency constraint) but want some of them on SSD (small files, in my case) until they are processed and merged/removed. After that they fall back to the default policy.
>
> When creating a file, it would be useful to have an option to specify a different StoragePolicy...

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768263#comment-16768263 ]

Vinayakumar B commented on HDFS-13209:
--------------------------------------

Checkstyle can be ignored for now. The test failures seem unrelated. +1
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768115#comment-16768115 ]

Hadoop QA commented on HDFS-13209:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 20s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 45s | trunk passed |
| +1 | compile | 3m 39s | trunk passed |
| +1 | checkstyle | 1m 16s | trunk passed |
| +1 | mvnsite | 2m 25s | trunk passed |
| +1 | shadedclient | 15m 18s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 31s | trunk passed |
| +1 | javadoc | 1m 55s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 15s | the patch passed |
| +1 | compile | 3m 36s | the patch passed |
| +1 | cc | 3m 36s | the patch passed |
| +1 | javac | 3m 36s | the patch passed |
| -0 | checkstyle | 1m 17s | hadoop-hdfs-project: The patch generated 1 new + 1030 unchanged - 1 fixed = 1031 total (was 1031) |
| +1 | mvnsite | 2m 17s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 14s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 23s | the patch passed |
| +1 | javadoc | 2m 16s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 53s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 118m 11s | hadoop-hdfs in the patch failed. |
| +1 | unit | 16m 55s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
|    |     | 213m 9s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.mover.TestMover |
| | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.server.datanode.TestBPOfferService |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958680/HDFS-13209-06.patch |
| Optional Tests | dupname
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767973#comment-16767973 ]

Surendra Singh Lilhore commented on HDFS-13209:
-----------------------------------------------

Hi [~ayushtkn]
Changed the UT and attached the v6 patch. Let's wait for the QA result.
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767927#comment-16767927 ]

Surendra Singh Lilhore commented on HDFS-13209:
-----------------------------------------------

Thanks [~ayushtkn] for the patch. Some comments on the test code:

1. Change the variable name to outputStream:
{code:java}
+    FSDataOutputStream inputStream =
+        fs.createFile(file1).storagePolicyName("COLD").build();
+    inputStream.write(1);
+    inputStream.close();{code}
2. Start the MiniDFSCluster with three storage types: DISK, ARCHIVE, SSD.
3. Add a case for the default storage policy.
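For illustration, the first review comment applied to the quoted snippet would look roughly like the sketch below. It assumes the storagePolicyName builder option this patch introduces, and it needs a live HDFS (e.g. a MiniDFSCluster) plus the Hadoop client jars, so it is a sketch rather than a standalone program.

```java
// Sketch only: the quoted test snippet with the variable renamed as the
// reviewer suggests. Assumes the storagePolicyName(...) builder option
// added by HDFS-13209; requires a running HDFS cluster.
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CreateWithPolicySketch {
  static void writeColdFile(DistributedFileSystem fs, Path file1) throws Exception {
    // Renamed from "inputStream": this is an output stream being written to.
    FSDataOutputStream outputStream =
        fs.createFile(file1).storagePolicyName("COLD").build();
    outputStream.write(1);
    outputStream.close();
  }
}
```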
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766537#comment-16766537 ]

Hadoop QA commented on HDFS-13209:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 21s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 36s | trunk passed |
| +1 | compile | 3m 35s | trunk passed |
| +1 | checkstyle | 1m 20s | trunk passed |
| +1 | mvnsite | 2m 41s | trunk passed |
| +1 | shadedclient | 15m 57s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 43s | trunk passed |
| +1 | javadoc | 1m 53s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 11s | the patch passed |
| +1 | compile | 3m 18s | the patch passed |
| +1 | cc | 3m 18s | the patch passed |
| +1 | javac | 3m 18s | the patch passed |
| -0 | checkstyle | 1m 13s | hadoop-hdfs-project: The patch generated 1 new + 1030 unchanged - 1 fixed = 1031 total (was 1031) |
| +1 | mvnsite | 2m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 33s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 41s | the patch passed |
| +1 | javadoc | 1m 44s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 42s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 107m 16s | hadoop-hdfs in the patch failed. |
| +1 | unit | 18m 22s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
|    |     | 203m 38s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12958439/HDFS-13209-05.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766067#comment-16766067 ]

Vinayakumar B commented on HDFS-13209:
--------------------------------------

Overall the change looks fine. Instead of exposing one more public API in {{DistributedFileSystem}}, follow the builder pattern used for ecPolicy: add a setter method for storagePolicy in {{HdfsDataOutputStreamBuilder}} and use it during create.
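The builder-pattern suggestion above can be sketched as follows. This is a hedged, self-contained illustration of the pattern only, not the committed Hadoop code: the class name echoes {{HdfsDataOutputStreamBuilder}}, but the fields and getter here are hypothetical.

```java
// Minimal sketch of a fluent storage-policy setter, mirroring the existing
// ecPolicyName-style builder setter the reviewer points to. Illustrative
// names only; not the actual HdfsDataOutputStreamBuilder implementation.
public class OutputStreamBuilderSketch {
  private String storagePolicyName;  // e.g. "HOT", "COLD", "ALL_SSD"

  // Fluent setter: records the policy and returns this builder,
  // so calls can be chained before build().
  public OutputStreamBuilderSketch storagePolicyName(String policy) {
    this.storagePolicyName = policy;
    return this;
  }

  public String getStoragePolicyName() {
    return storagePolicyName;
  }
}
```

Usage would then chain naturally, e.g. `builder.storagePolicyName("COLD")` before the final `build()` call, matching how the ecPolicy setter is used today.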
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763125#comment-16763125 ]

Ayush Saxena commented on HDFS-13209:
-------------------------------------

Thanks [~surendrasingh] for the review! I have made the changes in the UT as suggested. Please review. :)
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763020#comment-16763020 ]

Hadoop QA commented on HDFS-13209:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 17s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 19s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 14s | trunk passed |
| +1 | compile | 2m 57s | trunk passed |
| +1 | checkstyle | 1m 21s | trunk passed |
| +1 | mvnsite | 2m 24s | trunk passed |
| +1 | shadedclient | 14m 25s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 26s | trunk passed |
| +1 | javadoc | 1m 59s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 3s | the patch passed |
| +1 | compile | 2m 51s | the patch passed |
| +1 | cc | 2m 51s | the patch passed |
| +1 | javac | 2m 51s | the patch passed |
| -0 | checkstyle | 1m 16s | hadoop-hdfs-project: The patch generated 2 new + 1030 unchanged - 1 fixed = 1032 total (was 1031) |
| +1 | mvnsite | 2m 8s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 40s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 48s | the patch passed |
| +1 | javadoc | 1m 49s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 50s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 76m 53s | hadoop-hdfs in the patch failed. |
| +1 | unit | 15m 48s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
|    |     | 164m 37s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957935/HDFS-13209-04.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762407#comment-16762407 ]

Surendra Singh Lilhore commented on HDFS-13209:
-----------------------------------------------

[~ayushtkn], the change LGTM. Please improve the UT: start the MiniDFSCluster with different storage types and, after creating a file, check the block storage locations against the policy.
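The verification step suggested above could be sketched like this. It is a hedged illustration, assuming the storagePolicyName builder option under review; the method and class names here are hypothetical, it relies on Hadoop client classes, and it needs a running cluster, so it is not standalone-runnable.

```java
// Sketch only: create a file with an ALL_SSD policy via the proposed builder
// option, then check that every replica of every block reports SSD storage.
// Requires a running HDFS (e.g. MiniDFSCluster with SSD-typed volumes).
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class VerifyBlockStorageSketch {
  static void checkAllSsd(DistributedFileSystem fs, Path file) throws Exception {
    FSDataOutputStream out =
        fs.createFile(file).storagePolicyName("ALL_SSD").build();
    out.write(1);
    out.close();

    // For ALL_SSD, each replica's reported storage type should be SSD.
    for (BlockLocation loc : fs.getFileBlockLocations(file, 0, 1)) {
      for (StorageType type : loc.getStorageTypes()) {
        assert type == StorageType.SSD : "replica not on SSD: " + type;
      }
    }
  }
}
```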
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758006#comment-16758006 ]

Hadoop QA commented on HDFS-13209:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
|| trunk Compile Tests ||
|  0 | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 12s | trunk passed |
| +1 | compile | 2m 54s | trunk passed |
| +1 | checkstyle | 1m 19s | trunk passed |
| +1 | mvnsite | 2m 21s | trunk passed |
| +1 | shadedclient | 15m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 25s | trunk passed |
| +1 | javadoc | 1m 39s | trunk passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 8s | the patch passed |
| +1 | compile | 2m 50s | the patch passed |
| +1 | cc | 2m 50s | the patch passed |
| +1 | javac | 2m 50s | the patch passed |
| -0 | checkstyle | 1m 10s | hadoop-hdfs-project: The patch generated 2 new + 1030 unchanged - 1 fixed = 1032 total (was 1031) |
| +1 | mvnsite | 2m 8s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 46s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 42s | the patch passed |
| +1 | javadoc | 1m 41s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 44s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 74m 34s | hadoop-hdfs in the patch failed. |
| +1 | unit | 15m 53s | hadoop-hdfs-rbf in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
|    |     | 166m 18s |  |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.datanode.TestBPOfferService |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957194/HDFS-13209-03.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757771#comment-16757771 ] Hadoop QA commented on HDFS-13209: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 20s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 14s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 1030 unchanged - 1 fixed = 1038 total (was 1031) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 48s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 9s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}215m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek | | | hadoop.fs.contract.router.TestRouterHDFSContractRename | | | hadoop.fs.contract.router.TestRouterHDFSContractConcat | | | hadoop.fs.contract.router.TestRouterHDFSContractSeek | | | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectory | | | hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen | |
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757584#comment-16757584 ] Hadoop QA commented on HDFS-13209: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HDFS-13209 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-13209 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12957142/HDFS-13209-01.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26102/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > DistributedFileSystem.create should allow an option to provide StoragePolicy > > > Key: HDFS-13209 > URL: https://issues.apache.org/jira/browse/HDFS-13209 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Jean-Marc Spaggiari >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-13209-01.patch > > > DistributedFileSystem.create allows to get a FSDataOutputStream. The stored > file and its blocks will use the directory-based StoragePolicy. > > However, sometimes we might need to keep all files in the same directory > (consistency constraint) but want some of them on SSD (small files, in my > case) until they are processed and merged/removed. After that they fall back to the > default policy. > > When creating a file, it would be useful to have an option to specify a > different StoragePolicy...
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16391040#comment-16391040 ] Jean-Marc Spaggiari commented on HDFS-13209: Hi [~rakeshr], thanks a lot for your feedback. I think this will work for me. As you said, this small addition to the API would be nice, but maybe not 100% required. I will try what you proposed here. Thanks.
[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy
[ https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387507#comment-16387507 ] Rakesh R commented on HDFS-13209: - {quote}However, sometime, we might need to keep all files in the same directory (consistency constraint) but might want some of them on SSD (small, in my case) until they are processed and merger/removed. Then they will go on the default policy. {quote} A user can set a StoragePolicy on either a directory or a file via [fs#setStoragePolicy|https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileSystem.html#setStoragePolicy(org.apache.hadoop.fs.Path,%20java.lang.String)]. I agree with you: presently there is no option to pass a storage policy during file creation; a newly created file inherits the storage policy of its parent directory and keeps writing blocks under that policy. I'm not against this new API proposal, but the same behavior can already be achieved at the cost of one additional FileSystem API call: change the storage policy of the file before writing any contents to it. I'll try to describe the steps; please go through them and let me know if I missed anything. {code:java} Step-1) Assume parent directory "/myparent" is configured with the ALL_SSD policy. Step-2) Create a file "/myparent/myfile" under the "/myparent" dir. It inherits the ALL_SSD policy from its parent. Step-3) Change the storage policy of "/myparent/myfile" to the "COLD" policy, which uses the ARCHIVE storage type. Step-4) Write data to the file. The data blocks will be written to ARCHIVE storage types.
{code}
{code:java}
// Sample code: set the policy right after create, before writing any data.
String fileName = "/myparent/myfile";
final FSDataOutputStream out = dfs.create(new Path(fileName), replicationFactor);
dfs.setStoragePolicy(new Path(fileName), "COLD");
for (int i = 0; i < 1024; i++) {
  out.write(i);
}
out.close();
{code}
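For completeness, the change eventually committed for this issue exposes the storage policy through the create builder rather than a new create overload. Below is a minimal sketch, assuming the builder option is named {{storagePolicyName}} (per the committed patch), that a live HDFS cluster is reachable via the default filesystem, and that the paths and policy name are illustrative:

```java
// Sketch only: builder-based create with an explicit storage policy.
// Assumes the storagePolicyName(...) builder option added by HDFS-13209
// and a running HDFS cluster configured as the default filesystem.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CreateWithStoragePolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // The policy is applied at create time, so there is no window in
      // which the file carries the inherited (e.g. ALL_SSD) policy.
      try (FSDataOutputStream out = dfs.createFile(new Path("/myparent/myfile"))
          .storagePolicyName("COLD")
          .build()) {
        for (int i = 0; i < 1024; i++) {
          out.write(i);
        }
      }
    }
  }
}
```

Compared to the create-then-setStoragePolicy workaround above, this saves the extra RPC and avoids the brief window in which early block allocations could follow the inherited policy.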