[jira] [Commented] (HDFS-9259) Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
[ https://issues.apache.org/jira/browse/HDFS-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973065#comment-14973065 ] Hadoop QA commented on HDFS-9259: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 22m 42s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. | | {color:green}+1{color} | javac | 8m 48s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 11m 15s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 2m 55s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 56s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 40s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 5m 4s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 34s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 67m 2s | Tests failed in hadoop-hdfs. | | {color:green}+1{color} | hdfs tests | 0m 32s | Tests passed in hadoop-hdfs-client. | | | | 124m 56s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestRecoverStripedFile | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768573/HDFS-9259.000.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 446212a | | Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13184/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13184/artifact/patchprocess/testrun_hadoop-hdfs.txt | | hadoop-hdfs-client test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13184/artifact/patchprocess/testrun_hadoop-hdfs-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13184/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13184/console | This message was automatically generated.
> Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
> --
>
> Key: HDFS-9259
> URL: https://issues.apache.org/jira/browse/HDFS-9259
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Mingliang Liu
> Attachments: HDFS-9259.000.patch
>
> We recently found that cross-DC hdfs write could be really slow. Further investigation identified that this is due to the SendBufferSize and ReceiveBufferSize used for hdfs write. The test ran "hadoop fs -copyFromLocal" of a 256MB file across DC with different SendBufferSize and ReceiveBufferSize values. The results showed that c is much faster than b, and b is faster than a.
> a. SendBufferSize=128k, ReceiveBufferSize=128k (hdfs default setting).
> b. SendBufferSize=128K, ReceiveBufferSize=not set (TCP auto tuning).
> c. SendBufferSize=not set, ReceiveBufferSize=not set (TCP auto tuning for both)
> HDFS-8829 has enabled scenario b. We would like to enable scenario c by making SendBufferSize configurable at the DFSClient side. Cc: [~cmccabe] [~He Tianyi] [~kanaka] [~vinayrpet]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
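For illustration of the three scenarios: a minimal sketch, assuming a plain {{java.net.Socket}} (the helper below is made up for illustration, not taken from the patch), of how pinning SO_SNDBUF differs from leaving it to the kernel:
{code:java}
import java.net.Socket;
import java.net.SocketException;

public class SendBufferExample {
  /**
   * A positive size pins SO_SNDBUF (scenarios a/b above); a non-positive
   * size skips the call entirely, so the kernel's TCP auto-tuning can keep
   * growing the window (scenario c).
   */
  static void applySendBuffer(Socket socket, int sendBufferSize)
      throws SocketException {
    if (sendBufferSize > 0) {
      socket.setSendBufferSize(sendBufferSize);
    }
    // else: no setSendBufferSize() call; auto-tuning stays enabled
  }
}
{code}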
[jira] [Commented] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx
[ https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973035#comment-14973035 ] Hadoop QA commented on HDFS-9245: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 18m 1s | Pre-patch trunk has 2 extant Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:green}+1{color} | javac | 8m 45s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 11m 6s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 0m 25s | The applied patch generated 2 new checkstyle issues (total was 6, now 8). | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 36s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 36s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 0m 54s | The patch does not introduce any new Findbugs (version 3.0.0) warnings, and fixes 2 pre-existing warnings. | | {color:green}+1{color} | native | 3m 32s | Pre-build of native portion | | {color:green}+1{color} | hdfs tests | 1m 52s | Tests passed in hadoop-hdfs-nfs. | | | | 47m 15s | | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768574/HDFS-9245.001.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 446212a | | Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13185/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-nfs.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/13185/artifact/patchprocess/diffcheckstylehadoop-hdfs-nfs.txt | | hadoop-hdfs-nfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13185/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13185/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13185/console | This message was automatically generated.
> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: nfs
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch, HDFS-9245.001.patch
>
> There are findbugs warnings as follows, brought by [HDFS-9092]. It seems fine to ignore them by writing a filter rule in the {{findbugsExcludeFile.xml}} file.
> {code:xml}
> <BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0"
>     priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
>   <Class classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" primary="true">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
>       <Message>At WriteCtx.java:[lines 40-314]</Message>
>     </SourceLine>
>     <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   </Class>
> </BugInstance>
> {code}
> and
> {code:xml}
> <BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0"
>     priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
>   <Class classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" primary="true">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
>       <Message>At WriteCtx.java:[lines 40-314]</Message>
>     </SourceLine>
>     <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   </Class>
>   <Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" name="originalCount" primary="true" signature="I">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java">
>       <Message>In WriteCtx.java</Message>
>     </SourceLine>
>     <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
>   </Field>
> </BugInstance>
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
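For reference, an exclude rule of the kind the description mentions would look roughly like this (a sketch in the standard FindBugs filter format; as the following comments show, the warnings were ultimately fixed in the code instead):
{code:xml}
<FindBugsFilter>
  <Match>
    <Class name="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" />
    <Bug pattern="IS2_INCONSISTENT_SYNC" />
    <Or>
      <Field name="offset" />
      <Field name="originalCount" />
    </Or>
  </Match>
</FindBugsFilter>
{code}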
[jira] [Updated] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx
[ https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9245: Attachment: HDFS-9245.001.patch Per offline discussion with [~wheat9] and [~gtCarrera9], the {{volatile}} is considered a premature optimization. The v1 patch simply uses synchronized blocks for the accessors. The main observation is that the synchronized read is not in the critical path.
> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: nfs
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch, HDFS-9245.001.patch
>
> There are findbugs warnings as follows, brought by [HDFS-9092]. It seems fine to ignore them by writing a filter rule in the {{findbugsExcludeFile.xml}} file.
> {code:xml}
> <BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" instanceOccurrenceNum="0"
>     priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time</LongMessage>
>   <Class classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" primary="true">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
>       <Message>At WriteCtx.java:[lines 40-314]</Message>
>     </SourceLine>
>     <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   </Class>
> </BugInstance>
> {code}
> and
> {code:xml}
> <BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" instanceOccurrenceNum="0"
>     priority="2" abbrev="IS" type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time</LongMessage>
>   <Class classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" primary="true">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" start="40"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java" end="314">
>       <Message>At WriteCtx.java:[lines 40-314]</Message>
>     </SourceLine>
>     <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   </Class>
>   <Field classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx" name="originalCount" primary="true" signature="I">
>     <SourceLine classname="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"
>         sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" sourcefile="WriteCtx.java">
>       <Message>In WriteCtx.java</Message>
>     </SourceLine>
>     <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
>   </Field>
> </BugInstance>
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
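A minimal sketch of the accessor style the v1 patch describes, replacing {{volatile}} fields with synchronized accessors (the field names come from the warnings above; types and method names are assumed):
{code:java}
class WriteCtx {
  private long offset;       // guarded by "this"
  private int originalCount; // guarded by "this"

  // Synchronized reads are acceptable here because the getters are not
  // on the critical path.
  synchronized long getOffset() {
    return offset;
  }

  synchronized void setOffset(long offset) {
    this.offset = offset;
  }

  synchronized int getOriginalCount() {
    return originalCount;
  }

  synchronized void setOriginalCount(int originalCount) {
    this.originalCount = originalCount;
  }
}
{code}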
[jira] [Commented] (HDFS-9302) WebHDFS throws NullPointerException if newLength is not provided
[ https://issues.apache.org/jira/browse/HDFS-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973001#comment-14973001 ] Yi Liu commented on HDFS-9302: -- [~Karthik Palaniappan], thanks for reporting this. It's better to return a clearer exception and error message to the user if there is no {{newLength}}. This can be done by checking whether newLength is the null string (NewLengthParam#DEFAULT) in {{NamenodeWebHdfsMethods}}; feel free to upload a patch to fix it. In the documentation, newLength is already a required parameter; optional parameters are the ones marked with [].
> WebHDFS throws NullPointerException if newLength is not provided
> 
> Key: HDFS-9302
> URL: https://issues.apache.org/jira/browse/HDFS-9302
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: HDFS
> Affects Versions: 2.7.1
> Environment: Centos6
> Reporter: Karthik Palaniappan
> Priority: Minor
>
> $ curl -X POST "http://namenode:50070/webhdfs/v1/foo?op=truncate"
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> We should change newLength to be a required parameter in the webhdfs documentation (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#New_Length), and throw an IllegalArgumentException if it isn't provided.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
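A sketch of the suggested check, assuming a small helper inside {{NamenodeWebHdfsMethods}} (the real handler takes many more parameters; the helper name is made up):
{code:java}
// Hypothetical helper: reject the request up front when the newLength
// query parameter was absent, instead of letting it NPE later.
static long checkNewLength(NewLengthParam newLength) {
  if (newLength.getValue() == null) { // NewLengthParam#DEFAULT yields null
    throw new IllegalArgumentException(
        "newLength is a required parameter for op=TRUNCATE");
  }
  return newLength.getValue();
}
{code}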
[jira] [Updated] (HDFS-9259) Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
[ https://issues.apache.org/jira/browse/HDFS-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9259: Status: Patch Available (was: Open)
> Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
> --
>
> Key: HDFS-9259
> URL: https://issues.apache.org/jira/browse/HDFS-9259
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Mingliang Liu
> Attachments: HDFS-9259.000.patch
>
> We recently found that cross-DC hdfs write could be really slow. Further investigation identified that this is due to the SendBufferSize and ReceiveBufferSize used for hdfs write. The test ran "hadoop fs -copyFromLocal" of a 256MB file across DC with different SendBufferSize and ReceiveBufferSize values. The results showed that c is much faster than b, and b is faster than a.
> a. SendBufferSize=128k, ReceiveBufferSize=128k (hdfs default setting).
> b. SendBufferSize=128K, ReceiveBufferSize=not set (TCP auto tuning).
> c. SendBufferSize=not set, ReceiveBufferSize=not set (TCP auto tuning for both)
> HDFS-8829 has enabled scenario b. We would like to enable scenario c by making SendBufferSize configurable at the DFSClient side. Cc: [~cmccabe] [~He Tianyi] [~kanaka] [~vinayrpet]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9259) Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
[ https://issues.apache.org/jira/browse/HDFS-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9259: Attachment: HDFS-9259.000.patch The v0 patch is the first effort to address this issue. It:
- Adds a new client-side config key {{dfs.client.socket.send.buffer.size}}
- Makes the {{DFSClient}}-side socket {{sendBufferSize}} configurable, and auto-tunable for non-positive values
- Adds the config key description to {{hdfs-default.xml}} (a sample entry is sketched after this message)
- Adds a new unit test {{TestDFSClientSocketSize}} to cover common cases
> Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
> --
>
> Key: HDFS-9259
> URL: https://issues.apache.org/jira/browse/HDFS-9259
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Mingliang Liu
> Attachments: HDFS-9259.000.patch
>
> We recently found that cross-DC hdfs write could be really slow. Further investigation identified that this is due to the SendBufferSize and ReceiveBufferSize used for hdfs write. The test ran "hadoop fs -copyFromLocal" of a 256MB file across DC with different SendBufferSize and ReceiveBufferSize values. The results showed that c is much faster than b, and b is faster than a.
> a. SendBufferSize=128k, ReceiveBufferSize=128k (hdfs default setting).
> b. SendBufferSize=128K, ReceiveBufferSize=not set (TCP auto tuning).
> c. SendBufferSize=not set, ReceiveBufferSize=not set (TCP auto tuning for both)
> HDFS-8829 has enabled scenario b. We would like to enable scenario c by making SendBufferSize configurable at the DFSClient side. Cc: [~cmccabe] [~He Tianyi] [~kanaka] [~vinayrpet]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
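The matching {{hdfs-default.xml}} entry would look something like this (the key name is from the patch notes above; the default value and description wording are illustrative):
{code:xml}
<property>
  <name>dfs.client.socket.send.buffer.size</name>
  <value>131072</value>
  <description>
    Socket send buffer size (SO_SNDBUF) used by the DFSClient for writes.
    A value of zero or less means the buffer size is not set explicitly,
    enabling TCP auto-tuning where the OS supports it.
  </description>
</property>
{code}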
[jira] [Commented] (HDFS-9077) webhdfs client requires SPNEGO to do renew
[ https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972888#comment-14972888 ] Hadoop QA commented on HDFS-9077: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 20m 0s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 2 new or modified test files. | | {color:green}+1{color} | javac | 8m 3s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 10m 33s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 2m 56s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 30s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. | | {color:red}-1{color} | findbugs | 4m 40s | The patch appears to introduce 1 new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 11s | Pre-build of native portion | | {color:green}+1{color} | hdfs tests | 51m 53s | Tests passed in hadoop-hdfs. | | {color:green}+1{color} | hdfs tests | 0m 31s | Tests passed in hadoop-hdfs-client. | | | | 104m 20s | | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-client | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768551/HDFS-9077.003.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 446212a | | Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13183/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html | | Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13183/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13183/artifact/patchprocess/testrun_hadoop-hdfs.txt | | hadoop-hdfs-client test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13183/artifact/patchprocess/testrun_hadoop-hdfs-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13183/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13183/console | This message was automatically generated. > webhdfs client requires SPNEGO to do renew > -- > > Key: HDFS-9077 > URL: https://issues.apache.org/jira/browse/HDFS-9077 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Allen Wittenauer >Assignee: HeeSoo Kim > Attachments: HDFS-9077.001.patch, HDFS-9077.002.patch, > HDFS-9077.003.patch, HDFS-9077.patch > > > Simple bug. > webhdfs (the file system) doesn't pass delegation= in its REST call to renew > the same token. This forces a SPNEGO (or other auth) instead of just > renewing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9077) webhdfs client requires SPNEGO to do renew
[ https://issues.apache.org/jira/browse/HDFS-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] HeeSoo Kim updated HDFS-9077: - Attachment: HDFS-9077.003.patch > webhdfs client requires SPNEGO to do renew > -- > > Key: HDFS-9077 > URL: https://issues.apache.org/jira/browse/HDFS-9077 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Allen Wittenauer >Assignee: HeeSoo Kim > Attachments: HDFS-9077.001.patch, HDFS-9077.002.patch, > HDFS-9077.003.patch, HDFS-9077.patch > > > Simple bug. > webhdfs (the file system) doesn't pass delegation= in its REST call to renew > the same token. This forces a SPNEGO (or other auth) instead of just > renewing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
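The gist of the bug being iterated on above: when renewing, the client should authenticate with the token it already holds rather than triggering SPNEGO. A hypothetical sketch of the request URL (parameter encoding and the actual WebHdfsFileSystem plumbing omitted):
{code:java}
import java.net.MalformedURLException;
import java.net.URL;

class RenewUrlSketch {
  // Hypothetical helper: pass the existing token both as the token to
  // renew and as the delegation credential, avoiding a SPNEGO round trip.
  static URL renewUrl(String namenode, String encodedToken)
      throws MalformedURLException {
    return new URL("http://" + namenode + "/webhdfs/v1/?op=RENEWDELEGATIONTOKEN"
        + "&token=" + encodedToken        // the token being renewed
        + "&delegation=" + encodedToken); // authenticate with the same token
  }
}
{code}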
[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
[ https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972834#comment-14972834 ] Hadoop QA commented on HDFS-9231: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 22m 1s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. | | {color:green}+1{color} | javac | 9m 25s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 11m 38s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 1m 30s | The applied patch generated 3 new checkstyle issues (total was 427, now 428). | | {color:green}+1{color} | whitespace | 0m 1s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 29s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 35s | The patch built with eclipse:eclipse. | | {color:red}-1{color} | findbugs | 2m 37s | The patch appears to introduce 1 new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 17s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 51m 11s | Tests failed in hadoop-hdfs. | | | | 104m 12s | | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs | | Failed unit tests | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768546/HDFS-9231.008.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 446212a | | Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13182/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/13182/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt | | Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13182/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13182/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13182/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13182/console | This message was automatically generated. 
> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, HDFS-9231.003.patch, HDFS-9231.004.patch, HDFS-9231.005.patch, HDFS-9231.006.patch, HDFS-9231.007.patch, HDFS-9231.008.patch
>
> Currently for snapshot files, {{fsck -list-corruptfileblocks}} shows corrupt blocks with the original file dir instead of the snapshot dir, and {{fsck -list-corruptfileblocks -includeSnapshots}} behaves the same.
> This can be confusing because even when the original file is deleted, fsck will still show that deleted file as corrupted, although what's actually corrupted is the snapshot.
> As a side note, {{fsck -files -includeSnapshots}} shows the snapshot dirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
[ https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972793#comment-14972793 ] Xiao Chen commented on HDFS-9231: - Patch 08 addresses the checkstyle errors. The test failure is unrelated and passed locally. The findbugs warning is not visible from the link...
> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, HDFS-9231.003.patch, HDFS-9231.004.patch, HDFS-9231.005.patch, HDFS-9231.006.patch, HDFS-9231.007.patch, HDFS-9231.008.patch
>
> Currently for snapshot files, {{fsck -list-corruptfileblocks}} shows corrupt blocks with the original file dir instead of the snapshot dir, and {{fsck -list-corruptfileblocks -includeSnapshots}} behaves the same.
> This can be confusing because even when the original file is deleted, fsck will still show that deleted file as corrupted, although what's actually corrupted is the snapshot.
> As a side note, {{fsck -files -includeSnapshots}} shows the snapshot dirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
[ https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9231: Attachment: HDFS-9231.008.patch
> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: snapshots
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, HDFS-9231.003.patch, HDFS-9231.004.patch, HDFS-9231.005.patch, HDFS-9231.006.patch, HDFS-9231.007.patch, HDFS-9231.008.patch
>
> Currently for snapshot files, {{fsck -list-corruptfileblocks}} shows corrupt blocks with the original file dir instead of the snapshot dir, and {{fsck -list-corruptfileblocks -includeSnapshots}} behaves the same.
> This can be confusing because even when the original file is deleted, fsck will still show that deleted file as corrupted, although what's actually corrupted is the snapshot.
> As a side note, {{fsck -files -includeSnapshots}} shows the snapshot dirs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9289) check genStamp when complete file
[ https://issues.apache.org/jira/browse/HDFS-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972655#comment-14972655 ] Chang Li commented on HDFS-9289: Hi [~zhz], here is the log:
{code}
INFO hdfs.StateChange: BLOCK* allocateBlock: /projects/FETLDEV/Benzene/benzene_stg_transient/primer/201510201900/_temporary/1/_temporary/attempt_1444859775697_31140_m_001028_0/part-m-01028. BP-1052427332-98.138.108.146-1350583571998 blk_3773617405_1106111498065{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-0a28b82a-e3fb-4e42-b925-e76ebd98afb4:NORMAL:10.216.32.61:1004|RBW], ReplicaUnderConstruction[[DISK]DS-236c19ee-0a39-4e53-9520-c32941ca1828:NORMAL:10.216.70.49:1004|RBW], ReplicaUnderConstruction[[DISK]DS-fc7c2dab-9309-46be-b5c0-52be8e698591:NORMAL:10.216.70.43:1004|RBW]]}
2015-10-20 19:49:20,392 [IPC Server handler 63 on 8020] INFO namenode.FSNamesystem: updatePipeline(block=BP-1052427332-98.138.108.146-1350583571998:blk_3773617405_1106111498065, newGenerationStamp=1106111511603, newLength=107761275, newNodes=[10.216.70.49:1004, 10.216.70.43:1004], clientName=DFSClient_attempt_1444859775697_31140_m_001028_0_1424303982_1)
2015-10-20 19:49:20,392 [IPC Server handler 63 on 8020] INFO namenode.FSNamesystem: updatePipeline(BP-1052427332-98.138.108.146-1350583571998:blk_3773617405_1106111498065) successfully to BP-1052427332-98.138.108.146-1350583571998:blk_3773617405_1106111511603
2015-10-20 19:49:20,400 [IPC Server handler 96 on 8020] INFO hdfs.StateChange: DIR* completeFile: /projects/FETLDEV/Benzene/benzene_stg_transient/primer/201510201900/_temporary/1/_temporary/attempt_1444859775697_31140_m_001028_0/part-m-01028 is closed by DFSClient_attempt_1444859775697_31140_m_001028_0_1424303982_1
{code}
You can see the file completes after a pipeline update. The block changed its genStamp from blk_3773617405_1106111498065 to blk_3773617405_1106111511603. But then the two nodes in the updated pipeline are marked as corrupted. When I run fsck, it shows:
{code}
hdfs fsck /projects/FETLDEV/Benzene/benzene_stg_transient/primer/201510201900/part-m-01028
Connecting to namenode via http://uraniumtan-nn1.tan.ygrid.yahoo.com:50070
FSCK started by hdfs (auth:KERBEROS_SSL) from /98.138.131.190 for path /projects/FETLDEV/Benzene/benzene_stg_transient/primer/201510201900/part-m-01028 at Wed Oct 21 15:04:56 UTC 2015
.
/projects/FETLDEV/Benzene/benzene_stg_transient/primer/201510201900/part-m-01028: CORRUPT blockpool BP-1052427332-98.138.108.146-1350583571998 block blk_3773617405
/projects/FETLDEV/Benzene/benzene_stg_transient/primer/201510201900/part-m-01028: Replica placement policy is violated for BP-1052427332-98.138.108.146-1350583571998:blk_3773617405_1106111498065. Block should be additionally replicated on 1 more rack(s).
{code}
It shows the blk with the old gen stamp blk_3773617405_1106111498065.
> check genStamp when complete file
> -
>
> Key: HDFS-9289
> URL: https://issues.apache.org/jira/browse/HDFS-9289
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Chang Li
> Assignee: Chang Li
> Priority: Critical
> Attachments: HDFS-9289.1.patch, HDFS-9289.2.patch
>
> We have seen a case of a corrupt block which was caused by the file completing after a pipelineUpdate, but with the old block genStamp. This caused the replicas on the two datanodes in the updated pipeline to be viewed as corrupt. Propose to check the genStamp when committing the block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
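A sketch of the proposed guard, assuming a commit-time check in the NN's block management code (names approximate the description above, not the actual patch):
{code:java}
import java.io.IOException;

class CommitBlockCheck {
  private long genStamp; // GS the NN recorded, e.g. after updatePipeline

  // Reject a completeFile/commit whose generation stamp is stale.
  void commitBlock(long reportedGenStamp) throws IOException {
    if (genStamp != reportedGenStamp) {
      throw new IOException("Commit block with mismatching GS: NN has "
          + genStamp + ", client submits " + reportedGenStamp);
    }
    // ... proceed with the normal commit (mark COMMITTED, record length) ...
  }
}
{code}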
[jira] [Commented] (HDFS-9303) Balancer slowly with too many small file blocks
[ https://issues.apache.org/jira/browse/HDFS-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972619#comment-14972619 ] Tsz Wo Nicholas Sze commented on HDFS-9303: --- Is this a duplicate of HDFS-8824?
> Balancer slowly with too many small file blocks
> ---
>
> Key: HDFS-9303
> URL: https://issues.apache.org/jira/browse/HDFS-9303
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: balancer & mover
> Affects Versions: 2.7.1
> Reporter: Lin Yiqun
> Assignee: Lin Yiqun
> Attachments: HDFS-9303.001.patch
>
> In recent hadoop release versions I have found that the balance operation is always slow, even after upgrading. When I analysed the balancer log, I found that every balance iteration takes only 4 to 5 minutes, which is a short time. Most importantly, most of the blocks being moved are small blocks, smaller than 1MB, and this is a main reason for the low effectiveness of the balance operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9303) Balancer slowly with too many small file blocks
[ https://issues.apache.org/jira/browse/HDFS-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972605#comment-14972605 ] Hadoop QA commented on HDFS-9303: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768510/HDFS-9303.001.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 446212a | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13181/console | This message was automatically generated.
> Balancer slowly with too many small file blocks
> ---
>
> Key: HDFS-9303
> URL: https://issues.apache.org/jira/browse/HDFS-9303
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: balancer & mover
> Affects Versions: 2.7.1
> Reporter: Lin Yiqun
> Assignee: Lin Yiqun
> Attachments: HDFS-9303.001.patch
>
> In recent hadoop release versions I have found that the balance operation is always slow, even after upgrading. When I analysed the balancer log, I found that every balance iteration takes only 4 to 5 minutes, which is a short time. Most importantly, most of the blocks being moved are small blocks, smaller than 1MB, and this is a main reason for the low effectiveness of the balance operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9303) Balancer slowly with too many small file blocks
[ https://issues.apache.org/jira/browse/HDFS-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9303: Attachment: HDFS-9303.001.patch The patch adds a new param {{blockBytesNum}} to limit the source block size, and updates some test cases (see the sketch after this message).
> Balancer slowly with too many small file blocks
> ---
>
> Key: HDFS-9303
> URL: https://issues.apache.org/jira/browse/HDFS-9303
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: balancer & mover
> Affects Versions: 2.7.1
> Reporter: Lin Yiqun
> Assignee: Lin Yiqun
> Attachments: HDFS-9303.001.patch
>
> In recent hadoop release versions I have found that the balance operation is always slow, even after upgrading. When I analysed the balancer log, I found that every balance iteration takes only 4 to 5 minutes, which is a short time. Most importantly, most of the blocks being moved are small blocks, smaller than 1MB, and this is a main reason for the low effectiveness of the balance operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
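A sketch of the filtering idea, using the {{blockBytesNum}} threshold from the comment above (the surrounding names are illustrative, not the actual Balancer code):
{code:java}
import java.util.ArrayList;
import java.util.List;

class SmallBlockFilter {
  /**
   * Keeps only candidate source blocks of at least blockBytesNum bytes,
   * so each iteration moves fewer but larger blocks.
   */
  static List<Long> filterBySize(List<Long> blockSizes, long blockBytesNum) {
    List<Long> candidates = new ArrayList<>();
    for (long size : blockSizes) {
      if (size >= blockBytesNum) {
        candidates.add(size);
      }
    }
    return candidates;
  }
}
{code}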
[jira] [Updated] (HDFS-9303) Balancer slowly with too many small file blocks
[ https://issues.apache.org/jira/browse/HDFS-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9303: Status: Patch Available (was: Open) The solution to this problem is to set a limit value and filter out source blocks smaller than it. This will increase the time of each balance iteration, but the balancer's overall speed is faster. Attaching an initial patch.
> Balancer slowly with too many small file blocks
> ---
>
> Key: HDFS-9303
> URL: https://issues.apache.org/jira/browse/HDFS-9303
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: balancer & mover
> Affects Versions: 2.7.1
> Reporter: Lin Yiqun
> Assignee: Lin Yiqun
>
> In recent hadoop release versions I have found that the balance operation is always slow, even after upgrading. When I analysed the balancer log, I found that every balance iteration takes only 4 to 5 minutes, which is a short time. Most importantly, most of the blocks being moved are small blocks, smaller than 1MB, and this is a main reason for the low effectiveness of the balance operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9303) Balancer slowly with too many small file blocks
Lin Yiqun created HDFS-9303: --- Summary: Balancer slowly with too many small file blocks Key: HDFS-9303 URL: https://issues.apache.org/jira/browse/HDFS-9303 Project: Hadoop HDFS Issue Type: Improvement Components: balancer & mover Affects Versions: 2.7.1 Reporter: Lin Yiqun Assignee: Lin Yiqun
In recent hadoop release versions I have found that the balance operation is always slow, even after upgrading. When I analysed the balancer log, I found that every balance iteration takes only 4 to 5 minutes, which is a short time. Most importantly, most of the blocks being moved are small blocks, smaller than 1MB, and this is a main reason for the low effectiveness of the balance operation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9129) Move the safemode block count into BlockManager
[ https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972490#comment-14972490 ] Hadoop QA commented on HDFS-9129: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 18m 54s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 8 new or modified test files. | | {color:green}+1{color} | javac | 8m 1s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 10m 20s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 1m 23s | The applied patch generated 6 new checkstyle issues (total was 803, now 754). | | {color:green}+1{color} | whitespace | 0m 3s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 29s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 2m 29s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 12s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 51m 51s | Tests failed in hadoop-hdfs. | | | | 98m 44s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768498/HDFS-9129.010.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 7781fe1 | | Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13180/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/13180/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13180/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13180/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13180/console | This message was automatically generated.
> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, HDFS-9129.002.patch, HDFS-9129.003.patch, HDFS-9129.004.patch, HDFS-9129.005.patch, HDFS-9129.006.patch, HDFS-9129.007.patch, HDFS-9129.008.patch, HDFS-9129.009.patch, HDFS-9129.010.patch
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the NN can get out of safemode. These fields can be moved to the {{BlockManager}} class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
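A simplified sketch of the bookkeeping being moved into {{BlockManager}} (names and threshold arithmetic assumed, not taken from the patch):
{code:java}
// The NN may leave safemode once enough blocks have reached minimal
// replication; this is the counter the patch relocates.
class SafeBlockTracker {
  private final long blockThreshold; // e.g. totalBlocks * threshold-pct
  private long blockSafe;            // blocks with enough reported replicas

  SafeBlockTracker(long blockThreshold) {
    this.blockThreshold = blockThreshold;
  }

  /** @return true once enough blocks are safe to leave safemode. */
  synchronized boolean incrementSafeBlockCount() {
    return ++blockSafe >= blockThreshold;
  }
}
{code}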
[jira] [Commented] (HDFS-9079) Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers
[ https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972482#comment-14972482 ] Tsz Wo Nicholas Sze commented on HDFS-9079: --- > More details of the proposed protocol can be found here . ... It seems that some failure cases were not considered. For example, what happens if the client dies after some of the streamers have updated the GS but others have not?
> Erasure coding: preallocate multiple generation stamps and serialize updates from data streamers
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding
> Affects Versions: HDFS-7285
> Reporter: Zhe Zhang
> Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch, HDFS-9079.01.patch, HDFS-9079.02.patch, HDFS-9079.03.patch, HDFS-9079.04.patch, HDFS-9079.05.patch
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) Updates block on NN
> {code}
> To simplify the above we can preallocate GS when NN creates a new striped block group ({{FSN#createNewBlock}}). For each new striped block group we can reserve {{NUM_PARITY_BLOCKS}} GS's. Then steps 1~3 in the above sequence can be saved. If more than {{NUM_PARITY_BLOCKS}} errors have happened we shouldn't try to further recover anyway. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
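A sketch of the reservation idea from the description (the constant name is from the text; the bookkeeping class and the parity count are assumed):
{code:java}
// Reserve NUM_PARITY_BLOCKS generation stamps when the striped block group
// is allocated, so a failed streamer can take the next GS locally without
// a NN round trip (removing steps 1-3 above).
class ReservedGenStamps {
  static final int NUM_PARITY_BLOCKS = 3; // parity count of the EC schema; assumed
  private long nextGS;
  private int remaining = NUM_PARITY_BLOCKS;

  ReservedGenStamps(long firstReservedGS) {
    this.nextGS = firstReservedGS;
  }

  synchronized long nextGenStamp() {
    if (remaining-- <= 0) {
      throw new IllegalStateException(
          "More than NUM_PARITY_BLOCKS failures; stop recovering this group");
    }
    return nextGS++;
  }
}
{code}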
[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone
[ https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972479#comment-14972479 ] Hadoop QA commented on HDFS-8831: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 24m 2s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 3 new or modified test files. | | {color:red}-1{color} | javac | 8m 31s | The applied patch generated 2 additional warning messages. | | {color:green}+1{color} | javadoc | 11m 5s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 27s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 2m 49s | The applied patch generated 3 new checkstyle issues (total was 200, now 200). | | {color:green}+1{color} | whitespace | 0m 3s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 59s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 37s | The patch built with eclipse:eclipse. | | {color:red}-1{color} | findbugs | 7m 35s | The patch appears to introduce 3 new Findbugs (version 3.0.0) warnings. | | {color:red}-1{color} | common tests | 9m 1s | Tests failed in hadoop-common. | | {color:red}-1{color} | hdfs tests | 66m 30s | Tests failed in hadoop-hdfs. | | {color:green}+1{color} | hdfs tests | 0m 36s | Tests passed in hadoop-hdfs-client. | | | | 134m 11s | | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common | | FindBugs | module:hadoop-hdfs-client | | Failed unit tests | hadoop.fs.TestLocalFsFCStatistics | | | hadoop.ipc.TestDecayRpcScheduler | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | | hadoop.hdfs.TestRecoverStripedFile | | | hadoop.hdfs.server.blockmanagement.TestNodeCount | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12768496/HDFS-8831.02.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 7781fe1 | | Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/diffJavacWarnings.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/diffcheckstylehadoop-common.txt | | Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html | | Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html | | hadoop-common test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/testrun_hadoop-common.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/testrun_hadoop-hdfs.txt | | hadoop-hdfs-client test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/artifact/patchprocess/testrun_hadoop-hdfs-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf909.gq1.ygridcore.net 
3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13179/console | This message was automatically generated.
> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: encryption
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, HDFS-8831.01.patch, HDFS-8831.02.patch
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is deleted. If you delete files within the zone with the trash feature enabled, you will get an error similar to the following
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by placing the .Trash folder of the file being deleted appropriately within the same encryption zone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
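A sketch of the path layout this implies: the trash destination stays inside the encryption zone, so the rename never crosses the zone boundary (the layout is assumed from the description above, not taken from the patch):
{code:java}
import org.apache.hadoop.fs.Path;

class EzTrashSketch {
  // Hypothetical helper: a per-zone trash root, e.g.
  // /zone/.Trash/<user>/Current/<path-inside-zone>
  static Path trashPathFor(Path zoneRoot, String user, Path deletedFile) {
    return new Path(zoneRoot,
        ".Trash/" + user + "/Current" + deletedFile.toUri().getPath());
  }
}
{code}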