[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050033#comment-16050033 ] Hadoop QA commented on HDFS-11647:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
| 0 | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 8s | trunk passed |
| +1 | compile | 13m 31s | trunk passed |
| +1 | checkstyle | 1m 57s | trunk passed |
| +1 | mvnsite | 2m 45s | trunk passed |
| -1 | findbugs | 1m 24s | hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. |
| +1 | javadoc | 2m 3s | trunk passed |
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 58s | the patch passed |
| +1 | compile | 10m 9s | the patch passed |
| +1 | cc | 10m 9s | the patch passed |
| +1 | javac | 10m 9s | the patch passed |
| +1 | checkstyle | 1m 56s | the patch passed |
| +1 | mvnsite | 2m 42s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 | findbugs | 4m 59s | the patch passed |
| +1 | javadoc | 2m 3s | the patch passed |
| +1 | unit | 8m 21s | hadoop-common in the patch passed. |
| +1 | unit | 1m 19s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 64m 34s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 138m 46s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11647 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873054/HDFS-11647-005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc |
| uname | Linux af29e729e1b7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 999c8fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19911/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html |
| unit |
[jira] [Commented] (HDFS-11345) Document the configuration key for FSNamesystem lock fairness
[ https://issues.apache.org/jira/browse/HDFS-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050013#comment-16050013 ] Akira Ajisaka commented on HDFS-11345: -- I'm +1 for documenting this. Would you avoid adding glob pattern for imports? > Document the configuration key for FSNamesystem lock fairness > - > > Key: HDFS-11345 > URL: https://issues.apache.org/jira/browse/HDFS-11345 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation, namenode >Reporter: Zhe Zhang >Assignee: Erik Krogen >Priority: Minor > Attachments: HADOOP-11345.000.patch, HADOOP-11345.001.patch > > > Per [earlier | > https://issues.apache.org/jira/browse/HDFS-5239?focusedCommentId=15536471=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15536471] > discussion. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
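For reference, the key being documented is set in hdfs-site.xml. The following is an illustrative sketch, assuming the key name {{dfs.namenode.fslock.fair}} from the HDFS-5239 discussion linked above; verify the name and default against the committed patch:

```xml
<!-- hdfs-site.xml sketch: controls fairness of the FSNamesystem lock.
     A fair lock prevents writer starvation under heavy read load;
     a non-fair lock can yield higher overall throughput. -->
<property>
  <name>dfs.namenode.fslock.fair</name>
  <value>true</value>
</property>
```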
[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050012#comment-16050012 ] luhuichun commented on HDFS-11647: -- [~eddyxu] Hi Eddy, updated according to your comments, thx for review > Add -E option in hdfs "count" command to show erasure policy summarization > -- > > Key: HDFS-11647 > URL: https://issues.apache.org/jira/browse/HDFS-11647 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, > HDFS-11647-003.patch, HDFS-11647-004.patch, HDFS-11647-005.patch > > > Add -E option in hdfs "count" command to show erasure policy summarization -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11646) Add -E option in 'ls' to list erasure coding policy of each file and directory if applicable
[ https://issues.apache.org/jira/browse/HDFS-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050010#comment-16050010 ] luhuichun commented on HDFS-11646: -- [~eddyxu][~andrew.wang] thx for comments, updated the patch > Add -E option in 'ls' to list erasure coding policy of each file and > directory if applicable > > > Key: HDFS-11646 > URL: https://issues.apache.org/jira/browse/HDFS-11646 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11646-001.patch, HDFS-11646-002.patch, > HDFS-11646-003.patch, HDFS-11646-004.patch > > > Add -E option in "ls" to show erasure coding policy of file and directory, > leverage the "number_of_replicas " column. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11583) Parent spans are not initialized to NullScope for every DFSPacket
[ https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11583:
--
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 2.7.4
Status: Resolved (was: Patch Available)
Committed this to branch-2.7. Thanks [~iwasakims] for the fix, and thanks all for the discussion.
> Parent spans are not initialized to NullScope for every DFSPacket
> -
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: tracing
> Affects Versions: 2.7.1
> Reporter: Karan Mehta
> Assignee: Masatake Iwasaki
> Fix For: 2.7.4
>
> Attachments: HDFS-11583-branch-2.7.001.patch,
> HDFS-11583-branch-2.7.002.patch, HDFS-11583-branch-2.7.003.patch
>
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class
> uses the {{parents}} field of the {{DFSPacket}} to create a new
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child.
> The {{parents}} field is initialized when the packet is added to the
> {{dataQueue}}, with the value taken from the {{ThreadLocal}}. This is how
> HTrace handles spans.
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop,
> which runs until the stream is closed.
> Consider the scenario where the {{dataQueue}} contains multiple packets, only
> the first of which has tracing enabled. The scope is set to the
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which
> is closed once the packet is sent to a remote datanode. Before the
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so
> calling close on it does nothing at the end of the loop.
> The second iteration then uses the stale value of the {{scope}} variable with
> a DFSPacket on which tracing is not enabled. This results in orphan
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the
> TraceFramework, and may result in an unlimited number of spans being
> generated and sent to the receiver.
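The gist of the bug above is a scope variable assigned before the loop and reused across iterations; the fix re-initializes it for every DFSPacket. The following minimal, self-contained Java sketch is illustrative only (it stands in for HTrace's TraceScope/NullScope with a boolean and is not the actual DataStreamer code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ScopePerPacketSketch {
    // Stand-in for a DFSPacket: only some packets have tracing enabled.
    static class Packet {
        final boolean traceEnabled;
        Packet(boolean traceEnabled) { this.traceEnabled = traceEnabled; }
    }

    // Counts how many packets would emit a writeTo span. With
    // resetPerPacket=false the stale scope leaks spans for untraced
    // packets (the bug); with resetPerPacket=true the scope is
    // re-initialized for every packet (the fix).
    static int spansEmitted(Queue<Packet> dataQueue, boolean resetPerPacket) {
        boolean scopeActive = false;  // stands in for scope == NullScope
        int spans = 0;
        while (!dataQueue.isEmpty()) {
            Packet p = dataQueue.poll();
            if (resetPerPacket) {
                scopeActive = false;  // re-initialize for every DFSPacket
            }
            if (p.traceEnabled) {
                scopeActive = true;   // dataStreamer scope opened
            }
            if (scopeActive) {
                spans++;              // writeTo span created under the scope
            }
        }
        return spans;
    }

    public static void main(String[] args) {
        Queue<Packet> q = new ArrayDeque<>();
        q.add(new Packet(true));
        q.add(new Packet(false));
        q.add(new Packet(false));
        // Without the reset, untraced packets still emit (orphan) spans.
        System.out.println(spansEmitted(q, false)); // 3
        q.add(new Packet(true));
        q.add(new Packet(false));
        q.add(new Packet(false));
        // With the per-packet reset, only the traced packet emits a span.
        System.out.println(spansEmitted(q, true)); // 1
    }
}
```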
[jira] [Updated] (HDFS-11583) Parent spans are not initialized to NullScope for every DFSPacket
[ https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11583:
--
Summary: Parent spans are not initialized to NullScope for every DFSPacket (was: Parent spans not initialized to NullScope for every DFSPacket)
> Parent spans are not initialized to NullScope for every DFSPacket
> -
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: tracing
> Affects Versions: 2.7.1
> Reporter: Karan Mehta
> Assignee: Masatake Iwasaki
> Fix For: 2.7.4
>
> Attachments: HDFS-11583-branch-2.7.001.patch,
> HDFS-11583-branch-2.7.002.patch, HDFS-11583-branch-2.7.003.patch
>
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class
> uses the {{parents}} field of the {{DFSPacket}} to create a new
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child.
> The {{parents}} field is initialized when the packet is added to the
> {{dataQueue}}, with the value taken from the {{ThreadLocal}}. This is how
> HTrace handles spans.
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop,
> which runs until the stream is closed.
> Consider the scenario where the {{dataQueue}} contains multiple packets, only
> the first of which has tracing enabled. The scope is set to the
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which
> is closed once the packet is sent to a remote datanode. Before the
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so
> calling close on it does nothing at the end of the loop.
> The second iteration then uses the stale value of the {{scope}} variable with
> a DFSPacket on which tracing is not enabled. This results in orphan
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the
> TraceFramework, and may result in an unlimited number of spans being
> generated and sent to the receiver.
[jira] [Commented] (HDFS-11583) Parent spans not initialized to NullScope for every DFSPacket
[ https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049998#comment-16049998 ] Akira Ajisaka commented on HDFS-11583:
--
+1, checking this in.
> Parent spans not initialized to NullScope for every DFSPacket
> -
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: tracing
> Affects Versions: 2.7.1
> Reporter: Karan Mehta
> Assignee: Masatake Iwasaki
> Attachments: HDFS-11583-branch-2.7.001.patch,
> HDFS-11583-branch-2.7.002.patch, HDFS-11583-branch-2.7.003.patch
>
>
> The issue was found while working with PHOENIX-3752.
> Each packet received by the {{run()}} method of the {{DataStreamer}} class
> uses the {{parents}} field of the {{DFSPacket}} to create a new
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child.
> The {{parents}} field is initialized when the packet is added to the
> {{dataQueue}}, with the value taken from the {{ThreadLocal}}. This is how
> HTrace handles spans.
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop,
> which runs until the stream is closed.
> Consider the scenario where the {{dataQueue}} contains multiple packets, only
> the first of which has tracing enabled. The scope is set to the
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which
> is closed once the packet is sent to a remote datanode. Before the
> {{writeTo}} span is started, the {{dataStreamer}} scope is detached, so
> calling close on it does nothing at the end of the loop.
> The second iteration then uses the stale value of the {{scope}} variable with
> a DFSPacket on which tracing is not enabled. This results in orphan
> {{writeTo}} spans being delivered to the {{SpanReceiver}} registered in the
> TraceFramework, and may result in an unlimited number of spans being
> generated and sent to the receiver.
[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049997#comment-16049997 ] Akira Ajisaka commented on HDFS-11736:
--
Sorry, I forgot to push this to branch-2. It's done now. Thanks [~linyiqun]!
> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: Konstantin Shvachko
> Assignee: Yiqun Lin
> Labels: newbie++, test
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch,
> HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch,
> HDFS-11736-branch-2.7.002.patch
>
>
> A few tests use {{Files.createTempDir()}} from the Guava package but do not
> set the {{java.io.tmpdir}} system property, so the temp directory is created
> in unpredictable places and is not cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then
> replicated in {{TestCheckpoint}} and {{TestStandbyCheckpoints}}.
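The pattern described in the quoted issue can be avoided by rooting test temp directories under Maven's `target` directory instead of relying on Guava's {{Files.createTempDir()}}. A hedged Java sketch (not the actual patch; the directory names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TempDirUnderTarget {
    // Create a temp directory under the Maven 'target' directory so that
    // 'mvn clean' removes it. Guava's Files.createTempDir() honors
    // java.io.tmpdir and may land anywhere on the system.
    public static Path createTestTempDir() throws IOException {
        Path base = Paths.get("target", "test-tmp");
        Files.createDirectories(base);
        return Files.createTempDirectory(base, "oiv-test-");
    }

    public static void main(String[] args) throws IOException {
        Path dir = createTestTempDir();
        System.out.println(dir.startsWith(Paths.get("target"))); // prints "true"
    }
}
```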
[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049994#comment-16049994 ] Yiqun Lin commented on HDFS-11736:
--
Thanks [~ajisakaa] for the commit. Did you miss the commit to branch-2? I only see three commits for this JIRA.
> OIV tests should not write outside 'target' directory.
> --
>
> Key: HDFS-11736
> URL: https://issues.apache.org/jira/browse/HDFS-11736
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: Konstantin Shvachko
> Assignee: Yiqun Lin
> Labels: newbie++, test
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch,
> HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch,
> HDFS-11736-branch-2.7.002.patch
>
>
> A few tests use {{Files.createTempDir()}} from the Guava package but do not
> set the {{java.io.tmpdir}} system property, so the temp directory is created
> in unpredictable places and is not cleaned up by {{mvn clean}}.
> This was probably introduced in {{TestOfflineImageViewer}} and then
> replicated in {{TestCheckpoint}} and {{TestStandbyCheckpoints}}.
[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049995#comment-16049995 ] ASF GitHub Bot commented on HDFS-11647: --- Github user wayblink closed the pull request at: https://github.com/apache/hadoop/pull/233 > Add -E option in hdfs "count" command to show erasure policy summarization > -- > > Key: HDFS-11647 > URL: https://issues.apache.org/jira/browse/HDFS-11647 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, > HDFS-11647-003.patch, HDFS-11647-004.patch, HDFS-11647-005.patch > > > Add -E option in hdfs "count" command to show erasure policy summarization -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049993#comment-16049993 ] ASF GitHub Bot commented on HDFS-11647: --- GitHub user wayblink opened a pull request: https://github.com/apache/hadoop/pull/233 HDFS-11647 You can merge this pull request into a Git repository by running: $ git pull https://github.com/wayblink/hadoop hdfs-11647 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/233.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #233 commit 9c60c302974f57fbff5f6bab3627f64fc0039d2b Author: wayblink <137658...@qq.com> Date: 2017-06-08T15:14:11Z a successful test commit a93f6894061c008e55f1b5fa01b36ec75f1616ec Author: wayblink <137658...@qq.com> Date: 2017-06-08T15:15:58Z small edit commit e04082e72c61157302fcaa4c1c55c5cff8e8b5eb Author: wayblink <137658...@qq.com> Date: 2017-06-09T04:39:11Z add doc in FileSystemSHell.md, modify some details commit c934e2e33bcdb7845dfd9a53d6085970da080460 Author: wayblink <137658...@qq.com> Date: 2017-06-10T08:46:36Z modify the unit test commit 18cb930a4c80e0c6f11bda76012bdb29f69c7a8b Author: wayblink <137658...@qq.com> Date: 2017-06-12T03:29:06Z delete some useless code commit 759da1090667194a943609584a9171b4a312c21f Author: wayblink <137658...@qq.com> Date: 2017-06-13T06:50:27Z checkstyle commit 57f968847dca8af4dc29bdbb64661540f8ac4414 Author: wayblink <137658...@qq.com> Date: 2017-06-15T03:27:13Z format > Add -E option in hdfs "count" command to show erasure policy summarization > -- > > Key: HDFS-11647 > URL: https://issues.apache.org/jira/browse/HDFS-11647 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, > HDFS-11647-003.patch, HDFS-11647-004.patch, 
HDFS-11647-005.patch > > > Add -E option in hdfs "count" command to show erasure policy summarization -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11736: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.2 3.0.0-alpha4 2.7.4 2.9.0 Status: Resolved (was: Patch Available) Committed this to trunk, branch-2, branch-2.8, and branch-2.7. > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2 > > Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, > HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch, > HDFS-11736-branch-2.7.002.patch > > > A few tests use {{Files.createTempDir()}} from Guava package, but do not set > {{java.io.tmpdir}} system property. Thus the temp directory is created in > unpredictable places and is not being cleaned up by {{mvn clean}}. > This was probably introduced in {{TestOfflineImageViewer}} and then > replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11736) OIV tests should not write outside 'target' directory.
[ https://issues.apache.org/jira/browse/HDFS-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049983#comment-16049983 ] Akira Ajisaka commented on HDFS-11736: -- +1, checking this in. > OIV tests should not write outside 'target' directory. > -- > > Key: HDFS-11736 > URL: https://issues.apache.org/jira/browse/HDFS-11736 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Yiqun Lin > Labels: newbie++, test > Attachments: HDFS-11736.001.patch, HDFS-11736.002.patch, > HDFS-11736.003.patch, HDFS-11736-branch-2.7.001.patch, > HDFS-11736-branch-2.7.002.patch > > > A few tests use {{Files.createTempDir()}} from Guava package, but do not set > {{java.io.tmpdir}} system property. Thus the temp directory is created in > unpredictable places and is not being cleaned up by {{mvn clean}}. > This was probably introduced in {{TestOfflineImageViewer}} and then > replicated in {{TestCheckpoint}}, {{TestStandbyCheckpoints}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11647) Add -E option in hdfs "count" command to show erasure policy summarization
[ https://issues.apache.org/jira/browse/HDFS-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuichun updated HDFS-11647: - Attachment: HDFS-11647-005.patch > Add -E option in hdfs "count" command to show erasure policy summarization > -- > > Key: HDFS-11647 > URL: https://issues.apache.org/jira/browse/HDFS-11647 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: luhuichun > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11647-001.patch, HDFS-11647-002.patch, > HDFS-11647-003.patch, HDFS-11647-004.patch, HDFS-11647-005.patch > > > Add -E option in hdfs "count" command to show erasure policy summarization -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049953#comment-16049953 ] Hadoop QA commented on HDFS-11789:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 9s | trunk passed |
| +1 | compile | 1m 24s | trunk passed |
| +1 | checkstyle | 0m 41s | trunk passed |
| +1 | mvnsite | 1m 26s | trunk passed |
| +1 | findbugs | 2m 58s | trunk passed |
| +1 | javadoc | 1m 2s | trunk passed |
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 19s | the patch passed |
| +1 | compile | 1m 22s | the patch passed |
| +1 | javac | 1m 22s | the patch passed |
| -0 | checkstyle | 0m 39s | hadoop-hdfs-project: The patch generated 28 new + 47 unchanged - 0 fixed = 75 total (was 47) |
| +1 | mvnsite | 1m 21s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) with tabs. |
| -1 | findbugs | 1m 29s | hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 0m 58s | the patch passed |
| +1 | unit | 1m 11s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 64m 50s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 98m 3s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
| | new org.apache.hadoop.hdfs.client.impl.BlockReaderLocal(BlockReaderLocal$Builder) does not release lock on all exception paths At BlockReaderLocal.java:lock on all exception paths At BlockReaderLocal.java:[line 288] |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11789 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873050/HDFS-11789.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 87aa177b5bd7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 999c8fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19910/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| whitespace |
[jira] [Commented] (HDFS-11949) Add testcase for ensuring that FsShell can't move a file to a target directory where the file exists
[ https://issues.apache.org/jira/browse/HDFS-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049952#comment-16049952 ] legend commented on HDFS-11949:
---
Thanks [~yzhangal] for the review. Executing "mvn clean test -Dtest=TestDNS#testDefaultDnsServer" succeeds in my own environment, so the failure may be caused by the CI environment. The test failures are not related to the patch. Can we create a new JIRA to track the issue?
> Add testcase for ensuring that FsShell can't move a file to a target
> directory where the file exists
>
> Key: HDFS-11949
> URL: https://issues.apache.org/jira/browse/HDFS-11949
> Project: Hadoop HDFS
> Issue Type: Test
> Components: test
> Affects Versions: 3.0.0-alpha4
> Reporter: legend
> Assignee: legend
> Priority: Minor
> Attachments: HDFS-11949.patch
>
>
> moveFromLocal returns an error when moving a file to a target directory where
> the file already exists, so we need to add a test case to check it.
[jira] [Commented] (HDFS-11518) libhdfs++: Add a build option to skip building examples, tests, and tools
[ https://issues.apache.org/jira/browse/HDFS-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049951#comment-16049951 ] James Clampffer commented on HDFS-11518: Nice! Change looks good to me. Could you just add a comment towards the top of that CMakeLists file (under the license stuff) listing the option name and what it does? I assume eventually there will be a few more build config options so an example on how to document one will help keep things consistent. A quick example of what you'd add to a cmake invocation to get it into this mode from the command line would also be helpful e.g. cmake -DHDFSPP_LIBRARY_ONLY: . Will +1 once that's added. > libhdfs++: Add a build option to skip building examples, tests, and tools > - > > Key: HDFS-11518 > URL: https://issues.apache.org/jira/browse/HDFS-11518 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: James Clampffer >Assignee: Anatoli Shein > Attachments: HDFS-11518.HDFS-8707.000.patch > > > Adding a flag to just build the core library without tools, examples, and > tests will make it easier and lighter weight to embed the libhdfs++ source as > a third-party component of other projects. It won't need to look for a JDK, > valgrind, and gmock and won't generate a handful of binaries that might not > be relevant to other projects during normal use. > This should also make it a bit easier to wire into other build frameworks > since there won't be standalone binaries that need the path to other > libraries like protobuf while the library builds. They just need to be > around while the project embedding libhdfs++ gets linked. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
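A documented build option of the kind requested might look like the following CMake sketch. The option name HDFSPP_LIBRARY_ONLY is taken from the comment above; the comment block and subdirectory names are illustrative, not the actual CMakeLists.txt:

```cmake
# Build options for libhdfs++. Document each option here so later
# additions stay consistent.
#
#   HDFSPP_LIBRARY_ONLY - Skip building examples, tests, and tools, e.g.:
#       cmake -DHDFSPP_LIBRARY_ONLY=TRUE ..
option(HDFSPP_LIBRARY_ONLY "Build only the libhdfs++ library" OFF)

if(NOT HDFSPP_LIBRARY_ONLY)
  add_subdirectory(examples)
  add_subdirectory(tests)
  add_subdirectory(tools)
endif()
```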
[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049943#comment-16049943 ] James Clampffer commented on HDFS-11971: Thanks for the fixes [~anatoli.shein]! Linking the static library in the tools is definitely an improvement. {code} # Several examples in different languages need to produce executables with # same names. To allow executables with same names we keep their CMake # names different, but specify their executable names as follows: set_target_properties( gendirs_cc PROPERTIES OUTPUT_NAME "gendirs" ) {code} Did set_target_properties cause issues for your build? > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch, > HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of the codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. 
This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11969) Block Storage: Convert unnecessary info log levels to debug
[ https://issues.apache.org/jira/browse/HDFS-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11969: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [~msingh] Thanks for the contribution. I have committed this to the feature branch > Block Storage: Convert unnecessary info log levels to debug > --- > > Key: HDFS-11969 > URL: https://issues.apache.org/jira/browse/HDFS-11969 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-11969-HDFS-7240.001.patch > > > Following log lines in ContainerCacheFlusher.java is generated for every > Dirty/Retry Log files and should be converted to debug. > {code} > LOG.info("Remaining blocks count {} and {}", > blockIDBuffer.remaining(), > blockCount); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
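The demotion requested above (per-Dirty/Retry-log messages from info to debug) follows a standard pattern: guard the call with a level check so the formatting cost is skipped in steady state. A minimal sketch using `java.util.logging` for self-containment — Hadoop itself logs through SLF4J, and the class name here only mirrors the snippet:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class FlusherLogDemo {
    private static final Logger LOG = Logger.getLogger("ContainerCacheFlusher");

    // Per-log-file message demoted to FINE (java.util.logging's debug
    // equivalent) so it stays quiet under the production default level.
    static void reportRemaining(int remaining, int blockCount) {
        if (LOG.isLoggable(Level.FINE)) { // guard avoids String.format cost when disabled
            LOG.fine(String.format("Remaining blocks count %d and %d",
                    remaining, blockCount));
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);     // production default: FINE is suppressed
        reportRemaining(128, 7);      // emits nothing at INFO level
        System.out.println("FINE enabled? " + LOG.isLoggable(Level.FINE));
    }
}
```

With SLF4J the same effect is usually achieved with `LOG.debug("Remaining blocks count {} and {}", ...)`, whose `{}` placeholders defer formatting without an explicit guard.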
[jira] [Commented] (HDFS-11742) Improve balancer usability after HDFS-8818
[ https://issues.apache.org/jira/browse/HDFS-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049905#comment-16049905 ] Kihwal Lee commented on HDFS-11742: --- bq. But may be we want to understand more closely at the root cause of the behavior before we commit? HDFS-8818 made a thread pool to be created per target. The size of each thread pool is fixed to {{dfs.datanode.balance.max.concurrent.moves}}(default=50). When the number of targets * max concurrent moves exceeds the configured mover thread limit (default=1000), it stops creating thread pools. The end result is that not all calculated moves are executed. So when the default config is used, if an iteration involves more than 20 targets (1000/50), some blocks won't be moved. We have clusters where balancing involves hundreds of targets and the effect of this limitation is very visible. My patch dynamically sizes each thread pool based on the number of targets in the iteration and the configured maximum mover threads to mitigate this problem. It does not limit the capability of HDFS-8818. It simply makes it possible to continue to use the default value or adjust it without much side effect. > Improve balancer usability after HDFS-8818 > -- > > Key: HDFS-11742 > URL: https://issues.apache.org/jira/browse/HDFS-11742 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Blocker > Labels: release-blocker > Attachments: balancer2.8.png, balancer_fix.png, > HDFS-11742.branch-2.8.patch, HDFS-11742.branch-2.patch, > HDFS-11742.trunk.patch, HDFS-11742.v2.trunk.patch, replaceBlockNumOps-8w.jpg > > > We ran 2.8 balancer with HDFS-8818 on a 280-node and a 2,400-node cluster. In > both cases, it would hang forever after two iterations. The two iterations > were also moving things at a significantly lower rate. The hang itself is > fixed by HDFS-11377, but the design limitation remains, so the balancer > throughput ends up actually lower. 
> Instead of reverting HDFS-8818 as originally suggested, I am making a small > change to make it less error prone and more usable.
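The arithmetic behind the limitation described in this comment can be sketched as follows. This is an illustrative model, not the actual Balancer code; the method names and rounding choices are assumptions:

```java
public class MoverPoolSizing {
    // Pre-fix behavior (sketch): each target gets a fixed-size pool of
    // `movesPerTarget` threads, and pool creation stops once the global
    // mover-thread budget is exhausted, so later targets get no pool at all.
    static int targetsServed(int numTargets, int maxTotalThreads, int movesPerTarget) {
        return Math.min(numTargets, maxTotalThreads / movesPerTarget);
    }

    // Patched behavior (sketch): shrink each pool dynamically so that every
    // target in the iteration gets a share of the thread budget.
    static int dynamicPoolSize(int numTargets, int maxTotalThreads, int movesPerTarget) {
        return Math.min(movesPerTarget, Math.max(1, maxTotalThreads / numTargets));
    }

    public static void main(String[] args) {
        // Defaults: dfs.datanode.balance.max.concurrent.moves=50, thread limit=1000.
        // With 200 targets, only 1000/50 = 20 targets ever get a thread pool:
        System.out.println(targetsServed(200, 1000, 50));    // prints 20
        // Dynamic sizing instead gives all 200 targets a 5-thread pool:
        System.out.println(dynamicPoolSize(200, 1000, 50));  // prints 5
    }
}
```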
[jira] [Commented] (HDFS-11964) Fix TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure failure
[ https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049898#comment-16049898 ] Takanobu Asanuma commented on HDFS-11964: - Thanks for filing this issue, [~eddyxu]. Looks like it happens when it chooses {{RS-10-4}}. I created the test in HDFS-11823. Sorry that I failed to make sure. Can I assign this jira to me? > Fix TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure failure > -- > > Key: HDFS-11964 > URL: https://issues.apache.org/jira/browse/HDFS-11964 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu > > TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on > trunk: > {code} > Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy > Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< > FAILURE! - in > org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy > testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy) > Time elapsed: 1.265 sec <<< FAILURE! 
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element > [327680]; expected:<-36> but was:<2> > at > org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50) > at org.junit.Assert.internalArrayEquals(Assert.java:473) > at org.junit.Assert.assertArrayEquals(Assert.java:294) > at org.junit.Assert.assertArrayEquals(Assert.java:305) > at > org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049900#comment-16049900 ] Takanobu Asanuma commented on HDFS-11916: - Hi [~eddyxu], thanks for your comment and for reviewing the patch! The idea of choosing a random EC policy for each unit test comes from HDFS-7866 and HDFS-9962. I basically agree with this idea because testing all EC policies in all EC unit tests is too much. >> For example, one implementation of EC policy has bug, but it is hard to >> reproduce in the following jenkins run? I will add more logs and comments so that we can debug it more easily. If we find a bug, I think it would be better to create another unit test specific to that bug. > Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a > random EC policy > > > Key: HDFS-11916 > URL: https://issues.apache.org/jira/browse/HDFS-11916 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11916.1.patch, HDFS-11916.2.patch > > 
[jira] [Updated] (HDFS-11789) Maintain Short-Circuit Read Statistics
[ https://issues.apache.org/jira/browse/HDFS-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11789: -- Attachment: HDFS-11789.003.patch Thanks [~arpitagarwal] for the review. Posted patch v03 with the updates. > Maintain Short-Circuit Read Statistics > -- > > Key: HDFS-11789 > URL: https://issues.apache.org/jira/browse/HDFS-11789 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HDFS-11789.001.patch, HDFS-11789.002.patch, > HDFS-11789.003.patch > > > If a disk or controller hardware is faulty then short-circuit read requests > can stall indefinitely while reading from the file descriptor. Currently > there is no way to detect when short-circuit read requests are slow or > blocked. > This Jira proposes that each BlockReaderLocal maintain read statistics while > it is active by measuring the time taken for a pre-determined fraction of > read requests. These per-reader stats can be aggregated into global stats > when the reader is closed. The aggregate statistics can be exposed via JMX. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
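The proposed mechanism — timing a pre-determined fraction of reads per reader and folding the per-reader numbers into a global aggregate when the reader closes — can be sketched as below. All class, field, and method names here are hypothetical; the real patch's API may differ, and the global counters stand in for whatever JMX-exposed aggregate it uses:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class SampledReadStats {
    // Time roughly one in `sampleRatio` reads; sampling bounds clock overhead.
    private final int sampleRatio;
    private long sampledReads;
    private long sampledNanos;

    // Global aggregate that each reader folds into on close
    // (in the real patch this would back a JMX bean).
    static final AtomicLong TOTAL_SAMPLED_READS = new AtomicLong();
    static final AtomicLong TOTAL_SAMPLED_NANOS = new AtomicLong();

    SampledReadStats(int sampleRatio) { this.sampleRatio = sampleRatio; }

    interface Read { int perform(); }

    int timedRead(Read r) {
        // Only the sampled fraction of reads pays for System.nanoTime().
        if (ThreadLocalRandom.current().nextInt(sampleRatio) != 0) {
            return r.perform();
        }
        long start = System.nanoTime();
        int n = r.perform();
        sampledReads++;
        sampledNanos += System.nanoTime() - start;
        return n;
    }

    void close() { // aggregate per-reader stats into the global counters
        TOTAL_SAMPLED_READS.addAndGet(sampledReads);
        TOTAL_SAMPLED_NANOS.addAndGet(sampledNanos);
    }

    public static void main(String[] args) {
        SampledReadStats stats = new SampledReadStats(100); // time ~1% of reads
        for (int i = 0; i < 10_000; i++) {
            stats.timedRead(() -> 0);
        }
        stats.close();
        System.out.println("sampled reads: " + TOTAL_SAMPLED_READS.get());
    }
}
```

A stalled file-descriptor read would show up as a large sampled latency in the aggregate, which is the observability gap the jira describes.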
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049888#comment-16049888 ] Hadoop QA commented on HDFS-11956: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | 
{color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 711 unchanged - 0 fixed = 718 total (was 711) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11956 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873046/HDFS-11956.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d3243376df8b 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | 
/testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 999c8fc | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19909/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix BlockToken compatibility with Hadoop 2.x clients >
[jira] [Commented] (HDFS-11682) TestBalancer#testBalancerWithStripedFile is flaky
[ https://issues.apache.org/jira/browse/HDFS-11682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049855#comment-16049855 ] Andrew Wang commented on HDFS-11682: LGTM +1 thanks Eddy > TestBalancer#testBalancerWithStripedFile is flaky > - > > Key: HDFS-11682 > URL: https://issues.apache.org/jira/browse/HDFS-11682 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Attachments: HDFS-11682.00.patch, HDFS-11682.01.patch, > IndexOutOfBoundsException.log, timeout.log > > > Saw this fail in two different ways on a precommit run, but pass locally. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11976) Examine code base for cases that exception is thrown from finally block and fix it
[ https://issues.apache.org/jira/browse/HDFS-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-11976: - Description: If exception X is thrown in the try block, and exception Y is thrown in the finally block, X will be swallowed. In addition, the finally block is generally used to ensure resources are released properly. If we throw an exception from there, some resources may be leaked, so it's not recommended to throw exceptions in the finally block. I caught one today and reported HDFS-11794; I am creating this jira as a master one to catch other similar cases. Hopefully some static analyzer can find them all. was: If exception X is thrown in the try block, and exception Y is thrown in the finally block, X will be swallowed. In addition, the finally block is generally used to ensure resources are released properly. If we throw an exception from there, some resources may be leaked, so it's not recommended to throw exceptions in the finally block. I caught one today and reported HDFS-11794; I am creating this jira as a master one to catch other similar cases. > Examine code base for cases that exception is thrown from finally block and > fix it > -- > > Key: HDFS-11976 > URL: https://issues.apache.org/jira/browse/HDFS-11976 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Yongjun Zhang > > If exception X is thrown in the try block, and exception Y is thrown in the finally > block, X will be swallowed. > In addition, the finally block is generally used to ensure resources are released > properly. If we throw an exception from there, some resources may be leaked, > so it's not recommended to throw exceptions in the finally block. > I caught one today and reported HDFS-11794; I am creating this jira as a master > one to catch other similar cases. > Hopefully some static analyzer can find them all. 
[jira] [Created] (HDFS-11976) Examine code base for cases that exception is thrown from finally block and fix it
Yongjun Zhang created HDFS-11976: Summary: Examine code base for cases that exception is thrown from finally block and fix it Key: HDFS-11976 URL: https://issues.apache.org/jira/browse/HDFS-11976 Project: Hadoop HDFS Issue Type: Bug Reporter: Yongjun Zhang If exception X is thrown in the try block, and exception Y is thrown in the finally block, X will be swallowed. In addition, the finally block is generally used to ensure resources are released properly. If we throw an exception from there, some resources may be leaked, so it's not recommended to throw exceptions in the finally block. I caught one today and reported HDFS-11794; I am creating this jira as a master one to catch other similar cases.
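The swallowing behavior this jira describes, and one common mitigation — attaching the original exception to the one thrown later via `Throwable.addSuppressed` (available since Java 7) — can be demonstrated with a small self-contained example:

```java
public class FinallySwallow {
    // A throw from a finally block replaces the in-flight exception
    // from the try block, so X is silently lost.
    static String swallowing() {
        try {
            try {
                throw new IllegalStateException("X: real failure");
            } finally {
                throw new RuntimeException("Y: finally failure"); // X vanishes here
            }
        } catch (RuntimeException e) {
            return e.getMessage();
        }
    }

    // Safer pattern: record X and attach it to Y as a suppressed exception,
    // so diagnostics keep both failures.
    static String preserving() {
        Throwable primary = null;
        try {
            try {
                throw new IllegalStateException("X: real failure");
            } catch (RuntimeException e) {
                primary = e;
                throw e;
            } finally {
                RuntimeException y = new RuntimeException("Y: finally failure");
                if (primary != null) {
                    y.addSuppressed(primary);
                }
                throw y;
            }
        } catch (RuntimeException e) {
            return e.getMessage() + " (suppressed: "
                    + e.getSuppressed()[0].getMessage() + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(swallowing());  // only Y survives; X is swallowed
        System.out.println(preserving());  // Y, with X retained as suppressed
    }
}
```

try-with-resources applies the same suppression mechanism automatically when `close()` throws, which is why it is usually preferable to hand-written finally cleanup.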
[jira] [Commented] (HDFS-11606) Add CLI cmd to remove an erasure code policy
[ https://issues.apache.org/jira/browse/HDFS-11606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049810#comment-16049810 ] Lei (Eddy) Xu commented on HDFS-11606: -- Ping [~timmyyao] and [~Sammi] , do we have progress on this? > Add CLI cmd to remove an erasure code policy > > > Key: HDFS-11606 > URL: https://issues.apache.org/jira/browse/HDFS-11606 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Kai Zheng >Assignee: Tim Yao > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-11606.01.patch > > > This is to develop a CLI cmd allowing user to remove a user defined erasure > code policy by specifying its name. Note if the policy is referenced and used > by existing HDFS files, the removal should fail with a good message. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11975) Provide a system-default EC policy
Lei (Eddy) Xu created HDFS-11975: Summary: Provide a system-default EC policy Key: HDFS-11975 URL: https://issues.apache.org/jira/browse/HDFS-11975 Project: Hadoop HDFS Issue Type: Sub-task Components: erasure-coding Affects Versions: 3.0.0-alpha3 Reporter: Lei (Eddy) Xu Assignee: SammiChen From the usability point of view, it'd be nice to be able to specify a system-wide EC policy, i.e., in {{hdfs-site.xml}}. For most users / admins / downstream projects, it is not necessary to know the tradeoffs of the EC policy, considering that it requires knowledge of EC, the actual physical topology of the cluster, and many other factors (e.g., network, cluster size, etc.).
[jira] [Assigned] (HDFS-11949) Add testcase for ensuring that FsShell cann't move file to the target directory that file exists
[ https://issues.apache.org/jira/browse/HDFS-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang reassigned HDFS-11949: Assignee: legend Hi [~legend], I added you to the contributor list and am assigning this jira to you. Thanks. > Add testcase for ensuring that FsShell cann't move file to the target > directory that file exists > > > Key: HDFS-11949 > URL: https://issues.apache.org/jira/browse/HDFS-11949 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.0.0-alpha4 >Reporter: legend >Assignee: legend >Priority: Minor > Attachments: HDFS-11949.patch > > > moveFromLocal returns an error when moving a file to a target directory where the > file exists. So we need to add a test case to check it.
[jira] [Commented] (HDFS-11949) Add testcase for ensuring that FsShell cann't move file to the target directory that file exists
[ https://issues.apache.org/jira/browse/HDFS-11949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049796#comment-16049796 ] Yongjun Zhang commented on HDFS-11949: -- Thanks for reporting and working on this issue, [~legend]. The patch looks good to me, except for the issues reported at https://issues.apache.org/jira/browse/HDFS-11949?focusedCommentId=16042202=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042202. Would you please go through them and try to resolve them? > Add testcase for ensuring that FsShell cann't move file to the target > directory that file exists > > > Key: HDFS-11949 > URL: https://issues.apache.org/jira/browse/HDFS-11949 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.0.0-alpha4 >Reporter: legend >Priority: Minor > Attachments: HDFS-11949.patch > > > moveFromLocal returns an error when moving a file to a target directory where the > file exists. So we need to add a test case to check it.
[jira] [Updated] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-11956: -- Attachment: HDFS-11956.002.patch Attaching updated patch with a unit test. In the test, {{strictSM}} {{BlockTokenSecretManager}} will fail when the passed storageIds are wrong; but {{permissiveSM}} will allow it. {{strictSM}} corresponds to having the config value enabled while {{permissiveSM}} corresponds to it being disabled for legacy clients. > Fix BlockToken compatibility with Hadoop 2.x clients > > > Key: HDFS-11956 > URL: https://issues.apache.org/jira/browse/HDFS-11956 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Ewan Higgs >Priority: Blocker > Attachments: HDFS-11956.001.patch, HDFS-11956.002.patch > > > Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients. > When talking to a 3.0.0-alpha4 DN with security on: > {noformat} > 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Block token verification failed: op=WRITE_BLOCK, > remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs > [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with > StorageIDs [] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11974) Fsimage transfer failed due to socket timeout, but logs doesn't show that
Yongjun Zhang created HDFS-11974: Summary: Fsimage transfer failed due to socket timeout, but logs doesn't show that Key: HDFS-11974 URL: https://issues.apache.org/jira/browse/HDFS-11974 Project: Hadoop HDFS Issue Type: Bug Reporter: Yongjun Zhang Assignee: Yongjun Zhang The idea of HDFS-11914 is to add more diagnosis information to understand what happened when we saw {code} WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: File http://x.y.z:50070/imagetransfer?getimage=1=latest received length xyz is not of the advertised size abc. {code} After further study, I realize that the above exception is thrown in the {{finally}} block of {{TransferFsImage#receiveFile}} method, thus other exception thrown in the main code is not reported, such as SocketTimeOut. We should include the information of the exceptions thrown in the main code when throwing exception in the {{finally}} block. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049638#comment-16049638 ] Hadoop QA commented on HDFS-11971: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 2s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 14s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} compile {color} | {color:green} 7m 34s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 40s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5ae34ac | | JIRA Issue | HDFS-11971 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873012/HDFS-11971.HDFS-8707.002.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux fd976fddab0a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 40e3290 | | Default Java | 1.7.0_131 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_131 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 | | JDK v1.7.0_131 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19907/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19907/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch, > HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file
[jira] [Commented] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049623#comment-16049623 ] Hadoop QA commented on HDFS-11971: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 20s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} compile {color} | {color:green} 7m 39s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 33s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK v1.7.0_131. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:5ae34ac | | JIRA Issue | HDFS-11971 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873012/HDFS-11971.HDFS-8707.002.patch | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 7075f0063906 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 40e3290 | | Default Java | 1.7.0_131 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_131 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 | | JDK v1.7.0_131 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19908/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client 
| | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19908/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch, > HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file
[jira] [Updated] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-11971: - Attachment: HDFS-11971.HDFS-8707.002.patch Had a small typo in the previous patch, just fixed and resubmitted. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch, > HDFS-11971.HDFS-8707.001.patch, HDFS-11971.HDFS-8707.002.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of the codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049561#comment-16049561 ] Hadoop QA commented on HDFS-11956: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | 
{color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 711 unchanged - 0 fixed = 714 total (was 711) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 96m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.tools.TestHdfsConfigFields | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11956 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12873002/HDFS-11956.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 14c8f2a52ed8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision 
| trunk / 999c8fc | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19906/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix BlockToken compatibility with Hadoop 2.x clients > > > Key: HDFS-11956 >
[jira] [Updated] (HDFS-11971) libhdfs++: A few portability issues
[ https://issues.apache.org/jira/browse/HDFS-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-11971: - Attachment: HDFS-11971.HDFS-8707.001.patch Thanks for the review [~James C]. I just moved the relocation of the files to a separate jira (HDFS-11973) so that it will be easier to review. In the new patch here (attached) I addressed all the portability issues mentioned in the jira description and also the Protobuf linking problem mentioned in my previous comment. The problem was that ${PROTOBUF_INCLUDE_DIRS} was missing from the cmake command 'include_directories', and because of that I would get protobuf errors during the build of libhdfspp as a stand alone library. Now all the linking errors should be resolved. Please review the new patch. > libhdfs++: A few portability issues > --- > > Key: HDFS-11971 > URL: https://issues.apache.org/jira/browse/HDFS-11971 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein >Assignee: Anatoli Shein > Attachments: HDFS-11971.HDFS-8707.000.patch, > HDFS-11971.HDFS-8707.001.patch > > > I recently encountered a few portability issues with libhdfs++ while trying > to build it as a stand alone project (and also as part of another Apache > project). > 1. Method fixCase in configuration.h file produces a warning "conversion to > ‘char’ from ‘int’ may alter its value [-Werror=conversion]" which does not > allow libhdfs++ to be compiled as part of the codebase that treats such > warnings as errors (can be fixed with a simple cast). > 2. In CMakeLists.txt file (in libhdfspp directory) we do > find_package(Threads) however we do not link it to the targets (e.g. > hdfspp_static), which causes the build to fail with pthread errors. After the > Threads package is found we need to link it using ${CMAKE_THREAD_LIBS_INIT}. > 3. 
All the tools and examples fail to build as part of a standalone libhdfs++ > because they are missing multiple libraries such as protobuf, ssl, pthread, > etc. This happens because we link them to a shared library hdfspp instead of > hdfspp_static library. We should either link all the tools and examples to > hdfspp_static library or explicitly add linking to all missing libraries for > each tool/example.
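As an aside on item 1 of the description above: the warning arises because std::toupper takes and returns int, so assigning its result directly to a char trips -Werror=conversion. Below is a minimal hedged sketch of the "simple cast" fix; the function name fixCase and its upper-casing behavior are assumptions for illustration, not the actual libhdfs++ configuration.h code:

```cpp
#include <cctype>
#include <string>

// Illustrative reconstruction of the fixCase issue (hypothetical code,
// not the real libhdfs++ implementation).
std::string fixCase(const std::string &key) {
  std::string result;
  for (char c : key) {
    // std::toupper returns int; storing that int into a char produces
    // "conversion to 'char' from 'int' may alter its value
    // [-Werror=conversion]". The explicit static_cast<char> silences it.
    // Casting the argument through unsigned char avoids undefined
    // behavior for negative char values.
    result += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
  }
  return result;
}
```

The same pattern applies anywhere a `<cctype>` function result is narrowed back into a char.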
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049514#comment-16049514 ] Andrew Wang commented on HDFS-11956: Thanks for working on this, Ewan. Is it possible to add a unit test for this? > Fix BlockToken compatibility with Hadoop 2.x clients > > > Key: HDFS-11956 > URL: https://issues.apache.org/jira/browse/HDFS-11956 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Ewan Higgs >Priority: Blocker > Attachments: HDFS-11956.001.patch > > > Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients. > When talking to a 3.0.0-alpha4 DN with security on: > {noformat} > 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Block token verification failed: op=WRITE_BLOCK, > remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs > [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with > StorageIDs [] > {noformat}
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049498#comment-16049498 ] Manoj Govindassamy commented on HDFS-10999: --- Will file follow-on jiras to make use of the newly available stats in {{dfsadmin -report}} and webui. > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL: https://issues.apache.org/jira/browse/HDFS-10999 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have, supportability > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, > HDFS-10999.03.patch, HDFS-10999.04.patch, HDFS-10999.05.patch > > > Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic > term "low redundancy" to the old-fashioned "under replicated". But this term > is still being used in messages in several places, such as web ui, dfsadmin > and fsck. We should probably change them to avoid confusion. > File this jira to discuss it.
[jira] [Commented] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049497#comment-16049497 ] Xiao Chen commented on HDFS-11198: -- Re-added the link as 'is broken by' HDFS-10440, as Yongjun suggested. Thanks. > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Assignee: Weiwei Yang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11198.01.patch, HDFS-11198.02.patch > > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments.
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049496#comment-16049496 ] Manoj Govindassamy commented on HDFS-10999: --- Thanks for the review and commit help [~andrew.wang], [~tasanuma0829], [~eddyxu], [~aw], [~zhz], [~jojochuang]. > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL: https://issues.apache.org/jira/browse/HDFS-10999 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have, supportability > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, > HDFS-10999.03.patch, HDFS-10999.04.patch, HDFS-10999.05.patch > > > Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic > term "low redundancy" to the old-fashioned "under replicated". But this term > is still being used in messages in several places, such as web ui, dfsadmin > and fsck. We should probably change them to avoid confusion. > File this jira to discuss it.
[jira] [Comment Edited] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049490#comment-16049490 ] Yongjun Zhang edited comment on HDFS-11198 at 6/14/17 6:27 PM: --- Hi [~xiaochen], Per offline discussion, it seems HDFS-11198 breaks "NameNode UI Datanodes tab" which is fixed by HDFS-10440; in that case, we can make the link "broke" instead of just "related to". Thanks. was (Author: yzhangal): Hi [~xiaochen], Per offline discussion, it seems HDFS-11198 breaks "NameNode UI Datanodes tab", in that case, we can make the link "broke" instead of just "related to". Thanks. > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Assignee: Weiwei Yang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11198.01.patch, HDFS-11198.02.patch > > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments.
[jira] [Commented] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049490#comment-16049490 ] Yongjun Zhang commented on HDFS-11198: -- Hi [~xiaochen], Per offline discussion, it seems HDFS-11198 breaks "NameNode UI Datanodes tab", in that case, we can make the link "broke" instead of just "related to". Thanks. > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Assignee: Weiwei Yang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11198.01.patch, HDFS-11198.02.patch > > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments.
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049464#comment-16049464 ] Hudson commented on HDFS-10999: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11865 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11865/]) HDFS-10999. Introduce separate stats for Replicated and Erasure Coded (lei: rev 999c8fcbefc876d9c26c23c5b87a64a81e4f113e) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LowRedundancyBlocks.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/ECBlockGroupsStatsMBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/ReplicatedBlocksStatsMBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupsStats.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java * 
(edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReconstructStripedBlocks.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestCorruptReplicaInfo.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestLowRedundancyBlockQueues.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestComputeInvalidateWork.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMaintenanceState.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestReadOnlySharedStorage.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java * (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/BlocksStats.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL:
[jira] [Commented] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049438#comment-16049438 ] Xiao Chen commented on HDFS-11198: -- Thanks for the contributions [~cheersyang] and [~kihwal]. I added a link to HDFS-10440. > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Assignee: Weiwei Yang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11198.01.patch, HDFS-11198.02.patch > > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments.
[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-10999: - Resolution: Fixed Fix Version/s: 3.0.0-alpha4 Status: Resolved (was: Patch Available) +1. Thanks [~manojg]. Committed to trunk. > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL: https://issues.apache.org/jira/browse/HDFS-10999 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have, supportability > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch, > HDFS-10999.03.patch, HDFS-10999.04.patch, HDFS-10999.05.patch > > > Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic > term "low redundancy" to the old-fashioned "under replicated". But this term > is still being used in messages in several places, such as web ui, dfsadmin > and fsck. We should probably change them to avoid confusion. > File this jira to discuss it.
[jira] [Commented] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data
[ https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049415#comment-16049415 ] Hadoop QA commented on HDFS-11948: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 29s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 58s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}102m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.ozone.scm.TestXceiverClientManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | Timed out junit tests | org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11948 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872994/HDFS-11948-HDFS-7240.20170614.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f4b378787790 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 0688a1c | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19905/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19905/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19905/console | |
[jira] [Updated] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-11956: -- Assignee: Ewan Higgs (was: Chris Douglas) Release Note: Introduce dfs.block.access.token.storageid.enable which will be false by default. When it's turned on, the BlockTokenSecretManager.checkAccess will consider the storage ID when verifying the request. This allows for backwards compatibility all the way back to 2.6.x. Status: Patch Available (was: Open) > Fix BlockToken compatibility with Hadoop 2.x clients > > > Key: HDFS-11956 > URL: https://issues.apache.org/jira/browse/HDFS-11956 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Ewan Higgs >Priority: Blocker > Attachments: HDFS-11956.001.patch > > > Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients. > When talking to a 3.0.0-alpha4 DN with security on: > {noformat} > 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Block token verification failed: op=WRITE_BLOCK, > remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs > [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with > StorageIDs [] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
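For readers wiring this up, enabling the stricter check would be a one-property change in hdfs-site.xml; the sketch below is only illustrative — the property name comes from the release note, and the default (false) means no configuration change is needed to keep 2.x clients working:

```xml
<!-- hdfs-site.xml: opt in to storage-ID-aware block token verification.
     Leave at the default (false) while Hadoop 2.x clients still write data. -->
<property>
  <name>dfs.block.access.token.storageid.enable</name>
  <value>true</value>
</property>
```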
[jira] [Updated] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-11956: -- Attachment: HDFS-11956.001.patch Attaching a patch that introduces {{dfs.block.access.token.storageid.enable}} which will be false by default. When it's turned on, the {{BlockTokenSecretManager.checkAccess}} will consider the storage ID when verifying the request. > Fix BlockToken compatibility with Hadoop 2.x clients > > > Key: HDFS-11956 > URL: https://issues.apache.org/jira/browse/HDFS-11956 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Chris Douglas >Priority: Blocker > Attachments: HDFS-11956.001.patch > > > Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients. > When talking to a 3.0.0-alpha4 DN with security on: > {noformat} > 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Block token verification failed: op=WRITE_BLOCK, > remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs > [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with > StorageIDs [] > {noformat}
[jira] [Commented] (HDFS-11956) Fix BlockToken compatibility with Hadoop 2.x clients
[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049350#comment-16049350 ] Ewan Higgs commented on HDFS-11956: --- I took a look and see that this fails when writing blocks. e.g.: {code} hadoop-2.6.5/bin/hdfs dfs -copyFromLocal hello.txt / {code} This comes from the fact that the {{BlockTokenIdentifier}} has the StorageID in there; but the StorageID is an optional field in the request which is new in 3.0. This means that it isn't passed in. Defaulting to 'null' and allowing this would of course defeat the purpose of the BlockTokenIdentifier, so I think this should be fixed with a boolean flag (e.g. {{dfs.block.access.token.storageid.enable}}) which defaults to false and makes the {{BlockTokenSecretManager}} only use the storage ID in the {{checkAccess}} call if it's enabled. This will allow old clients to work; but it won't allow the system to take advantage of new features enabled by using the storage ID in the write calls. > Fix BlockToken compatibility with Hadoop 2.x clients > > > Key: HDFS-11956 > URL: https://issues.apache.org/jira/browse/HDFS-11956 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Chris Douglas >Priority: Blocker > > Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients. > When talking to a 3.0.0-alpha4 DN with security on: > {noformat} > 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Block token verification failed: op=WRITE_BLOCK, > remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs > [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with > StorageIDs [] > {noformat}
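The flag-gated behaviour proposed in the comment can be sketched as follows. This is a hypothetical, self-contained simplification for illustration only: the class and method bodies are not the actual Hadoop sources; only the property name and the checkAccess idea come from the discussion.

```java
// Illustrative sketch of a config-gated storage-ID check; NOT the real
// BlockTokenSecretManager implementation.
import java.util.Collections;
import java.util.List;

public class StorageIdCheckSketch {
    // Stands in for the proposed dfs.block.access.token.storageid.enable setting.
    static boolean storageIdCheckEnabled = false;

    static boolean checkAccess(List<String> tokenStorageIds,
                               List<String> requestStorageIds) {
        // Flag off: skip the storage-ID comparison entirely, so 2.x clients
        // that never send storage IDs in the request are still accepted.
        if (!storageIdCheckEnabled) {
            return true;
        }
        // Flag on (strict mode): every storage ID named in the token must
        // appear in the request.
        return requestStorageIds.containsAll(tokenStorageIds);
    }

    public static void main(String[] args) {
        List<String> tokenIds = Collections.singletonList("DS-example");
        List<String> legacyRequest = Collections.emptyList(); // a 2.x client

        storageIdCheckEnabled = false;
        System.out.println(checkAccess(tokenIds, legacyRequest)); // prints true

        storageIdCheckEnabled = true;
        System.out.println(checkAccess(tokenIds, legacyRequest)); // prints false
    }
}
```

This mirrors the trade-off the comment describes: with the flag off, old clients keep working, but the system cannot use storage IDs on the write path.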
[jira] [Commented] (HDFS-11890) Handle NPE in BlockRecoveryWorker when DN is getting shutdown.
[ https://issues.apache.org/jira/browse/HDFS-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049346#comment-16049346 ] Brahma Reddy Battula commented on HDFS-11890: - [~surendrasingh] thanks for reporting and working on this. Can we have one common method for this? Maybe something like below: {code} DatanodeID getDatanodeID() throws IOException { BPOfferService bpos = datanode.getBPOfferService(bpid); if (bpos == null) { throw new IOException("No block pool offer service for bpid=" + bpid); } return new DatanodeID(bpos.bpRegistration); } {code} > Handle NPE in BlockRecoveryWorker when DN is getting shutdown. > --- > > Key: HDFS-11890 > URL: https://issues.apache.org/jira/browse/HDFS-11890 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.2 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11890-001.patch > > > {code} > Exception in thread > "org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@1c03e6ae" > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:131) > at > org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:596) > at java.lang.Thread.run(Thread.java:748) > {code}
[jira] [Updated] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data
[ https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11948: --- Status: Patch Available (was: Open) > Ozone: change TestRatisManager to check cluster with data > - > > Key: HDFS-11948 > URL: https://issues.apache.org/jira/browse/HDFS-11948 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11948-HDFS-7240.20170614.patch > > > TestRatisManager first creates multiple Ratis clusters. Then it changes the > membership and closes some clusters. However, it does not test the clusters > with data.
[jira] [Updated] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data
[ https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11948: --- Attachment: HDFS-11948-HDFS-7240.20170614.patch HDFS-11948-HDFS-7240.20170614.patch: uses TestOzoneContainer::runTestBothGetandPutSmallFile to check Ratis clusters. > Ozone: change TestRatisManager to check cluster with data > - > > Key: HDFS-11948 > URL: https://issues.apache.org/jira/browse/HDFS-11948 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-11948-HDFS-7240.20170614.patch > > > TestRatisManager first creates multiple Ratis clusters. Then it changes the > membership and closes some clusters. However, it does not test the clusters > with data.
[jira] [Commented] (HDFS-11782) Ozone: KSM: Add listKey
[ https://issues.apache.org/jira/browse/HDFS-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049295#comment-16049295 ] Xiaoyu Yao commented on HDFS-11782: --- [~linyiqun]/[~cheersyang], patch v2 LGTM. I just have a minor issue with the test in TestKeySpaceManager.java {code} 766 storageHandler.newKeyWriter(keyArgs); {code} We should ensure the OutputStreams returned from newKeyWriter are closed properly. +1 otherwise. > Ozone: KSM: Add listKey > --- > > Key: HDFS-11782 > URL: https://issues.apache.org/jira/browse/HDFS-11782 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: ozone >Reporter: Anu Engineer >Assignee: Yiqun Lin > Attachments: HDFS-11782-HDFS-7240.001.patch, > HDFS-11782-HDFS-7240.002.patch > > > Add support for listing keys in a bucket. Just like the other 2 list operations, > this API supports paging via prevKey, prefix and maxKeys.
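The review point above — making sure the streams returned by newKeyWriter get closed — is usually addressed with try-with-resources. The sketch below is self-contained and hypothetical: newKeyWriter() here is only a stand-in for storageHandler.newKeyWriter(keyArgs), which the comment says returns an OutputStream-like handle.

```java
// Minimal sketch: closing a writer deterministically via try-with-resources.
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseKeyWriterSketch {
    static boolean closed = false;

    // Stand-in for storageHandler.newKeyWriter(keyArgs); records close()
    // so we can observe that it actually ran.
    static OutputStream newKeyWriter() {
        return new FilterOutputStream(new ByteArrayOutputStream()) {
            @Override
            public void close() throws IOException {
                super.close();
                closed = true;
            }
        };
    }

    public static void main(String[] args) throws IOException {
        // try-with-resources closes the stream even if write() throws,
        // which is what the test in TestKeySpaceManager.java should guarantee.
        try (OutputStream out = newKeyWriter()) {
            out.write('x');
        }
        System.out.println(closed); // prints true
    }
}
```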
[jira] [Updated] (HDFS-11973) libhdfs++: Remove redundant directories in examples
[ https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anatoli Shein updated HDFS-11973: - Description: In order to keep consistent with the tools and tests I think we should remove one level of directories in the examples folder. E.g. this directory: /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c Should become this: /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c Removing the redundant directories will also simplify our cmake file maintenance. > libhdfs++: Remove redundant directories in examples > --- > > Key: HDFS-11973 > URL: https://issues.apache.org/jira/browse/HDFS-11973 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Anatoli Shein > > In order to keep consistent with the tools and tests I think we should remove > one level of directories in the examples folder. > E.g. this directory: > /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c > Should become this: > /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c > Removing the redundant directories will also simplify our cmake file > maintenance.
[jira] [Created] (HDFS-11973) libhdfs++: Remove redundant directories in examples
Anatoli Shein created HDFS-11973: Summary: libhdfs++: Remove redundant directories in examples Key: HDFS-11973 URL: https://issues.apache.org/jira/browse/HDFS-11973 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Anatoli Shein
[jira] [Commented] (HDFS-11585) Ozone: Support force update a container
[ https://issues.apache.org/jira/browse/HDFS-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049066#comment-16049066 ] Hadoop QA commented on HDFS-11585: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 30s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}104m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | Timed out junit tests | org.apache.hadoop.ozone.container.ozoneimpl.TestRatisManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-11585 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872951/HDFS-11585-HDFS-7240.001.patch | | Optional Tests | asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle | | uname | Linux 766f3407a731 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 0688a1c | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19904/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19904/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output |
[jira] [Updated] (HDFS-11585) Ozone: Support force update a container
[ https://issues.apache.org/jira/browse/HDFS-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HDFS-11585: -- Status: Patch Available (was: Open) > Ozone: Support force update a container > --- > > Key: HDFS-11585 > URL: https://issues.apache.org/jira/browse/HDFS-11585 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Attachments: HDFS-11585-HDFS-7240.001.patch > > > HDFS-11567 added support for updating a container, and in the following cases > # Container is closed > # Container meta file is falsely removed on disk or corrupted > a container cannot be gracefully updated. It is useful to support a forced > update if a container gets into such a state; that gives us the chance to > repair the metadata.
[jira] [Updated] (HDFS-11585) Ozone: Support force update a container
[ https://issues.apache.org/jira/browse/HDFS-11585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated HDFS-11585: -- Attachment: HDFS-11585-HDFS-7240.001.patch Upload v1 patch for this JIRA. > Ozone: Support force update a container > --- > > Key: HDFS-11585 > URL: https://issues.apache.org/jira/browse/HDFS-11585 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Attachments: HDFS-11585-HDFS-7240.001.patch > > > HDFS-11567 added support for updating a container, and in the following cases > # Container is closed > # Container meta file is falsely removed on disk or corrupted > a container cannot be gracefully updated. It is useful to support a forced > update if a container gets into such a state; that gives us the chance to > repair the metadata.
[jira] [Commented] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048880#comment-16048880 ] Hadoop QA commented on HDFS-11670: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 28s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 
0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 58s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11670 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12872929/HDFS-11670-HDFS-10285.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3a8308189d9a 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 6d428ed | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19903/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/19903/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19903/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 > URL: https://issues.apache.org/jira/browse/HDFS-11670 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11670-HDFS-10285.001.patch, > HDFS-11670-HDFS-10285.002.patch,
[jira] [Commented] (HDFS-11679) Ozone: SCM CLI: Implement list container command
[ https://issues.apache.org/jira/browse/HDFS-11679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048754#comment-16048754 ] Weiwei Yang commented on HDFS-11679: Hi [~yuanbo] bq. After HDFS-11926, propose to use "-prefix" instead of "-end" for consistency. I am +1 on this idea. Thank you. > Ozone: SCM CLI: Implement list container command > > > Key: HDFS-11679 > URL: https://issues.apache.org/jira/browse/HDFS-11679 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: command-line > Attachments: HDFS-11679-HDFS-7240.001.patch, > HDFS-11679-HDFS-7240.002.patch, HDFS-11679-HDFS-7240.003.patch > > > Implement the command to list containers > {code} > hdfs scm -container list -start [-count <100> | -end > ]{code} > Lists all containers known to SCM. The option -start allows the listing to > start from a specified container and -count controls the number of entries > returned, but it is mutually exclusive with the -end option, which returns keys > from the -start to -end range.
[jira] [Commented] (HDFS-11782) Ozone: KSM: Add listKey
[ https://issues.apache.org/jira/browse/HDFS-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048752#comment-16048752 ] Weiwei Yang commented on HDFS-11782: Hi [~linyiqun] Your patch looks good to me except for the point I mentioned earlier. Personally I am fine with tracking the remaining work in HDFS-11886, but let's get confirmation from [~anu] or [~xyao]. Thank you. > Ozone: KSM: Add listKey > --- > > Key: HDFS-11782 > URL: https://issues.apache.org/jira/browse/HDFS-11782 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: ozone >Reporter: Anu Engineer >Assignee: Yiqun Lin > Attachments: HDFS-11782-HDFS-7240.001.patch, > HDFS-11782-HDFS-7240.002.patch > > > Add support for listing keys in a bucket. Just like the other 2 list operations, > this API supports paging via prevKey, prefix and maxKeys.
[jira] [Commented] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048747#comment-16048747 ] Surendra Singh Lilhore commented on HDFS-11670: --- V5: Fixed whitespace warnings > [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 > URL: https://issues.apache.org/jira/browse/HDFS-11670 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11670-HDFS-10285.001.patch, > HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch, > HDFS-11670-HDFS-10285.004.patch, HDFS-11670-HDFS-10285.005.patch > > > This jira is to discuss and implement a set of satisfy storage policy > sub-commands. Following is the list of sub-commands: > # Schedule blocks to move based on file/directory policy: > {code}hdfs storagepolicies -satisfyStoragePolicy -path ]{code} > # It's good to have one command to check whether SPS is enabled or not. Based on this > the user can take the decision to run the Mover: > {code} > hdfs storagepolicies -isSPSRunning > {code}
[jira] [Updated] (HDFS-11670) [SPS]: Add CLI command for satisfy storage policy operations
[ https://issues.apache.org/jira/browse/HDFS-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-11670: -- Attachment: HDFS-11670-HDFS-10285.005.patch > [SPS]: Add CLI command for satisfy storage policy operations > > > Key: HDFS-11670 > URL: https://issues.apache.org/jira/browse/HDFS-11670 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: HDFS-10285 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-11670-HDFS-10285.001.patch, > HDFS-11670-HDFS-10285.002.patch, HDFS-11670-HDFS-10285.003.patch, > HDFS-11670-HDFS-10285.004.patch, HDFS-11670-HDFS-10285.005.patch > > > This jira is to discuss and implement a set of satisfy storage policy > sub-commands. Following is the list of sub-commands: > # Schedule blocks to move based on file/directory policy: > {code}hdfs storagepolicies -satisfyStoragePolicy -path ]{code} > # It's good to have one command to check whether SPS is enabled or not. Based on this > the user can take the decision to run the Mover: > {code} > hdfs storagepolicies -isSPSRunning > {code}