[jira] [Commented] (HDFS-11622) TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost
[ https://issues.apache.org/jira/browse/HDFS-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958293#comment-15958293 ]

Masatake Iwasaki commented on HDFS-11622:
-----------------------------------------

Thanks for reporting this, [~karanmehta93]. We cannot determine which trace id to set if a span has multiple parents. While this is a limitation of HTrace since the multiple-parent feature was added, I think it is reasonable to set the trace id of the parent when {{parents.length == 1}}, for applications that currently use the trace id to analyze tracing spans.

> TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost
> ----------------------------------------------------------------------------------
>
> Key: HDFS-11622
> URL: https://issues.apache.org/jira/browse/HDFS-11622
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: tracing
> Reporter: Karan Mehta
>
> In the {{run()}} method of the {{DataStreamer}} class, the following code is
> written. {{parents\[0\]}} refers to the {{spanId}} of the parent span.
> {code}
> one = dataQueue.getFirst(); // regular data packet
> long parents[] = one.getTraceParents();
> if (parents.length > 0) {
>   scope = Trace.startSpan("dataStreamer", new TraceInfo(0, parents[0]));
>   // TODO: use setParents API once it's available from HTrace 3.2
>   // scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS);
>   // scope.getSpan().setParents(parents);
> }
> {code}
> The {{scope}} starts a new TraceSpan with a traceId hardcoded to 0. Ideally
> it should be the trace id captured when {{currentPacket.addTraceParent(Trace.currentSpan())}}
> is invoked. This JIRA proposes an additional long field inside the
> {{DFSPacket}} class which holds the parent {{traceId}}.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
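Masatake's suggestion above — reusing the parent's trace id only when there is exactly one parent — can be sketched without HTrace using a small hypothetical helper. The method name and the fallback value 0 are assumptions for illustration, not the actual patch:

```java
public class TraceParentSketch {
    // Hypothetical helper: reuse the parent's trace id only when there is
    // exactly one parent; with multiple parents no single trace id can be
    // chosen, so fall back to the hardcoded 0 the bug report describes.
    static long chooseTraceId(long[] parentTraceIds) {
        return (parentTraceIds.length == 1) ? parentTraceIds[0] : 0L;
    }

    public static void main(String[] args) {
        System.out.println(chooseTraceId(new long[]{42L}));    // single parent: 42
        System.out.println(chooseTraceId(new long[]{1L, 2L})); // ambiguous: 0
    }
}
```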
[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand
[ https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958266#comment-15958266 ]

Hadoop QA commented on HDFS-11062:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| +1 | mvninstall | 15m 59s | HDFS-7240 passed |
| +1 | compile | 0m 59s | HDFS-7240 passed |
| +1 | checkstyle | 0m 42s | HDFS-7240 passed |
| +1 | mvnsite | 1m 4s | HDFS-7240 passed |
| +1 | mvneclipse | 0m 15s | HDFS-7240 passed |
| +1 | findbugs | 2m 18s | HDFS-7240 passed |
| +1 | javadoc | 0m 56s | HDFS-7240 passed |
| +1 | mvninstall | 0m 59s | the patch passed |
| +1 | compile | 0m 55s | the patch passed |
| +1 | cc | 0m 55s | the patch passed |
| +1 | javac | 0m 55s | the patch passed |
| +1 | checkstyle | 0m 40s | the patch passed |
| +1 | mvnsite | 1m 1s | the patch passed |
| +1 | mvneclipse | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 40s | the patch passed |
| +1 | javadoc | 1m 2s | the patch passed |
| -1 | unit | 71m 5s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 103m 9s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockCLI |
| | hadoop.cblock.TestCBlockServerPersistence |
| | hadoop.hdfs.TestDFSUpgradeFromImage |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11062 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862214/HDFS-11062-HDFS-7240.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 616a6acf7b33 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 7ce1090 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18992/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18992/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18992/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Ozone:SCM: Explore if we can remove nullcommand
> -----------------------------------------------
>
> Key: HDFS-11062
> URL:
[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager
[ https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958233#comment-15958233 ]

Weiwei Yang commented on HDFS-11569:
------------------------------------

Hi [~anu]

Thanks for your comments; apparently I missed the web handler part. I will implement this. I also noticed that the deleteKey function has been implemented in {{KeyManagerImpl}} but not yet in {{DistributedStorageHandler}}; we need to get that done too (maybe in another jira), right?

About pagination, you make a good point. It looks better to simply honor the arguments {{prefix}}, {{prevKey}} and {{maxKeys}}, pass them to the container layer, and return the desired set of keys. That means we do not need pagination on the server side; instead we let the client request a result set of the proper size, with {{maxKeys}} defaulting to 1000. Please let me know whether this approach looks good to you. Thank you.

> Ozone: Implement listKey function for KeyManager
> ------------------------------------------------
>
> Key: HDFS-11569
> URL: https://issues.apache.org/jira/browse/HDFS-11569
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-11569-HDFS-7240.001.patch, HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch
>
> List keys by prefix from a container. This will need to support pagination
> for the purpose of small object support. So the listKey function returns
> something like ListKeyResult; the client can iterate the object to get
> paginated results.
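The client-driven pagination described above — honoring {{prefix}}, {{prevKey}} and {{maxKeys}} instead of keeping pagination state on the server — can be sketched against an in-memory sorted key store. The method shape and names below are assumptions for illustration, not the actual KeyManager API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public class ListKeySketch {
    // Hypothetical server-side page: keys strictly after prevKey that match
    // prefix, up to maxKeys entries. The server keeps no cursor state; the
    // client drives pagination by passing the last key it saw as prevKey.
    static List<String> listKeys(NavigableSet<String> store, String prefix,
                                 String prevKey, int maxKeys) {
        List<String> page = new ArrayList<>();
        // tailSet(prevKey, false) excludes prevKey itself from the next page.
        Iterable<String> tail = (prevKey == null) ? store : store.tailSet(prevKey, false);
        for (String key : tail) {
            if (!key.startsWith(prefix)) continue;
            page.add(key);
            if (page.size() == maxKeys) break;
        }
        return page;
    }

    public static void main(String[] args) {
        NavigableSet<String> store =
            new TreeSet<>(Arrays.asList("a/1", "a/2", "a/3", "b/1"));
        System.out.println(listKeys(store, "a/", null, 2));  // [a/1, a/2]
        System.out.println(listKeys(store, "a/", "a/2", 2)); // [a/3]
    }
}
```

A real caller would loop, feeding the last key of each page back in as {{prevKey}} until a page comes back smaller than {{maxKeys}}.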
[jira] [Commented] (HDFS-10848) Move hadoop-hdfs-native-client module into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958225#comment-15958225 ]

Kai Zheng commented on HDFS-10848:
----------------------------------

Sounds like a good idea. What kinds of code would the new module {{hadoop-hdfs-common}} contain?

> Move hadoop-hdfs-native-client module into hadoop-hdfs-client
> -------------------------------------------------------------
>
> Key: HDFS-10848
> URL: https://issues.apache.org/jira/browse/HDFS-10848
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Akira Ajisaka
> Assignee: Huafeng Wang
> Attachments: HDFS-10848.001.patch
>
> When a patch changes the hadoop-hdfs-client module, Jenkins does not pick up the
> tests in the native code. That way we overlooked a test failure when committing
> the patch. (ex. HDFS-10844)
> [~aw] said in HDFS-10844,
> bq. Ideally, all of this native code would be hdfs-client. Then when a change
> is made to that code, this code will also get tested.
[jira] [Commented] (HDFS-10848) Move hadoop-hdfs-native-client module into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958215#comment-15958215 ]

Huafeng Wang commented on HDFS-10848:
-------------------------------------

Maybe introduce a hadoop-hdfs-common to break the cyclic dependency? Kind of like how hadoop-yarn-project structures its modules:

hadoop-hdfs-common <- hadoop-hdfs-client
hadoop-hdfs-common <- hadoop-hdfs-server
hadoop-hdfs-server <- hadoop-hdfs-client (test)

Just a rough idea.

> Move hadoop-hdfs-native-client module into hadoop-hdfs-client
> -------------------------------------------------------------
>
> Key: HDFS-10848
> URL: https://issues.apache.org/jira/browse/HDFS-10848
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Akira Ajisaka
> Assignee: Huafeng Wang
> Attachments: HDFS-10848.001.patch
>
> When a patch changes the hadoop-hdfs-client module, Jenkins does not pick up the
> tests in the native code. That way we overlooked a test failure when committing
> the patch. (ex. HDFS-10844)
> [~aw] said in HDFS-10844,
> bq. Ideally, all of this native code would be hdfs-client. Then when a change
> is made to that code, this code will also get tested.
[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand
[ https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958193#comment-15958193 ]

Yuanbo Liu commented on HDFS-11062:
-----------------------------------

[~anu] Thanks for your review. Uploaded the v3 patch to address the checkstyle and test failures.

> Ozone:SCM: Explore if we can remove nullcommand
> -----------------------------------------------
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Yuanbo Liu
> Fix For: HDFS-7240
> Attachments: HDFS-11062-HDFS-7240.001.patch, HDFS-11062-HDFS-7240.002.patch, HDFS-11062-HDFS-7240.003.patch
>
> In the SCM protocol we have a nullCommand that gets returned as the default case.
> Explore if we can remove this.
[jira] [Updated] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand
[ https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuanbo Liu updated HDFS-11062:
------------------------------
Attachment: HDFS-11062-HDFS-7240.003.patch

> Ozone:SCM: Explore if we can remove nullcommand
> -----------------------------------------------
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Yuanbo Liu
> Fix For: HDFS-7240
> Attachments: HDFS-11062-HDFS-7240.001.patch, HDFS-11062-HDFS-7240.002.patch, HDFS-11062-HDFS-7240.003.patch
>
> In the SCM protocol we have a nullCommand that gets returned as the default case.
> Explore if we can remove this.
[jira] [Commented] (HDFS-11583) Parent spans not initialized to NullScope for every DFSPacket
[ https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958178#comment-15958178 ]

Masatake Iwasaki commented on HDFS-11583:
-----------------------------------------

[~jamestaylor], I will comment on HDFS-11622. Thanks for pinging me.

> Parent spans not initialized to NullScope for every DFSPacket
> -------------------------------------------------------------
>
> Key: HDFS-11583
> URL: https://issues.apache.org/jira/browse/HDFS-11583
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: tracing
> Affects Versions: 2.7.1
> Reporter: Karan Mehta
> Assignee: Masatake Iwasaki
> Attachments: HDFS-11583-branch-2.7.001.patch, HDFS-11583-branch-2.7.002.patch, HDFS-11583-branch-2.7.003.patch
>
> The issue was found while working with PHOENIX-3752.
> For each packet received in the {{run()}} method of the {{DataStreamer}} class,
> the {{parents}} field of the {{DFSPacket}} is used to create a new
> {{dataStreamer}} span, which in turn creates a {{writeTo}} span as its child.
> The parents field is initialized when the packet is added to the {{dataQueue}},
> and the value comes from the {{ThreadLocal}}. This is how HTrace handles spans.
> A {{TraceScope}} is created and initialized to {{NullScope}} before the loop,
> which runs until the stream is closed.
> Consider the scenario where the {{dataQueue}} contains multiple packets, only
> the first of which has tracing enabled. The scope is initialized to the
> {{dataStreamer}} scope and a {{writeTo}} span is created as its child, which is
> closed once the packet is sent out to a remote datanode. Before the {{writeTo}}
> span is started, the {{dataStreamer}} scope is detached, so calling close on it
> does nothing at the end of the loop.
> The second iteration then uses the stale value of the {{scope}} variable with a
> DFSPacket on which tracing is not enabled. This results in orphan {{writeTo}}
> spans being delivered to the {{SpanReceiver}} registered in the trace
> framework. This may result in an unbounded number of spans being generated and
> sent to the receiver.
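The fix direction implied by the issue title — reinitializing the scope for every packet so a stale {{dataStreamer}} scope is never reused — can be illustrated with minimal stand-ins for HTrace's scope types. These stand-ins are assumptions for illustration, not the real HTrace API:

```java
import java.util.ArrayList;
import java.util.List;

public class ScopeResetSketch {
    // Minimal stand-ins for HTrace's TraceScope/NullScope (illustrative only).
    interface Scope { boolean isTracing(); }
    static final Scope NULL_SCOPE = () -> false;
    static final Scope TRACED = () -> true;

    // Simulates the streamer loop: each element says whether that packet
    // carries trace parents. Returns, per packet, whether a child span
    // would be emitted.
    static List<Boolean> streamerLoop(boolean[] packetTraced) {
        List<Boolean> emitted = new ArrayList<>();
        for (boolean hasParents : packetTraced) {
            Scope scope = NULL_SCOPE;      // the fix: reset to NullScope per packet
            if (hasParents) {
                scope = TRACED;            // start dataStreamer span only when parents exist
            }
            emitted.add(scope.isTracing()); // stand-in for creating the writeTo child span
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Only the first packet is traced; without the per-packet reset, the
        // stale scope would make packets 2 and 3 emit orphan spans.
        System.out.println(streamerLoop(new boolean[]{true, false, false}));
        // [true, false, false]
    }
}
```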
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958171#comment-15958171 ]

Hadoop QA commented on HDFS-11576:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
| 0 | mvndep | 0m 25s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 11s | trunk passed |
| +1 | compile | 15m 17s | trunk passed |
| +1 | checkstyle | 2m 6s | trunk passed |
| +1 | mvnsite | 2m 7s | trunk passed |
| +1 | mvneclipse | 0m 38s | trunk passed |
| +1 | findbugs | 3m 18s | trunk passed |
| +1 | javadoc | 1m 36s | trunk passed |
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 26s | the patch passed |
| +1 | compile | 14m 11s | the patch passed |
| +1 | javac | 14m 11s | the patch passed |
| -0 | checkstyle | 2m 3s | root: The patch generated 4 new + 804 unchanged - 1 fixed = 808 total (was 805) |
| +1 | mvnsite | 1m 58s | the patch passed |
| +1 | mvneclipse | 0m 39s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 3m 30s | the patch passed |
| +1 | javadoc | 1m 35s | the patch passed |
| +1 | unit | 7m 25s | hadoop-common in the patch passed. |
| -1 | unit | 64m 24s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 138m 14s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11576 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862198/HDFS-11576.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 824610890c4b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2c57bb |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18991/artifact/patchprocess/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18991/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958160#comment-15958160 ]

Kai Zheng commented on HDFS-11623:
----------------------------------

bq. Should I rename the newly split ErasureCodingPolicies to SystemErasureCodingPolicies for clarity?

Yeah, that would be more accurate. Thanks!

bq. the ECPManager is responsible for managing the set of enabled policies, which is different from the full set of system policies.

I got your idea, so in your view the manager manages the set of *enabled* policies (from system ones and user-defined ones). In my thought, we need a central place to manage:
* system policies, i.e. the built-in ones;
* user-defined policies, input by a CLI command via an XML file;

Or:
* enabled policies;
* available policies to choose from and enable;
* maybe disabled policies, for those admins want to blacklist but can't simply remove because they are already used by some data.

It could have:
* a method to get a list of all policies;
* a method to get a policy by name or id;
* logic to deal with fsimage/editlog persisting of user-defined policies, and their removal;
* maybe a method like {{isSystemPolicy}} to tell whether a policy is a system one.

Maybe server-side tests prefer the manager over the new {{SystemErasureCodingPolicies}}, since the manager will also support testing user-defined policies. I think it's a good time to discuss what the manager should do, as quite a few on-going issues may relate to it.

> Move system erasure coding policies into hadoop-hdfs-client
> -----------------------------------------------------------
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha2
> Reporter: Andrew Wang
> Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch
>
> This is a precursor to HDFS-11565. We need to move the set of system-defined
> EC policies out of the NameNode's ECPolicyManager into the hdfs-client module
> so it can be referenced by the client.
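One hedged way to picture the split discussed in the HDFS-11623 thread — an immutable system-policy table (usable from the client module) plus a manager that tracks only the enabled set and answers {{isSystemPolicy}} — is the following sketch. The class shapes and the policy names are illustrative assumptions, not the actual HDFS code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ErasureCodingPolicySketch {
    // Hypothetical immutable system-policy table; real HDFS policy names
    // and their home class may differ.
    static final List<String> SYSTEM_POLICIES =
        Arrays.asList("RS-6-3-64k", "RS-3-2-64k", "XOR-2-1-64k");

    // Hypothetical manager: knows which policies are enabled, which is a
    // different (usually smaller) set than all system policies.
    static class PolicyManager {
        private final Set<String> enabled = new HashSet<>();

        void enable(String name) {
            if (!SYSTEM_POLICIES.contains(name)) {
                throw new IllegalArgumentException("unknown policy: " + name);
            }
            enabled.add(name);
        }

        boolean isSystemPolicy(String name) {
            return SYSTEM_POLICIES.contains(name);
        }

        Set<String> getEnabled() {
            return enabled;
        }
    }

    public static void main(String[] args) {
        PolicyManager mgr = new PolicyManager();
        mgr.enable("RS-6-3-64k");
        System.out.println(mgr.isSystemPolicy("RS-6-3-64k")); // true: built-in
        System.out.println(mgr.getEnabled().size());          // 1: only what was enabled
    }
}
```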
[jira] [Updated] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDFS-11302:
------------------------------
Fix Version/s: 2.9.0

> Improve Logging for SSLHostnameVerifier
> ---------------------------------------
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: security
> Reporter: Xiaoyu Yao
> Assignee: Chen Liang
> Fix For: 2.9.0, 3.0.0-alpha3
> Attachments: HDFS-11302.001.patch
>
> The SSLHostnameVerifier interface/class was copied from other projects without
> any logging to help troubleshoot SSL certificate related issues. With a
> misconfigured SSL truststore, we may get a very confusing error message like
> {code}
> >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
> ...
> cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be
> cat: DN2:50475: HTTPS hostname wrong: should be
> {code}
> This ticket is opened to add tracing to give more useful context information
> around SSL certificate verification failures inside the following code.
> {code}AbstractVerifier#check(String[] host, X509Certificate cert){code}
[jira] [Comment Edited] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958097#comment-15958097 ]

Xiaoyu Yao edited comment on HDFS-11302 at 4/6/17 1:33 AM:
-----------------------------------------------------------

Thanks [~vagarychen] for the contribution and all for the reviews. I've committed the fix to trunk and 2.9.0.

was (Author: xyao): Thanks [~vagarychen] for the contribution and all for the reviews. I've committed the fix to trunk.

> Improve Logging for SSLHostnameVerifier
> ---------------------------------------
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: security
> Reporter: Xiaoyu Yao
> Assignee: Chen Liang
> Fix For: 2.9.0, 3.0.0-alpha3
> Attachments: HDFS-11302.001.patch
>
> The SSLHostnameVerifier interface/class was copied from other projects without
> any logging to help troubleshoot SSL certificate related issues. With a
> misconfigured SSL truststore, we may get a very confusing error message like
> {code}
> >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
> ...
> cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be
> cat: DN2:50475: HTTPS hostname wrong: should be
> {code}
> This ticket is opened to add tracing to give more useful context information
> around SSL certificate verification failures inside the following code.
> {code}AbstractVerifier#check(String[] host, X509Certificate cert){code}
[jira] [Commented] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958144#comment-15958144 ]

Hudson commented on HDFS-11131:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11539 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11539/])
HDFS-11131. TestThrottledAsyncChecker#testCancellation is flaky. (arp: rev 8c57aeb5b4fcf9f688c0f00df684b9125f683250)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/checker/TestThrottledAsyncChecker.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/ThrottledAsyncChecker.java

> TestThrottledAsyncChecker#testCancellation is flaky
> ---------------------------------------------------
>
> Key: HDFS-11131
> URL: https://issues.apache.org/jira/browse/HDFS-11131
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0-alpha2
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha3
> Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch, HDFS-11131.03.patch
>
> This test failed in a few precommit runs, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/18952/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testCancellation/
[jira] [Commented] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15958135#comment-15958135 ]

Hudson commented on HDFS-11302:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11538 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11538/])
HDFS-11302. Improve Logging for SSLHostnameVerifier. Contributed by Chen (xyao: rev 32bb36b750ab656f2f32f6c74eaa1a3e68ae956e)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java

> Improve Logging for SSLHostnameVerifier
> ---------------------------------------
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: security
> Reporter: Xiaoyu Yao
> Assignee: Chen Liang
> Fix For: 3.0.0-alpha3
> Attachments: HDFS-11302.001.patch
>
> The SSLHostnameVerifier interface/class was copied from other projects without
> any logging to help troubleshoot SSL certificate related issues. With a
> misconfigured SSL truststore, we may get a very confusing error message like
> {code}
> >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
> ...
> cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be
> cat: DN2:50475: HTTPS hostname wrong: should be
> {code}
> This ticket is opened to add tracing to give more useful context information
> around SSL certificate verification failures inside the following code.
> {code}AbstractVerifier#check(String[] host, X509Certificate cert){code}
[jira] [Updated] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDFS-11131:
---------------------------------
Component/s: test

> TestThrottledAsyncChecker#testCancellation is flaky
> ---------------------------------------------------
>
> Key: HDFS-11131
> URL: https://issues.apache.org/jira/browse/HDFS-11131
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0-alpha2
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha3
> Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch, HDFS-11131.03.patch
>
> This test failed in a few precommit runs, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/18952/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testCancellation/
[jira] [Updated] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDFS-11131:
---------------------------------
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
               2.9.0
Status: Resolved (was: Patch Available)

I've committed this. Thanks all for reporting and reviewing the fix.

> TestThrottledAsyncChecker#testCancellation is flaky
> ---------------------------------------------------
>
> Key: HDFS-11131
> URL: https://issues.apache.org/jira/browse/HDFS-11131
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0-alpha2
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Fix For: 2.9.0, 3.0.0-alpha3
> Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch, HDFS-11131.03.patch
>
> This test failed in a few precommit runs, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/18952/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testCancellation/
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958100#comment-15958100 ] Chen Liang commented on HDFS-11608: --- v002 patch LGTM, but it looks like {{TestDFSOutputStream#testNoLocalWriteFlag}} is consistently failing in my local runs. I noticed though, it is always the next test right after the newly added test {{TestDFSOutputStream#testPreventOverflow}}. Disabling the new test it will pass. I guess the new test modified {{cluster}} variable in some way, causing the next test {{testPreventOverflow}} to fail. All the other failed tests passed in my local run, so probably unrelated. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. 
> Given below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat}
[jira] [Updated] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11302: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha3 Target Version/s: (was: 2.8.1) Status: Resolved (was: Patch Available) Thanks [~vagarychen] for the contribution and all for the reviews. I've committed the fix to trunk. > Improve Logging for SSLHostnameVerifier > --- > > Key: HDFS-11302 > URL: https://issues.apache.org/jira/browse/HDFS-11302 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Reporter: Xiaoyu Yao >Assignee: Chen Liang > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11302.001.patch > > > The SSLHostnameVerifier interface/class was copied from other projects without > any logging to help troubleshoot SSL certificate related issues. For a > misconfigured SSL truststore, we may get a very confusing error message > like > {code} > >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt > ... > cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be > cat: DN2:50475: HTTPS hostname wrong: should be > {code} > This ticket is opened to add tracing to give more useful context information > around SSL certificate verification failures inside the following code. > {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}
[jira] [Commented] (HDFS-10882) Federation State Store Interface API
[ https://issues.apache.org/jira/browse/HDFS-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958092#comment-15958092 ] Chris Douglas commented on HDFS-10882: -- Just a few minor points: * The static initialization of {{StateStoreSerializer}} could be cleaned up, and only cache the default instance * Could {{StateStoreSerializerPBImpl#newRecordInstance}} callers use {{ReflectionUtils#newInstance}}? * What's the difference between a "required record" and other records? This might be clarified in the javadocs Otherwise +1 > Federation State Store Interface API > > > Key: HDFS-10882 > URL: https://issues.apache.org/jira/browse/HDFS-10882 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Jason Kace >Assignee: Jason Kace > Attachments: HDFS-10882-HDFS-10467-001.patch, > HDFS-10882-HDFS-10467-002.patch, HDFS-10882-HDFS-10467-003.patch, > HDFS-10882-HDFS-10467-004.patch > > > The minimal classes and interfaces required to create state store internal > data APIs using protobuf serialization. This is a pre-requisite for higher > level APIs such as the registration API and the mount table API.
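The first review point above (clean up the static initialization and only cache the default instance) could be sketched roughly as follows; the class and method names are simplified stand-ins for illustration, not the actual {{StateStoreSerializer}} code:

```java
// Hypothetical sketch of lazily caching a single default serializer
// instance instead of constructing it in a static initializer.
// Stand-in names only, not the actual Hadoop classes.
public class Main {
    interface Serializer { String name(); }

    static class DefaultSerializer implements Serializer {
        public String name() { return "default"; }
    }

    // Cached default instance, created on first use so construction
    // failures surface at call time rather than at class-load time.
    private static volatile Serializer defaultInstance;

    static Serializer getDefault() {
        if (defaultInstance == null) {
            synchronized (Main.class) {
                if (defaultInstance == null) {
                    defaultInstance = new DefaultSerializer();
                }
            }
        }
        return defaultInstance;
    }

    public static void main(String[] args) {
        // Repeated calls return the same cached instance.
        System.out.println(Main.getDefault() == Main.getDefault());
    }
}
```

Non-default serializers would still be instantiated per request (e.g. via reflection), with only the default path cached.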
[jira] [Commented] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958089#comment-15958089 ] Xiaoyu Yao commented on HDFS-11302: --- +1 for the patch too. I will commit it shortly. > Improve Logging for SSLHostnameVerifier > --- > > Key: HDFS-11302 > URL: https://issues.apache.org/jira/browse/HDFS-11302 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Reporter: Xiaoyu Yao >Assignee: Chen Liang > Attachments: HDFS-11302.001.patch > > > The SSLHostnameVerifier interface/class was copied from other projects without > any logging to help troubleshoot SSL certificate related issues. For a > misconfigured SSL truststore, we may get a very confusing error message > like > {code} > >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt > ... > cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be > cat: DN2:50475: HTTPS hostname wrong: should be > {code} > This ticket is opened to add tracing to give more useful context information > around SSL certificate verification failures inside the following code. > {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}
[jira] [Commented] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType
[ https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958087#comment-15958087 ] Xiaoyu Yao commented on HDFS-11362: --- [~hanishakoneru], thanks for reporting the issue and posting the fix. The patch looks good to me overall. I just have one suggestion: is it possible to avoid the null check by ensuring that StorageDirectory#getStorageDirType never returns null? For example, we could set a default value of StorageDirectory#dirType to NameNodeDirType#UNDEFINED if it is not assigned in the constructor. > Storage#shouldReturnNextDir should check for null dirType > - > > Key: HDFS-11362 > URL: https://issues.apache.org/jira/browse/HDFS-11362 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HDFS-11362.000.patch > > > The _Storage#shouldReturnNextDir_ method checks if the next Storage directory is > of the same type as dirType. > {noformat} > private boolean shouldReturnNextDir() { > StorageDirectory sd = getStorageDir(nextIndex); > return (dirType == null || sd.getStorageDirType().isOfType(dirType)) && > (includeShared || !sd.isShared()); > } > {noformat} > There is a possibility that sd.getStorageDirType() returns null (the default > dirType is null). Hence, before checking for a type match, we should make sure > that the value returned by sd.getStorageDirType() is not null.
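The suggestion above (defaulting the directory type rather than null-checking at every call site) could look roughly like this; the enum and class below are simplified stand-ins for {{StorageDirectory}} and {{NameNodeDirType}}, not the actual patch:

```java
// Hedged sketch: give StorageDirectory a non-null default dirType so
// callers such as shouldReturnNextDir() need no null check.
// Simplified stand-in classes, not the real Hadoop ones.
public class Main {
    enum DirType {
        UNDEFINED, IMAGE, EDITS;
        boolean isOfType(DirType other) { return this == other; }
    }

    static class StorageDirectory {
        private final DirType dirType;
        // Fall back to UNDEFINED when no type is supplied, instead of
        // storing null.
        StorageDirectory(DirType dirType) {
            this.dirType = (dirType == null) ? DirType.UNDEFINED : dirType;
        }
        DirType getStorageDirType() { return dirType; }
    }

    public static void main(String[] args) {
        StorageDirectory sd = new StorageDirectory(null);
        // No NullPointerException, and UNDEFINED matches no real type.
        System.out.println(sd.getStorageDirType().isOfType(DirType.IMAGE));
    }
}
```

This is the null-object pattern: the type check stays a single expression and the invariant "dirType is never null" is enforced in one place, the constructor.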
[jira] [Commented] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType
[ https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958080#comment-15958080 ] Chen Liang commented on HDFS-11362: --- +1 for the v000 patch; the failed tests are unrelated and passed in my local run. > Storage#shouldReturnNextDir should check for null dirType > - > > Key: HDFS-11362 > URL: https://issues.apache.org/jira/browse/HDFS-11362 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HDFS-11362.000.patch > > > The _Storage#shouldReturnNextDir_ method checks if the next Storage directory is > of the same type as dirType. > {noformat} > private boolean shouldReturnNextDir() { > StorageDirectory sd = getStorageDir(nextIndex); > return (dirType == null || sd.getStorageDirType().isOfType(dirType)) && > (includeShared || !sd.isShared()); > } > {noformat} > There is a possibility that sd.getStorageDirType() returns null (the default > dirType is null). Hence, before checking for a type match, we should make sure > that the value returned by sd.getStorageDirType() is not null.
[jira] [Commented] (HDFS-11583) Parent spans not initialized to NullScope for every DFSPacket
[ https://issues.apache.org/jira/browse/HDFS-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958074#comment-15958074 ] James Taylor commented on HDFS-11583: - [~iwasakims] - thanks so much for looking at this patch. FYI, HDFS-11622 has been filed for the issue mentioned by [~samarthjain]. > Parent spans not initialized to NullScope for every DFSPacket > - > > Key: HDFS-11583 > URL: https://issues.apache.org/jira/browse/HDFS-11583 > Project: Hadoop HDFS > Issue Type: Bug > Components: tracing >Affects Versions: 2.7.1 >Reporter: Karan Mehta >Assignee: Masatake Iwasaki > Attachments: HDFS-11583-branch-2.7.001.patch, > HDFS-11583-branch-2.7.002.patch, HDFS-11583-branch-2.7.003.patch > > > The issue was found while working with PHOENIX-3752. > Each packet received by the {{run()}} method of {{DataStreamer}} class, uses > the {{parents}} field of the {{DFSPacket}} to create a new {{dataStreamer}} > span, which in turn creates a {{writeTo}} span as its child span. The parents > field is initialized when the packet is added to the {{dataQueue}} and the > value is initialized from the {{ThreadLocal}}. This is how HTrace handles > spans. > A {{TraceScope}} is created and initialized to {{NullScope}} before the loop > which runs till the point when the stream is closed. > Consider the following scenario, when the {{dataQueue}} contains multiple > packets, only the first of which has a tracing enabled. The scope is > initialized to the {{dataStreamer}} scope and a {{writeTo}} span is created > as its child, which gets closed once the packet is sent out to a remote > datanode. Before {{writeTo}} span is started, the {{dataStreamer}} scope is > detached. So calling the close method on it doesn't do anything at the end of > loop. > The second iteration will be using the stale value of the {{scope}} variable > with a DFSPacket on which tracing is not enabled. 
This results in the generation > of orphan {{writeTo}} spans which are delivered to the > {{SpanReceiver}} registered in the TraceFramework. This may result in an unlimited number of spans being generated and sent out to the receiver.
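The fix described in this report amounts to re-initializing the scope on every pass through the loop rather than once before it. A minimal illustration of the pattern, using plain stand-in classes rather than the real HTrace API:

```java
// Sketch of the stale-scope bug and its fix: the scope must be reset to
// a null scope at the top of each iteration, otherwise an untraced
// packet reuses the previous traced packet's scope.
// Stand-in classes only, not the HTrace library.
public class Main {
    static class Scope {
        final String name;
        Scope(String name) { this.name = name; }
    }

    static final Scope NULL_SCOPE = new Scope("null");

    // Processes a queue of packets; true means tracing is enabled on
    // that packet. Returns the scope used for each packet, joined by ';'.
    static String process(boolean[] tracedPackets) {
        StringBuilder seen = new StringBuilder();
        for (boolean traced : tracedPackets) {
            // The fix: re-initialize per iteration instead of once
            // before the loop, so no stale value carries over.
            Scope scope = NULL_SCOPE;
            if (traced) {
                scope = new Scope("dataStreamer");
            }
            seen.append(scope.name).append(';');
        }
        return seen.toString();
    }

    public static void main(String[] args) {
        // A traced packet followed by an untraced one: the second packet
        // gets the null scope, not a stale dataStreamer scope.
        System.out.println(process(new boolean[]{true, false}));
    }
}
```

With the reset hoisted above the loop (the buggy form), the second packet would report "dataStreamer" again, which is the orphan-span behavior described above.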
[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-11576: -- Attachment: HDFS-11576.004.patch Thanks for the review [~elgoiri]. Attached a new patch, fixing TestAppendSnapshotTruncate and some codestyle issues. > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.004.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSynchronization after succeeding with the first recovery to > NN, which fails because X < X+1 > ...
[jira] [Commented] (HDFS-11622) TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is lost
[ https://issues.apache.org/jira/browse/HDFS-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958062#comment-15958062 ] James Taylor commented on HDFS-11622: - FYI, [~apurtell] - not sure if you've seen this one, but it's another important one. > TraceId hardcoded to 0 in DataStreamer, correlation between multiple spans is > lost > -- > > Key: HDFS-11622 > URL: https://issues.apache.org/jira/browse/HDFS-11622 > Project: Hadoop HDFS > Issue Type: Bug > Components: tracing >Reporter: Karan Mehta > > In the {{run()}} method of the {{DataStreamer}} class, the following code is > written. {{parents\[0\]}} refers to the {{spanId}} of the parent span. > {code} > one = dataQueue.getFirst(); // regular data packet > long parents[] = one.getTraceParents(); > if (parents.length > 0) { > scope = Trace.startSpan("dataStreamer", new TraceInfo(0, > parents[0])); > // TODO: use setParents API once it's available from HTrace > 3.2 > // scope = Trace.startSpan("dataStreamer", Sampler.ALWAYS); > // scope.getSpan().setParents(parents); > } > {code} > The {{scope}} starts a new TraceSpan with a traceId hardcoded to 0. Ideally > the traceId should be captured when {{currentPacket.addTraceParent(Trace.currentSpan())}} > is invoked. This JIRA is to propose an additional long field inside the > {{DFSPacket}} class which holds the parent {{traceId}}.
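The proposal in the description above (carry the parent's traceId in the packet instead of hardcoding 0) could be sketched like this; the class below is a simplified stand-in for {{DFSPacket}}, not the actual change:

```java
// Hedged sketch of the proposed DFSPacket change: record the parent's
// traceId alongside the parent spanId when addTraceParent is called, so
// the streamer can later build TraceInfo(parentTraceId, parentSpanId)
// instead of TraceInfo(0, parents[0]). Stand-in class only.
public class Main {
    static class DFSPacket {
        private long parentTraceId; // the proposed additional long field
        private long parentSpanId;

        void addTraceParent(long traceId, long spanId) {
            // Captured at enqueue time, when the current span is known.
            this.parentTraceId = traceId;
            this.parentSpanId = spanId;
        }
        long getParentTraceId() { return parentTraceId; }
        long getParentSpanId() { return parentSpanId; }
    }

    public static void main(String[] args) {
        DFSPacket one = new DFSPacket();
        one.addTraceParent(42L, 7L);
        // The streamer would use both ids rather than a hardcoded 0,
        // preserving the correlation between spans of one trace.
        System.out.println(one.getParentTraceId() + "/" + one.getParentSpanId());
    }
}
```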
[jira] [Commented] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType
[ https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958060#comment-15958060 ] Xiaobing Zhou commented on HDFS-11362: -- The patch looks good, thanks [~hkoneru]. > Storage#shouldReturnNextDir should check for null dirType > - > > Key: HDFS-11362 > URL: https://issues.apache.org/jira/browse/HDFS-11362 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HDFS-11362.000.patch > > > The _Storage#shouldReturnNextDir_ method checks if the next Storage directory is > of the same type as dirType. > {noformat} > private boolean shouldReturnNextDir() { > StorageDirectory sd = getStorageDir(nextIndex); > return (dirType == null || sd.getStorageDirType().isOfType(dirType)) && > (includeShared || !sd.isShared()); > } > {noformat} > There is a possibility that sd.getStorageDirType() returns null (the default > dirType is null). Hence, before checking for a type match, we should make sure > that the value returned by sd.getStorageDirType() is not null.
[jira] [Updated] (HDFS-11538) Move ClientProtocol HA proxies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11538: --- Fix Version/s: (was: 2.9.0) 2.8.1 Cool. I did the requisite JIRA stuff to make the release notes track properly, and also pulled in these other good looking fixes to make the 2.8.1 backport clean: {noformat} d12a0a2 (HEAD -> refs/heads/branch-2.8) HDFS-11538. Move ClientProtocol HA proxies into hadoop-hdfs-client. Contributed by Huafeng Wang. eff4b2f HDFS-10683. Make class Token$PrivateToken private. Contributed by John Zhuge. 7ad5b27 HDFS-9276. Failed to Update HDFS Delegation Token for long running application in HA mode. Contributed by Liangliang Gu and John Zhuge e216c15 HDFS-11395. RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode. Contributed by Nandakumar. ab673aa HDFS-11629. Revert "HDFS-11431. hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider." {noformat} > Move ClientProtocol HA proxies into hadoop-hdfs-client > -- > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Huafeng Wang >Priority: Blocker > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11538.001.patch, HDFS-11538.002.patch, > HDFS-11538.003.patch, HDFS-11538-branch-2.001.patch > > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency.
[jira] [Updated] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode
[ https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-9276: -- Fix Version/s: (was: 2.9.0) 2.8.1 Backported this to 2.8.1, looks pretty critical in any case. > Failed to Update HDFS Delegation Token for long running application in HA mode > -- > > Key: HDFS-9276 > URL: https://issues.apache.org/jira/browse/HDFS-9276 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs, ha, security >Affects Versions: 2.7.1 >Reporter: Liangliang Gu >Assignee: Liangliang Gu > Fix For: 3.0.0-alpha1, 2.8.1 > > Attachments: debug1.PNG, debug2.PNG, HDFS-9276.01.patch, > HDFS-9276.02.patch, HDFS-9276.03.patch, HDFS-9276.04.patch, > HDFS-9276.05.patch, HDFS-9276.06.patch, HDFS-9276.07.patch, > HDFS-9276.08.patch, HDFS-9276.09.patch, HDFS-9276.10.patch, > HDFS-9276.11.patch, HDFS-9276.12.patch, HDFS-9276.13.patch, > HDFS-9276.14.patch, HDFS-9276.15.patch, HDFS-9276.16.patch, > HDFS-9276.17.patch, HDFS-9276.18.patch, HDFS-9276.19.patch, > HDFS-9276.20.patch, HDFSReadLoop.scala > > > The scenario is as follows: > 1. NameNode HA is enabled. > 2. Kerberos is enabled. > 3. An HDFS Delegation Token (not a Keytab or TGT) is used to communicate with the > NameNode. > 4. We want to update the HDFS Delegation Token for long running applications. > The HDFS Client will generate private tokens for each NameNode. When we update > the HDFS Delegation Token, these private tokens will not be updated, which > will cause the token to expire. 
> This bug can be reproduced by the following program: > {code} > import java.security.PrivilegedExceptionAction > import org.apache.hadoop.conf.Configuration > import org.apache.hadoop.fs.{FileSystem, Path} > import org.apache.hadoop.security.UserGroupInformation > object HadoopKerberosTest { > def main(args: Array[String]): Unit = { > val keytab = "/path/to/keytab/xxx.keytab" > val principal = "x...@abc.com" > val creds1 = new org.apache.hadoop.security.Credentials() > val ugi1 = > UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab) > ugi1.doAs(new PrivilegedExceptionAction[Void] { > // Get a copy of the credentials > override def run(): Void = { > val fs = FileSystem.get(new Configuration()) > fs.addDelegationTokens("test", creds1) > null > } > }) > val ugi = UserGroupInformation.createRemoteUser("test") > ugi.addCredentials(creds1) > ugi.doAs(new PrivilegedExceptionAction[Void] { > // Get a copy of the credentials > override def run(): Void = { > var i = 0 > while (true) { > val creds1 = new org.apache.hadoop.security.Credentials() > val ugi1 = > UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab) > ugi1.doAs(new PrivilegedExceptionAction[Void] { > // Get a copy of the credentials > override def run(): Void = { > val fs = FileSystem.get(new Configuration()) > fs.addDelegationTokens("test", creds1) > null > } > }) > UserGroupInformation.getCurrentUser.addCredentials(creds1) > val fs = FileSystem.get( new Configuration()) > i += 1 > println() > println(i) > println(fs.listFiles(new Path("/user"), false)) > Thread.sleep(60 * 1000) > } > null > } > }) > } > } > {code} > To reproduce the bug, please set the following configuration on the NameNode: > {code} > dfs.namenode.delegation.token.max-lifetime = 10min > dfs.namenode.delegation.key.update-interval = 3min > dfs.namenode.delegation.token.renew-interval = 3min > {code} > The bug will occur after 3 minutes. 
> The stacktrace is: > {code} > Exception in thread "main" > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired > at org.apache.hadoop.ipc.Client.call(Client.java:1347) > at org.apache.hadoop.ipc.Client.call(Client.java:1300) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) >
[jira] [Commented] (HDFS-11395) RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode
[ https://issues.apache.org/jira/browse/HDFS-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958034#comment-15958034 ] Andrew Wang commented on HDFS-11395: I backported this to branch-2.8 to make another backport easier, looks like a good fix too. > RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the > Exception thrown from NameNode > > > Key: HDFS-11395 > URL: https://issues.apache.org/jira/browse/HDFS-11395 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Reporter: Nandakumar >Assignee: Nandakumar > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11395.000.patch, HDFS-11395.001.patch, > HDFS-11395.002.patch, HDFS-11395.003.patch, HDFS-11395.004.patch, > HDFS-11395.005.patch > > > When using RequestHedgingProxyProvider, in case of an Exception (like > FileNotFoundException) from the ActiveNameNode, > {{RequestHedgingProxyProvider#RequestHedgingInvocationHandler.invoke}} > receives an {{ExecutionException}} since we use {{CompletionService}} for the > call. The ExecutionException is put into a map and wrapped with > {{MultiException}}. > So for a FileNotFoundException the client receives > {{MultiException(Map(ExecutionException(InvocationTargetException(RemoteException(FileNotFoundException)}} > This causes problems in clients which handle RemoteExceptions.
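Conceptually, the fix for the wrapping described above is to peel the wrapper layers off so the client sees the original cause. A minimal sketch of that unwrapping (not the actual Hadoop code; a plain RuntimeException stands in for RemoteException):

```java
import java.lang.reflect.InvocationTargetException;
import java.util.concurrent.ExecutionException;

// Sketch: strip ExecutionException/InvocationTargetException wrappers
// until the underlying cause (e.g. a RemoteException) surfaces, so a
// client catching RemoteException still works.
public class Main {
    static Throwable unwrap(Throwable t) {
        while ((t instanceof ExecutionException
                || t instanceof InvocationTargetException)
               && t.getCause() != null) {
            t = t.getCause();
        }
        return t;
    }

    public static void main(String[] args) {
        // Stand-in for the RemoteException(FileNotFoundException) case.
        Throwable root = new RuntimeException("file not found stand-in");
        Throwable wrapped =
            new ExecutionException(new InvocationTargetException(root));
        System.out.println(unwrap(wrapped) == root);
    }
}
```

The guard on {{getCause() != null}} keeps the loop from discarding a wrapper that carries no cause, so something meaningful is always returned.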
[jira] [Updated] (HDFS-10683) Make class Token$PrivateToken private
[ https://issues.apache.org/jira/browse/HDFS-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10683: --- Target Version/s: 3.0.0-alpha2, 2.9.0 (was: 2.9.0, 3.0.0-alpha2) Fix Version/s: (was: 2.9.0) 2.8.1 Backported this to 2.8.1, looks innocuous, to make another backport clean. > Make class Token$PrivateToken private > - > > Key: HDFS-10683 > URL: https://issues.apache.org/jira/browse/HDFS-10683 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Labels: fs, ha, security, security_token > Fix For: 3.0.0-alpha2, 2.8.1 > > Attachments: HDFS-10683.001.patch, HDFS-10683.002.patch > > > Avoid {{instanceof}} or typecasting of {{Token.PrivateToken}} by introducing > an interface method in {{Token}}. Make class {{Token.PrivateToken}} private. > Use a factory method instead.
[jira] [Resolved] (HDFS-11629) Revert HDFS-11431 hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider.
[ https://issues.apache.org/jira/browse/HDFS-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HDFS-11629. Resolution: Fixed Fix Version/s: 2.8.1 > Revert HDFS-11431 hadoop-hdfs-client JAR does not include > ConfiguredFailoverProxyProvider. > -- > > Key: HDFS-11629 > URL: https://issues.apache.org/jira/browse/HDFS-11629 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Andrew Wang >Assignee: Andrew Wang > Fix For: 2.8.1 > > > New JIRA for tracking the revert of HDFS-11431 from branch-2.8.
[jira] [Updated] (HDFS-11395) RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the Exception thrown from NameNode
[ https://issues.apache.org/jira/browse/HDFS-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11395: --- Fix Version/s: (was: 2.9.0) 2.8.1 3.0.0-alpha3 > RequestHedgingProxyProvider#RequestHedgingInvocationHandler hides the > Exception thrown from NameNode > > > Key: HDFS-11395 > URL: https://issues.apache.org/jira/browse/HDFS-11395 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Reporter: Nandakumar >Assignee: Nandakumar > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11395.000.patch, HDFS-11395.001.patch, > HDFS-11395.002.patch, HDFS-11395.003.patch, HDFS-11395.004.patch, > HDFS-11395.005.patch > > > When using RequestHedgingProxyProvider, in case of an Exception (like > FileNotFoundException) from the ActiveNameNode, > {{RequestHedgingProxyProvider#RequestHedgingInvocationHandler.invoke}} > receives an {{ExecutionException}} since we use {{CompletionService}} for the > call. The ExecutionException is put into a map and wrapped with > {{MultiException}}. > So for a FileNotFoundException the client receives > {{MultiException(Map(ExecutionException(InvocationTargetException(RemoteException(FileNotFoundException)}} > This causes problems in clients which handle RemoteExceptions.
[jira] [Commented] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958026#comment-15958026 ] Xiaobing Zhou commented on HDFS-11302: -- Thanks for the patch [~vagarychen]. LGTM, +1 non-binding. > Improve Logging for SSLHostnameVerifier > --- > > Key: HDFS-11302 > URL: https://issues.apache.org/jira/browse/HDFS-11302 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Reporter: Xiaoyu Yao >Assignee: Chen Liang > Attachments: HDFS-11302.001.patch > > > The SSLHostnameVerifier interface/class was copied from other projects without > any logging to help troubleshoot SSL certificate related issues. For a > misconfigured SSL truststore, we may get a very confusing error message > like > {code} > >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt > ... > cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be > cat: DN2:50475: HTTPS hostname wrong: should be > {code} > This ticket is opened to add tracing to give more useful context information > around SSL certificate verification failures inside the following code. > {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}
[jira] [Created] (HDFS-11629) Revert HDFS-11431 hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider.
Andrew Wang created HDFS-11629: -- Summary: Revert HDFS-11431 hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider. Key: HDFS-11629 URL: https://issues.apache.org/jira/browse/HDFS-11629 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang New JIRA for tracking the revert of HDFS-11431 from branch-2.8.
[jira] [Commented] (HDFS-11626) Deprecate oiv_legacy tool
[ https://issues.apache.org/jira/browse/HDFS-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957998#comment-15957998 ] Andrew Wang commented on HDFS-11626: We should definitely deprecate if it's not already. The name "oiv_legacy" already implies deprecation. Since we have the LevelDB based OIV tool now, I think we can consider removal (though it's not much of a maintenance burden). We should check with [~daryn] and [~kihwal] since their usecases were why we brought back oiv_legacy. > Deprecate oiv_legacy tool > - > > Key: HDFS-11626 > URL: https://issues.apache.org/jira/browse/HDFS-11626 > Project: Hadoop HDFS > Issue Type: Task > Components: tools >Reporter: Wei-Chiu Chuang > > oiv_legacy only works for fsimage at or before Hadoop 2.4. I think we can > deprecate oiv_legacy in Hadoop 3 in preparation for final removal in Hadoop 4. > Or would people favor removing it in Hadoop 3?
[jira] [Updated] (HDFS-11467) Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools
[ https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-11467: --- Issue Type: Sub-task (was: Improvement) Parent: HDFS-8031 > Support ErasureCodingPolicyManager section in OIV XML/ReverseXML and OEV tools > -- > > Key: HDFS-11467 > URL: https://issues.apache.org/jira/browse/HDFS-11467 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Affects Versions: 3.0.0-alpha3 >Reporter: Wei-Chiu Chuang > > As discussed in HDFS-7859, after ErasureCodingPolicyManager section is added > into fsimage, we would like to also support exporting this section into an > XML back and forth using the OIV tool. > Likewise, HDFS-7859 adds new edit log ops, so OEV tool should also support it.
[jira] [Updated] (HDFS-11596) hadoop-hdfs-client jar is in the wrong directory in release tarball
[ https://issues.apache.org/jira/browse/HDFS-11596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11596: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha3 Release Note: The scope of hadoop-hdfs's dependency on hadoop-hdfs-client has changed from "compile" to "provided". This may affect users who directly consume hadoop-hdfs, which is a private API. These users need to add a new dependency on hadoop-hdfs-client, or better yet, switch from hadoop-hdfs to hadoop-hdfs-client. Status: Resolved (was: Patch Available) Thanks for working on this [~yuanbo], I've committed this to trunk! Added a release note too. > hadoop-hdfs-client jar is in the wrong directory in release tarball > --- > > Key: HDFS-11596 > URL: https://issues.apache.org/jira/browse/HDFS-11596 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Yuanbo Liu >Priority: Critical > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11596.001.patch, HDFS-11596.002.patch > > > Mentioned by [~aw] on HDFS-11356. The hdfs-client jar is in the lib directory > rather than with the other hadoop jars: > From the alpha2 artifacts: > {noformat} > -> % find . -name "*hdfs-client*.jar" > ./share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-hdfs-client-3.0.0-alpha2.jar > ./share/hadoop/hdfs/sources/hadoop-hdfs-client-3.0.0-alpha2-sources.jar > ./share/hadoop/hdfs/sources/hadoop-hdfs-client-3.0.0-alpha2-test-sources.jar > ./share/hadoop/hdfs/lib/hadoop-hdfs-client-3.0.0-alpha2.jar > ./share/hadoop/hdfs/hadoop-hdfs-client-3.0.0-alpha2-tests.jar > {noformat} > Strangely enough, the tests jar is in the right place. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
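To illustrate the release note above: a downstream project that was consuming hadoop-hdfs-client transitively through hadoop-hdfs would now need to declare it directly. This is a hedged sketch of such a pom.xml fragment — the version shown is a placeholder, not a statement about which releases carry the change:

```xml
<!-- Hypothetical downstream pom.xml fragment (version is a placeholder).
     After HDFS-11596, hadoop-hdfs's dependency on hadoop-hdfs-client is
     "provided" rather than "compile", so a project that used client
     classes transitively must declare the client artifact itself: -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
  <version>3.0.0-alpha3</version>
</dependency>
<!-- Better yet, per the release note, drop the hadoop-hdfs dependency
     entirely and depend only on hadoop-hdfs-client. -->
```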
[jira] [Commented] (HDFS-11596) hadoop-hdfs-client jar is in the wrong directory in release tarball
[ https://issues.apache.org/jira/browse/HDFS-11596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957974#comment-15957974 ] Andrew Wang commented on HDFS-11596: bq. Don't ask me, I don't understand maven I knew we should have stuck with Ant :) I propose we keep this change in trunk and see how alpha3 impacts downstreams. It's theoretically fine for branch-2 too since downstreams shouldn't be using our server-side artifacts, but we all know how that is. +1 will commit to trunk shortly. > hadoop-hdfs-client jar is in the wrong directory in release tarball > --- > > Key: HDFS-11596 > URL: https://issues.apache.org/jira/browse/HDFS-11596 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Yuanbo Liu >Priority: Critical > Attachments: HDFS-11596.001.patch, HDFS-11596.002.patch > > > Mentioned by [~aw] on HDFS-11356. The hdfs-client jar is in the lib directory > rather than with the other hadoop jars: > From the alpha2 artifacts: > {noformat} > -> % find . -name "*hdfs-client*.jar" > ./share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-hdfs-client-3.0.0-alpha2.jar > ./share/hadoop/hdfs/sources/hadoop-hdfs-client-3.0.0-alpha2-sources.jar > ./share/hadoop/hdfs/sources/hadoop-hdfs-client-3.0.0-alpha2-test-sources.jar > ./share/hadoop/hdfs/lib/hadoop-hdfs-client-3.0.0-alpha2.jar > ./share/hadoop/hdfs/hadoop-hdfs-client-3.0.0-alpha2-tests.jar > {noformat} > Strangely enough, the tests jar is in the right place. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957967#comment-15957967 ] Hudson commented on HDFS-11628: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11534 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11534/]) HDFS-11628. Clarify the behavior of HDFS Mover in documentation. (arp: rev 3db8d68d63f832d1747efcdf079e02ecda0b0127) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docs > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.1 > > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
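The Mover behavior that the documentation change describes — prefer a same-node storage move, fall back to a cross-node copy only when the node lacks the target storage type — can be sketched as follows. This is illustrative pseudologic, not Hadoop code; the function and parameter names are made up for the example:

```python
# Illustrative sketch (not Hadoop code) of the Mover preference described
# above: try a same-node storage first, and only copy over the network
# when the replica's node has no storage of the target type.

def pick_target(replica_node_storages, other_nodes, target_type):
    """replica_node_storages: storage types on the node holding the replica.
    other_nodes: {node_name: [storage types]} for the rest of the cluster."""
    if target_type in replica_node_storages:
        return ("local-move", None)        # no network traffic needed
    for node, storages in other_nodes.items():
        if target_type in storages:
            return ("network-copy", node)  # replica copied over the network
    return ("no-target", None)             # nowhere to place the replica

# Node already has ARCHIVE storage: move stays local.
assert pick_target(["DISK", "ARCHIVE"], {}, "ARCHIVE") == ("local-move", None)
# Node lacks ARCHIVE: the replica is copied to another node.
assert pick_target(["DISK"], {"dn2": ["ARCHIVE"]}, "ARCHIVE") == ("network-copy", "dn2")
```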
[jira] [Commented] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957965#comment-15957965 ] Hadoop QA commented on HDFS-11131: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 22s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11131 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862165/HDFS-11131.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1a4da9f0972f 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7d963c4 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/18989/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18989/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18989/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestThrottledAsyncChecker#testCancellation is flaky > --- > > Key: HDFS-11131 > URL: https://issues.apache.org/jira/browse/HDFS-11131 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch,
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11628: - Target Version/s: (was: 3.0.0-alpha3, 2.8.1) > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docs > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.1 > > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11628: - Component/s: documentation > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docs > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.1 > > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11628: - Labels: docs (was: docuentation) > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docs > Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2, 2.8.1 > > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11628: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.1 2.7.4 2.9.0 Status: Resolved (was: Patch Available) Thanks for the contribution [~xiaobingo]. I've committed this. Thanks for the review [~liuml07]. > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha2 > > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-10848) Move hadoop-hdfs-native-client module into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957866#comment-15957866 ] Andrew Wang edited comment on HDFS-10848 at 4/5/17 10:20 PM: - hadoop-hdfs-native-client only depends on hadoop-hdfs for testing, so the prod artifact is still separated. We have some of our very first client-only tests introduced by HADOOP-11538, but since we depend so heavily on miniclusters, we can't move many of the tests from hadoop-hdfs to hadoop-hdfs-client. Anyway, even if we perfectly split the client and server tests, we would still want to run the tests for both if either changes. Is there a maven-y way of doing this? Move it all to integration tests? Split tests into a separate module? was (Author: andrew.wang): hadoop-hdfs-client only depends on hadoop-hdfs for testing, so the prod artifact is still separated. We have some of our very first client-only tests introduced by HADOOP-11538, but since we depend so heavily on miniclusters, we can't move many of the tests from hadoop-hdfs to hadoop-hdfs-client. Anyway, even if we perfectly split the client and server tests, we would still want to run the tests for both if either changes. Is there a maven-y way of doing this? Move it all to integration tests? Split tests into a separate module? > Move hadoop-hdfs-native-client module into hadoop-hdfs-client > - > > Key: HDFS-10848 > URL: https://issues.apache.org/jira/browse/HDFS-10848 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Akira Ajisaka >Assignee: Huafeng Wang > Attachments: HDFS-10848.001.patch > > > When a patch changes hadoop-hdfs-client module, Jenkins does not pick up the > tests in the native code. That way we overlooked test failure when committing > the patch. (ex. HDFS-10844) > [~aw] said in HDFS-10844, > bq. Ideally, all of this native code would be hdfs-client. Then when a change > is made to to that code, this code will also get tested. 
[jira] [Commented] (HDFS-10848) Move hadoop-hdfs-native-client module into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957866#comment-15957866 ] Andrew Wang commented on HDFS-10848: hadoop-hdfs-client only depends on hadoop-hdfs for testing, so the prod artifact is still separated. We have some of our very first client-only tests introduced by HADOOP-11538, but since we depend so heavily on miniclusters, we can't move many of the tests from hadoop-hdfs to hadoop-hdfs-client. Anyway, even if we perfectly split the client and server tests, we would still want to run the tests for both if either changes. Is there a maven-y way of doing this? Move it all to integration tests? Split tests into a separate module? > Move hadoop-hdfs-native-client module into hadoop-hdfs-client > - > > Key: HDFS-10848 > URL: https://issues.apache.org/jira/browse/HDFS-10848 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Akira Ajisaka >Assignee: Huafeng Wang > Attachments: HDFS-10848.001.patch > > > When a patch changes hadoop-hdfs-client module, Jenkins does not pick up the > tests in the native code. That way we overlooked test failure when committing > the patch. (ex. HDFS-10844) > [~aw] said in HDFS-10844, > bq. Ideally, all of this native code would be hdfs-client. Then when a change > is made to to that code, this code will also get tested. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957864#comment-15957864 ] Xiaoyu Yao commented on HDFS-11131: --- +1 Thanks for the clarification. > TestThrottledAsyncChecker#testCancellation is flaky > --- > > Key: HDFS-11131 > URL: https://issues.apache.org/jira/browse/HDFS-11131 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch, > HDFS-11131.03.patch > > > This test failed in a few precommit runs. e.g. > https://builds.apache.org/job/PreCommit-HDFS-Build/18952/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testCancellation/ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957860#comment-15957860 ] Hadoop QA commented on HDFS-11628: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11628 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862178/HDFS-11628.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux dcac54cf2919 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7d963c4 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18990/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957831#comment-15957831 ] Arpit Agarwal commented on HDFS-11628: -- +1 pending Jenkins. Thanks [~xiaobingo]. > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957829#comment-15957829 ] Inigo Goiri commented on HDFS-11576: I've seen {{TestDataNodeVolumeFailureReporting}} failing in other JIRAs and, going through the code, it doesn't seem related to this patch. A couple of the checkstyles could be fixed though. > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSynchronization after succeeding with the first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
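The six-step scenario in the issue description can be modeled with a toy simulation. This is a hedged sketch, not Hadoop code: it only captures the arithmetic of the race — the NN bumps the recovery ID on every heartbeat, so a recovery that outlives one heartbeat interval always commits with a stale ID:

```python
# Toy model of the HDFS-11576 race (illustrative only, not Hadoop code).
# The NN re-issues a recovery command with a higher recoveryID on each
# heartbeat, so a slow DN's commitBlockSynchronization is always stale.

def commit_succeeds(recovery_seconds, heartbeat_seconds):
    """Return True iff the DN's commit can succeed, given that every
    heartbeat fired during recovery bumps the NN-side recovery ID."""
    nn_id = 1      # recoveryID=X sent with the first recovery command
    dn_id = nn_id  # DN starts recovery holding that ID
    # Steps 4-5: each heartbeat during recovery yields a new command
    # with a higher ID (X+1, X+2, ...).
    nn_id += recovery_seconds // heartbeat_seconds
    # Step 6: commit is rejected when the DN's ID is behind the NN's.
    return dn_id >= nn_id

# Recovery faster than a heartbeat interval: commit succeeds.
assert commit_succeeds(recovery_seconds=1, heartbeat_seconds=3)
# Recovery slower than a heartbeat interval: commit fails, and keeps
# failing on every retry -- the indefinite failure reported here.
assert not commit_succeeds(recovery_seconds=5, heartbeat_seconds=3)
```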
[jira] [Commented] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957824#comment-15957824 ] Xiaobing Zhou commented on HDFS-11628: -- v1 fixed that. Thanks [~liuml07] and [~arpitagarwal]. > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11628: - Attachment: HDFS-11628.001.patch > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957798#comment-15957798 ] Hadoop QA commented on HDFS-11576: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 3s{color} | {color:orange} root: The patch generated 8 new + 807 unchanged - 1 fixed = 815 total (was 808) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 5s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestAppendSnapshotTruncate | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11576 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862140/HDFS-11576.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 959096a74abf 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 87e2ef8 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18988/artifact/patchprocess/diff-checkstyle-root.txt | | unit |
[jira] [Commented] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957778#comment-15957778 ] Arpit Agarwal commented on HDFS-11628: -- +1 for the patch. A nitpick, can you please remove the extra linebreak here? {code} .. Note that +it always tries to move block... {code} > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11628: - Summary: Clarify the behavior of HDFS Mover in documentation (was: Clarify behaviors of HDFS Mover in documentation) > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode
[ https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11551: -- Attachment: HDFS-11551-branch-2.001.patch > Handle SlowDiskReport from DataNode at the NameNode > --- > > Key: HDFS-11551 > URL: https://issues.apache.org/jira/browse/HDFS-11551 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11551.001.patch, HDFS-11551.002.patch, > HDFS-11551.003.patch, HDFS-11551.004.patch, HDFS-11551.005.patch, > HDFS-11551.006.patch, HDFS-11551.007.patch, HDFS-11551.008.patch, > HDFS-11551.009.patch, HDFS-11551.010.patch, HDFS-11551-branch-2.001.patch > > > DataNodes send slow disk reports via heartbeats. Handle these reports at the > NameNode to find the topN slow disks. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
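As a side note for readers, the "topN slow disks" selection mentioned in the HDFS-11551 description can be sketched with a bounded min-heap. This is a generic illustration, not the actual patch code; the report shape (disk id mapped to average latency) and the method name are assumptions.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class Main {
    // Pick the n slowest disks from reported latencies using a min-heap of
    // size n: the heap root is always the fastest of the current candidates,
    // so it is evicted whenever a slower disk arrives.
    static List<String> topNSlowDisks(Map<String, Double> latencies, int n) {
        PriorityQueue<Map.Entry<String, Double>> minHeap =
            new PriorityQueue<>(Map.Entry.comparingByValue());
        for (Map.Entry<String, Double> e : latencies.entrySet()) {
            minHeap.offer(e);
            if (minHeap.size() > n) minHeap.poll(); // evict the fastest
        }
        List<String> result = new ArrayList<>();
        while (!minHeap.isEmpty()) result.add(minHeap.poll().getKey());
        Collections.reverse(result); // slowest first
        return result;
    }

    public static void main(String[] args) {
        Map<String, Double> reports = Map.of("d1", 5.0, "d2", 20.0, "d3", 12.0);
        System.out.println(topNSlowDisks(reports, 2)); // [d2, d3]
    }
}
```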
[jira] [Updated] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11131: - Attachment: HDFS-11131.03.patch Thanks for the reviews [~hanishakoneru] and [~xyao]. Xiaoyu, I added a comment to {{ThrottledAsyncChecker#shutdownAndWait}} to clarify that we deliberately invoke shutdownNow without invoking shutdown first. > TestThrottledAsyncChecker#testCancellation is flaky > --- > > Key: HDFS-11131 > URL: https://issues.apache.org/jira/browse/HDFS-11131 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch, > HDFS-11131.03.patch > > > This test failed in a few precommit runs. e.g. > https://builds.apache.org/job/PreCommit-HDFS-Build/18952/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testCancellation/ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11538) Move ClientProtocol HA proxies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957608#comment-15957608 ] Junping Du commented on HDFS-11538: --- Thanks [~andrew.wang] and all who work on this JIRA. +1 on getting this fix into 2.8.1 and reverting HDFS-11431. > Move ClientProtocol HA proxies into hadoop-hdfs-client > -- > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Huafeng Wang >Priority: Blocker > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-11538.001.patch, HDFS-11538.002.patch, > HDFS-11538.003.patch, HDFS-11538-branch-2.001.patch > > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11131) TestThrottledAsyncChecker#testCancellation is flaky
[ https://issues.apache.org/jira/browse/HDFS-11131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957584#comment-15957584 ] Xiaoyu Yao commented on HDFS-11131: --- Thanks [~arpitagarwal] for reporting and posting the fix. The patch v02 looks good to me. I just have one minor question. Great catch that fixes the order of executor shutdown. That looks good. Can we keep the original logic on executorService? +1 after that. {code} if (!executorService.awaitTermination(timeout, timeUnit)) { // Interrupt executing tasks and wait again. executorService.shutdownNow(); executorService.awaitTermination(timeout, timeUnit); } {code} > TestThrottledAsyncChecker#testCancellation is flaky > --- > > Key: HDFS-11131 > URL: https://issues.apache.org/jira/browse/HDFS-11131 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HDFS-11131.01.patch, HDFS-11131.02.patch > > > This test failed in a few precommit runs. e.g. > https://builds.apache.org/job/PreCommit-HDFS-Build/18952/testReport/org.apache.hadoop.hdfs.server.datanode.checker/TestThrottledAsyncChecker/testCancellation/ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
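For readers following the HDFS-11131 discussion, the two-phase shutdown Xiaoyu asks to keep — request an orderly shutdown first, and only interrupt running tasks if the pool fails to drain within the timeout — can be sketched in isolation. The method name and structure below are illustrative, not the actual {{ThrottledAsyncChecker#shutdownAndWait}} code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {
    // Two-phase shutdown: shutdown() stops accepting new tasks but lets
    // queued ones finish; shutdownNow() is only used as a fallback.
    static boolean shutdownAndWait(ExecutorService pool, long timeout, TimeUnit unit) {
        pool.shutdown(); // stop accepting new tasks
        try {
            if (!pool.awaitTermination(timeout, unit)) {
                // Interrupt executing tasks and wait again.
                pool.shutdownNow();
                return pool.awaitTermination(timeout, unit);
            }
            return true;
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> { }); // a trivial task
        System.out.println("terminated=" + shutdownAndWait(pool, 5, TimeUnit.SECONDS));
    }
}
```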
[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager
[ https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957544#comment-15957544 ] Anu Engineer commented on HDFS-11569: - [~cheersyang] Thanks for the patch. Sorry for the delay in looking at this. * Just to make sure that we are all on the same page: the listKey function has to be invoked via {{DistributedStorageHandler#listKeys}} -- we don't have to do it in this patch, but I wanted to confirm we agree on this.
{code}
@Override
public ListKeys listKeys(ListArgs args) throws IOException, OzoneException {
  throw new UnsupportedOperationException("listKeys not implemented");
}
{code}
This function is invoked each time we make an HTTP request to the Ozone Web Handler. Unfortunately, we cannot hold on to the result state on the HTTP front end; we need the client that is talking to us to hold the state. So if you have a ListKeyResult and it has more pages to return, I am trying to visualize what we should do with it. I just want to make sure that you are indeed trying to introduce paging between the HTTP layer and the container layer. I am okay with that, but we could effectively return the whole data set, since we are sure the web front end will read until it reaches the count number of keys. So we can propagate that to the container and return the whole data set, with of course a max cap on how many keys can be read in a single call. https://issues.apache.org/jira/secure/attachment/12799549/ozone_user_v0.pdf discusses the REST protocol from page 16. In other words, we should just honor the prefix, start and count. That is, return keys that match a prefix, go to the start key under that prefix, and return up to count keys.
* {{ListKeyResult.java}}: the Logger is initialized with the wrong class:
{code}
private static final Logger LOG = LoggerFactory.getLogger(ContainerMapping.class);
{code}
This should use {{ListKeyResult.class}}.
> Ozone: Implement listKey function for KeyManager > > > Key: HDFS-11569 > URL: https://issues.apache.org/jira/browse/HDFS-11569 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11569-HDFS-7240.001.patch, > HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, > HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch > > > List keys by prefix from a container. This will need to support pagination > for the purpose of small object support. So the listKey function returns > something like ListKeyResult, client can iterate the object to get pagination > results. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
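The "honor the prefix, start and count" semantics Anu describes can be sketched against a sorted key set. This is an illustration of the listing contract only; the names are hypothetical and the real KeyManager reads keys from the container layer, not from an in-memory set.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class Main {
    // Return keys that match prefix, starting at startKey, up to count keys.
    static List<String> listKeys(SortedSet<String> keys, String prefix,
                                 String startKey, int count) {
        List<String> result = new ArrayList<>();
        // tailSet positions us at startKey (or the beginning when null).
        SortedSet<String> tail = (startKey == null) ? keys : keys.tailSet(startKey);
        for (String k : tail) {
            if (result.size() >= count) break; // max cap per call
            if (prefix == null || k.startsWith(prefix)) {
                result.add(k);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        SortedSet<String> keys = new TreeSet<>(List.of("a1", "a2", "a3", "b1"));
        System.out.println(listKeys(keys, "a", "a2", 5)); // [a2, a3]
    }
}
```

A paginating client would pass the last key of the previous page as the next call's startKey, which is the stateless-HTTP-front-end model discussed above.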
[jira] [Updated] (HDFS-11538) Move ClientProtocol HA proxies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11538: --- Resolution: Fixed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) Thanks again Huafeng. I've reverted HDFS-11431 from branch-2 and committed this one (HDFS-11538). [~djp] if you'd like this for 2.8.1, let us know. I think it's worthwhile. We'd need another JIRA to track the revert of HDFS-11431 though since 2.8.0 went out. > Move ClientProtocol HA proxies into hadoop-hdfs-client > -- > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Huafeng Wang >Priority: Blocker > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-11538.001.patch, HDFS-11538.002.patch, > HDFS-11538.003.patch, HDFS-11538-branch-2.001.patch > > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11628: - Target Version/s: 3.0.0-alpha3, 2.8.1 > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957524#comment-15957524 ] Mingliang Liu edited comment on HDFS-11628 at 4/5/17 7:47 PM: -- +1. [~ajisakaa] do you have 2nd opinion? was (Author: liuml07): +1 pending on Jenkins. [~ajisakaa] do you have 2nd opinion? > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957524#comment-15957524 ] Mingliang Liu commented on HDFS-11628: -- +1 pending on Jenkins. [~ajisakaa] do you have 2nd opinion? > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11538) Move ClientProtocol HA proxies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957513#comment-15957513 ] Hadoop QA commented on HDFS-11538: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 40s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 25s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 20 new + 629 unchanged - 26 fixed = 649 total (was 655) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_121. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 38s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.7.0_121 Failed junit tests | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Issue | HDFS-11538 | | JIRA Patch URL
[jira] [Commented] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957475#comment-15957475 ] Lukas Majercak commented on HDFS-11576: --- Thanks for the review [~hkoneru]. Submitted a new patch; fixed the UnderRecoveryBlocks#addRecoveryAttempt issue plus the codestyle issues. > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSynchronization after succeeding with first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
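The race in the scenario above — the NameNode mints a fresh recovery ID on every heartbeat, so a commit carrying the older ID is rejected — can be sketched with a tracker that reuses the in-flight attempt's ID instead of issuing a new one. The class and method names echo the JIRA's mention of {{UnderRecoveryBlocks#addRecoveryAttempt}}, but the logic below is an invented illustration, not the actual patch.

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    static class RecoveryTracker {
        // blockId -> recovery ID of the attempt currently in flight
        private final Map<Long, Long> inFlight = new HashMap<>();

        // Reuse the in-flight recovery ID if one exists; otherwise register
        // the fresh one. This prevents a second heartbeat from bumping the ID
        // while the first recovery is still running.
        long addRecoveryAttempt(long blockId, long freshId) {
            return inFlight.computeIfAbsent(blockId, k -> freshId);
        }

        // Commit succeeds only when the DataNode reports the ID we expect.
        boolean commit(long blockId, long recoveryId) {
            Long expected = inFlight.get(blockId);
            if (expected != null && expected == recoveryId) {
                inFlight.remove(blockId);
                return true;
            }
            return false; // stale or unknown recovery ID
        }
    }

    public static void main(String[] args) {
        RecoveryTracker t = new RecoveryTracker();
        long id1 = t.addRecoveryAttempt(1L, 100L); // first heartbeat
        long id2 = t.addRecoveryAttempt(1L, 101L); // second heartbeat reuses the attempt
        System.out.println(id1 == id2);            // true: no competing recovery ID
        System.out.println(t.commit(1L, id1));     // true: commit is not stale
    }
}
```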
[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-11576: -- Status: Patch Available (was: Open) > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3, 2.7.2, 2.7.1 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSyncronization after succeeding with first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-11576: -- Status: Open (was: Patch Available) > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.7.3, 2.7.2, 2.7.1 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSyncronization after succeeding with first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11576) Block recovery will fail indefinitely if recovery time > heartbeat interval
[ https://issues.apache.org/jira/browse/HDFS-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-11576: -- Attachment: HDFS-11576.003.patch > Block recovery will fail indefinitely if recovery time > heartbeat interval > --- > > Key: HDFS-11576 > URL: https://issues.apache.org/jira/browse/HDFS-11576 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs, namenode >Affects Versions: 2.7.1, 2.7.2, 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Lukas Majercak >Assignee: Lukas Majercak >Priority: Critical > Attachments: HDFS-11576.001.patch, HDFS-11576.002.patch, > HDFS-11576.003.patch, HDFS-11576.repro.patch > > > Block recovery will fail indefinitely if the time to recover a block is > always longer than the heartbeat interval. Scenario: > 1. DN sends heartbeat > 2. NN sends a recovery command to DN, recoveryID=X > 3. DN starts recovery > 4. DN sends another heartbeat > 5. NN sends a recovery command to DN, recoveryID=X+1 > 6. DN calls commitBlockSyncronization after succeeding with first recovery to > NN, which fails because X < X+1 > ... -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957405#comment-15957405 ] Hadoop QA commented on HDFS-11628: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11628 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862131/HDFS-11628.000.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 09085e0fb74b 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e8071aa | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18987/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957343#comment-15957343 ] Chen Liang edited comment on HDFS-11530 at 4/5/17 6:22 PM: --- Hi [~linyiqun], after revisiting the code, I have a question about the change in the earlier patch. I think when {{getInstance()}} is called, a fresh network topology instance is always created; this applies to both the old and the new {{NetworkTopology}} classes. Given this, I wonder, for classes where the new {{chooseRandom}} is *not* going to be called, is it necessary at all to switch to the new topology class? More specifically, in the changes from the v2 patch, {{Dispatcher}} switched to the new topology class, but it never calls the new {{chooseRandom}} method. So in this case, is it really needed to switch to the new topology at all? I dug into this while looking at the {{TestMover}} failure, which failed because it adds {{DatanodeInfo}}, instead of {{DatanodeDescriptor}}, to the topology. However {{DFSNetworkTopology}} requires a {{DatanodeDescriptor}} as the leaf node type, because this class is where storage type info lives. It appears to me that certain classes run with a slightly more abstracted datanode representation that is not {{DatanodeDescriptor}}, in which case a more detailed {{DFSNetworkTopology}} class might not be necessary. An alternative is to still replace with {{DFSNetworkTopology}}, but within {{DFSNetworkTopology}} it checks whether the node is a {{DatanodeInfo}}; if so, it does not do anything fancy with storage type info and pretty much just sticks to whatever it was in {{NetworkTopology}}. Any comments? cc. [~arpitagarwal]. was (Author: vagarychen): Hi [~linyiqun], after revisiting the code, I have a question about the change in the earlier patch. 
I think when {{getInstance()}} is called, it is always a fresh network topology instance being created, this applies to both the old and the new {{NetworkTopology}} classes. Given this, I wonder, for classes where the new {{chooseRandom}} is *not* going to be called, is it necessary at all to switch to new topology class ? More specifically, in the changes from v2 patch, {{DatanodeManager}} and {{Dispatcher}} switched to the new topology class, but they never called the new {{chooseRandom}} method. So in this case, is it really needed to switch to the new topology at all? Appears to me that only the changes to {{BlockPlacementPolicyDefault}} are required. I digged down to this when I was looking at {{TestMover}} fail, which failed because it adds {{DatanodeInfo}}, instead of {{DatanodeDescriptor}} to the topology. However {{DFSNetworkTopology}} requires a {{DatanodeDescriptor}} as the leaf nodes, because this class is where storage type info lives. This appears to me that certain classes run with a slightly more abstracted datanode representation that is not {{DatanodeDescriptor}}, in which case a more detailed {{DFSNetworkTopology}} class might not be necessary. An alternative way is to still replacing with {{DFSNetworkTopology}}, but within {{DFSNetworkTopology}}, it checks if it is {{DatanodeInfo}} instead, if yes, then it does not do anything fancy about storage type info, but pretty much just stick to whatever it was in {{NetworkTopology}}. Any comments? cc. [~arpitagarwal]. 
> Use HDFS specific network topology to choose datanode in > BlockPlacementPolicyDefault > > > Key: HDFS-11530 > URL: https://issues.apache.org/jira/browse/HDFS-11530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, > HDFS-11530.003.patch, HDFS-11530.004.patch > > > The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. > But this method is contained in new topology {{DFSNetworkTopology}} which is > specified for HDFS. We should update this and let > {{BlockPlacementPolicyDefault}} use the new way since the original way is > inefficient. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
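The alternative Chen sketches — keep {{DFSNetworkTopology}} everywhere, but fall back to plain {{NetworkTopology}} behavior when a leaf is only a {{DatanodeInfo}} rather than a {{DatanodeDescriptor}} — amounts to an instanceof check at the leaf. The class names below mirror the JIRA discussion, but the bodies are invented stand-ins for illustration only.

```java
public class Main {
    // Toy stand-ins: in HDFS, DatanodeDescriptor extends DatanodeInfo and is
    // the only representation that carries storage type information.
    static class DatanodeInfo { }
    static class DatanodeDescriptor extends DatanodeInfo {
        private final java.util.Set<String> storageTypes;
        DatanodeDescriptor(java.util.Set<String> storageTypes) {
            this.storageTypes = storageTypes;
        }
        boolean hasStorageType(String type) { return storageTypes.contains(type); }
    }

    // Storage-type-aware filtering when possible; otherwise fall back to
    // plain NetworkTopology behavior (accept the node, no filtering).
    static boolean isCandidate(DatanodeInfo node, String storageType) {
        if (node instanceof DatanodeDescriptor) {
            return ((DatanodeDescriptor) node).hasStorageType(storageType);
        }
        return true; // plain DatanodeInfo: no storage info to filter on
    }

    public static void main(String[] args) {
        DatanodeInfo plain = new DatanodeInfo();
        DatanodeDescriptor diskOnly = new DatanodeDescriptor(java.util.Set.of("DISK"));
        System.out.println(isCandidate(plain, "SSD"));    // true: no info, no filtering
        System.out.println(isCandidate(diskOnly, "SSD")); // false: DISK only
    }
}
```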
[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand
[ https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957359#comment-15957359 ] Anu Engineer commented on HDFS-11062: - [~yuanbo] Let us go ahead and commit this, I will make corresponding changes in my patches. > Ozone:SCM: Explore if we can remove nullcommand > --- > > Key: HDFS-11062 > URL: https://issues.apache.org/jira/browse/HDFS-11062 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Yuanbo Liu > Fix For: HDFS-7240 > > Attachments: HDFS-11062-HDFS-7240.001.patch, > HDFS-11062-HDFS-7240.002.patch > > > in SCM protocol we have a nullCommand that gets returned as the default case. > Explore if we can remove this. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11006) Ozone: support setting chunk size in streaming API
[ https://issues.apache.org/jira/browse/HDFS-11006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11006: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Target Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~linyiqun] Thanks for the contribution. I have committed this to the feature branch. > Ozone: support setting chunk size in streaming API > -- > > Key: HDFS-11006 > URL: https://issues.apache.org/jira/browse/HDFS-11006 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Yiqun Lin > Fix For: HDFS-7240 > > Attachments: HDFS-11006-HDFS-7240.001.patch, > HDFS-11006-HDFS-7240.002.patch, HDFS-11006-HDFS-7240.003.patch > > > Right now we have a hard coded chunk size, we should either have this read > from config or the user should be able to pass this to ChunkInputStream -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11628: - Status: Patch Available (was: Open) > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: documentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11628: - Attachment: HDFS-11628.000.patch Posted a patch. Will have a dry run to visualize the changes for a sanity check. > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: documentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957343#comment-15957343 ] Chen Liang commented on HDFS-11530: --- Hi [~linyiqun], after revisiting the code, I have a question about the change in the earlier patch. When {{getInstance()}} is called, a fresh network topology instance is always created; this applies to both the old and the new {{NetworkTopology}} classes. Given this, I wonder: for classes where the new {{chooseRandom}} is *not* going to be called, is it necessary to switch to the new topology class at all? More specifically, in the v2 patch, {{DatanodeManager}} and {{Dispatcher}} switched to the new topology class, but they never call the new {{chooseRandom}} method. So in those cases, is the switch really needed? It appears to me that only the changes to {{BlockPlacementPolicyDefault}} are required. I dug into this while looking at the {{TestMover}} failure, which failed because it adds {{DatanodeInfo}}, instead of {{DatanodeDescriptor}}, to the topology. However, {{DFSNetworkTopology}} requires {{DatanodeDescriptor}} leaf nodes, because that class is where the storage type info lives. It appears that certain classes run with a slightly more abstract datanode representation than {{DatanodeDescriptor}}, in which case the more detailed {{DFSNetworkTopology}} class might not be necessary. An alternative is to still replace with {{DFSNetworkTopology}}, but have it check whether a node is only a {{DatanodeInfo}}; if so, it does not do anything fancy with storage type info and simply sticks to the {{NetworkTopology}} behavior. Any comments? cc. [~arpitagarwal].
> Use HDFS specific network topology to choose datanode in > BlockPlacementPolicyDefault > > > Key: HDFS-11530 > URL: https://issues.apache.org/jira/browse/HDFS-11530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, > HDFS-11530.003.patch, HDFS-11530.004.patch > > > The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. > But this method is contained in the new topology class {{DFSNetworkTopology}}, which is > specific to HDFS. We should update this and let > {{BlockPlacementPolicyDefault}} use the new way since the original way is > inefficient. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
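The fallback idea in the comment above can be sketched roughly as follows. The simplified class hierarchy and the `supportsStorageTypeInfo` helper are illustrative assumptions for this sketch, not the actual {{DFSNetworkTopology}} API: only {{DatanodeDescriptor}} leaves would get the storage-type-aware handling, while plain {{DatanodeInfo}} leaves would fall back to the plain {{NetworkTopology}} behavior.

```java
// Hedged sketch of the proposed fallback, using a simplified hierarchy.
public class TopologyFallbackSketch {
    static class Node {}
    static class DatanodeInfo extends Node {}
    static class DatanodeDescriptor extends DatanodeInfo {}

    // Hypothetical helper: DatanodeDescriptor is where storage type info lives.
    static boolean supportsStorageTypeInfo(Node leaf) {
        return leaf instanceof DatanodeDescriptor;
    }

    static String addLeaf(Node leaf) {
        if (supportsStorageTypeInfo(leaf)) {
            return "storage-type-aware add";   // DFSNetworkTopology-style path
        }
        return "plain add";                    // plain NetworkTopology path
    }

    public static void main(String[] args) {
        if (!addLeaf(new DatanodeDescriptor()).equals("storage-type-aware add")) {
            throw new AssertionError();
        }
        if (!addLeaf(new DatanodeInfo()).equals("plain add")) {
            throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```

With a check like this, callers such as {{TestMover}} that add bare {{DatanodeInfo}} leaves would keep working, at the cost of the topology silently losing storage type awareness for those nodes.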
[jira] [Commented] (HDFS-11006) Ozone: support setting chunk size in streaming API
[ https://issues.apache.org/jira/browse/HDFS-11006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957336#comment-15957336 ] Anu Engineer commented on HDFS-11006: - +1, Thank you for taking care of this issue. I will commit this shortly. There are two minor language fixes. I will fix them while committing. I will change the messages to the ones below. DistributedHandler.java: {noformat} LOG.warn("The chunk size ({}) is not allowed to be more than" + " the maximum size ({})," + " resetting to the maximum size.", chunkSize, ScmConfigKeys.OZONE_SCM_CHUNK_MAX_SIZE); {noformat} ozone-defaults.xml: {noformat} The chunk size defaults to 1MB. If the value configured is more than the maximum size (1MB), it will be reset to the maximum size. {noformat} > Ozone: support setting chunk size in streaming API > -- > > Key: HDFS-11006 > URL: https://issues.apache.org/jira/browse/HDFS-11006 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Yiqun Lin > Attachments: HDFS-11006-HDFS-7240.001.patch, > HDFS-11006-HDFS-7240.002.patch, HDFS-11006-HDFS-7240.003.patch > > > Right now we have a hard coded chunk size, we should either have this read > from config or the user should be able to pass this to ChunkInputStream -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
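The clamping behavior the warning message describes can be sketched as below. The class and method names are illustrative, and the 1 MB constant stands in for {{ScmConfigKeys.OZONE_SCM_CHUNK_MAX_SIZE}}:

```java
// Hedged sketch: a configured chunk size above the maximum is reset to the
// maximum, matching the warning message quoted in the comment above.
public class ChunkSizeSketch {
    // Stand-in for ScmConfigKeys.OZONE_SCM_CHUNK_MAX_SIZE (1 MB).
    static final int MAX_CHUNK_SIZE = 1024 * 1024;

    static int effectiveChunkSize(int configured) {
        if (configured > MAX_CHUNK_SIZE) {
            // Real code logs a warning here before resetting.
            return MAX_CHUNK_SIZE;
        }
        return configured;
    }

    public static void main(String[] args) {
        // 4 MB is clamped to the 1 MB maximum; 64 KB passes through unchanged.
        if (effectiveChunkSize(4 * 1024 * 1024) != MAX_CHUNK_SIZE) {
            throw new AssertionError();
        }
        if (effectiveChunkSize(64 * 1024) != 64 * 1024) {
            throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```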
[jira] [Created] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
Xiaobing Zhou created HDFS-11628: Summary: Clarify behaviors of HDFS Mover in documentation Key: HDFS-11628 URL: https://issues.apache.org/jira/browse/HDFS-11628 Project: Hadoop HDFS Issue Type: Improvement Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou It's helpful to state that Mover always tries to move block replicas within the same node whenever possible. If that is not possible (e.g. when a node doesn’t have the target storage type) then it will copy the block replica to another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11625: Resolution: Fixed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~cheersyang] Thanks for the contribution. I have committed this to the feature branch. > Ozone: Fix UT failures that caused by hard coded datanode data dirs > --- > > Key: HDFS-11625 > URL: https://issues.apache.org/jira/browse/HDFS-11625 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: HDFS-7240 > > Attachments: HDFS-11625-HDFS-7240.001.patch, > HDFS-11625-HDFS-7240.002.patch, HDFS-11625-HDFS-7240.003.patch > > > There seems to be some UT regressions after HDFS-11519, such as > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > these tests set up datanode data dir by some hard coded names, such as > {code} > new File(cluster.getDataDirectory(), "data1"); > {code} > this no longer works since HDFS-11519 changes the pattern from > {code} > /data/data<2*dnIndex + 1> > /data/data<2*dnIndex + 2> > ... > {code} > to > {code} > /data/dn0_data0 > /data/dn0_data1 > /data/dn1_data0 > /data/dn1_data1 > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957241#comment-15957241 ] Anu Engineer commented on HDFS-11625: - Thank you for the contribution. I will commit this shortly. > Ozone: Fix UT failures that caused by hard coded datanode data dirs > --- > > Key: HDFS-11625 > URL: https://issues.apache.org/jira/browse/HDFS-11625 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11625-HDFS-7240.001.patch, > HDFS-11625-HDFS-7240.002.patch, HDFS-11625-HDFS-7240.003.patch > > > There seems to be some UT regressions after HDFS-11519, such as > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > these tests set up datanode data dir by some hard coded names, such as > {code} > new File(cluster.getDataDirectory(), "data1"); > {code} > this no longer works since HDFS-11519 changes the pattern from > {code} > /data/data<2*dnIndex + 1> > /data/data<2*dnIndex + 2> > ... > {code} > to > {code} > /data/dn0_data0 > /data/dn0_data1 > /data/dn1_data0 > /data/dn1_data1 > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
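The directory naming change described in HDFS-11625 can be expressed as a small helper, which is one way tests could derive data dir names instead of hard coding them. The helper names are assumptions for illustration, not MiniDFSCluster API:

```java
// Sketch of the two naming schemes, before and after HDFS-11519.
public class DataDirNames {
    // Pre-HDFS-11519: data<2*dnIndex + 1> and data<2*dnIndex + 2>.
    static String oldStyle(int dnIndex, int dirIndex) {
        return "data" + (2 * dnIndex + 1 + dirIndex);   // dirIndex is 0 or 1
    }

    // Post-HDFS-11519: dn<dnIndex>_data<dirIndex>.
    static String newStyle(int dnIndex, int dirIndex) {
        return "dn" + dnIndex + "_data" + dirIndex;
    }

    public static void main(String[] args) {
        // The first datanode's first dir moved from data1 to dn0_data0.
        if (!oldStyle(0, 0).equals("data1")) throw new AssertionError();
        if (!oldStyle(1, 1).equals("data4")) throw new AssertionError();
        if (!newStyle(0, 0).equals("dn0_data0")) throw new AssertionError();
        if (!newStyle(1, 1).equals("dn1_data1")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Computing names this way keeps tests such as TestDataNodeVolumeFailure from breaking again the next time the layout changes.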
[jira] [Updated] (HDFS-11538) Move ClientProtocol HA proxies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11538: --- Status: Patch Available (was: Reopened) > Move ClientProtocol HA proxies into hadoop-hdfs-client > -- > > Key: HDFS-11538 > URL: https://issues.apache.org/jira/browse/HDFS-11538 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 3.0.0-alpha1, 2.8.0 >Reporter: Andrew Wang >Assignee: Huafeng Wang >Priority: Blocker > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11538.001.patch, HDFS-11538.002.patch, > HDFS-11538.003.patch, HDFS-11538-branch-2.001.patch > > > Follow-up for HDFS-11431. We should move this missing class over rather than > pulling in the whole hadoop-hdfs dependency. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957183#comment-15957183 ] Andrew Wang commented on HDFS-11623: Yep, that was precisely my plan. Already, the ECPManager is responsible for managing the set of enabled policies, which is different from the full set of system policies. Should I rename the newly split {{ErasureCodingPolicies}} to {{SystemErasureCodingPolicies}} for clarity? > Move system erasure coding policies into hadoop-hdfs-client > --- > > Key: HDFS-11623 > URL: https://issues.apache.org/jira/browse/HDFS-11623 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch > > > This is a precursor to HDFS-11565. We need to move the set of system defined > EC policies out of the NameNode's ECPolicyManager into the hdfs-client module > so it can be referenced by the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957022#comment-15957022 ] Hadoop QA commented on HDFS-11625: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 10 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 50s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.ozone.scm.node.TestContainerPlacement | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11625 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862082/HDFS-11625-HDFS-7240.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1970e6ba99b5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / adc6510 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18985/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18985/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18985/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Fix UT failures that caused by hard coded datanode data dirs >
[jira] [Commented] (HDFS-10848) Move hadoop-hdfs-native-client module into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956986#comment-15956986 ] Allen Wittenauer commented on HDFS-10848: - bq. Is there some tweak we can make to test-patch.sh change detection? test-patch effectively highlights what the build tool reports. This is a maven/source tree design problem rather than a test-patch issue. If hadoop-hdfs-client depends upon hadoop-hdfs, I'm not exactly sure what benefit splitting it out is supposed to provide. > Move hadoop-hdfs-native-client module into hadoop-hdfs-client > - > > Key: HDFS-10848 > URL: https://issues.apache.org/jira/browse/HDFS-10848 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Akira Ajisaka >Assignee: Huafeng Wang > Attachments: HDFS-10848.001.patch > > > When a patch changes hadoop-hdfs-client module, Jenkins does not pick up the > tests in the native code. That way we overlooked test failure when committing > the patch. (ex. HDFS-10844) > [~aw] said in HDFS-10844, > bq. Ideally, all of this native code would be hdfs-client. Then when a change > is made to to that code, this code will also get tested. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11545) Propagate DataNode's slow disks info to the NameNode via Heartbeat
[ https://issues.apache.org/jira/browse/HDFS-11545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11545: - Fix Version/s: 2.9.0 +1 for the branch-2 patch. I've committed it after running the affected unit tests locally with JDK 7. Thanks [~hanishakoneru]. > Propagate DataNode's slow disks info to the NameNode via Heartbeat > -- > > Key: HDFS-11545 > URL: https://issues.apache.org/jira/browse/HDFS-11545 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-11545.001.patch, HDFS-11545.002.patch, > HDFS-11545.003.patch, HDFS-11545-branch-2.001.patch > > > DataNode detects the outliers (slow disks) among all its disk. This > information can be propagated to the NameNode so that the NameNode gets disk > information from all Datanodes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956932#comment-15956932 ] Arpit Agarwal commented on HDFS-11608: -- +1 for the fix. Thanks [~xiaobingo]. Still need to review the unit test. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > when writing a 3G file using a 3G block size, the HDFS client throws an out of memory > exception and the DataNode throws an IOException. After changing the heap size limit, a > DFSOutputStream ResponseProcessor exception is seen, followed by Broken pipe > and pipeline recovery. > Given below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
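The rejected value in the log above, 2147483128, is close to {{Integer.MAX_VALUE}}: a payload length computed from a multi-gigabyte block does not fit in a signed 32-bit int, so the DataNode's sanity check trips. A hedged sketch of that check follows; the 16 MB cap is an assumption for illustration, not the exact constant in {{PacketReceiver}}:

```java
// Hedged sketch of a receiver-side payload length sanity check.
public class PacketSizeSketch {
    // Assumed cap for illustration; the real limit lives in PacketReceiver.
    static final int MAX_PACKET_SIZE = 16 * 1024 * 1024;

    static void checkPayloadSize(int payloadLen) throws java.io.IOException {
        if (payloadLen < 0 || payloadLen > MAX_PACKET_SIZE) {
            throw new java.io.IOException(
                "Incorrect value for packet payload size: " + payloadLen);
        }
    }

    public static void main(String[] args) throws Exception {
        checkPayloadSize(64 * 1024);       // a normal-sized packet passes
        try {
            checkPayloadSize(2147483128);  // the bogus value from the DN log
            throw new AssertionError("expected IOException");
        } catch (java.io.IOException expected) {
            // the implausible size is rejected, as in the stack trace above
        }
        System.out.println("ok");
    }
}
```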
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956910#comment-15956910 ] Tsz Wo Nicholas Sze commented on HDFS-11625: +1 the 003 patch looks good. > Ozone: Fix UT failures that caused by hard coded datanode data dirs > --- > > Key: HDFS-11625 > URL: https://issues.apache.org/jira/browse/HDFS-11625 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11625-HDFS-7240.001.patch, > HDFS-11625-HDFS-7240.002.patch, HDFS-11625-HDFS-7240.003.patch > > > There seems to be some UT regressions after HDFS-11519, such as > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > these tests set up datanode data dir by some hard coded names, such as > {code} > new File(cluster.getDataDirectory(), "data1"); > {code} > this no longer works since HDFS-11519 changes the pattern from > {code} > /data/data<2*dnIndex + 1> > /data/data<2*dnIndex + 2> > ... > {code} > to > {code} > /data/dn0_data0 > /data/dn0_data1 > /data/dn1_data0 > /data/dn1_data1 > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956892#comment-15956892 ] Weiwei Yang commented on HDFS-11625: Thanks [~szetszwo] and [~msingh] for the comments. I have addressed [~msingh]'s comment and fixed 2 checkstyle issues in v3 patch. > Ozone: Fix UT failures that caused by hard coded datanode data dirs > --- > > Key: HDFS-11625 > URL: https://issues.apache.org/jira/browse/HDFS-11625 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11625-HDFS-7240.001.patch, > HDFS-11625-HDFS-7240.002.patch, HDFS-11625-HDFS-7240.003.patch > > > There seems to be some UT regressions after HDFS-11519, such as > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > these tests set up datanode data dir by some hard coded names, such as > {code} > new File(cluster.getDataDirectory(), "data1"); > {code} > this no longer works since HDFS-11519 changes the pattern from > {code} > /data/data<2*dnIndex + 1> > /data/data<2*dnIndex + 2> > ... > {code} > to > {code} > /data/dn0_data0 > /data/dn0_data1 > /data/dn1_data0 > /data/dn1_data1 > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11625: --- Attachment: HDFS-11625-HDFS-7240.003.patch > Ozone: Fix UT failures that caused by hard coded datanode data dirs > --- > > Key: HDFS-11625 > URL: https://issues.apache.org/jira/browse/HDFS-11625 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11625-HDFS-7240.001.patch, > HDFS-11625-HDFS-7240.002.patch, HDFS-11625-HDFS-7240.003.patch > > > There seems to be some UT regressions after HDFS-11519, such as > * TestDataNodeVolumeFailureToleration > * TestDataNodeVolumeFailureReporting > * TestDiskBalancerCommand > * TestBlockStatsMXBean > * TestDataNodeVolumeMetrics > * TestDFSAdmin > * TestDataNodeHotSwapVolumes > * TestDataNodeVolumeFailure > these tests set up datanode data dir by some hard coded names, such as > {code} > new File(cluster.getDataDirectory(), "data1"); > {code} > this no longer works since HDFS-11519 changes the pattern from > {code} > /data/data<2*dnIndex + 1> > /data/data<2*dnIndex + 2> > ... > {code} > to > {code} > /data/dn0_data0 > /data/dn0_data1 > /data/dn1_data0 > /data/dn1_data1 > ... > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11627) Block Storage: Cblock cache should register with flusher to upload blocks to containers
Mukul Kumar Singh created HDFS-11627: Summary: Block Storage: Cblock cache should register with flusher to upload blocks to containers Key: HDFS-11627 URL: https://issues.apache.org/jira/browse/HDFS-11627 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh Cblock cache should register with the flusher to upload blocks to containers. Currently the Container Cache flusher tries to write to the container even when the CblockLocalCache pipelines are not registered with the flusher, which causes the container writes to fail. CblockLocalCache should register with the flusher before accepting any blocks for write. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
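The ordering constraint in HDFS-11627 can be sketched as a register-before-write guard. All class and method names below are illustrative stand-ins, not the actual CBlock code:

```java
// Hedged sketch: the cache must register its pipeline with the flusher
// before any block write is accepted, otherwise the write is rejected.
import java.util.HashSet;
import java.util.Set;

public class FlusherSketch {
    static class Flusher {
        private final Set<String> registered = new HashSet<>();
        void register(String pipeline) { registered.add(pipeline); }
        boolean isRegistered(String pipeline) { return registered.contains(pipeline); }
    }

    static class LocalCache {
        private final Flusher flusher;
        private final String pipeline;

        LocalCache(Flusher flusher, String pipeline) {
            this.flusher = flusher;
            this.pipeline = pipeline;
        }

        // Registration happens in start(), before any writeBlock() call.
        void start() {
            flusher.register(pipeline);
        }

        void writeBlock(byte[] data) {
            if (!flusher.isRegistered(pipeline)) {
                throw new IllegalStateException("pipeline not registered with flusher");
            }
            // ... cache the block; the flusher uploads it to a container later ...
        }
    }

    public static void main(String[] args) {
        Flusher flusher = new Flusher();
        LocalCache cache = new LocalCache(flusher, "pipeline-1");
        try {
            cache.writeBlock(new byte[4096]);   // write before start() must fail
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) { }
        cache.start();
        cache.writeBlock(new byte[4096]);       // now the write is accepted
        System.out.println("ok");
    }
}
```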
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956749#comment-15956749 ] Hadoop QA commented on HDFS-11625:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 10 new or modified test files. |
| +1 | mvninstall | 14m 37s | HDFS-7240 passed |
| +1 | compile | 0m 50s | HDFS-7240 passed |
| +1 | checkstyle | 0m 38s | HDFS-7240 passed |
| +1 | mvnsite | 0m 55s | HDFS-7240 passed |
| +1 | mvneclipse | 0m 14s | HDFS-7240 passed |
| +1 | findbugs | 1m 57s | HDFS-7240 passed |
| +1 | javadoc | 0m 51s | HDFS-7240 passed |
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 0m 46s | the patch passed |
| +1 | javac | 0m 46s | the patch passed |
| -0 | checkstyle | 0m 36s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 96 unchanged - 0 fixed = 98 total (was 96) |
| +1 | mvnsite | 0m 52s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 3s | the patch passed |
| +1 | javadoc | 0m 47s | the patch passed |
| -1 | unit | 65m 55s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 94m 2s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
| | hadoop.ozone.scm.node.TestContainerPlacement |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11625 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862054/HDFS-11625-HDFS-7240.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux ec610078c23d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / adc6510 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18984/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18984/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18984/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18984/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Ozone: Fix UT failures that caused by hard coded datanode data dirs
[jira] [Updated] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-11625:
---------------------------------------
Hadoop Flags: Reviewed
Component/s: test

+1 patch looks good. Thanks a lot!

> Ozone: Fix UT failures that caused by hard coded datanode data dirs
> -------------------------------------------------------------------
>
> Key: HDFS-11625
> URL: https://issues.apache.org/jira/browse/HDFS-11625
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone, test
> Affects Versions: HDFS-7240
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-11625-HDFS-7240.001.patch, HDFS-11625-HDFS-7240.002.patch
>
> There seems to be some UT regressions after HDFS-11519, such as
> * TestDataNodeVolumeFailureToleration
> * TestDataNodeVolumeFailureReporting
> * TestDiskBalancerCommand
> * TestBlockStatsMXBean
> * TestDataNodeVolumeMetrics
> * TestDFSAdmin
> * TestDataNodeHotSwapVolumes
> * TestDataNodeVolumeFailure
> these tests set up datanode data dir by some hard coded names, such as
> {code}
> new File(cluster.getDataDirectory(), "data1");
> {code}
> this no longer works since HDFS-11519 changes the pattern from
> {code}
> /data/data<2*dnIndex + 1>
> /data/data<2*dnIndex + 2>
> ...
> {code}
> to
> {code}
> /data/dn0_data0
> /data/dn0_data1
> /data/dn1_data0
> /data/dn1_data1
> ...
> {code}

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
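The naming change quoted above suggests the shape of the fix: derive the data-dir name from the datanode and volume indices instead of hard-coding "data1". The sketch below is an illustration of that idea only; the helper names `oldStyleDir` and `newStyleDir` are invented here and are not part of MiniDFSCluster's actual API.

```java
import java.io.File;

// Illustrative sketch: derive datanode data-dir names instead of hard-coding
// them, following the naming change described in HDFS-11519. The helper
// method names are assumptions, not real MiniDFSCluster methods.
public class DataDirNames {
    // Old scheme: /data/data<2*dnIndex + 1>, /data/data<2*dnIndex + 2>, ...
    public static String oldStyleDir(int dnIndex, int volIndex) {
        return "data" + (2 * dnIndex + 1 + volIndex);
    }

    // New scheme after HDFS-11519: /data/dn<dnIndex>_data<volIndex>
    public static String newStyleDir(int dnIndex, int volIndex) {
        return "dn" + dnIndex + "_data" + volIndex;
    }

    public static void main(String[] args) {
        // A test that previously did: new File(cluster.getDataDirectory(), "data1")
        // would now resolve the first volume of the first datanode like this:
        File base = new File("/data");
        System.out.println(new File(base, newStyleDir(0, 0)));
    }
}
```

A test written against `newStyleDir` keeps working if the volume count per datanode changes, which is exactly what the hard-coded "data1" style could not survive.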
[jira] [Commented] (HDFS-11623) Move system erasure coding policies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956732#comment-15956732 ] Kai Zheng commented on HDFS-11623:
----------------------------------

Thanks [~andrew.wang] for the work! I haven't had a chance to read the patch yet, so just a quick question for now: it looks to me like the current {{ErasureCodingPolicyManager}} is still needed, and what we might want is something like {{System/BuiltInErasureCodingPolicies}} on the client side. That is, the manager would handle both the system (built-in) policies and user-defined policies via the plugin mechanism. For client-side code and tests, it may be enough to know only the built-in policies; on the NN server side and in related tests, it may be better to keep using the manager.

> Move system erasure coding policies into hadoop-hdfs-client
> -----------------------------------------------------------
>
> Key: HDFS-11623
> URL: https://issues.apache.org/jira/browse/HDFS-11623
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha2
> Reporter: Andrew Wang
> Assignee: Andrew Wang
> Attachments: HDFS-11623.001.patch, HDFS-11623.002.patch
>
> This is a precursor to HDFS-11565. We need to move the set of system defined EC policies out of the NameNode's ECPolicyManager into the hdfs-client module so it can be referenced by the client.
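The split Kai suggests, a fixed client-side holder for the built-in policies that a server-side manager can layer user-defined policies on top of, could look roughly like the sketch below. The class name, the nested `Policy` stand-in, and the policy names are all assumptions for illustration, not the actual HDFS-11623 patch.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hedged sketch of a client-side, read-only holder for built-in EC policies.
// All names here (class, Policy stand-in, policy strings) are hypothetical.
public class SystemErasureCodingPolicies {
    // Minimal stand-in for the real ErasureCodingPolicy class.
    static final class Policy {
        final String name;
        Policy(String name) { this.name = name; }
    }

    // The built-in set is fixed at compile time, so it can live in
    // hadoop-hdfs-client without dragging in the NN-side manager.
    private static final List<Policy> SYS_POLICIES =
        Collections.unmodifiableList(Arrays.asList(
            new Policy("RS-6-3-64k"),
            new Policy("RS-3-2-64k"),
            new Policy("XOR-2-1-64k")));

    public static List<Policy> getPolicies() {
        return SYS_POLICIES;
    }

    public static Policy getByName(String name) {
        for (Policy p : SYS_POLICIES) {
            if (p.name.equals(name)) {
                return p;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        for (Policy p : getPolicies()) {
            System.out.println(p.name);
        }
    }
}
```

Under this split, client code and tests resolve policies through the static holder, while the server-side manager would consult the same holder first and then its plugin-loaded user-defined policies.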
[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand
[ https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956717#comment-15956717 ] Hadoop QA commented on HDFS-11062:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 28s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 15m 28s | HDFS-7240 passed |
| +1 | compile | 0m 58s | HDFS-7240 passed |
| +1 | checkstyle | 0m 47s | HDFS-7240 passed |
| +1 | mvnsite | 1m 18s | HDFS-7240 passed |
| +1 | mvneclipse | 0m 17s | HDFS-7240 passed |
| +1 | findbugs | 2m 35s | HDFS-7240 passed |
| +1 | javadoc | 1m 1s | HDFS-7240 passed |
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 1m 8s | the patch passed |
| +1 | cc | 1m 8s | the patch passed |
| +1 | javac | 1m 8s | the patch passed |
| -0 | checkstyle | 0m 44s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 11 unchanged - 0 fixed = 13 total (was 11) |
| +1 | mvnsite | 1m 16s | the patch passed |
| +1 | mvneclipse | 0m 14s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 26s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
| -1 | unit | 79m 6s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 111m 46s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.cblock.TestCBlockCLI |
| | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.cblock.TestCBlockServer |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.ozone.scm.node.TestContainerPlacement |
| | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.ozone.container.common.TestDatanodeStateMachine |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeMetrics |
| | hadoop.hdfs.TestDFSUpgradeFromImage |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11062 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862048/HDFS-11062-HDFS-7240.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 7670b01a3bee 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
[jira] [Created] (HDFS-11626) Deprecate oiv_legacy tool
Wei-Chiu Chuang created HDFS-11626:
--------------------------------------

Summary: Deprecate oiv_legacy tool
Key: HDFS-11626
URL: https://issues.apache.org/jira/browse/HDFS-11626
Project: Hadoop HDFS
Issue Type: Task
Components: tools
Reporter: Wei-Chiu Chuang

oiv_legacy only works for fsimages written at or before Hadoop 2.4. I think we can deprecate oiv_legacy in Hadoop 3 in preparation for final removal in Hadoop 4. Or would people favor removing it in Hadoop 3?
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956644#comment-15956644 ] Hadoop QA commented on HDFS-11625:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 9 new or modified test files. |
| +1 | mvninstall | 14m 36s | HDFS-7240 passed |
| +1 | compile | 0m 49s | HDFS-7240 passed |
| +1 | checkstyle | 0m 38s | HDFS-7240 passed |
| +1 | mvnsite | 0m 56s | HDFS-7240 passed |
| +1 | mvneclipse | 0m 14s | HDFS-7240 passed |
| +1 | findbugs | 1m 56s | HDFS-7240 passed |
| +1 | javadoc | 0m 50s | HDFS-7240 passed |
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 0m 47s | the patch passed |
| +1 | javac | 0m 47s | the patch passed |
| -0 | checkstyle | 0m 35s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 96 unchanged - 0 fixed = 98 total (was 96) |
| +1 | mvnsite | 0m 51s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 1s | the patch passed |
| +1 | javadoc | 0m 49s | the patch passed |
| -1 | unit | 71m 14s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 99m 13s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics |
| | hadoop.hdfs.TestDFSUpgradeFromImage |
| | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
| | hadoop.ozone.container.common.TestDatanodeStateMachine |
| | hadoop.ozone.scm.node.TestContainerPlacement |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11625 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862040/HDFS-11625-HDFS-7240.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux dd35b56a44d9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / adc6510 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18982/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18982/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18982/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Updated] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11625:
-------------------------------
Attachment: HDFS-11625-HDFS-7240.002.patch

> Ozone: Fix UT failures that caused by hard coded datanode data dirs
> -------------------------------------------------------------------
>
> Key: HDFS-11625
> URL: https://issues.apache.org/jira/browse/HDFS-11625
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-11625-HDFS-7240.001.patch, HDFS-11625-HDFS-7240.002.patch
>
> There seems to be some UT regressions after HDFS-11519, such as
> * TestDataNodeVolumeFailureToleration
> * TestDataNodeVolumeFailureReporting
> * TestDiskBalancerCommand
> * TestBlockStatsMXBean
> * TestDataNodeVolumeMetrics
> * TestDFSAdmin
> * TestDataNodeHotSwapVolumes
> * TestDataNodeVolumeFailure
> these tests set up datanode data dir by some hard coded names, such as
> {code}
> new File(cluster.getDataDirectory(), "data1");
> {code}
> this no longer works since HDFS-11519 changes the pattern from
> {code}
> /data/data<2*dnIndex + 1>
> /data/data<2*dnIndex + 2>
> ...
> {code}
> to
> {code}
> /data/dn0_data0
> /data/dn0_data1
> /data/dn1_data0
> /data/dn1_data1
> ...
> {code}
[jira] [Commented] (HDFS-11625) Ozone: Fix UT failures that caused by hard coded datanode data dirs
[ https://issues.apache.org/jira/browse/HDFS-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956631#comment-15956631 ] Mukul Kumar Singh commented on HDFS-11625:
------------------------------------------

[~cheersyang] This patch looks great and will help in fixing most of the failures. I have two minor comments, +1 otherwise.

1) TestCBlockCLI: there should be a space between the "!=" and "null"
{code}
if (cBlockManager !=null) {
{code}
2) TestDataNodeVolumeFailureToleration: dataDir can be removed as it is not accessed anywhere in the code

> Ozone: Fix UT failures that caused by hard coded datanode data dirs
> -------------------------------------------------------------------
>
> Key: HDFS-11625
> URL: https://issues.apache.org/jira/browse/HDFS-11625
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-11625-HDFS-7240.001.patch
>
> There seems to be some UT regressions after HDFS-11519, such as
> * TestDataNodeVolumeFailureToleration
> * TestDataNodeVolumeFailureReporting
> * TestDiskBalancerCommand
> * TestBlockStatsMXBean
> * TestDataNodeVolumeMetrics
> * TestDFSAdmin
> * TestDataNodeHotSwapVolumes
> * TestDataNodeVolumeFailure
> these tests set up datanode data dir by some hard coded names, such as
> {code}
> new File(cluster.getDataDirectory(), "data1");
> {code}
> this no longer works since HDFS-11519 changes the pattern from
> {code}
> /data/data<2*dnIndex + 1>
> /data/data<2*dnIndex + 2>
> ...
> {code}
> to
> {code}
> /data/dn0_data0
> /data/dn0_data1
> /data/dn1_data0
> /data/dn1_data1
> ...
> {code}
[jira] [Updated] (HDFS-11538) Move ClientProtocol HA proxies into hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-11538:
--------------------------------
Attachment: HDFS-11538-branch-2.001.patch

> Move ClientProtocol HA proxies into hadoop-hdfs-client
> ------------------------------------------------------
>
> Key: HDFS-11538
> URL: https://issues.apache.org/jira/browse/HDFS-11538
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Affects Versions: 2.8.0, 3.0.0-alpha1
> Reporter: Andrew Wang
> Assignee: Huafeng Wang
> Priority: Blocker
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11538.001.patch, HDFS-11538.002.patch, HDFS-11538.003.patch, HDFS-11538-branch-2.001.patch
>
> Follow-up for HDFS-11431. We should move this missing class over rather than pulling in the whole hadoop-hdfs dependency.
[jira] [Commented] (HDFS-11062) Ozone:SCM: Explore if we can remove nullcommand
[ https://issues.apache.org/jira/browse/HDFS-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956609#comment-15956609 ] Yuanbo Liu commented on HDFS-11062:
-----------------------------------

{quote}
SCMTestMock.java
{quote}
Addressed.

{quote}
StorageContainerManager.java
{quote}
Let's evaluate it after HDFS-11493 is committed.

> Ozone:SCM: Explore if we can remove nullcommand
> -----------------------------------------------
>
> Key: HDFS-11062
> URL: https://issues.apache.org/jira/browse/HDFS-11062
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Yuanbo Liu
> Fix For: HDFS-7240
>
> Attachments: HDFS-11062-HDFS-7240.001.patch, HDFS-11062-HDFS-7240.002.patch
>
> in SCM protocol we have a nullCommand that gets returned as the default case. Explore if we can remove this.