[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads
[ https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618551#comment-16618551 ] Hadoop QA commented on HDFS-13926:
--

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 42s | trunk passed |
| +1 | compile | 3m 47s | trunk passed |
| +1 | checkstyle | 0m 55s | trunk passed |
| +1 | mvnsite | 1m 36s | trunk passed |
| +1 | shadedclient | 12m 9s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 25s | trunk passed |
| +1 | javadoc | 1m 12s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 34s | the patch passed |
| +1 | compile | 2m 49s | the patch passed |
| -1 | javac | 2m 49s | hadoop-hdfs-project generated 2 new + 467 unchanged - 0 fixed = 469 total (was 467) |
| -0 | checkstyle | 0m 57s | hadoop-hdfs-project: The patch generated 4 new + 77 unchanged - 2 fixed = 81 total (was 79) |
| +1 | mvnsite | 1m 37s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 9s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 47s | the patch passed |
| +1 | javadoc | 1m 8s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 37s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 102m 59s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 168m 59s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
| | hadoop.hdfs.TestExternalBlockReader |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |
| | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.hdfs.server.namenode.sps.TestBlockStorageMovementAttemptedItems |
| | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
| | hadoop.hdfs.server.namenode.TestFsck |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13926 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940120/HDFS
[jira] [Comment Edited] (HDDS-490) Improve om and scm start up options
[ https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618548#comment-16618548 ] Elek, Marton edited comment on HDDS-490 at 9/18/18 6:34 AM: 1. I would prefer the more GNU-style --create-object-store (no camel case). 2. It would be easier for me to use the same option for both scm and om (--format or --init could be generic; with picocli it could be a secondary alias): scm --format / om --format. 3. Please allow running it multiple times without exiting with a non-zero exit code. This is very important for the kubernetes + containerized world. (In kubernetes there is an option to define an init container. We can add 'ozone om --format' to it, but it should also work on the second run.) I propose to introduce a new argument, something like '--if-not-exists', in which case no error should be thrown. You can suggest a better name ('--if-missing'?). By default it could be false, and the default behaviour could be to fail (as described in 3). was (Author: elek): 1. I would prefer to use the more GNU style --create-object-store. (no camel case) 2. Would be easier for me to use the same option for scm and om (--format could be a generic or --init. With picocli it could be a secondary alias.) scm --format / om --format 3. Please allow the run multiple times without exiting with exit code. It's very important for kubernetes + containerized word. (In kubernetes there is an option to define an init container. We can add 'ozone om \--format' to it, but it should work at the second ran.) I propose to introduce a new argument, something like '--if-not-exists', and in that case error should not be thrown. You can suggest better name ('--if-missing' ?). By default it could be false and the default behaviour could be the failing (as described in 3). 
> Improve om and scm start up options > > > Key: HDDS-490 > URL: https://issues.apache.org/jira/browse/HDDS-490 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Namit Maheshwari >Assignee: Namit Maheshwari >Priority: Major > Labels: incompatible > > I propose the following changes: > # Rename createObjectStore to format > # Change the flag to use --createObjectStore instead of using > -createObjectStore. It is also applicable to other scm and om startup options. > # Fail to format existing object store. If a user runs: > {code:java} > ozone om -createObjectStore{code} > And there is already an object store, it should give a warning message and > exit the process. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
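The idempotent-init behaviour discussed in point 3 of the comment above (a `--if-not-exists` style flag that turns a second `ozone om --init` run into a no-op instead of a failure) can be sketched roughly as follows. This is a hypothetical illustration, not the actual Ozone code; the class name, method, and directory layout are invented for the example:

```java
import java.io.File;

/** Sketch of idempotent object-store initialisation (names are hypothetical). */
public class OmInitSketch {

  /**
   * Returns an exit code: 0 on success. If the store already exists, fail
   * with 1 by default; with ifNotExists set, treat the second run as a
   * harmless no-op -- the behaviour wanted for kubernetes init containers.
   */
  static int init(File storeDir, boolean ifNotExists) {
    if (storeDir.exists()) {
      if (ifNotExists) {
        System.out.println("Object store already present, nothing to do.");
        return 0;               // safe to run repeatedly
      }
      System.err.println("Object store already exists, refusing to format.");
      return 1;                 // default: fail loudly
    }
    if (!storeDir.mkdirs()) {
      return 1;                 // could not create the directory
    }
    System.out.println("Object store created at " + storeDir);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    File dir = File.createTempFile("omstore", "").getAbsoluteFile();
    dir.delete();                          // reuse the unique path as a dir
    System.out.println(init(dir, false));  // first run: creates the store
    System.out.println(init(dir, false));  // second run: fails with 1
    System.out.println(init(dir, true));   // with --if-not-exists: no-op, 0
  }
}
```

In an init container this would let the same `ozone om --init --if-not-exists` command run on every pod restart without aborting the pod.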
[jira] [Commented] (HDFS-13908) TestDataNodeMultipleRegistrations is flaky
[ https://issues.apache.org/jira/browse/HDFS-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618546#comment-16618546 ] Hadoop QA commented on HDFS-13908:
--

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 20s | trunk passed |
| +1 | compile | 1m 8s | trunk passed |
| +1 | checkstyle | 0m 56s | trunk passed |
| +1 | mvnsite | 1m 16s | trunk passed |
| +1 | shadedclient | 14m 16s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 9s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 17s | the patch passed |
| +1 | compile | 1m 9s | the patch passed |
| +1 | javac | 1m 9s | the patch passed |
| +1 | checkstyle | 0m 57s | the patch passed |
| +1 | mvnsite | 1m 14s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 25s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 59s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 98m 20s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 160m 45s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.server.namenode.TestFsck |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13908 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940115/HDFS-13908-04.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a05f69552fdb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ee051ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/25088/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25088/testReport/ |
| Max. process+thread count | 3479 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
[jira] [Comment Edited] (HDDS-441) Create new s3gateway daemon
[ https://issues.apache.org/jira/browse/HDDS-441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618538#comment-16618538 ] Elek, Marton edited comment on HDDS-441 at 9/18/18 6:19 AM: The patch requires HDDS-447. was (Author: elek): The patch required HDDS-447. > Create new s3gateway daemon > --- > > Key: HDDS-441 > URL: https://issues.apache.org/jira/browse/HDDS-441 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: newbie > Attachments: HDDS-441.001.patch > > > The first thing we need is a new command-line application to start the > s3 gateway. > 1. A new project should be introduced: hadoop-ozone/s3-gateway > 2. A new command line application (e.g. org.apache.hadoop.ozone.s3.Gateway) > should be added, with a simple main and start/stop methods that just print > out a starting/stopping log message > 3. dev-support/bin/ozone-distlayout-stitching should be modified to copy > the jar files from the new project > 4. hadoop-ozone/common/src/main/bin/ozone should be modified to manage the > new service (e.g. ozone s3g start, ozone s3g stop) > 5. to make it easier to test, a new docker-compose based test cluster should > be added to hadoop-dist/src/main/compose (the normal ./ozone could be > copied, but we need to add the new s3g component)
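Step 2 of the issue above (a CLI entry point whose start/stop methods only print a log message) amounts to a skeleton like the following. This is a minimal sketch of what the issue describes, not the actual `org.apache.hadoop.ozone.s3.Gateway` source:

```java
/**
 * Minimal sketch of the proposed s3 gateway entry point: a main() plus
 * start/stop methods that just log, as described in the issue.
 * (Illustrative only -- the real class would live under hadoop-ozone/s3-gateway.)
 */
public class Gateway {

  /** Prints and returns the start message. */
  public String start() {
    String msg = "Starting S3 Gateway";
    System.out.println(msg);
    return msg;
  }

  /** Prints and returns the stop message. */
  public String stop() {
    String msg = "Stopping S3 Gateway";
    System.out.println(msg);
    return msg;
  }

  public static void main(String[] args) {
    Gateway gw = new Gateway();
    gw.start();
    // Log the stop message when the daemon is terminated.
    Runtime.getRuntime().addShutdownHook(new Thread(gw::stop));
  }
}
```

The `ozone s3g start` / `ozone s3g stop` wiring from step 4 would then just invoke this main class from the shell script.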
[jira] [Updated] (HDDS-476) Add Pipeline reports to make pipeline active on SCM restart
[ https://issues.apache.org/jira/browse/HDDS-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-476: --- Attachment: HDDS-476.002.patch > Add Pipeline reports to make pipeline active on SCM restart > --- > > Key: HDDS-476 > URL: https://issues.apache.org/jira/browse/HDDS-476 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Blocker > Fix For: 0.2.1 > > Attachments: HDDS-476.001.patch, HDDS-476.002.patch > > > Creating this jira as a followup to HDDS-399. This jira proposes to add > pipeline reports so that SCM can identify healthy pipelines on restart and > reconstruct them. > This jira was created to simplify review of HDDS-399.
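The idea behind the patch above -- datanodes report the pipelines they participate in, so that SCM can rebuild its pipeline view after a restart -- can be sketched roughly as below. All names here are hypothetical illustrations, not the actual HDDS-476 code:

```java
import java.util.*;

/** Rough sketch: rebuilding SCM's view of pipelines from datanode reports. */
public class PipelineReportSketch {

  // pipelineId -> set of datanodes that reported membership in it
  private final Map<String, Set<String>> pipelines = new HashMap<>();

  /** Called once per datanode report carrying that node's pipeline list. */
  public void onPipelineReport(String datanode, List<String> pipelineIds) {
    for (String id : pipelineIds) {
      pipelines.computeIfAbsent(id, k -> new HashSet<>()).add(datanode);
    }
  }

  /** Consider a pipeline healthy once all of its replicas have reported. */
  public boolean isHealthy(String pipelineId, int replicationFactor) {
    return pipelines.getOrDefault(pipelineId, Collections.emptySet())
        .size() >= replicationFactor;
  }
}
```

With logic of this shape, a restarted SCM only reactivates pipelines whose full replica set has checked back in, instead of losing them.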
[jira] [Commented] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618510#comment-16618510 ] Hudson commented on HDDS-491: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14987 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14987/]) HDDS-491. Minor typos in README.md in smoketest. Contributed by chencan. (bharat: rev 51fda2d7733a17a22f68c1c57b0ada062b713620) * (edit) hadoop-dist/src/main/smoketest/README.md > Minor typos in README.md in smoketest > - > > Key: HDDS-491 > URL: https://issues.apache.org/jira/browse/HDDS-491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Trivial > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-491.001.patch > > > File: hadoop-dist/src/main/smoketest/README.md > Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. > Line 30: ozone standalon should be changed to ozone standalone
[jira] [Updated] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"
[ https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDDS-451: -- Target Version/s: (was: 0.2.1) > PutKey failed due to error "Rejecting write chunk request. Chunk overwrite > without explicit request" > > > Key: HDDS-451 > URL: https://issues.apache.org/jira/browse/HDDS-451 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Client >Affects Versions: 0.2.1 >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Blocker > Attachments: all-node-ozone-logs-1536841590.tar.gz > > > steps taken : > -- > # Ran Put Key command to write 50GB data. Put Key client operation failed > after 17 mins. > error seen ozone.log : > > > {code} > 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG > (ChunkManagerImpl.java:85) - writing > chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1 > chunk stage:COMMIT_DATA chunk > file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1 > tmp chunk file > 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - > writing > chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2 > chunk stage:WRITE_DATA chunk > file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2 > tmp chunk file > 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG > (ChunkManagerImpl.java:85) - writing > chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2 > chunk stage:COMMIT_DATA chunk > 
file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2 > tmp chunk file > 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG > (DatanodeStateMachine.java:148) - Executing cycle Number : 206 > 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG > (DatanodeStateMachine.java:148) - Executing cycle Number : 207 > 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG > (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next > container, there is no pending deletion block contained in remaining > containers. > 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG > (ContainerSet.java:191) - Starting container report iteration. > 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - > Rejecting write chunk request. Chunk overwrite without explicit request. > ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2, > offset=0, len=16777216} > 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - > Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : > Message: Rejecting write chunk request. OverWrite flag > required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2, > offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED > 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR > (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite > without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2, > offset=0, len=16777216} > 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO > (ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: > 54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk > request. OverWrite flag > required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2, > offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED > > {code} >
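The `OVERWRITE_FLAG_REQUIRED` rejection in the log above is essentially an existence check: a chunk write is refused when the target chunk file already exists and the request did not explicitly ask for an overwrite. A simplified sketch of a guard with that shape (hypothetical names, not the actual ChunkUtils code):

```java
import java.io.File;
import java.io.IOException;

/** Sketch of the overwrite guard behind the OVERWRITE_FLAG_REQUIRED error. */
public class ChunkOverwriteCheckSketch {

  /**
   * Returns true if the write may proceed. A pre-existing chunk file is
   * only writable again when the client explicitly requested an overwrite.
   */
  static boolean mayWriteChunk(File chunkFile, boolean overwriteRequested) {
    if (chunkFile.exists() && !overwriteRequested) {
      System.err.println("Rejecting write chunk request. Chunk overwrite "
          + "without explicit request: " + chunkFile.getName());
      return false;
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    File chunk = File.createTempFile("chunk", ".tmp"); // already exists
    System.out.println(mayWriteChunk(chunk, false));   // rejected
    System.out.println(mayWriteChunk(chunk, true));    // explicit overwrite
    chunk.delete();
  }
}
```

The failure mode reported in this issue would then be a client retrying a chunk it had already written, without setting the overwrite flag.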
[jira] [Updated] (HDDS-461) container remains in CLOSING state in SCM forever
[ https://issues.apache.org/jira/browse/HDDS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDDS-461: -- Fix Version/s: (was: 0.2.1) > container remains in CLOSING state in SCM forever > - > > Key: HDDS-461 > URL: https://issues.apache.org/jira/browse/HDDS-461 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Affects Versions: 0.2.1 >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDDS-461.00.patch, all-node-ozone-logs-1536920345.tar.gz > > > Container id # 13's state is not changing from CLOSING to CLOSED. > {noformat} > [root@ctr-e138-1518143905142-459606-01-02 bin]# ./ozone scmcli info 13 > raft.rpc.type = GRPC (default) > raft.grpc.message.size.max = 33554432 (custom) > raft.client.rpc.retryInterval = 300 ms (default) > raft.client.async.outstanding-requests.max = 100 (default) > raft.client.async.scheduler-threads = 3 (default) > raft.grpc.flow.control.window = 1MB (=1048576) (default) > raft.grpc.message.size.max = 33554432 (custom) > raft.client.rpc.request.timeout = 3000 ms (default) > Container id: 13 > Container State: OPEN > Container Path: > /tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/13/metadata > Container Metadata: > LeaderID: ctr-e138-1518143905142-459606-01-03.hwx.site > Datanodes: > [ctr-e138-1518143905142-459606-01-07.hwx.site,ctr-e138-1518143905142-459606-01-08.hwx.site,ctr-e138-1518143905142-459606-01-03.hwx.site]{noformat} > > snippet of scmcli list : > {noformat} > { > "state" : "CLOSING", > "replicationFactor" : "THREE", > "replicationType" : "RATIS", > "allocatedBytes" : 4831838208, > "usedBytes" : 4831838208, > "numberOfKeys" : 0, > "lastUsed" : 4391827471, > "stateEnterTime" : 5435591457, > "owner" : "f8332db1-b8b1-4077-a9ea-097033d074b7", > "containerID" : 13, > "deleteTransactionId" : 0, > "containerOpen" : true > }{noformat}
[jira] [Updated] (HDDS-461) container remains in CLOSING state in SCM forever
[ https://issues.apache.org/jira/browse/HDDS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDDS-461: -- Target Version/s: 0.2.1
[jira] [Updated] (HDDS-461) container remains in CLOSING state in SCM forever
[ https://issues.apache.org/jira/browse/HDDS-461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-461: --- Fix Version/s: 0.2.1
[jira] [Updated] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-491: Fix Version/s: 0.2.1 > Minor typos in README.md in smoketest > - > > Key: HDDS-491 > URL: https://issues.apache.org/jira/browse/HDDS-491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Trivial > Labels: newbie > Fix For: 0.2.1 > > Attachments: HDDS-491.001.patch > > > File:hadoop-dist/src/main/smoketest/README.md > Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. > Line 30: ozone standalon should be changed to ozone standalone
[jira] [Commented] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618500#comment-16618500 ] Bharat Viswanadham commented on HDDS-491: - Thank You [~candychencan] for the fix. I have committed this to the trunk and ozone-0.2 branch.
[jira] [Updated] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-491: Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Commented] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618499#comment-16618499 ] Bharat Viswanadham commented on HDDS-491: - +1. I will commit this shortly.
[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock
[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618474#comment-16618474 ] Xiao Chen commented on HDFS-13882: -- Test failure and checkstyle seem related; could you take a look? Also, while we're at it, {{long maxSleepTime = dfsClient.getConf().getBlockWriteLocateFollowingMaxDelayMs();}} can just use the {{conf}} object like the existing code. > Set a maximum for the delay before retrying locateFollowingBlock > > > Key: HDFS-13882 > URL: https://issues.apache.org/jira/browse/HDFS-13882 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Major > Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, > HDFS-13882.003.patch, HDFS-13882.004.patch > > > More and more we are seeing cases where customers are running into the java > io exception "Unable to close file because the last block does not have > enough number of replicas" on client file closure. The common workaround is > to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
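The capped retry delay being reviewed in this thread can be sketched as follows. This is a minimal, hypothetical stand-in: the method and parameter names are illustrative only, and the actual patch presumably wires the maximum through `DfsClientConf` alongside dfs.client.block.write.locateFollowingBlock.retries rather than passing it explicitly.

```java
public class CappedBackoff {
    // Compute the sleep before retry number `attempt` (0-based):
    // the delay doubles on each retry but is clamped to maxDelayMs,
    // so a long retry sequence no longer grows unboundedly.
    static long delayMs(long baseDelayMs, long maxDelayMs, int attempt) {
        long delay = baseDelayMs;
        for (int i = 0; i < attempt && delay < maxDelayMs; i++) {
            delay *= 2;
        }
        return Math.min(delay, maxDelayMs);
    }

    public static void main(String[] args) {
        // With a 400 ms base and an 8000 ms cap the sequence is
        // 400, 800, 1600, 3200, 6400, 8000, 8000, ...
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println(delayMs(400, 8000, attempt));
        }
    }
}
```

Without the cap, the doubling sequence alone decides the total wait; with it, raising the retry count adds bounded, predictable delay per extra retry.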
[jira] [Commented] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618472#comment-16618472 ] Hadoop QA commented on HDDS-491: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 36m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDDS-491 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940119/HDDS-491.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 95e9b05a4f01 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ee051ef | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 333 (vs. ulimit of 1) | | modules | C: hadoop-dist U: hadoop-dist | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1128/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock
[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618470#comment-16618470 ] Hadoop QA commented on HDFS-13882: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 4s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 155 unchanged - 2 fixed = 156 total (was 157) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}172m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDFS-13882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940108/HDFS-13882.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux ea050974736e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | |
[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock
[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618446#comment-16618446 ] Xiao Chen commented on HDFS-13882: -- +1 pending pre-commit. Will push at the end of Tuesday if no objections.
[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618443#comment-16618443 ] Xiao Chen commented on HDFS-13833: -- Thanks for revving Shweta, and Kitti for reviewing! We're close. Additional comments on patch 3: - The goal for javadoc is to allow people to know what the method does without looking into the code. So for the {{@return}} sentence, suggest using something like: {quote}@return true if the datanode should be excluded, otherwise false {quote} instead of {quote}@return Return true if the workload in datanode is not more than maximum, otherwise false {quote} - IMO, the intuitive way for this method is to return true on exclusion, and return false on inclusion. The expectation is that 'excludeNodeByLoad' returning true means the node should be excluded. {code:java} if (considerLoad) { if (!excludeNodeByLoad(node)) { return false; } } {code} You can choose to change the code as suggested, or maybe find a better name. I can't seem to find a better name to express the fact that: 1) it will consider load 2) it will exclude some nodes based on 1), so suggested to change the behavior. - In tests, {{assertTrue}} / {{assertFalse}} are preferred to {{assertEquals(true)}} / {{assertEquals(false)}} - Trivial, but could you format the code to follow the existing format? checkstyle complains some, and it looks like the {{if(considerLoad){}} line in BPPD should keep its original formatting. 
> Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > > > Key: HDFS-13833 > URL: https://issues.apache.org/jira/browse/HDFS-13833 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Henrique Barros >Assignee: Shweta >Priority: Critical > Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, > HDFS-13833.003.patch > > > I'm having a random problem with blocks replication with Hadoop > 2.6.0-cdh5.15.0 > With Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21 > > In my case we are getting this error very randomly (after some hours) and > with only one Datanode (for now, we are trying this cloudera cluster for a > POC) > Here is the Log. > {code:java} > Choosing random from 1 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[] > 2:38:20.527 PMDEBUG NetworkTopology > Choosing random from 0 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[192.168.220.53:50010] > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning null > 2:38:20.527 PMDEBUG BlockPlacementPolicy > [ > Node /default/192.168.220.53:50010 [ > Datanode 192.168.220.53:50010 is not chosen since the node is too busy > (load: 8 > 0.0). > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning 192.168.220.53:50010 > 2:38:20.527 PMINFOBlockPlacementPolicy > Not enough replicas was chosen. 
Reason:{NODE_TOO_BUSY=1} > 2:38:20.527 PMDEBUG StateChange > closeFile: > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9 > with 1 blocks is persisted to the file system > 2:38:20.527 PMDEBUG StateChange > *BLOCK* NameNode.addBlock: file > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660 > fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65 > 2:38:20.527 PMDEBUG BlockPlacementPolicy > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: > > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.cho
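The polarity Xiao suggests above (a method named for exclusion returning true when the node should be excluded) can be sketched with a self-contained stand-in. All names and the load threshold here are hypothetical, not the actual BlockPlacementPolicyDefault fields; the real policy compares a node's xceiver count against a multiple of the cluster-average load.

```java
public class LoadPolicySketch {
    // Hypothetical stand-ins for the placement-policy configuration.
    static final boolean CONSIDER_LOAD = true;
    static final double MAX_LOAD = 2.0;

    // Suggested polarity: returns true when the datanode should be
    // EXCLUDED because it is too busy, so the name matches behavior.
    static boolean excludeNodeByLoad(double nodeLoad) {
        return nodeLoad > MAX_LOAD;
    }

    // Caller shape implied by the review: exclusion short-circuits
    // the good-target check without a confusing double negation.
    static boolean isGoodTarget(double nodeLoad) {
        if (CONSIDER_LOAD && excludeNodeByLoad(nodeLoad)) {
            return false; // too busy, skip this node
        }
        return true;
    }

    public static void main(String[] args) {
        // Mirrors the log in this issue: load 8 on a one-node cluster
        // exceeds the threshold, so the only candidate is rejected.
        System.out.println(isGoodTarget(8.0)); // false
        System.out.println(isGoodTarget(1.0)); // true
    }
}
```

The design point is purely about readability: `excludeNodeByLoad(node)` reads as a reason to reject, whereas the patch-3 shape (`!excludeNodeByLoad(node)` meaning "keep") forces the reader to invert the name.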
[jira] [Resolved] (HDDS-472) TestDataValidate fails in trunk
[ https://issues.apache.org/jira/browse/HDDS-472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar resolved HDDS-472. -- Resolution: Duplicate > TestDataValidate fails in trunk > --- > > Key: HDDS-472 > URL: https://issues.apache.org/jira/browse/HDDS-472 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: test >Reporter: Arpit Agarwal >Assignee: Lokesh Jain >Priority: Blocker > > {code:java} > [INFO] Running org.apache.hadoop.ozone.freon.TestDataValidate > [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 17.326 s <<< FAILURE! - in org.apache.hadoop.ozone.freon.TestDataValidate > [ERROR] validateWriteTest(org.apache.hadoop.ozone.freon.TestDataValidate) > Time elapsed: 2.026 s <<< FAILURE! > java.lang.AssertionError: expected:<0> but was:<7> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.ozone.freon.TestDataValidate.validateWriteTest(TestDataValidate.java:112) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > 
at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){code}
[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads
[ https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13926: - Status: Patch Available (was: Open) > ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped > reads > --- > > Key: HDFS-13926 > URL: https://issues.apache.org/jira/browse/HDFS-13926 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13926.prelim.patch > > > During some integration testing, [~nsheth] found out that per-thread read > stats for EC are incorrect. This is because the striped reads are done > asynchronously on the worker threads.
[jira] [Commented] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads
[ https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618427#comment-16618427 ] Xiao Chen commented on HDFS-13926: -- Shooting up a preliminary patch for pre-commit
[jira] [Updated] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads
[ https://issues.apache.org/jira/browse/HDFS-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13926: - Attachment: HDFS-13926.prelim.patch
[jira] [Created] (HDFS-13926) ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads
Xiao Chen created HDFS-13926: Summary: ThreadLocal aggregations for FileSystem.Statistics are incorrect with striped reads Key: HDFS-13926 URL: https://issues.apache.org/jira/browse/HDFS-13926 Project: Hadoop HDFS Issue Type: Bug Components: erasure-coding Affects Versions: 3.0.0 Reporter: Xiao Chen Assignee: Xiao Chen During some integration testing, [~nsheth] found out that per-thread read stats for EC are incorrect. This is because the striped reads are done asynchronously on the worker threads.
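The bug reported in HDFS-13926 and one possible fix shape can be sketched with a simplified stand-in for FileSystem.Statistics. This is not the actual patch: all names here are hypothetical, and the sketch only shows why per-thread counters miss bytes read on worker threads and how a caller-side merge would fix the accounting.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.LongAdder;

public class StripedReadStats {
    // Simplified stand-in for per-thread statistics: each thread has
    // its own byte counter. With striped (EC) reads, the workers bump
    // THEIR thread-local counters, so the caller's counter stays 0.
    static final ThreadLocal<long[]> BYTES_READ =
        ThreadLocal.withInitial(() -> new long[1]);

    // Fix shape sketched here: workers accumulate into a shared
    // LongAdder, and the caller merges the delta into its own
    // thread-local counter once all striped reads complete.
    static long stripedRead(ExecutorService pool, int stripes,
                            int bytesPerStripe) throws InterruptedException {
        LongAdder delta = new LongAdder();
        CountDownLatch done = new CountDownLatch(stripes);
        for (int i = 0; i < stripes; i++) {
            pool.execute(() -> {
                // Instead of BYTES_READ.get()[0] += bytesPerStripe
                // (which would be lost with the worker thread):
                delta.add(bytesPerStripe);
                done.countDown();
            });
        }
        done.await();
        BYTES_READ.get()[0] += delta.sum(); // credit the calling thread
        return BYTES_READ.get()[0];
    }

    // Convenience wrapper: 6 stripes of 1024 bytes on 3 worker threads.
    static long demo() {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            return stripedRead(pool, 6, 1024);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 6144
    }
}
```

Whether the real fix merges via a shared accumulator or copies the caller's Statistics object into the workers is an implementation choice the preliminary patch would settle; the invariant either way is that bytes read by striped workers must land in the calling thread's aggregation.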
[jira] [Commented] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618424#comment-16618424 ] chencan commented on HDDS-491: -- Hi [~bharatviswa], I have uploaded the patch to fix the typos in README.md in smoketest. Thanks! > Minor typos in README.md in smoketest > - > > Key: HDDS-491 > URL: https://issues.apache.org/jira/browse/HDDS-491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Trivial > Labels: newbie > Attachments: HDDS-491.001.patch > > > File:hadoop-dist/src/main/smoketest/README.md > Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. > Line 30: ozone standalon should be changed to ozone standalone
[jira] [Updated] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDDS-491: - Status: Patch Available (was: Open) > Minor typos in README.md in smoketest > - > > Key: HDDS-491 > URL: https://issues.apache.org/jira/browse/HDDS-491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Trivial > Labels: newbie > Attachments: HDDS-491.001.patch > > > File:hadoop-dist/src/main/smoketest/README.md > Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. > Line 30: ozone standalon should be changed to ozone standalone
[jira] [Updated] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDDS-491: - Attachment: HDDS-491.001.patch > Minor typos in README.md in smoketest > - > > Key: HDDS-491 > URL: https://issues.apache.org/jira/browse/HDDS-491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Trivial > Labels: newbie > Attachments: HDDS-491.001.patch > > > File:hadoop-dist/src/main/smoketest/README.md > Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. > Line 30: ozone standalon should be changed to ozone standalone
[jira] [Assigned] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan reassigned HDDS-491: Assignee: chencan > Minor typos in README.md in smoketest > - > > Key: HDDS-491 > URL: https://issues.apache.org/jira/browse/HDDS-491 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: chencan >Priority: Trivial > Labels: newbie > > File:hadoop-dist/src/main/smoketest/README.md > Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. > Line 30: ozone standalon should be changed to ozone standalone
[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618417#comment-16618417 ] Hadoop QA commented on HDFS-13833: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 73 unchanged - 0 fixed = 77 total (was 73) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 28s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestBlocksScheduledCounter | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDFS-13833 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940102/HDFS-13833.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 07078723218c 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0a26c52 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/25086/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HD
[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
[ https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618403#comment-16618403 ] Hadoop QA commented on HDFS-6092: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 31s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 38s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 38s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 55s{color} | {color:orange} root: The patch generated 8 new + 200 unchanged - 0 fixed = 208 total (was 200) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 2m 7s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s{color} | {color:red} hadoop-hdfs-client in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 10s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 31s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}211m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestDFSClientFailover | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.TestDistributedFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDFS-6092 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12911
[jira] [Commented] (HDFS-13908) TestDataNodeMultipleRegistrations is flaky
[ https://issues.apache.org/jira/browse/HDFS-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618404#comment-16618404 ] Ayush Saxena commented on HDFS-13908: - Thanks [~elgoiri] for the comment. Have uploaded patch v4. > TestDataNodeMultipleRegistrations is flaky > -- > > Key: HDFS-13908 > URL: https://issues.apache.org/jira/browse/HDFS-13908 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Ayush Saxena >Priority: Major > Attachments: Above Timeout.rar, HDFS-13908-01.patch, > HDFS-13908-02.patch, HDFS-13908-03.patch, HDFS-13908-04.patch, Within > TImeout.rar > > > We have seen this issue in multiple runs: > https://builds.apache.org/job/PreCommit-HADOOP-Build/15146/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMultipleRegistrations/testClusterIdMismatchAtStartupWithHA/ > https://builds.apache.org/job/PreCommit-HADOOP-Build/15116/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMultipleRegistrations/testDNWithInvalidStorageWithHA/
[jira] [Updated] (HDFS-13908) TestDataNodeMultipleRegistrations is flaky
[ https://issues.apache.org/jira/browse/HDFS-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-13908: Attachment: HDFS-13908-04.patch > TestDataNodeMultipleRegistrations is flaky > -- > > Key: HDFS-13908 > URL: https://issues.apache.org/jira/browse/HDFS-13908 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Ayush Saxena >Priority: Major > Attachments: Above Timeout.rar, HDFS-13908-01.patch, > HDFS-13908-02.patch, HDFS-13908-03.patch, HDFS-13908-04.patch, Within > TImeout.rar > > > We have seen this issue in multiple runs: > https://builds.apache.org/job/PreCommit-HADOOP-Build/15146/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMultipleRegistrations/testClusterIdMismatchAtStartupWithHA/ > https://builds.apache.org/job/PreCommit-HADOOP-Build/15116/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeMultipleRegistrations/testDNWithInvalidStorageWithHA/
[jira] [Updated] (HDFS-13778) TestStateAlignmentContextWithHA should use real ObserverReadProxyProvider instead of AlignmentContextProxyProvider.
[ https://issues.apache.org/jira/browse/HDFS-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13778: --- Resolution: Fixed Assignee: Plamen Jeliazkov (was: Sherwood Zheng) Hadoop Flags: Reviewed Fix Version/s: HDFS-12943 Status: Resolved (was: Patch Available) I just committed this to branch HDFS-12943. Thank you [~zero45]. > TestStateAlignmentContextWithHA should use real ObserverReadProxyProvider > instead of AlignmentContextProxyProvider. > --- > > Key: HDFS-13778 > URL: https://issues.apache.org/jira/browse/HDFS-13778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13778-HDFS-12943.001.patch, > HDFS-13778-HDFS-12943.002.patch, HDFS-13778-HDFS-12943.003.patch, > HDFS-13778-HDFS-12943.004.patch > > > TestStateAlignmentContextWithHA uses an artificial > AlignmentContextProxyProvider, which was temporarily needed for testing. Now > that we have the real ObserverReadProxyProvider it can take over ACPP. This is > also useful for testing the ORPP.
[jira] [Updated] (HDFS-13778) TestStateAlignmentContextWithHA should use real ObserverReadProxyProvider instead of AlignmentContextProxyProvider.
[ https://issues.apache.org/jira/browse/HDFS-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13778: --- Summary: TestStateAlignmentContextWithHA should use real ObserverReadProxyProvider instead of AlignmentContextProxyProvider. (was: In TestStateAlignmentContextWithHA replace artificial AlignmentContextProxyProvider with real ObserverReadProxyProvider.) > TestStateAlignmentContextWithHA should use real ObserverReadProxyProvider > instead of AlignmentContextProxyProvider. > --- > > Key: HDFS-13778 > URL: https://issues.apache.org/jira/browse/HDFS-13778 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Assignee: Sherwood Zheng >Priority: Major > Attachments: HDFS-13778-HDFS-12943.001.patch, > HDFS-13778-HDFS-12943.002.patch, HDFS-13778-HDFS-12943.003.patch, > HDFS-13778-HDFS-12943.004.patch > > > TestStateAlignmentContextWithHA uses an artificial > AlignmentContextProxyProvider, which was temporarily needed for testing. Now > that we have the real ObserverReadProxyProvider it can take over ACPP. This is > also useful for testing the ORPP.
[jira] [Commented] (HDDS-366) Update functions impacted by SCM chill mode in StorageContainerLocationProtocol
[ https://issues.apache.org/jira/browse/HDDS-366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618358#comment-16618358 ] Hadoop QA commented on HDDS-366: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 34s{color} | {color:red} common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 29s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 35s{color} | {color:red} integration-test in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} server-scm in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 35s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} |
[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock
[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kitti Nanasi updated HDFS-13882: Attachment: HDFS-13882.004.patch > Set a maximum for the delay before retrying locateFollowingBlock > > > Key: HDFS-13882 > URL: https://issues.apache.org/jira/browse/HDFS-13882 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Major > Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, > HDFS-13882.003.patch, HDFS-13882.004.patch > > > More and more we are seeing cases where customers are running into the java > io exception "Unable to close file because the last block does not have > enough number of replicas" on client file closure. The common workaround is > to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock
[ https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618356#comment-16618356 ] Kitti Nanasi commented on HDFS-13882: - Thanks for the comments, [~xiaochen]! I fixed them in patch v004. > Set a maximum for the delay before retrying locateFollowingBlock > > > Key: HDFS-13882 > URL: https://issues.apache.org/jira/browse/HDFS-13882 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Kitti Nanasi >Assignee: Kitti Nanasi >Priority: Major > Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch, > HDFS-13882.003.patch, HDFS-13882.004.patch > > > More and more we are seeing cases where customers are running into the java > io exception "Unable to close file because the last block does not have > enough number of replicas" on client file closure. The common workaround is > to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10.
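The improvement tracked in HDFS-13882, bounding the delay between locateFollowingBlock retries instead of letting it keep growing, amounts to a capped exponential backoff. A minimal sketch of that idea follows; the constants and method name are illustrative assumptions, not the actual DFSClient code.

```java
public class CappedBackoff {
    // Illustrative values, not the actual HDFS client defaults.
    static final long BASE_DELAY_MS = 400;
    static final long MAX_DELAY_MS = 8000;

    // Delay doubles with each retry but is bounded by MAX_DELAY_MS.
    static long delayForRetry(int retry) {
        long uncapped = BASE_DELAY_MS * (1L << retry);  // 400, 800, 1600, ...
        return Math.min(uncapped, MAX_DELAY_MS);
    }

    public static void main(String[] args) {
        for (int retry = 0; retry < 7; retry++) {
            System.out.println("retry " + retry + ": "
                    + delayForRetry(retry) + " ms");
        }
    }
}
```

With a cap in place, raising the retry count (the common workaround of going from 5 to 10 retries) extends the total wait without the later sleeps ballooning to minutes each.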
[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618353#comment-16618353 ] Hadoop QA commented on HDFS-13833: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 73 unchanged - 0 fixed = 76 total (was 73) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 32s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}168m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDFS-13833 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940088/HDFS-13833.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b5f64ec378e0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d154193 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/25084/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/25084/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25084/testReport/ | | Max. process+thread count | 3493 (vs.
[jira] [Created] (HDDS-492) Add more unit tests to ozonefs robot framework
Namit Maheshwari created HDDS-492: - Summary: Add more unit tests to ozonefs robot framework Key: HDDS-492 URL: https://issues.apache.org/jira/browse/HDDS-492 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Namit Maheshwari Currently there are only a couple of tests inside ozonefs.robot. We should add more unit tests for it.
[jira] [Updated] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-491: Description: File: hadoop-dist/src/main/smoketest/README.md Line 23: robot smoketest/bascic should be changed to robot smoketest/basic. Line 30: ozone standalon should be changed to ozone standalone was: Line 23: robot smoketest/bascic should be change to robot smoketest/basic. Line 30: ozone standalon should be changed to ozone standalone
[jira] [Assigned] (HDDS-492) Add more unit tests to ozonefs robot framework
[ https://issues.apache.org/jira/browse/HDDS-492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Maheshwari reassigned HDDS-492: - Assignee: Namit Maheshwari
[jira] [Updated] (HDDS-491) Minor typos in README.md in smoketest
[ https://issues.apache.org/jira/browse/HDDS-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-491: Labels: newbie (was: )
[jira] [Created] (HDDS-491) Minor typos in README.md in smoketest
Bharat Viswanadham created HDDS-491: --- Summary: Minor typos in README.md in smoketest Key: HDDS-491 URL: https://issues.apache.org/jira/browse/HDDS-491 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Bharat Viswanadham
[jira] [Commented] (HDDS-468) Add version number to datanode plugin and ozone file system jar
[ https://issues.apache.org/jira/browse/HDDS-468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618340#comment-16618340 ] Bharat Viswanadham commented on HDDS-468: - Rebased after HDDS-352 went in. > Add version number to datanode plugin and ozone file system jar > --- > > Key: HDDS-468 > URL: https://issues.apache.org/jira/browse/HDDS-468 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDDS-468.00.patch, HDDS-468.01.patch, HDDS-468.02.patch > > > The following two jars are copied to the distribution without any Ozone version: > hadoop-ozone-datanode-plugin.jar > hadoop-ozone-filesystem.jar > > The Ozone version number should be appended to the file name, as the other Ozone jars have.
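The requested change is essentially a rename during dist layout stitching. A minimal sketch of the naming rule (the helper name is hypothetical; the real change would belong in dev-support/bin/ozone-dist-layout-stitching, which the HDDS-352 commit also edits):

```shell
# versioned_jar_name: append the Ozone version before the .jar suffix,
# matching how the other Ozone jars are named. Hypothetical helper, e.g.
#   hadoop-ozone-filesystem.jar -> hadoop-ozone-filesystem-0.2.1.jar
versioned_jar_name() {
  jar="$1"; version="$2"
  # ${jar%.jar} strips the trailing ".jar" so the version lands before it
  printf '%s-%s.jar\n' "${jar%.jar}" "$version"
}
```

The stitching script could then copy each plugin jar to `$(versioned_jar_name "$jar" "$OZONE_VERSION")` instead of its plain name; 0.2.1 is used above only because it is the Fix Version in this thread.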
[jira] [Updated] (HDDS-468) Add version number to datanode plugin and ozone file system jar
[ https://issues.apache.org/jira/browse/HDDS-468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-468: Attachment: HDDS-468.02.patch
[jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618334#comment-16618334 ] Hudson commented on HDDS-352: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14984 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14984/]) HDDS-352. Separate install and testing phases in acceptance tests. (bharat: rev 8b2f5e60fa4647cd11f51bc5e8b86b84b41db5f7) * (add) hadoop-dist/src/main/smoketest/basic/basic.robot * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/basic/.env * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/basic/docker-compose.yaml * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/basic/docker-config * (delete) hadoop-ozone/acceptance-test/README.md * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot * (delete) hadoop-ozone/acceptance-test/dev-support/docker/Dockerfile * (add) hadoop-dist/src/main/smoketest/README.md * (delete) hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/basic/basic.robot * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-config * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/.env * (edit) hadoop-ozone/pom.xml * (add) hadoop-dist/src/main/smoketest/basic/ozone-shell.robot * (edit) dev-support/bin/ozone-dist-layout-stitching * (add) hadoop-dist/src/main/compose/ozonefs/docker-compose.yaml * (edit) hadoop-dist/src/main/compose/ozonescripts/docker-config * (edit) hadoop-dist/src/main/compose/ozoneperf/docker-config * (add) hadoop-dist/src/main/compose/ozonefs/docker-config * (delete) hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot * (add) hadoop-dist/src/main/smoketest/ozonefs/ozonefs.robot * (add) hadoop-dist/src/main/smoketest/test.sh * (delete) 
hadoop-ozone/acceptance-test/src/test/acceptance/commonlib.robot * (edit) hadoop-dist/src/main/compose/ozone-hdfs/docker-config * (delete) hadoop-ozone/acceptance-test/dev-support/docker/docker-compose.yaml * (edit) pom.xml * (delete) hadoop-ozone/acceptance-test/dev-support/bin/robot.sh * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonefs.robot * (delete) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/docker-compose.yaml * (edit) hadoop-dist/src/main/compose/ozone/docker-config * (add) hadoop-dist/src/main/smoketest/commonlib.robot * (delete) hadoop-ozone/acceptance-test/pom.xml > Separate install and testing phases in acceptance tests. > > > Key: HDDS-352 > URL: https://issues.apache.org/jira/browse/HDDS-352 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: test > Fix For: 0.2.1 > > Attachments: HDDS-352-ozone-0.2.001.patch, > HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch, > HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch, > HDDS-352-ozone-0.2.006.patch, HDDS-352.00.patch, TestRun.rtf > > > In the current acceptance tests (hadoop-ozone/acceptance-test) the robot > files contain two kinds of commands: > 1) starting and stopping clusters > 2) testing the basic behaviour with client calls > It would be great to separate these two functions and include only the > testing part in the robot files. > 1. Ideally the tests could be executed in any environment. After a kubernetes > install I would like to do a smoke test. It could be a different environment, > but I would like to execute most of the tests (check ozone cli, rest api, > etc.) > 2. There could be multiple Ozone environments (standalone Ozone cluster, HDFS > + Ozone cluster, etc.). We need to test all of them with all the tests. > 3. With this approach we can collect the docker-compose files in just one > place (the hadoop-dist project). After a docker-compose up there should be a way > to execute the tests against an existing cluster. Something like this: > {code} > docker run -it apache/hadoop-runner -v ./acceptance-test:/opt/acceptance-test > -e SCM_URL=http://scm:9876 --network=composenetwork start-all-tests.sh > {code} > 4. It also means that we need to execute the tests from a separate container > instance. We need a configuration parameter to define the cluster topology. > Ideally it could be just one environment variable with the URL of the SCM, > and the SCM could be used to discover all of the required components and > download the configuration files from there. > 5. Until now we used the log output of the docker-compose files for > readiness probes. They should be converted to poll the JMX endpoints and > check if the cluster is up and running. If we need the log files for > additional testing we can create multiple implementations for different types > of environments (docker-compose/kubernetes) and include the right set of > functions based on external parameters. > 6. We still need a generic script under the ozone-acceptance-test project to > run all the tests (start the docker-compose clusters, execute the tests in a > different container, stop the cluster)
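The readiness-probe change described in point 5 of the HDDS-352 description could be sketched as a small shell helper that polls the Hadoop-style /jmx servlet instead of scraping docker-compose logs. This is a sketch only: the `/jmx` path, the helper name, and the retry defaults are assumptions, not something decided in this thread.

```shell
# wait_for_jmx BASE_URL [RETRIES] [DELAY_SECONDS]
# Poll BASE_URL/jmx until the daemon answers, or give up after RETRIES tries.
# Returns 0 once the endpoint responds, 1 if it never comes up.
wait_for_jmx() {
  url="$1"; retries="${2:-30}"; delay="${3:-2}"; i=0
  while [ "$i" -lt "$retries" ]; do
    # curl -sf exits non-zero on connection errors and on HTTP >= 400
    if curl -sf "${url}/jmx" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```

A test harness could then call, e.g., `wait_for_jmx http://scm:9876 60 2` before launching the robot tests, with the host and port taken from the SCM_URL shown in the description.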
[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-352: Resolution: Fixed Status: Resolved (was: Patch Available) I have committed this to trunk and the ozone-0.2 branch. Thank you [~jnp] for the review and [~elek] for the fix.
[jira] [Updated] (HDDS-463) Fix the release packaging of the ozone distribution
[ https://issues.apache.org/jira/browse/HDDS-463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-463: Fix Version/s: 0.2.1 > Fix the release packaging of the ozone distribution > --- > > Key: HDDS-463 > URL: https://issues.apache.org/jira/browse/HDDS-463 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Blocker > Fix For: 0.2.1 > > Attachments: HDDS-463-ozone-0.2.001.patch, > HDDS-463-ozone-0.2.002.patch > > > I found a few small problems during my tests of releasing Ozone: > 1. The source assembly file still contains the ancient hdsl string in the name > 2. The README of the binary distribution is confusing (it is Hadoop's README) > 3. The binary distribution contains unnecessary test and source jar files > 4. (Thanks to [~bharatviswa]): the log message after the dist creation is wrong > (it doesn't contain the restored version tag in the name) > I combined these problems because all of them can be solved with very > small modifications...
[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-352: Fix Version/s: 0.2.1
[jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618331#comment-16618331 ] Hadoop QA commented on HDDS-352: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDDS-352 does not apply to ozone-0.2. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDDS-352 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940105/HDDS-352-ozone-0.2.006.patch | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1126/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-352: Attachment: HDDS-352-ozone-0.2.006.patch
[jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618315#comment-16618315 ] Bharat Viswanadham commented on HDDS-352: - Thank you [~jnp] for the review and [~elek] for the patch. I have committed this to trunk and will push it to the ozone-0.2 branch shortly. [~elek] For any additional changes, as discussed, we can file new Jiras.
[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes
[ https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618310#comment-16618310 ] Hadoop QA commented on HDFS-13749: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 6s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 31s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 4s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 57s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 22 unchanged - 13 fixed = 25 total (was 35) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 | | JIRA Issue | HDFS-13749 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940074/HDFS-13749-HDFS-12943.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a1c402b2d7
[jira] [Updated] (HDDS-488) Handle chill mode exception from SCM in OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-488: Attachment: HDDS-488.00.patch > Handle chill mode exception from SCM in OzoneManager > > > Key: HDDS-488 > URL: https://issues.apache.org/jira/browse/HDDS-488 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-488.00.patch > > > Following functions should propagate SCM chill mode exception back to the > clients: > allocateBlock > openKey -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
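The requested behavior — allocateBlock and openKey letting the SCM chill-mode exception reach the client instead of swallowing it — could look roughly like the sketch below. All names here (ChillModeException, ScmBlockClient, OmSketch) are illustrative stand-ins, not the actual Ozone classes:

```java
// Hypothetical sketch; ChillModeException and ScmBlockClient are stand-ins,
// not the real Ozone classes from the HDDS-488 patch.
class ChillModeException extends RuntimeException {
    ChillModeException(String message) { super(message); }
}

interface ScmBlockClient {
    boolean isInChillMode();
}

class OmSketch {
    private final ScmBlockClient scm;

    OmSketch(ScmBlockClient scm) { this.scm = scm; }

    // Propagate the chill-mode state back to the caller rather than hiding it.
    String allocateBlock(String key) {
        if (scm.isInChillMode()) {
            throw new ChillModeException(
                "SCM in chill mode; cannot allocate block for key " + key);
        }
        return "blockId-for-" + key;
    }
}
```

The same guard would apply at the top of openKey, so every entry point that needs SCM allocations surfaces the chill-mode state consistently.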
[jira] [Created] (HDDS-490) Improve om and scm start up options
Namit Maheshwari created HDDS-490: - Summary: Improve om and scm start up options Key: HDDS-490 URL: https://issues.apache.org/jira/browse/HDDS-490 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Namit Maheshwari I propose the following changes: # Rename createObjectStore to format # Change the flag to use --createObjectStore instead of using -createObjectStore. It is also applicable to other scm and om startup options. # Fail to format existing object store. If a user runs: {code:java} ozone om -createObjectStore{code} And there is already an object store, it should give a warning message and exit the process.
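Proposal #3 above (refuse to format an existing object store) could be as small as a pre-flight check before formatting. FormatGuard and the directory-based detection below are illustrative only, not the actual om startup code:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative sketch of proposal #3: warn and refuse to re-format
// an object store that already exists.
class FormatGuard {
    // Returns true when formatting may proceed (no store present yet).
    static boolean canFormat(Path omMetadataDir) {
        if (Files.exists(omMetadataDir)) {
            System.err.println("Warning: object store already exists at "
                + omMetadataDir + "; refusing to format. Exiting.");
            return false;
        }
        return true;
    }
}
```

The startup option handler would call this and exit the process with a nonzero status when it returns false, matching the warning-and-exit behavior the proposal describes.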
[jira] [Assigned] (HDDS-490) Improve om and scm start up options
[ https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Maheshwari reassigned HDDS-490: - Assignee: Namit Maheshwari
[jira] [Updated] (HDDS-490) Improve om and scm start up options
[ https://issues.apache.org/jira/browse/HDDS-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namit Maheshwari updated HDDS-490: -- Labels: incompatible (was: )
[jira] [Updated] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-13833: -- Attachment: HDFS-13833.003.patch > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > > > Key: HDFS-13833 > URL: https://issues.apache.org/jira/browse/HDFS-13833 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Henrique Barros >Assignee: Shweta >Priority: Critical > Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, > HDFS-13833.003.patch > > > I'm having a random problem with blocks replication with Hadoop > 2.6.0-cdh5.15.0 > With Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21 > > In my case we are getting this error very randomly (after some hours) and > with only one Datanode (for now, we are trying this cloudera cluster for a > POC) > Here is the Log. > {code:java} > Choosing random from 1 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[] > 2:38:20.527 PMDEBUG NetworkTopology > Choosing random from 0 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[192.168.220.53:50010] > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning null > 2:38:20.527 PMDEBUG BlockPlacementPolicy > [ > Node /default/192.168.220.53:50010 [ > Datanode 192.168.220.53:50010 is not chosen since the node is too busy > (load: 8 > 0.0). > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning 192.168.220.53:50010 > 2:38:20.527 PMINFOBlockPlacementPolicy > Not enough replicas was chosen. 
Reason:{NODE_TOO_BUSY=1} > 2:38:20.527 PMDEBUG StateChange > closeFile: > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9 > with 1 blocks is persisted to the file system > 2:38:20.527 PMDEBUG StateChange > *BLOCK* NameNode.addBlock: file > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660 > fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65 > 2:38:20.527 PMDEBUG BlockPlacementPolicy > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: > > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158) > at > 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) > at org.apache.hadoop.ipc.Se
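For context on the "is not chosen since the node is too busy (load: 8 > 0.0)" line in the log above: the default placement policy skips a datanode whose active transfer (xceiver) count exceeds a threshold derived from the cluster average load. A simplified, illustrative mirror of that comparison (not the actual BlockPlacementPolicyDefault code) shows why the logged threshold of 0.0 rejects the lone datanode every time:

```java
// Simplified mirror of the considerLoad busy-node check; illustrative only.
class LoadCheckSketch {
    static final double LOAD_FACTOR = 2.0; // default multiplier on the average

    // True when the node should be skipped as "too busy".
    static boolean isTooBusy(int nodeXceiverCount, double clusterAvgLoad) {
        double maxLoad = LOAD_FACTOR * clusterAvgLoad;
        // Per the log above, the computed threshold was 0.0, so the
        // comparison "8 > 0.0" rejects the only datanode available.
        return nodeXceiverCount > maxLoad;
    }
}
```

With a single-datanode POC cluster, any nonzero load against a zero threshold excludes the node, which then cascades into the NotEnoughReplicasException seen in the stack trace.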
[jira] [Comment Edited] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618292#comment-16618292 ] Shweta edited comment on HDFS-13833 at 9/18/18 12:08 AM: - Thanks [~knanasi] for reviewing the patch. That's silly of me not to have checked for the package-private before submitting the patch; I have uploaded a patch with this change. Also, the checkstyle warnings were related to the hidden field, i.e. the stats object, which has been resolved in this patch as I am not passing it as a parameter. was (Author: shwetayakkali): Thanks [~knanasi] for reviewing the patch. That's silly of me not to have checked for the package-private before submitting the patch; will update the patch. Also, the checkstyle warnings were related to the hidden field, i.e. the stats object, which has been resolved in this patch as I am not passing it as a parameter.
[jira] [Assigned] (HDDS-489) Update ozone Documentation to add noz option for ozone shell
[ https://issues.apache.org/jira/browse/HDDS-489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HDDS-489: --- Assignee: (was: Namit Maheshwari) > Update ozone Documentation to add noz option for ozone shell > > > Key: HDDS-489 > URL: https://issues.apache.org/jira/browse/HDDS-489 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ajay Kumar >Priority: Major > > Update ozone Documentation to add noz option for ozone shell > {code} > getozoneconf  get ozone config values from configuration > noz           ozone debug tool, convert ozone metadata into relational data > scmcli        run the CLI of the Storage Container Manager > sh            command line interface for object store operations > {code}
[jira] [Updated] (HDDS-489) Update ozone Documentation to add noz option for ozone shell
[ https://issues.apache.org/jira/browse/HDDS-489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-489: Description: Update ozone Documentation to add noz option for ozone shell {code} getozoneconf  get ozone config values from configuration noz           ozone debug tool, convert ozone metadata into relational data scmcli        run the CLI of the Storage Container Manager sh            command line interface for object store operations {code} was: Update ozone Documentation to fix below issues: * Update ozone File system documentation to use 'sh' instead of 'oz' * Update ozone File system documentation to not overwrite HADOOP_CLASSPATH * Java API documentation, the complete example misses a line to get the ObjectStore from the client: {code:java} ObjectStore objectStore = ozClient.getObjectStore();{code}
[jira] [Created] (HDDS-489) Update ozone Documentation to add noz option for ozone shell
Ajay Kumar created HDDS-489: --- Summary: Update ozone Documentation to add noz option for ozone shell Key: HDDS-489 URL: https://issues.apache.org/jira/browse/HDDS-489 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Ajay Kumar Assignee: Namit Maheshwari Update ozone Documentation to fix below issues: * Update ozone File system documentation to use 'sh' instead of 'oz' * Update ozone File system documentation to not overwrite HADOOP_CLASSPATH * Java API documentation, the complete example misses a line to get the ObjectStore from the client: {code:java} ObjectStore objectStore = ozClient.getObjectStore();{code}
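The missing documentation line called out above sits between obtaining the client and any volume or bucket work. A stand-in sketch of the corrected flow (OzoneClientSketch and ObjectStoreSketch are illustrative stand-ins for the real OzoneClient and ObjectStore, which need a running cluster):

```java
// Stand-ins for the real OzoneClient/ObjectStore classes; illustrative only.
class ObjectStoreSketch {
    void createVolume(String name) { /* would call the real volume API */ }
}

class OzoneClientSketch {
    private final ObjectStoreSketch store = new ObjectStoreSketch();
    // The accessor the documented example omitted:
    ObjectStoreSketch getObjectStore() { return store; }
}

class JavaApiExampleSketch {
    static ObjectStoreSketch correctedFlow() {
        OzoneClientSketch ozClient = new OzoneClientSketch();
        // The line the docs were missing: fetch the ObjectStore first.
        ObjectStoreSketch objectStore = ozClient.getObjectStore();
        objectStore.createVolume("volume-one");
        return objectStore;
    }
}
```

Without that line the documented example has no ObjectStore reference to call volume operations on, which is the gap the issue describes.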
[jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618297#comment-16618297 ] Jitendra Nath Pandey commented on HDDS-352: --- +1 for the latest patch. > Separate install and testing phases in acceptance tests. > > > Key: HDDS-352 > URL: https://issues.apache.org/jira/browse/HDDS-352 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: test > Attachments: HDDS-352-ozone-0.2.001.patch, > HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch, > HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch, > HDDS-352.00.patch, TestRun.rtf > > > In the current acceptance tests (hadoop-ozone/acceptance-test) the robot > files contain two kinds of commands: > 1) starting and stopping clusters > 2) testing the basic behaviour with client calls > It would be great to separate the two functionalities and include only the > testing part in the robot files. > 1. Ideally the tests could be executed in any environment. After a kubernetes > install I would like to do a smoke test. It could be a different environment > but I would like to execute most of the tests (check ozone cli, rest api, > etc.) > 2. There could be multiple ozone environments (standalone ozone cluster, hdfs > + ozone cluster, etc.). We need to test all of them with all the tests. > 3. With this approach we can collect the docker-compose files just in one > place (hadoop-dist project). After a docker-compose up there should be a way > to execute the tests with an existing cluster. Something like this: > {code} > docker run -it apache/hadoop-runner -v ./acceptance-test:/opt/acceptance-test > -e SCM_URL=http://scm:9876 --network=composenetwork start-all-tests.sh > {code} > 4. It also means that we need to execute the tests from a separate container > instance. We need a configuration parameter to define the cluster topology.
> Ideally it could be just one environment variable with the url of the scm > and the scm could be used to discover all of the required components + > download the configuration files from there. > 5. Until now we used the log output of the docker-compose files to do some > readiness probes. They should be converted to poll the jmx endpoints and > check if the cluster is up and running. If we need the log files for > additional testing we can create multiple implementations for different types > of environments (docker-compose/kubernetes) and include the right set of > functions based on an external parameter. > 6. Still we need a generic script under the ozone-acceptance test project to > run all the tests (starting the docker-compose clusters, execute tests in a > different container, stop the cluster)
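The JMX-polling readiness probe suggested in point 5 above could be as small as a retry loop around a boolean check. This is a sketch only: in the real setup the supplied check would issue an HTTP GET against the SCM/OM JMX endpoint, and the names here are illustrative:

```java
import java.util.function.BooleanSupplier;

// Illustrative readiness loop; in practice `check` would poll a JMX/HTTP
// endpoint of SCM or OM instead of scraping docker-compose log output.
class ReadinessProbeSketch {
    static boolean waitUntilReady(BooleanSupplier check, int maxAttempts,
                                  long sleepMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (check.getAsBoolean()) {
                return true; // cluster reports up
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false; // treat interruption as not-ready
            }
        }
        return false; // gave up after maxAttempts
    }
}
```

Because the check is injected, the same loop works unchanged across docker-compose and kubernetes environments; only the supplier differs, which is exactly the separation point 5 argues for.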
[jira] [Comment Edited] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618293#comment-16618293 ] Bharat Viswanadham edited comment on HDDS-352 at 9/18/18 12:04 AM: --- Hi [~elek] I have attached a patch for trunk. Now all the acceptance-tests are passing. Attached the output run (TestRun). And also deleted other files in the hadoop-ozone/acceptance-test folder, and corrected a few things in test.sh. The additional change to make the tests work is the added missing property in docker-config: OZONE-SITE.XML_ozone.om.http-address=ozoneManager:9874 was (Author: bharatviswa): Hi [~elek] I have attached a patch for trunk. Now all the acceptance-tests are passing. Attached the output run. And also deleted other files in the hadoop-ozone/acceptance-test folder, and corrected a few things in test.sh. The additional change to make the tests work is the added missing property in docker-config: OZONE-SITE.XML_ozone.om.http-address=ozoneManager:9874
[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-352: Attachment: TestRun.rtf
[jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618293#comment-16618293 ] Bharat Viswanadham commented on HDDS-352: - Hi [~elek] I have attached a patch for trunk. Now all the acceptance-tests are passing. Attached the output run. And also deleted other files in the hadoop-ozone/acceptance-test folder, and corrected a few things in test.sh. The additional change to make the tests work is the added missing property in docker-config: OZONE-SITE.XML_ozone.om.http-address=ozoneManager:9874
[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618292#comment-16618292 ] Shweta commented on HDFS-13833: --- Thanks [~knanasi] for reviewing the patch. That's silly of me not to have checked for the package-private before submitting the patch; will update the patch. Also, the checkstyle warnings were related to the hidden field, i.e. the stats object, which has been resolved in this patch as I am not passing it as a parameter.
[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode
[ https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618291#comment-16618291 ] Chen Liang commented on HDFS-13566: --- {{TestLeaseRecovery2}} failed regardless of whether the patch was applied or not. The other failed tests succeeded locally. The checkstyle issues were not introduced in this patch. > Add configurable additional RPC listener to NameNode > > > Key: HDFS-13566 > URL: https://issues.apache.org/jira/browse/HDFS-13566 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13566.001.patch, HDFS-13566.002.patch, > HDFS-13566.003.patch, HDFS-13566.004.patch, HDFS-13566.005.patch, > HDFS-13566.006.patch > > > This Jira aims to add the capability for the NameNode to run additional > listener(s), so that the NameNode can be accessed from multiple ports. > Fundamentally, this Jira tries to extend ipc.Server to allow it to be configured with > more listeners, binding to different ports but sharing the same call queue > and handlers. This is useful when different clients are only allowed to access > certain ports. Combined with HDFS-13547, this also allows different > ports to have different SASL security levels. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-352: Attachment: HDDS-352.00.patch > Separate install and testing phases in acceptance tests. > > > Key: HDDS-352 > URL: https://issues.apache.org/jira/browse/HDDS-352 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: test > Attachments: HDDS-352-ozone-0.2.001.patch, > HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch, > HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch, HDDS-352.00.patch > > > In the current acceptance tests (hadoop-ozone/acceptance-test) the robot > files contain two kinds of commands: > 1) starting and stopping clusters > 2) testing the basic behaviour with client calls > It would be great to separate these two functions and include only the > testing part in the robot files. > 1. Ideally the tests could be executed in any environment. After a kubernetes > install I would like to do a smoke test. It could be a different environment > but I would like to execute most of the tests (check ozone cli, rest api, > etc.) > 2. There could be multiple ozone environments (standalone ozone cluster, hdfs > + ozone cluster, etc.). We need to test all of them with all the tests. > 3. With this approach we can collect the docker-compose files in just one > place (hadoop-dist project). After a docker-compose up there should be a way > to execute the tests against an existing cluster. Something like this: > {code} > docker run -it apache/hadoop-runner -v ./acceptance-test:/opt/acceptance-test > -e SCM_URL=http://scm:9876 --network=composenetwork start-all-tests.sh > {code} > 4. It also means that we need to execute the tests from a separate container > instance. We need a configuration parameter to define the cluster topology. 
> Ideally it could be just one environment variable with the URL of the scm, > and the scm could be used to discover all of the required components + > download the configuration files from there. > 5. Until now we used the log output of the docker-compose files to do some > readiness probes. They should be converted to poll the jmx endpoints and > check if the cluster is up and running. If we need the log files for > additional testing we can create multiple implementations for different types > of environments (docker-compose/kubernetes) and include the right set of > functions based on an external parameter. > 6. We still need a generic script under the ozone-acceptance test project to > run all the tests (starting the docker-compose clusters, executing the tests in a > different container, stopping the cluster) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
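Point 5 above proposes replacing log-scraping readiness checks with polling of the JMX endpoints. The sketch below shows what such a probe could look like; the endpoint URL (e.g. `http://scm:9876/jmx`) and the "HTTP 200 means ready" criterion are assumptions for illustration, not the actual Ozone JMX layout:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/** Polls an HTTP JMX endpoint until it responds, instead of scraping logs. */
public class JmxReadinessProbe {

    /**
     * Returns true once the endpoint answers HTTP 200 within the deadline,
     * false if the deadline passes first.
     */
    public static boolean waitUntilReady(URL endpoint, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
                conn.setConnectTimeout(1000);
                conn.setReadTimeout(1000);
                if (conn.getResponseCode() == 200) {
                    try (InputStream in = conn.getInputStream()) {
                        in.readAllBytes(); // drain so the connection can be reused
                    }
                    return true;
                }
            } catch (IOException notUpYet) {
                // Service not reachable yet; fall through and poll again.
            }
            Thread.sleep(250);
        }
        return false;
    }
}
```

A real probe would additionally parse the returned JMX JSON for a specific bean (e.g. a chill-mode flag) rather than trusting the status code alone.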
[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618288#comment-16618288 ] Kitti Nanasi commented on HDFS-13833: - [~shwetayakkali], thanks for the new patch! I have some minor comments which haven't been addressed in patch v002: - Now that the test class has been moved to the same package as the tested class, the new method can be package-private. - There are some checkstyle warnings in the test. > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > > > Key: HDFS-13833 > URL: https://issues.apache.org/jira/browse/HDFS-13833 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Henrique Barros >Assignee: Shweta >Priority: Critical > Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch > > > I'm having a random problem with blocks replication with Hadoop > 2.6.0-cdh5.15.0 > With Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21 > > In my case we are getting this error very randomly (after some hours) and > with only one Datanode (for now, we are trying this cloudera cluster for a > POC) > Here is the Log. > {code:java} > Choosing random from 1 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[] > 2:38:20.527 PMDEBUG NetworkTopology > Choosing random from 0 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[192.168.220.53:50010] > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning null > 2:38:20.527 PMDEBUG BlockPlacementPolicy > [ > Node /default/192.168.220.53:50010 [ > Datanode 192.168.220.53:50010 is not chosen since the node is too busy > (load: 8 > 0.0). > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning 192.168.220.53:50010 > 2:38:20.527 PMINFOBlockPlacementPolicy > Not enough replicas was chosen. 
Reason:{NODE_TOO_BUSY=1} > 2:38:20.527 PMDEBUG StateChange > closeFile: > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9 > with 1 blocks is persisted to the file system > 2:38:20.527 PMDEBUG StateChange > *BLOCK* NameNode.addBlock: file > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660 > fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65 > 2:38:20.527 PMDEBUG BlockPlacementPolicy > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: > > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158) > at > 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProto
[jira] [Updated] (HDFS-13925) Unit Test for transitioning between different states
[ https://issues.apache.org/jira/browse/HDFS-13925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng updated HDFS-13925: -- Description: adding two unit tests: 1. Ensure that Active cannot be transitioned to Observer and vice versa. 2. Ensure that Observer can be transitioned to Standby and vice versa. > Unit Test for transitioning between different states > > > Key: HDFS-13925 > URL: https://issues.apache.org/jira/browse/HDFS-13925 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Major > > adding two unit tests: > 1. Ensure that Active cannot be transitioned to Observer and vice versa. > 2. Ensure that Observer can be transitioned to Standby and vice versa. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
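The two transition rules described in HDFS-13925 (Active may never move directly to or from Observer; Observer and Standby may move between each other) can be captured by a tiny state machine. The enum and transition table below are an illustrative model of the test plan, not the actual HAServiceState implementation:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

/** Toy model of the NameNode HA states and the transitions the tests assert. */
public class HaStateModel {

    public enum State { ACTIVE, STANDBY, OBSERVER }

    // Allowed transitions: Active<->Standby and Standby<->Observer,
    // but never Active<->Observer directly (per the test plan above).
    private static final Map<State, EnumSet<State>> ALLOWED =
        new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.ACTIVE, EnumSet.of(State.STANDBY));
        ALLOWED.put(State.STANDBY, EnumSet.of(State.ACTIVE, State.OBSERVER));
        ALLOWED.put(State.OBSERVER, EnumSet.of(State.STANDBY));
    }

    public static boolean canTransition(State from, State to) {
        return ALLOWED.get(from).contains(to);
    }
}
```

A unit test along the lines of the ticket would drive a MiniDFSCluster through these transitions and expect a failure exactly where `canTransition` returns false.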
[jira] [Created] (HDFS-13925) Unit Test for transitioning between different states
Sherwood Zheng created HDFS-13925: - Summary: Unit Test for transitioning between different states Key: HDFS-13925 URL: https://issues.apache.org/jira/browse/HDFS-13925 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Sherwood Zheng Assignee: Sherwood Zheng -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618283#comment-16618283 ] Hudson commented on HDDS-487: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14983 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14983/]) HDDS-487. Doc files are missing ASF license headers. Contributed by (arp: rev 0a26c521f0e5d3a2f7d40e07f11fe0a26765bc41) * (edit) hadoop-ozone/docs/themes/ozonedoc/layouts/partials/footer.html * (edit) hadoop-ozone/docs/themes/ozonedoc/layouts/partials/header.html * (edit) hadoop-ozone/docs/content/KeyCommands.md * (edit) hadoop-ozone/docs/themes/ozonedoc/layouts/index.html * (edit) hadoop-ozone/docs/content/Dozone.md * (edit) hadoop-ozone/docs/content/Freon.md * (edit) hadoop-ozone/docs/content/RealCluster.md * (edit) hadoop-ozone/docs/content/Rest.md * (edit) hadoop-ozone/docs/content/VolumeCommands.md * (edit) hadoop-ozone/docs/content/OzoneManager.md * (edit) hadoop-ozone/docs/content/RunningWithHDFS.md * (edit) hadoop-ozone/docs/themes/ozonedoc/layouts/partials/sidebar.html * (edit) hadoop-ozone/docs/content/_index.md * (edit) hadoop-ozone/docs/content/Hdds.md * (edit) hadoop-ozone/docs/content/BucketCommands.md * (edit) hadoop-ozone/docs/content/Concepts.md * (edit) hadoop-ozone/docs/content/JavaApi.md * (edit) hadoop-ozone/docs/content/OzoneFS.md * (edit) hadoop-ozone/docs/content/RunningViaDocker.md * (edit) hadoop-ozone/docs/content/SCMCLI.md * (edit) hadoop-ozone/docs/content/Settings.md * (edit) hadoop-ozone/docs/themes/ozonedoc/layouts/partials/navbar.html * (edit) hadoop-ozone/docs/static/NOTES.md * (edit) hadoop-ozone/docs/archetypes/default.md * (edit) hadoop-ozone/docs/README.md * (edit) hadoop-ozone/docs/content/CommandShell.md * (edit) hadoop-ozone/docs/content/BuildingSources.md * (edit) hadoop-ozone/docs/themes/ozonedoc/layouts/_default/single.html > Doc files are missing ASF license headers > - > > Key: HDDS-487 > URL: 
https://issues.apache.org/jira/browse/HDDS-487 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: documentation >Reporter: Arpit Agarwal >Assignee: Namit Maheshwari >Priority: Blocker > Labels: newbie > Fix For: 0.2.1, 0.3.0 > > Attachments: HDDS-487.001.patch, HDDS-487.002.patch > > > The following doc files are missing ASF license headers: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md > !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md > !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md > !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md > !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer
[ https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618281#comment-16618281 ] Chen Liang commented on HDFS-13924: --- Thanks [~csun]. I see, so I imagine the error did not happen on the server side, because the server does not treat this as an error: it still returns a LocatedBlock, but with an empty block info list. This only becomes an exception later, when the client actually tries to read the block? If this is what was happening, maybe another fix would be that on the server side, if the server finds itself in observer state and getBlockLocations is called with no known block info, it throws an exception instead of returning an empty list, so that the client side retries against a different node. Letting DFSInputStream switch to active also makes sense to me, though. > Handle BlockMissingException when reading from observer > --- > > Key: HDFS-13924 > URL: https://issues.apache.org/jira/browse/HDFS-13924 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Priority: Major > > Internally we found that reading from ObserverNode may result in > {{BlockMissingException}}. This may happen when the observer sees a smaller > number of DNs than active (maybe due to communication issues with those DNs), > or (we guess) late block reports from some DNs to the observer. This error > happens in > [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846], > when no valid DN can be found for the {{LocatedBlock}} got from the NN side. > One potential solution (although a little hacky) is to ask the > {{DFSInputStream}} to retry active when this happens. The retry logic is already > present in the code - we just have to dynamically set a flag to ask the > {{ObserverReadProxyProvider}} to try active in this case. > cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
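The fallback discussed above - retry against the active NameNode when the observer yields a block with no usable locations - can be sketched generically. The `BlockLocator` interface and the method names here are hypothetical stand-ins for the ObserverReadProxyProvider/DFSInputStream internals, not the real API:

```java
import java.io.IOException;
import java.util.List;

/** Sketch of "retry active on missing block locations" fallback logic. */
public class ObserverReadFallback {

    /** Minimal stand-in for a NameNode that can resolve block locations. */
    public interface BlockLocator {
        List<String> getBlockLocations(String path) throws IOException;
    }

    private final BlockLocator observer;
    private final BlockLocator active;

    public ObserverReadFallback(BlockLocator observer, BlockLocator active) {
        this.observer = observer;
        this.active = active;
    }

    /**
     * Ask the observer first; if it returns no datanodes for the block
     * (the BlockMissingException scenario), retry once against the active.
     */
    public List<String> locate(String path) throws IOException {
        List<String> locs = observer.getBlockLocations(path);
        if (locs == null || locs.isEmpty()) {
            // Observer is behind (late block reports, fewer known DNs):
            // fall back to the active NameNode instead of failing the read.
            locs = active.getBlockLocations(path);
        }
        return locs;
    }
}
```

The server-side alternative Chen suggests would instead throw from the observer's `getBlockLocations`, moving the same decision into the existing client retry loop.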
[jira] [Commented] (HDDS-366) Update functions impacted by SCM chill mode in StorageContainerLocationProtocol
[ https://issues.apache.org/jira/browse/HDDS-366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618277#comment-16618277 ] Ajay Kumar commented on HDDS-366: - [~xyao] thanks for the review. Addressed your comments in patch v1. The failures in TestNodeFailure and TestCloseContainerHandlingByClient seem unrelated. > Update functions impacted by SCM chill mode in > StorageContainerLocationProtocol > --- > > Key: HDDS-366 > URL: https://issues.apache.org/jira/browse/HDDS-366 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-366.00.patch, HDDS-366.01.patch > > > Modify functions impacted by SCM chill mode in > StorageContainerLocationProtocol. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-366) Update functions impacted by SCM chill mode in StorageContainerLocationProtocol
[ https://issues.apache.org/jira/browse/HDDS-366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-366: Attachment: HDDS-366.01.patch > Update functions impacted by SCM chill mode in > StorageContainerLocationProtocol > --- > > Key: HDDS-366 > URL: https://issues.apache.org/jira/browse/HDDS-366 > Project: Hadoop Distributed Data Store > Issue Type: Task >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDDS-366.00.patch, HDDS-366.01.patch > > > Modify functions impacted by SCM chill mode in > StorageContainerLocationProtocol. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDDS-487: --- Resolution: Fixed Fix Version/s: 0.3.0 0.2.1 Status: Resolved (was: Patch Available) I've committed this. Thanks for the contribution [~nmaheshwari]. > Doc files are missing ASF license headers > - > > Key: HDDS-487 > URL: https://issues.apache.org/jira/browse/HDDS-487 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: documentation >Reporter: Arpit Agarwal >Assignee: Namit Maheshwari >Priority: Blocker > Labels: newbie > Fix For: 0.2.1, 0.3.0 > > Attachments: HDDS-487.001.patch, HDDS-487.002.patch > > > The following doc files are missing ASF license headers: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md > !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md > !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md > !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md > !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618261#comment-16618261 ] Hadoop QA commented on HDDS-487: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 35m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDDS-487 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940086/HDDS-487.002.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 1ffa9515c73d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d154193 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 347 (vs. ulimit of 1) | | modules | C: hadoop-ozone/docs U: hadoop-ozone/docs | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1123/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Doc files are missing ASF license headers > - > > Key: HDDS-487 > URL: https://issues.apache.org/jira/browse/HDDS-487 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: documentation >Reporter: Arpit Agarwal >Assignee: Namit Maheshwari >Priority: Blocker > Labels: newbie > Attachments: HDDS-487.001.patch, HDDS-487.002.patch > > > The following doc files are missing ASF license headers: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md > !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md > !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md > !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md > !? 
/testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md > !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-6092) DistributedFileSystem#getCanonicalServiceName() and DistributedFileSystem#getUri() may return inconsistent results w.r.t. port
[ https://issues.apache.org/jira/browse/HDFS-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618259#comment-16618259 ] Ted Yu commented on HDFS-6092: -- Test failure was not related. > DistributedFileSystem#getCanonicalServiceName() and > DistributedFileSystem#getUri() may return inconsistent results w.r.t. port > -- > > Key: HDFS-6092 > URL: https://issues.apache.org/jira/browse/HDFS-6092 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.3.0 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Labels: BB2015-05-TBR > Attachments: HDFS-6092-v4.patch, HDFS-6092-v5.patch, > haosdent-HDFS-6092-v2.patch, haosdent-HDFS-6092.patch, hdfs-6092-v1.txt, > hdfs-6092-v2.txt, hdfs-6092-v3.txt > > > I discovered this when working on HBASE-10717 > Here is sample code to reproduce the problem: > {code} > Path desPath = new Path("hdfs://127.0.0.1/"); > FileSystem desFs = desPath.getFileSystem(conf); > > String s = desFs.getCanonicalServiceName(); > URI uri = desFs.getUri(); > {code} > Canonical name string contains the default port - 8020 > But uri doesn't contain port. > This would result in the following exception: > {code} > testIsSameHdfs(org.apache.hadoop.hbase.util.TestFSHDFSUtils) Time elapsed: > 0.001 sec <<< ERROR! > java.lang.IllegalArgumentException: port out of range:-1 > at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) > at java.net.InetSocketAddress.(InetSocketAddress.java:224) > at > org.apache.hadoop.hbase.util.FSHDFSUtils.getNNAddresses(FSHDFSUtils.java:88) > {code} > Thanks to Brando Li who helped debug this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
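The HDFS-6092 mismatch comes from `java.net.URI` reporting -1 when the URI omits the port, while the canonical service name fills in the default. The snippet below reproduces the inconsistency with plain JDK classes and shows one way to normalize it; 8020 is assumed here as the HDFS default NameNode RPC port:

```java
import java.net.URI;
import java.net.URISyntaxException;

/** Demonstrates the port mismatch from HDFS-6092 and a normalization fix. */
public class UriPortNormalizer {

    static final int DEFAULT_HDFS_PORT = 8020;

    /** URI.getPort() returns -1 when no port was written in the URI. */
    public static int rawPort(String uri) throws URISyntaxException {
        return new URI(uri).getPort();
    }

    /** Substitute the default port so it matches the canonical service name. */
    public static int effectivePort(String uri) throws URISyntaxException {
        int port = new URI(uri).getPort();
        return port == -1 ? DEFAULT_HDFS_PORT : port;
    }
}
```

The -1 returned by `rawPort` is exactly the value that later blows up in `InetSocketAddress.checkPort` in the HBase stack trace quoted above.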
[jira] [Comment Edited] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException
[ https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471170#comment-16471170 ] Ted Yu edited comment on HDFS-13515 at 9/17/18 11:01 PM: - Can you log the remote address in case of exception ? Thanks was (Author: yuzhih...@gmail.com): Can you log the remote address in case of exception? Thanks > NetUtils#connect should log remote address for NoRouteToHostException > - > > Key: HDFS-13515 > URL: https://issues.apache.org/jira/browse/HDFS-13515 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ted Yu >Priority: Minor > > {code} > hdfs.BlockReaderFactory: I/O error constructing remote block reader. > java.net.NoRouteToHostException: No route to host > at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) > at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) > at > org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) > at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884) > {code} > In the above stack trace, the remote host was not logged. > This makes troubleshooting a bit hard. > NetUtils#connect should log remote address for NoRouteToHostException . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
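The requested improvement amounts to rethrowing `NoRouteToHostException` with the remote endpoint in the message. A sketch of that wrapping, assuming nothing about the real `NetUtils` internals (class and method names here are illustrative):

```java
import java.net.InetSocketAddress;
import java.net.NoRouteToHostException;
import java.net.SocketAddress;

public class ConnectDiagnostics {
    // Hypothetical helper: rebuild the exception so its message names the
    // remote endpoint, keeping the original exception as the cause.
    static NoRouteToHostException withRemote(NoRouteToHostException cause,
                                             SocketAddress endpoint) {
        NoRouteToHostException wrapped = new NoRouteToHostException(
            cause.getMessage() + " (remote=" + endpoint + ")");
        wrapped.initCause(cause);
        return wrapped;
    }

    public static void main(String[] args) {
        NoRouteToHostException e = withRemote(
            new NoRouteToHostException("No route to host"),
            new InetSocketAddress("10.0.0.1", 8020));
        System.out.println(e.getMessage()); // message now includes 10.0.0.1:8020
    }
}
```

Preserving the original exception via `initCause` keeps the native stack trace intact for debugging while making the log line self-explanatory.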
[jira] [Commented] (HDFS-11719) Arrays.fill() wrong index in BlockSender.readChecksum() exception handling
[ https://issues.apache.org/jira/browse/HDFS-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618253#comment-16618253 ] Mingliang Liu commented on HDFS-11719: -- Failing tests are not related. This fixes a bug in API usage and does not change the checksum logic, so the existing unit tests are sufficient. I'll hold off committing for one day in case people have more comments. > Arrays.fill() wrong index in BlockSender.readChecksum() exception handling > -- > > Key: HDFS-11719 > URL: https://issues.apache.org/jira/browse/HDFS-11719 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Tao Zhang >Assignee: Tao Zhang >Priority: Major > Attachments: HADOOP-11719.001.patch, HADOOP-11719.002.patch, > HADOOP-11719.003.patch > > > In BlockSender.readChecksum() exception handling part: > Arrays.fill(buf, checksumOffset, checksumLen, (byte) 0); > Actually the parameters should be: Arrays.fill(buf, fromIndex, toIndex, > value); > So it should be changed to: > Arrays.fill(buf, checksumOffset, checksumOffset + checksumLen, (byte) 0); -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
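The bug is easy to reproduce in isolation: `Arrays.fill` takes a `toIndex`, not a length, so passing `checksumLen` silently zeroes the wrong range. A self-contained demonstration:

```java
import java.util.Arrays;

public class FillRange {
    // Zero `len` checksum bytes starting at `offset`. Arrays.fill takes
    // (array, fromIndex, toIndex, value) where toIndex is EXCLUSIVE --
    // an index, not a length, which is exactly the bug the patch fixes.
    static byte[] zeroRange(byte[] buf, int offset, int len) {
        Arrays.fill(buf, offset, offset + len, (byte) 0);
        return buf;
    }

    public static void main(String[] args) {
        byte[] buf = {1, 1, 1, 1, 1, 1, 1, 1};
        // Correct call zeroes indices 2..4; the buggy form
        // Arrays.fill(buf, 2, 3, (byte) 0) would zero only index 2.
        System.out.println(Arrays.toString(zeroRange(buf, 2, 3)));
        // prints [1, 1, 0, 0, 0, 1, 1, 1]
    }
}
```

Note also that the buggy form throws `ArrayIndexOutOfBoundsException` whenever `checksumLen < checksumOffset`, since `fromIndex > toIndex` is rejected.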
[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer
[ https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618251#comment-16618251 ] Chao Sun commented on HDFS-13924: - [~vagarychen] yes, the stack trace: {code:java} Caused by: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: file= at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1021) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:641) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:920) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:976) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:745) at java.io.FilterInputStream.read(FilterInputStream.java:83) at java.io.FilterInputStream.read(FilterInputStream.java:83) at com.facebook.presto.hive.parquet.reader.ParquetMetadataReader.readIntLittleEndian(ParquetMetadataReader.java:293) at com.facebook.presto.hive.parquet.reader.ParquetMetadataReader.readFooter(ParquetMetadataReader.java:88) at com.facebook.presto.hive.parquet.ParquetPageSourceFactory.createParquetPageSource(ParquetPageSourceFactory.java:168) ... 16 more {code} Note that we are using HDFS 2.8.2 with some custom patches, so you should look at the code [here|https://github.com/apache/hadoop/blob/branch-2.8.2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1010]. Even with retry, I think it will still call {{getBlockLocations}} on the same observer and get the same wrong results. > Handle BlockMissingException when reading from observer > --- > > Key: HDFS-13924 > URL: https://issues.apache.org/jira/browse/HDFS-13924 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Priority: Major > > Internally we found that reading from ObserverNode may result in > {{BlockMissingException}}. 
This may happen when the observer sees a smaller > number of DNs than active (maybe due to communication issues with those DNs), > or (we guess) late block reports from some DNs to the observer. This error > happens in > [DFSInputStream#chooseDataNode|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L846], > when no valid DN can be found for the {{LocatedBlock}} obtained from the NN side. > One potential solution (although a little hacky) is to ask the > {{DFSInputStream}} to retry the active when this happens. The retry logic is already > present in the code; we just have to dynamically set a flag to ask the > {{ObserverReadProxyProvider}} to try the active in this case. > cc [~shv], [~xkrogen], [~vagarychen], [~zero45] for discussion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
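The proposed fallback amounts to a flag that routes the retried read to the active NameNode. A very rough sketch of that control flow (all names here are illustrative; the real change would live in ObserverReadProxyProvider / DFSInputStream, not a standalone class):

```java
import java.util.function.Supplier;

public class ObserverFallback {
    // Hypothetical flag the proxy provider would consult when choosing
    // between observer and active for the next getBlockLocations call.
    private volatile boolean forceActive = false;

    boolean shouldUseActive() {
        return forceActive;
    }

    // Try the observer first; on failure (standing in for
    // BlockMissingException), flip the flag and reread from the active.
    <T> T readWithFallback(Supplier<T> observerRead, Supplier<T> activeRead) {
        try {
            return observerRead.get();
        } catch (RuntimeException blockMissing) {
            forceActive = true; // subsequent lookups go to the active NN
            return activeRead.get();
        }
    }

    public static void main(String[] args) {
        ObserverFallback f = new ObserverFallback();
        String out = f.readWithFallback(
            () -> { throw new RuntimeException("could not obtain block"); },
            () -> "read-from-active");
        System.out.println(out + " forceActive=" + f.shouldUseActive());
    }
}
```

This is the "hacky" part of the proposal: the flag is per-stream state that must be reset once the observer catches up, otherwise all reads silently migrate to the active.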
[jira] [Commented] (HDFS-13924) Handle BlockMissingException when reading from observer
[ https://issues.apache.org/jira/browse/HDFS-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618247#comment-16618247 ] Chen Liang commented on HDFS-13924: --- Good find, thanks for reporting [~csun]! I'm wondering, is there a full stack trace still available? Just want to get a better idea of why the retry logic did not help in this case. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618232#comment-16618232 ] Hadoop QA commented on HDDS-487: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 30m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDDS-487 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940079/HDDS-487.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux c54d2ba40357 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 23a6137 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 406 (vs. ulimit of 1) | | modules | C: hadoop-ozone/docs U: hadoop-ozone/docs | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1121/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Doc files are missing ASF license headers > - > > Key: HDDS-487 > URL: https://issues.apache.org/jira/browse/HDDS-487 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: documentation >Reporter: Arpit Agarwal >Assignee: Namit Maheshwari >Priority: Blocker > Labels: newbie > Attachments: HDDS-487.001.patch, HDDS-487.002.patch > > > The following doc files are missing ASF license headers: > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md > !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md > !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md > !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md > !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md > !? 
/testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md > !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
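The fix for the listed files is mechanical: each markdown doc gets the standard ASF license notice, wrapped in an HTML comment so renderers skip it. A sketch of that header (verify against the exact canonical text already used elsewhere in the Hadoop docs):

```markdown
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
```

The `asflicense` check in the QA report flags exactly the files whose first bytes lack this notice.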
[jira] [Commented] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618230#comment-16618230 ] Arpit Agarwal commented on HDDS-487: +1 pending Jenkins. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618225#comment-16618225 ] Hadoop QA commented on HDDS-352: (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HDDS-Build/1122/console in case of problems. > Separate install and testing phases in acceptance tests. > > > Key: HDDS-352 > URL: https://issues.apache.org/jira/browse/HDDS-352 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Labels: test > Attachments: HDDS-352-ozone-0.2.001.patch, > HDDS-352-ozone-0.2.002.patch, HDDS-352-ozone-0.2.003.patch, > HDDS-352-ozone-0.2.004.patch, HDDS-352-ozone-0.2.005.patch > > > In the current acceptance tests (hadoop-ozone/acceptance-test) the robot > files contain two kinds of commands: > 1) starting and stopping clusters > 2) testing the basic behaviour with client calls > It would be great to separate the two functionalities and include only the > testing part in the robot files. > 1. Ideally the tests could be executed in any environment. After a kubernetes > install I would like to do a smoke test. It could be a different environment > but I would like to execute most of the tests (check ozone cli, rest api, > etc.) > 2. There could be multiple ozone environments (standalone ozone cluster, hdfs > + ozone cluster, etc.). We need to test all of them with all the tests. > 3. With this approach we can collect the docker-compose files in just one > place (hadoop-dist project). After a docker-compose up there should be a way > to execute the tests against an existing cluster. Something like this: > {code} > docker run -it apache/hadoop-runner -v ./acceptance-test:/opt/acceptance-test > -e SCM_URL=http://scm:9876 --network=composenetwork start-all-tests.sh > {code} > 4. It also means that we need to execute the tests from a separate container > instance. We need a configuration parameter to define the cluster topology. > Ideally it could be just one environment variable with the url of the scm, > and the scm could be used to discover all of the required components and > download the configuration files from there. > 5. Until now we used the log output of the docker-compose files to do some > readiness probes. They should be converted to poll the jmx endpoints and > check if the cluster is up and running. If we need the log files for > additional testing we can create multiple implementations for different types > of environments (docker-compose/kubernetes) and include the right set of > functions based on external parameters. > 6. Still we need a generic script under the ozone-acceptance test project to > run all the tests (starting the docker-compose clusters, executing tests in a > different container, stopping the cluster) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
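Point 5 of the proposal (poll JMX endpoints instead of scraping docker-compose logs) can be sketched as a small polling loop. This is an illustration only: the URL and retry parameters are assumptions, and a real probe would also parse the JMX JSON payload rather than just check the HTTP status.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class JmxReadinessProbe {
    // Return true iff the HTTP JMX endpoint answers with 200.
    static boolean isUp(String jmxUrl) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(jmxUrl).openConnection();
            c.setConnectTimeout(1000);
            c.setReadTimeout(1000);
            return c.getResponseCode() == 200;
        } catch (Exception e) {
            return false; // unreachable, refused, malformed URL, timeout...
        }
    }

    // Poll until the endpoint is up or the attempts are exhausted.
    static boolean waitUntilUp(String jmxUrl, int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (isUp(jmxUrl)) {
                return true;
            }
            Thread.sleep(sleepMs);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical SCM JMX endpoint; adjust host/port per cluster topology.
        System.out.println(waitUntilUp("http://scm:9876/jmx", 30, 1000));
    }
}
```

The same loop works unchanged against docker-compose or kubernetes environments, which is the point of decoupling readiness checks from log output.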
[jira] [Commented] (HDFS-13921) Remove the 'Path ... should be specified as a URI' warnings on startup
[ https://issues.apache.org/jira/browse/HDFS-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618222#comment-16618222 ] Hadoop QA commented on HDFS-13921: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HDFS-13921 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940048/HDFS-13921-01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 788177a14352 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3d89c3e | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/25080/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25080/testReport/ | | Max. process+thread count | 2778 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hd
[jira] [Updated] (HDDS-352) Separate install and testing phases in acceptance tests.
[ https://issues.apache.org/jira/browse/HDDS-352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDDS-352: -- Attachment: HDDS-352-ozone-0.2.005.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618221#comment-16618221 ] Shweta commented on HDFS-13833: --- Thank you very much [~xiaochen] for the review comments. As suggested, I have made the changes and uploaded a new patch. Please review and suggest if there need to be any further modifications. > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > > > Key: HDFS-13833 > URL: https://issues.apache.org/jira/browse/HDFS-13833 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Henrique Barros >Assignee: Shweta >Priority: Critical > Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch > > > I'm having a random problem with blocks replication with Hadoop > 2.6.0-cdh5.15.0 > With Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21 > > In my case we are getting this error very randomly (after some hours) and > with only one Datanode (for now, we are trying this cloudera cluster for a > POC) > Here is the Log. > {code:java} > Choosing random from 1 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[] > 2:38:20.527 PMDEBUG NetworkTopology > Choosing random from 0 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[192.168.220.53:50010] > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning null > 2:38:20.527 PMDEBUG BlockPlacementPolicy > [ > Node /default/192.168.220.53:50010 [ > Datanode 192.168.220.53:50010 is not chosen since the node is too busy > (load: 8 > 0.0). > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning 192.168.220.53:50010 > 2:38:20.527 PMINFOBlockPlacementPolicy > Not enough replicas was chosen. 
Reason:{NODE_TOO_BUSY=1} > 2:38:20.527 PMDEBUG StateChange > closeFile: > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9 > with 1 blocks is persisted to the file system > 2:38:20.527 PMDEBUG StateChange > *BLOCK* NameNode.addBlock: file > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660 > fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65 > 2:38:20.527 PMDEBUG BlockPlacementPolicy > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: > > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158) > at > 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apa
[jira] [Updated] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly
[ https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shweta updated HDFS-13833: -- Attachment: HDFS-13833.002.patch > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > > > Key: HDFS-13833 > URL: https://issues.apache.org/jira/browse/HDFS-13833 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Henrique Barros >Assignee: Shweta >Priority: Critical > Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch > > > I'm having a random problem with blocks replication with Hadoop > 2.6.0-cdh5.15.0 > With Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21 > > In my case we are getting this error very randomly (after some hours) and > with only one Datanode (for now, we are trying this cloudera cluster for a > POC) > Here is the Log. > {code:java} > Choosing random from 1 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[] > 2:38:20.527 PMDEBUG NetworkTopology > Choosing random from 0 available nodes on node /default, scope=/default, > excludedScope=null, excludeNodes=[192.168.220.53:50010] > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning null > 2:38:20.527 PMDEBUG BlockPlacementPolicy > [ > Node /default/192.168.220.53:50010 [ > Datanode 192.168.220.53:50010 is not chosen since the node is too busy > (load: 8 > 0.0). > 2:38:20.527 PMDEBUG NetworkTopology > chooseRandom returning 192.168.220.53:50010 > 2:38:20.527 PMINFOBlockPlacementPolicy > Not enough replicas was chosen. 
Reason:{NODE_TOO_BUSY=1} > 2:38:20.527 PMDEBUG StateChange > closeFile: > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9 > with 1 blocks is persisted to the file system > 2:38:20.527 PMDEBUG StateChange > *BLOCK* NameNode.addBlock: file > /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660 > fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65 > 2:38:20.527 PMDEBUG BlockPlacementPolicy > Failed to choose from local rack (location = /default); the second replica is > not found, retry choosing ramdomly > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: > > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158) > at > 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
> at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server
[jira] [Updated] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Maheshwari updated HDDS-487:
--
    Attachment: HDDS-487.002.patch
        Status: Patch Available  (was: Open)

> Doc files are missing ASF license headers
>
> Key: HDDS-487
> URL: https://issues.apache.org/jira/browse/HDDS-487
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: documentation
> Reporter: Arpit Agarwal
> Assignee: Namit Maheshwari
> Priority: Blocker
> Labels: newbie
> Attachments: HDDS-487.001.patch, HDDS-487.002.patch
>
> The following doc files are missing ASF license headers:
> {code}
> Lines that start with ? in the ASF License report indicate files that do not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
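For reference, a header of the kind the report above is flagging as missing would look like the standard ASF boilerplate, wrapped in an HTML comment so it does not render in the generated docs. This is a sketch based on the stock Apache License 2.0 appendix text, not a copy of the header the HDDS-487 patch actually adds:

```markdown
<!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
```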
[jira] [Updated] (HDDS-487) Doc files are missing ASF license headers
[ https://issues.apache.org/jira/browse/HDDS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Namit Maheshwari updated HDDS-487:
--
    Status: Open  (was: Patch Available)

> Doc files are missing ASF license headers
>
> Key: HDDS-487
> URL: https://issues.apache.org/jira/browse/HDDS-487
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: documentation
> Reporter: Arpit Agarwal
> Assignee: Namit Maheshwari
> Priority: Blocker
> Labels: newbie
> Attachments: HDDS-487.001.patch, HDDS-487.002.patch
>
> The following doc files are missing ASF license headers:
> {code}
> Lines that start with ? in the ASF License report indicate files that do not have an Apache license header:
>  !? /testptch/hadoop/hadoop-ozone/docs/content/BuildingSources.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/KeyCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/Hdds.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneManager.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/BucketCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/OzoneFS.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/VolumeCommands.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/JavaApi.md
>  !? /testptch/hadoop/hadoop-ozone/docs/content/RunningWithHDFS.md
> {code}
[jira] [Updated] (HDDS-488) Handle chill mode exception from SCM in OzoneManager
[ https://issues.apache.org/jira/browse/HDDS-488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ajay Kumar updated HDDS-488:
--
    Description:
Following functions should propagate SCM chill mode exception back to the clients:
allocateBlock
openKey

  was: Modify functions impacted by SCM chill mode in StorageContainerLocationProtocol.

> Handle chill mode exception from SCM in OzoneManager
>
> Key: HDDS-488
> URL: https://issues.apache.org/jira/browse/HDDS-488
> Project: Hadoop Distributed Data Store
> Issue Type: Task
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Major
>
> Following functions should propagate SCM chill mode exception back to the clients:
> allocateBlock
> openKey
[jira] [Commented] (HDFS-13844) Fix the fmt_bytes function in the dfs-dust.js
[ https://issues.apache.org/jira/browse/HDFS-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16618216#comment-16618216 ]

Hudson commented on HDFS-13844:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14982 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14982/])
HDFS-13844. Fix the fmt_bytes function in the dfs-dust.js. Contributed (inigoiri: rev d1541932dbf2efd09da251b23c8825ce97f9c86c)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js

> Fix the fmt_bytes function in the dfs-dust.js
>
> Key: HDFS-13844
> URL: https://issues.apache.org/jira/browse/HDFS-13844
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, ui
> Affects Versions: 1.2.0, 2.2.0, 2.7.2, 3.0.0, 3.1.0
> Reporter: yanghuafeng
> Assignee: yanghuafeng
> Priority: Minor
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13844.001.patch, overflow_undefined_unit.jpg, overflow_unit.jpg, undefined_unit.jpg
>
> The namenode WebUI cannot display the capacity with correct units. I have found that the function fmt_bytes in the dfs-dust.js missed the EB unit. This will lead to undefined unit in the ui.
> And although the unit ZB is very large, we should take the unit overflow into consideration. Supposing the last unit is GB, we should get the 8192 GB with the total capacity 8T rather than 8 undefined.
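A minimal sketch of the kind of fix the report describes — not the actual dfs-dust.js patch — extending the unit table through EB and clamping at the last unit, so a value past the table's end is expressed in the largest available unit instead of "undefined":

```javascript
// Hypothetical fmt_bytes-style helper (illustrative only, not the committed
// dfs-dust.js code). Two properties the bug report asks for:
//  1. the unit table includes EB, so exabyte-scale capacities get a unit;
//  2. the loop stops at the last unit, so an out-of-range value is shown as
//     e.g. "8192.00 GB" (with GB as the last unit) rather than "8 undefined".
function fmt_bytes(bytes) {
  var units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB'];
  var idx = 0;
  // Divide down while the value still exceeds the current unit AND a larger
  // unit exists; the second condition is what prevents the unit overflow.
  while (bytes >= 1024 && idx < units.length - 1) {
    bytes /= 1024;
    idx += 1;
  }
  return bytes.toFixed(2) + ' ' + units[idx];
}
```

With the clamp in place, 8 TB against a table that ends at GB would render as 8192.00 GB, matching the behavior the reporter expects.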
[jira] [Created] (HDDS-488) Handle chill mode exception from SCM in OzoneManager
Ajay Kumar created HDDS-488:
---

Summary: Handle chill mode exception from SCM in OzoneManager
Key: HDDS-488
URL: https://issues.apache.org/jira/browse/HDDS-488
Project: Hadoop Distributed Data Store
Issue Type: Task
Reporter: Ajay Kumar
Assignee: Ajay Kumar

Modify functions impacted by SCM chill mode in StorageContainerLocationProtocol.