[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839393#comment-15839393 ] stack commented on HDFS-9924: - bq. Add a port unification service in front of the grpc server and the old rpc server to support both grpc client and old client. When you say port unification service, what are you thinking? It'd be in-process listening on the DN port reading a few bytes to figure which RPC? Reading https://www.cockroachlabs.com/blog/a-tale-of-two-ports/ would advocate listening on a new port altogether; an option 5 which is probably too much to ask. We should probably perf test grpc (going by the citation). Thanks [~Apache9] > [umbrella] Nonblocking HDFS Access > -- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: AsyncHdfs20160510.pdf, Async-HDFS-Performance-Report.pdf > > > This is an umbrella JIRA for supporting Nonblocking HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support nonblocking calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
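The port-unification question above boils down to protocol sniffing: peek at the first bytes of each new connection and route it to the matching server. A hypothetical sketch of the dispatch decision (assuming gRPC arrives over HTTP/2, whose client preface begins with the bytes {{PRI }}, while the existing Hadoop RPC connection header begins with {{hrpc}}; class and method names are invented for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical classifier for a port-unification front end: inspect the
// first four bytes of a new connection and decide which backend gets it.
class RpcSniffer {
    // The HTTP/2 client preface starts with "PRI " (gRPC runs over HTTP/2);
    // the existing Hadoop RPC connection header starts with "hrpc".
    private static final byte[] HTTP2 = "PRI ".getBytes(StandardCharsets.US_ASCII);
    private static final byte[] HRPC  = "hrpc".getBytes(StandardCharsets.US_ASCII);

    static String classify(byte[] firstFourBytes) {
        if (Arrays.equals(firstFourBytes, HTTP2)) {
            return "grpc";          // hand off to the gRPC server
        }
        if (Arrays.equals(firstFourBytes, HRPC)) {
            return "hadoop-rpc";    // hand off to the legacy RPC server
        }
        return "unknown";           // reject or close the connection
    }
}
```

An in-process unifier would read these bytes through a pushback stream so the chosen server still sees the complete byte stream; the CockroachDB post cited above argues that listening on a separate port sidesteps the sniffing entirely.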
[jira] [Commented] (HDFS-11370) Optimize NamenodeFsck#getReplicaInfo
[ https://issues.apache.org/jira/browse/HDFS-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839389#comment-15839389 ] Hadoop QA commented on HDFS-11370: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 92m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy | | | hadoop.hdfs.TestAclsEndToEnd | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11370 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849463/HDFS-11370.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b76bdbac4889 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 425a7e5 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18268/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18268/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18268/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Optimize NamenodeFsck#getReplicaInfo > > > Key: HDFS-11370 > URL: https://issues.apache.org/jira/browse/HDFS-11370 > Project: Hadoop HDFS > Issue Type: Improvement >
[jira] [Commented] (HDFS-4025) QJM: Synchronize past log segments to JNs that missed them
[ https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839341#comment-15839341 ] Hadoop QA commented on HDFS-4025: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 551 unchanged - 0 fixed = 558 total (was 551) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestAclsEndToEnd | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-4025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849415/HDFS-4025.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux e10586eba46a 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 425a7e5 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18266/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18266/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18266/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18266/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > QJM: Sychronize past log segments to JNs
[jira] [Commented] (HDFS-9884) Use doxia macro to generate in-page TOC of HDFS site documentation
[ https://issues.apache.org/jira/browse/HDFS-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839333#comment-15839333 ] Hadoop QA commented on HDFS-9884: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-9884 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849464/HDFS-9884.004.patch | | Optional Tests | asflicense mvnsite | | uname | Linux bee6c6c71bfe 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 425a7e5 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18269/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use doxia macro to generate in-page TOC of HDFS site documentation > -- > > Key: HDFS-9884 > URL: https://issues.apache.org/jira/browse/HDFS-9884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.7.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki > Attachments: HDFS-9884.001.patch, HDFS-9884.002.patch, > HDFS-9884.003.patch, HDFS-9884.004.patch > > > Since maven-site-plugin 3.5 was released, we can use toc macro in Markdown. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
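For readers unfamiliar with the macro under discussion: with maven-site-plugin 3.5+, a Markdown page can ask Doxia to generate its in-page TOC via a macro call written as an HTML comment, which expands at site-build time into a nested list of links to the page's headings. A sketch (the depth parameters are illustrative):

```markdown
<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
```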
[jira] [Updated] (HDFS-9884) Use doxia macro to generate in-page TOC of HDFS site documentation
[ https://issues.apache.org/jira/browse/HDFS-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-9884: --- Attachment: HDFS-9884.004.patch Yeah. I attached 004. > Use doxia macro to generate in-page TOC of HDFS site documentation > -- > > Key: HDFS-9884 > URL: https://issues.apache.org/jira/browse/HDFS-9884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.7.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki > Attachments: HDFS-9884.001.patch, HDFS-9884.002.patch, > HDFS-9884.003.patch, HDFS-9884.004.patch > > > Since maven-site-plugin 3.5 was released, we can use toc macro in Markdown.
[jira] [Updated] (HDFS-11370) Optimize NamenodeFsck#getReplicaInfo
[ https://issues.apache.org/jira/browse/HDFS-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-11370: Attachment: HDFS-11370.1.patch Uploaded a new patch. It avoids scanning the storages multiple times. > Optimize NamenodeFsck#getReplicaInfo > > > Key: HDFS-11370 > URL: https://issues.apache.org/jira/browse/HDFS-11370 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Minor > Attachments: HDFS-11370.1.patch > > > We can optimize the logic of calculating the number of storages in > {{NamenodeFsck#getReplicaInfo}}. This is a follow-on task of HDFS-11124.
[jira] [Updated] (HDFS-11370) Optimize NamenodeFsck#getReplicaInfo
[ https://issues.apache.org/jira/browse/HDFS-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-11370: Status: Patch Available (was: Open) > Optimize NamenodeFsck#getReplicaInfo > > > Key: HDFS-11370 > URL: https://issues.apache.org/jira/browse/HDFS-11370 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Minor > Attachments: HDFS-11370.1.patch > > > We can optimize the logic of calculating the number of storages in > {{NamenodeFsck#getReplicaInfo}}. This is a follow-on task of HDFS-11124.
[jira] [Commented] (HDFS-9884) Use doxia macro to generate in-page TOC of HDFS site documentation
[ https://issues.apache.org/jira/browse/HDFS-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839307#comment-15839307 ] Akira Ajisaka commented on HDFS-9884: - Would you update HDFSDiskbalancer.md as well? > Use doxia macro to generate in-page TOC of HDFS site documentation > -- > > Key: HDFS-9884 > URL: https://issues.apache.org/jira/browse/HDFS-9884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.7.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki > Attachments: HDFS-9884.001.patch, HDFS-9884.002.patch, > HDFS-9884.003.patch > > > Since maven-site-plugin 3.5 was released, we can use toc macro in Markdown. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11258) File mtime change could not save to editlog
[ https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839293#comment-15839293 ] Akira Ajisaka commented on HDFS-11258: -- Filed HDFS-11373 and created a patch. > File mtime change could not save to editlog > --- > > Key: HDFS-11258 > URL: https://issues.apache.org/jira/browse/HDFS-11258 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: hdfs-11258.1.patch, hdfs-11258.2.patch, > hdfs-11258.3.patch, hdfs-11258.4.patch, hdfs-11258-addendum-branch2.patch > > > When both mtime and atime are changed, and atime is not beyond the precision > limit, the mtime change is not saved to edit logs. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11373) Backport HDFS-11258 and HDFS-11272 to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-11373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11373: - Status: Patch Available (was: Open) > Backport HDFS-11258 and HDFS-11272 to branch-2.7 > > > Key: HDFS-11373 > URL: https://issues.apache.org/jira/browse/HDFS-11373 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Critical > Attachments: HDFS-11373-branch-2.7.01.patch > >
[jira] [Updated] (HDFS-11373) Backport HDFS-11258 and HDFS-11272 to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-11373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11373: - Attachment: HDFS-11373-branch-2.7.01.patch > Backport HDFS-11258 and HDFS-11272 to branch-2.7 > > > Key: HDFS-11373 > URL: https://issues.apache.org/jira/browse/HDFS-11373 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Critical > Attachments: HDFS-11373-branch-2.7.01.patch > >
[jira] [Updated] (HDFS-4025) QJM: Synchronize past log segments to JNs that missed them
[ https://issues.apache.org/jira/browse/HDFS-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-4025: - Attachment: HDFS-4025.008.patch Thank you [~jingzhao] for reviewing the patch. {quote} 5. Similarly please see if we still need JNStorage#getTemporaryEditsFile and JNStorage#getFinalizedEditsFile. {quote} We would need these two methods, as the corresponding methods in NNStorage require the current storage directory to be passed as an argument. {quote} 12. The whole "getMissingLogSegments" may need to be redesigned: Each time we download a missing segment successfully, we should update lastSyncedTxId accordingly. {quote} Suppose the lastSyncedTxId is 10 and the other journal node from which we are downloading missing logs has logs starting from edits_20_30. Then we should not update the lastSyncedTxId to 30, as we might still get the missing edits 11 to 20 from another journal node. Instead, if we update the lastSyncedTxId at the end of one sync cycle (after downloading all missing logs from a journal), we can avoid this scenario. I have addressed the rest of the comments in patch v08. > QJM: Synchronize past log segments to JNs that missed them > - > > Key: HDFS-4025 > URL: https://issues.apache.org/jira/browse/HDFS-4025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ha >Affects Versions: QuorumJournalManager (HDFS-3077) >Reporter: Todd Lipcon >Assignee: Hanisha Koneru > Fix For: QuorumJournalManager (HDFS-3077) > > Attachments: HDFS-4025.000.patch, HDFS-4025.001.patch, > HDFS-4025.002.patch, HDFS-4025.003.patch, HDFS-4025.004.patch, > HDFS-4025.005.patch, HDFS-4025.006.patch, HDFS-4025.007.patch, > HDFS-4025.008.patch > > > Currently, if a JournalManager crashes and misses some segment of logs, and > then comes back, it will be re-added as a valid part of the quorum on the > next log roll. 
However, it will not have a complete history of log segments > (i.e any individual JN may have gaps in its transaction history). This > mirrors the behavior of the NameNode when there are multiple local > directories specified. > However, it would be better if a background thread noticed these gaps and > "filled them in" by grabbing the segments from other JournalNodes. This > increases the resilience of the system when JournalNodes get reformatted or > otherwise lose their local disk.
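The lastSyncedTxId reasoning above amounts to a simple rule: advance the marker only across contiguous segments, and only commit the advance once a full sync cycle has downloaded everything available. A hypothetical sketch of that rule (class, method, and segment representation are invented for illustration):

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical helper illustrating the rule discussed above: after one full
// sync cycle, advance lastSyncedTxId only across contiguous segments, so a
// gap (e.g. missing edits 11-20) is never skipped over.
class SyncMarker {
    /** Each segment is {firstTxId, lastTxId}, e.g. edits_20_30 -> {20, 30}. */
    static long advance(long lastSyncedTxId, List<long[]> downloaded) {
        downloaded.sort(Comparator.comparingLong((long[] s) -> s[0]));
        for (long[] seg : downloaded) {
            if (seg[0] > lastSyncedTxId + 1) {
                break; // gap: another JN may still hold the missing txids
            }
            lastSyncedTxId = Math.max(lastSyncedTxId, seg[1]);
        }
        return lastSyncedTxId;
    }
}
```

With lastSyncedTxId = 10 and only edits_20_30 downloaded, the marker stays at 10; once a segment covering 11-19 is also fetched, it can move to 30.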
[jira] [Created] (HDFS-11373) Backport HDFS-11258 and HDFS-11272 to branch-2.7
Akira Ajisaka created HDFS-11373: Summary: Backport HDFS-11258 and HDFS-11272 to branch-2.7 Key: HDFS-11373 URL: https://issues.apache.org/jira/browse/HDFS-11373 Project: Hadoop HDFS Issue Type: Bug Reporter: Akira Ajisaka Assignee: Akira Ajisaka Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11258) File mtime change could not save to editlog
[ https://issues.apache.org/jira/browse/HDFS-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839232#comment-15839232 ] Akira Ajisaka commented on HDFS-11258: -- Thanks [~kanaka] for the reminder. The patches cannot be applied to branch-2.7, so I'll file a jira for backporting this and HDFS-11272 to branch-2.7 and create a patch. > File mtime change could not save to editlog > --- > > Key: HDFS-11258 > URL: https://issues.apache.org/jira/browse/HDFS-11258 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jimmy Xiang >Assignee: Jimmy Xiang >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: hdfs-11258.1.patch, hdfs-11258.2.patch, > hdfs-11258.3.patch, hdfs-11258.4.patch, hdfs-11258-addendum-branch2.patch > > > When both mtime and atime are changed, and atime is not beyond the precision > limit, the mtime change is not saved to edit logs.
[jira] [Updated] (HDFS-9884) Use doxia macro to generate in-page TOC of HDFS site documentation
[ https://issues.apache.org/jira/browse/HDFS-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HDFS-9884: --- Attachment: HDFS-9884.003.patch Thanks for pinging me, [~ajisakaa]. I attached an updated patch. > Use doxia macro to generate in-page TOC of HDFS site documentation > -- > > Key: HDFS-9884 > URL: https://issues.apache.org/jira/browse/HDFS-9884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 2.7.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki > Attachments: HDFS-9884.001.patch, HDFS-9884.002.patch, > HDFS-9884.003.patch > > > Since maven-site-plugin 3.5 was released, we can use toc macro in Markdown.
[jira] [Commented] (HDFS-11372) Increase test timeouts that are too aggressive.
[ https://issues.apache.org/jira/browse/HDFS-11372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839151#comment-15839151 ] Xiao Chen commented on HDFS-11372: -- Sure Yiqun, will take a look. Thanks. bq. So in my opinion, just adding the missing timeouts to the test methods is enough. Adding a {{@Rule}} to the test class would be sufficient then. > Increase test timeouts that are too aggressive. > --- > > Key: HDFS-11372 > URL: https://issues.apache.org/jira/browse/HDFS-11372 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Xiao Chen >Priority: Minor > > Seen these time out in some > [precommit|https://issues.apache.org/jira/browse/HDFS-10899?focusedCommentId=15838964=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15838964] > false positives; at a brief look, likely due to the timeouts being too > small. > - TestLeaseRecovery2 > - TestDataNodeVolumeFailure > Can't seem to find from jenkins which test method is at fault, but > TestLeaseRecovery2 has some 30-second timeout cases, and > TestDataNodeVolumeFailure has 1 10-second timeout case. > We should make them at least 2 minutes, or maybe 10x local run time.
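A class-level rule of the kind suggested here would look roughly like this in JUnit 4 (a sketch: the class name and test body are placeholders, and the 2-minute value follows the guideline in the issue description; {{Timeout.seconds}} exists as of JUnit 4.12):

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TestLeaseRecovery2 {
  // Applies to every test method in the class, so individual methods
  // no longer need their own @Test(timeout=...) values.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(120);

  @Test
  public void testRecovery() throws Exception {
    // ... test body elided ...
  }
}
```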
[jira] [Commented] (HDFS-11371) Document missing metrics of erasure coding
[ https://issues.apache.org/jira/browse/HDFS-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839139#comment-15839139 ] Hadoop QA commented on HDFS-11371: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11371 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849406/HDFS-11371.001.patch | | Optional Tests | asflicense mvnsite | | uname | Linux f7f745fe63dc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 425a7e5 | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18263/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document missing metrics of erasure coding > -- > > Key: HDFS-11371 > URL: https://issues.apache.org/jira/browse/HDFS-11371 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11371.001.patch > > > HDFS-8411, HDFS-11216 add some metrics of erasure coding, but it hasn't been > documented. The following metrics in {{DataNodeMetrics}} is missing in > documentation. 
> {code} > @Metric("Count of erasure coding reconstruction tasks") > MutableCounterLong ecReconstructionTasks; > @Metric("Count of erasure coding failed reconstruction tasks") > MutableCounterLong ecFailedReconstructionTasks; > @Metric("Nanoseconds spent by decoding tasks") > MutableCounterLong ecDecodingTimeNanos; > @Metric("Bytes read by erasure coding worker") > MutableCounterLong ecReconstructionBytesRead; > @Metric("Bytes written by erasure coding worker") > MutableCounterLong ecReconstructionBytesWritten; > @Metric("Bytes remote read by erasure coding worker") > MutableCounterLong ecReconstructionRemoteBytesRead; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
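The missing documentation would be new rows in the DataNode section of Metrics.md. A sketch with descriptions taken verbatim from the {{@Metric}} annotations above (the capitalized names assume the metrics system's usual derivation from the field names, and the table layout assumes the existing file's style):

```markdown
| Name | Description |
|:---- |:---- |
| `EcReconstructionTasks` | Count of erasure coding reconstruction tasks |
| `EcFailedReconstructionTasks` | Count of erasure coding failed reconstruction tasks |
| `EcDecodingTimeNanos` | Nanoseconds spent by decoding tasks |
| `EcReconstructionBytesRead` | Bytes read by erasure coding worker |
| `EcReconstructionBytesWritten` | Bytes written by erasure coding worker |
| `EcReconstructionRemoteBytesRead` | Bytes remote read by erasure coding worker |
```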
[jira] [Comment Edited] (HDFS-11372) Increase test timeouts that are too aggressive.
[ https://issues.apache.org/jira/browse/HDFS-11372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839114#comment-15839114 ] Yiqun Lin edited comment on HDFS-11372 at 1/26/17 3:35 AM: --- Hi [~xiaochen], I am working on improving the tests for {{TestDataNodeVolumeFailure}} and some relevant tests in HDFS-11353. Can you have a look? I think that addresses your comment. I also made some other improvements. I think the test failures are caused by some test methods that do not set a timeout; otherwise, the test method would throw {{TimeoutException}} and we would catch that. So in my opinion, just adding the missing timeouts to the test methods is enough. was (Author: linyiqun): Hi [~xiaochen], I am working on improving the tests for {{TestDataNodeVolumeFailure}} and some relevant tests in HDFS-11353. Can you have a look? I think that addresses your comment. I also made some other improvements. > Increase test timeouts that are too aggressive. > --- > > Key: HDFS-11372 > URL: https://issues.apache.org/jira/browse/HDFS-11372 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Xiao Chen >Priority: Minor > > Seen these time out in some > [precommit|https://issues.apache.org/jira/browse/HDFS-10899?focusedCommentId=15838964=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15838964] > false positives; at a brief look, likely due to the timeouts being too > small. > - TestLeaseRecovery2 > - TestDataNodeVolumeFailure > Can't seem to find from jenkins which test method is at fault, but > TestLeaseRecovery2 has some 30-second timeout cases, and > TestDataNodeVolumeFailure has 1 10-second timeout case. > We should make them at least 2 minutes, or maybe 10x local run time.
[jira] [Commented] (HDFS-11372) Increase test timeouts that are too aggressive.
[ https://issues.apache.org/jira/browse/HDFS-11372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839114#comment-15839114 ] Yiqun Lin commented on HDFS-11372: -- Hi [~xiaochen], I am working on improving {{TestDataNodeVolumeFailure}} and some related tests in HDFS-11353. Could you have a look? I think it addresses your comment, and I have made some other improvements as well. > Increase test timeouts that are too aggressive. > --- > > Key: HDFS-11372 > URL: https://issues.apache.org/jira/browse/HDFS-11372 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Xiao Chen >Priority: Minor > > Seen these timeouts in some > [precommit|https://issues.apache.org/jira/browse/HDFS-10899?focusedCommentId=15838964=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15838964] > false positives; at a brief look I think they are likely due to the timeouts being too > small. > - TestLeaseRecovery2 > - TestDataNodeVolumeFailure > Can't seem to find from Jenkins which test method is at fault, but > TestLeaseRecovery2 has some 30-second timeout cases, and > TestDataNodeVolumeFailure has one 10-second timeout case. > We should make them at least 2 minutes, or maybe 10x local run time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11353) Improve the unit tests relevant to DataNode volume failure testing
[ https://issues.apache.org/jira/browse/HDFS-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11353: - Attachment: HDFS-11353.004.patch > Improve the unit tests relevant to DataNode volume failure testing > -- > > Key: HDFS-11353 > URL: https://issues.apache.org/jira/browse/HDFS-11353 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11353.001.patch, HDFS-11353.002.patch, > HDFS-11353.003.patch, HDFS-11353.004.patch > > > Currently, many tests whose names start with > {{TestDataNodeVolumeFailure*}} frequently time out or fail. I found one > failed test in a recent Jenkins build. The stack info: > {code} > org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures > java.util.concurrent.TimeoutException: Timed out waiting for DN to die > at > org.apache.hadoop.hdfs.DFSTestUtil.waitForDatanodeDeath(DFSTestUtil.java:702) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:208) > {code} > The related code: > {code} > /* > * Now fail the 2nd volume on the 3rd datanode. All its volumes > * are now failed and so it should report two volume failures > * and that it's no longer up. Only wait for two replicas since > * we'll never get a third. > */ > DataNodeTestUtils.injectDataDirFailure(dn3Vol2); > Path file3 = new Path("/test3"); > DFSTestUtil.createFile(fs, file3, 1024, (short)3, 1L); > DFSTestUtil.waitReplication(fs, file3, (short)2); > // The DN should consider itself dead > DFSTestUtil.waitForDatanodeDeath(dns.get(2)); > {code} > Here the code waits for the datanode to fail all of its volumes and then become > dead, but it timed out. We would do better to first check that all the volumes > have failed, and only then wait for the datanode to become dead. 
> In addition, we can use the method {{checkDiskErrorSync}} to do the disk > error check instead of creating files. In this JIRA, I would like to extract > this logic and define it in {{DataNodeTestUtils}}, so that we can reuse > this method for datanode volume failure testing in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
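The restructuring proposed in the description above (first confirm that every volume has failed, and only then wait for the datanode to die) can be sketched as a two-phase polling wait in plain Java. This is only an illustrative sketch: the suppliers below are hypothetical stand-ins for the real {{DataNodeTestUtils}}/{{DFSTestUtil}} checks, not HDFS code.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;
import java.util.function.IntSupplier;

public class Main {
    // Poll `cond` until it holds, or throw TimeoutException after `timeoutMs`.
    static void await(BooleanSupplier cond, long timeoutMs, String what)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!cond.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("Timed out waiting for " + what);
            }
            Thread.sleep(10);
        }
    }

    public static void main(String[] args) throws Exception {
        final int totalVolumes = 2;
        final long start = System.currentTimeMillis();
        // Hypothetical stand-ins: volumes "fail" after ~50 ms, the DN "dies" after ~100 ms.
        IntSupplier failedVolumes =
            () -> (System.currentTimeMillis() - start >= 50) ? totalVolumes : 0;
        BooleanSupplier dnDead = () -> System.currentTimeMillis() - start >= 100;

        // Phase 1: confirm every volume has actually failed...
        await(() -> failedVolumes.getAsInt() == totalVolumes, 5000, "all volumes to fail");
        // Phase 2: ...and only then wait for the datanode to report itself dead.
        await(dnDead, 5000, "DN to die");
        System.out.println("DN dead after all volumes failed");
    }
}
```

Splitting the wait this way makes a timeout diagnostic: a failure in phase 1 means fault injection never took effect, while a failure in phase 2 means the datanode did not react to the failed volumes.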
[jira] [Commented] (HDFS-11353) Improve the unit tests relevant to DataNode volume failure testing
[ https://issues.apache.org/jira/browse/HDFS-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839109#comment-15839109 ] Yiqun Lin commented on HDFS-11353: -- Attaching the v004 patch, which adds timeouts for {{TestDataNodeVolumeFailure}} as well. > Improve the unit tests relevant to DataNode volume failure testing > -- > > Key: HDFS-11353 > URL: https://issues.apache.org/jira/browse/HDFS-11353 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11353.001.patch, HDFS-11353.002.patch, > HDFS-11353.003.patch, HDFS-11353.004.patch > > > Currently, many tests whose names start with > {{TestDataNodeVolumeFailure*}} frequently time out or fail. I found one > failed test in a recent Jenkins build. The stack info: > {code} > org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures > java.util.concurrent.TimeoutException: Timed out waiting for DN to die > at > org.apache.hadoop.hdfs.DFSTestUtil.waitForDatanodeDeath(DFSTestUtil.java:702) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:208) > {code} > The related code: > {code} > /* > * Now fail the 2nd volume on the 3rd datanode. All its volumes > * are now failed and so it should report two volume failures > * and that it's no longer up. Only wait for two replicas since > * we'll never get a third. > */ > DataNodeTestUtils.injectDataDirFailure(dn3Vol2); > Path file3 = new Path("/test3"); > DFSTestUtil.createFile(fs, file3, 1024, (short)3, 1L); > DFSTestUtil.waitReplication(fs, file3, (short)2); > // The DN should consider itself dead > DFSTestUtil.waitForDatanodeDeath(dns.get(2)); > {code} > Here the code waits for the datanode to fail all of its volumes and then become > dead, but it timed out. 
We would do better to first check that all the volumes > have failed, and only then wait for the datanode to become dead. > In addition, we can use the method {{checkDiskErrorSync}} to do the disk > error check instead of creating files. In this JIRA, I would like to extract > this logic and define it in {{DataNodeTestUtils}}, so that we can reuse > this method for datanode volume failure testing in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11372) Increase test timeouts that are too aggressive.
Xiao Chen created HDFS-11372: Summary: Increase test timeouts that are too aggressive. Key: HDFS-11372 URL: https://issues.apache.org/jira/browse/HDFS-11372 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 3.0.0-alpha3 Reporter: Xiao Chen Priority: Minor Seen these timeouts in some [precommit|https://issues.apache.org/jira/browse/HDFS-10899?focusedCommentId=15838964=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15838964] false positives; at a brief look I think they are likely due to the timeouts being too small. - TestLeaseRecovery2 - TestDataNodeVolumeFailure Can't seem to find from Jenkins which test method is at fault, but TestLeaseRecovery2 has some 30-second timeout cases, and TestDataNodeVolumeFailure has one 10-second timeout case. We should make them at least 2 minutes, or maybe 10x local run time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
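The "10x local run time" rule of thumb above amounts to giving a polling wait a generous budget rather than a tight one. A minimal sketch of such a condition wait in plain Java follows; the names are illustrative, not actual HDFS test utilities (the real tests use helpers like {{GenericTestUtils.waitFor}} and JUnit's {{@Test(timeout=...)}}).

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class Main {
    // Poll `condition` every `intervalMs` until it holds, or give up after `timeoutMs`.
    static void waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("Condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // The condition becomes true after ~200 ms; the 2000 ms budget is ~10x that,
        // mirroring the "10x local run time" rule of thumb for flaky-test timeouts.
        waitFor(() -> System.currentTimeMillis() - start >= 200, 50, 2000);
        System.out.println("condition met within budget");
    }
}
```

A generous budget like this only delays the failure report when something is genuinely broken; it never slows down a passing run, which finishes as soon as the condition holds.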
[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.
[ https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839090#comment-15839090 ] Xiao Chen commented on HDFS-10899: -- The failed tests are unrelated and pass locally, except for the oev test, for which I had added a wrong editStored binary locally... Attaching patch 6 again. > Add functionality to re-encrypt EDEKs. > -- > > Key: HDFS-10899 > URL: https://issues.apache.org/jira/browse/HDFS-10899 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption, kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, > HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, > HDFS-10899.06.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt > edek design doc.pdf > > > Currently when an encryption zone (EZ) key is rotated, it only takes effect > on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key > rotation, for improved security. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.
[ https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-10899: - Attachment: (was: HDFS-10899.06.patch) > Add functionality to re-encrypt EDEKs. > -- > > Key: HDFS-10899 > URL: https://issues.apache.org/jira/browse/HDFS-10899 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption, kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, > HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, > HDFS-10899.06.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt > edek design doc.pdf > > > Currently when an encryption zone (EZ) key is rotated, it only takes effect > on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key > rotation, for improved security. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.
[ https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-10899: - Attachment: HDFS-10899.06.patch > Add functionality to re-encrypt EDEKs. > -- > > Key: HDFS-10899 > URL: https://issues.apache.org/jira/browse/HDFS-10899 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption, kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, > HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, > HDFS-10899.06.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt > edek design doc.pdf > > > Currently when an encryption zone (EZ) key is rotated, it only takes effect > on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key > rotation, for improved security. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839088#comment-15839088 ] Kai Sasaki commented on HDFS-10534: --- Sure, I'll try to create a patch for branch-2.7. > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839082#comment-15839082 ] Takanobu Asanuma commented on HDFS-11124: - Thank you for reviewing and committing, [~jingzhao]! I created a JIRA for the further optimization, HDFS-11370. I realized we can do this in a simple way. I will upload the patch soon. I would appreciate it if you could take a look. > Report blockIds of internal blocks for EC files in Fsck > --- > > Key: HDFS-11124 > URL: https://issues.apache.org/jira/browse/HDFS-11124 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11124.1.patch, HDFS-11124.2.patch, > HDFS-11124.3.patch > > > At the moment, when we do fsck for an EC file which has corrupt blocks and > missing blocks, the result of fsck is like this: > {quote} > /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 > block(s): > /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 > block blk_-9223372036854775792 > CORRUPT 1 blocks of total size 393216 B > 0. 
BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 > len=393216 Live_repl=4 > [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)] > {quote} > It would be useful for admins if it reports the blockIds of the internal > blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11371) Document missing metrics of erasure coding
[ https://issues.apache.org/jira/browse/HDFS-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11371: - Description: HDFS-8411 and HDFS-11216 added some erasure coding metrics, but they haven't been documented. The following metrics in {{DataNodeMetrics}} are missing from the documentation. {code} @Metric("Count of erasure coding reconstruction tasks") MutableCounterLong ecReconstructionTasks; @Metric("Count of erasure coding failed reconstruction tasks") MutableCounterLong ecFailedReconstructionTasks; @Metric("Nanoseconds spent by decoding tasks") MutableCounterLong ecDecodingTimeNanos; @Metric("Bytes read by erasure coding worker") MutableCounterLong ecReconstructionBytesRead; @Metric("Bytes written by erasure coding worker") MutableCounterLong ecReconstructionBytesWritten; @Metric("Bytes remote read by erasure coding worker") MutableCounterLong ecReconstructionRemoteBytesRead; {code} was:HDFS-8411, HDFS-11216 add some metrics of erasure coding, but it hasn't been documented. > Document missing metrics of erasure coding > -- > > Key: HDFS-11371 > URL: https://issues.apache.org/jira/browse/HDFS-11371 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11371.001.patch > > > HDFS-8411 and HDFS-11216 added some erasure coding metrics, but they haven't been > documented. The following metrics in {{DataNodeMetrics}} are missing from the > documentation. 
> {code} > @Metric("Count of erasure coding reconstruction tasks") > MutableCounterLong ecReconstructionTasks; > @Metric("Count of erasure coding failed reconstruction tasks") > MutableCounterLong ecFailedReconstructionTasks; > @Metric("Nanoseconds spent by decoding tasks") > MutableCounterLong ecDecodingTimeNanos; > @Metric("Bytes read by erasure coding worker") > MutableCounterLong ecReconstructionBytesRead; > @Metric("Bytes written by erasure coding worker") > MutableCounterLong ecReconstructionBytesWritten; > @Metric("Bytes remote read by erasure coding worker") > MutableCounterLong ecReconstructionRemoteBytesRead; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11371) Document missing metrics of erasure coding
[ https://issues.apache.org/jira/browse/HDFS-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11371: - Attachment: HDFS-11371.001.patch > Document missing metrics of erasure coding > -- > > Key: HDFS-11371 > URL: https://issues.apache.org/jira/browse/HDFS-11371 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11371.001.patch > > > HDFS-8411 and HDFS-11216 added some erasure coding metrics, but they haven't been > documented. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11371) Document missing metrics of erasure coding
[ https://issues.apache.org/jira/browse/HDFS-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11371: - Status: Patch Available (was: Open) Attaching a patch. > Document missing metrics of erasure coding > -- > > Key: HDFS-11371 > URL: https://issues.apache.org/jira/browse/HDFS-11371 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Minor > > HDFS-8411 and HDFS-11216 added some erasure coding metrics, but they haven't been > documented. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11371) Document missing metrics of erasure coding
Yiqun Lin created HDFS-11371: Summary: Document missing metrics of erasure coding Key: HDFS-11371 URL: https://issues.apache.org/jira/browse/HDFS-11371 Project: Hadoop HDFS Issue Type: Bug Components: documentation, erasure-coding Affects Versions: 3.0.0-alpha2 Reporter: Yiqun Lin Assignee: Yiqun Lin Priority: Minor HDFS-8411 and HDFS-11216 added some erasure coding metrics, but they haven't been documented. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11370) Optimize NamenodeFsck#getReplicaInfo
Takanobu Asanuma created HDFS-11370: --- Summary: Optimize NamenodeFsck#getReplicaInfo Key: HDFS-11370 URL: https://issues.apache.org/jira/browse/HDFS-11370 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Reporter: Takanobu Asanuma Assignee: Takanobu Asanuma Priority: Minor We can optimize the logic of calculating the number of storages in {{NamenodeFsck#getReplicaInfo}}. This is a follow-on task of HDFS-11124. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11369) Change exception message in StorageLocationChecker
[ https://issues.apache.org/jira/browse/HDFS-11369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839061#comment-15839061 ] Hadoop QA commented on HDFS-11369: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}107m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestAclsEndToEnd | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11369 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849385/HDFS-11369.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 86e6bea91a45 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 425a7e5 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18261/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18261/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18261/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Change exception message in StorageLocationChecker > -- > > Key: HDFS-11369 >
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839056#comment-15839056 ] Zhe Zhang commented on HDFS-10534: -- Thanks Kai! If you could provide a branch-2.7 patch that'd be great. I tried backporting HDFS-6407 but that depends on HDFS-8816 and that's too much change to backport. If we can only have this histogram change in 2.7 that's ideal. > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-8377) Support HTTP/2 in datanode
[ https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-8377: Attachment: HDFS-8377.revert.branch-2.patch > Support HTTP/2 in datanode > -- > > Key: HDFS-8377 > URL: https://issues.apache.org/jira/browse/HDFS-8377 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch, > HDFS-8377.revert.branch-2.patch, HDFS-8377.revert.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-8377) Support HTTP/2 in datanode
[ https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-8377: Attachment: HDFS-8377.revert.patch Conflicts: - hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java - hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java Reverting has some trivial conflicts, and hence some manual tweaks: - CHANGES.txt no longer exists - PortUnificationServerHandler is added by this patch. HDFS-9711 had some minor changes to the class. Simply removing this class. - WebHdfsHandler had import conflicts due to HDFS-7766. - FSImageHandler has a whitespace conflict due to HDFS-8462. - DatanodeHttpServer also due to HDFS-9711. Had to manually edit the code about PortUnificationServerHandler, similar to another change in HDFS-9711 to add handlers. - Import conflicts. - Intentionally didn't revert the netty version update, so HADOOP-13866 can upgrade upon this. This has been there for a while so some trunk code depends on netty 4.1.0 already. > Support HTTP/2 in datanode > -- > > Key: HDFS-8377 > URL: https://issues.apache.org/jira/browse/HDFS-8377 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch, > HDFS-8377.revert.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839046#comment-15839046 ] Kai Sasaki edited comment on HDFS-10534 at 1/26/17 2:14 AM: [~zhz] Thanks for reviewing! It depends on DataNode metrics, so I can rebase and resolve the conflicts if DataNode also provides these metrics in 2.7.x, which I believe it does. was (Author: lewuathe): [~zhz] Thanks for reviewing! It depends on DataNode metrics so I can rebase and resolve conflict if necessary, I think. > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839046#comment-15839046 ] Kai Sasaki commented on HDFS-10534: --- [~zhz] Thanks for reviewing! It depends on DataNode metrics, so I can rebase and resolve conflicts if necessary, I think. > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-8377) Support HTTP/2 in datanode
[ https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen reopened HDFS-8377: - Reopening to run pre-commit. > Support HTTP/2 in datanode > -- > > Key: HDFS-8377 > URL: https://issues.apache.org/jira/browse/HDFS-8377 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch > >
[jira] [Commented] (HDFS-11243) [SPS]: Add a protocol command from NN to DN for dropping the SPS work and queues
[ https://issues.apache.org/jira/browse/HDFS-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838968#comment-15838968 ] Hadoop QA commented on HDFS-11243: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 57s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 
43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 293 unchanged - 0 fixed = 297 total (was 293) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes | | | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions | | | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11243 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849379/HDFS-11243-HDFS-10285-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux b0d774f0a227 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / bd419bb | | Default Java |
[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.
[ https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838964#comment-15838964 ] Hadoop QA commented on HDFS-10899: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | 
{color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 12s{color} | {color:orange} hadoop-hdfs-project: The patch generated 17 new + 1778 unchanged - 5 fixed = 1795 total (was 1783) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 14s{color} | {color:green} The patch generated 0 new + 106 unchanged - 1 fixed = 106 total (was 107) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.TestAclsEndToEnd | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-10899 | | JIRA Patch URL |
[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode
[ https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838886#comment-15838886 ] Xiao Chen commented on HDFS-8377: - Hello, After [discussion|https://issues.apache.org/jira/browse/HADOOP-13866?focusedCommentId=15838879&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15838879] on HADOOP-13866, we decided to revert this from trunk and branch-2. I'll do that later today since Duo already +1'ed the proposal there. Thanks [~Apache9] for the contribution and [~wheat9] for reviewing, hope to see branch-7966 completed and merged to trunk soon! :) > Support HTTP/2 in datanode > -- > > Key: HDFS-8377 > URL: https://issues.apache.org/jira/browse/HDFS-8377 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch > >
[jira] [Comment Edited] (HDFS-11369) Change exception message in StorageLocationChecker
[ https://issues.apache.org/jira/browse/HDFS-11369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838796#comment-15838796 ] Arpit Agarwal edited comment on HDFS-11369 at 1/25/17 11:34 PM: Restore the exception message used before HDFS-9 and update the test case. was (Author: arpitagarwal): Restore the exception message used before HDFS-11182 and update the test case. > Change exception message in StorageLocationChecker > -- > > Key: HDFS-11369 > URL: https://issues.apache.org/jira/browse/HDFS-11369 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Minor > Attachments: HDFS-11369.01.patch > > > Change an exception message in StorageLocationChecker.java to use the same > format that was used by the DataNode before HDFS-9.
[jira] [Commented] (HDFS-11369) Change exception message in StorageLocationChecker
[ https://issues.apache.org/jira/browse/HDFS-11369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838807#comment-15838807 ] Jitendra Nath Pandey commented on HDFS-11369: - +1 > Change exception message in StorageLocationChecker > -- > > Key: HDFS-11369 > URL: https://issues.apache.org/jira/browse/HDFS-11369 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Minor > Attachments: HDFS-11369.01.patch > > > Change an exception message in StorageLocationChecker.java to use the same > format that was used by the DataNode before HDFS-9.
[jira] [Updated] (HDFS-11369) Change exception message in StorageLocationChecker
[ https://issues.apache.org/jira/browse/HDFS-11369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11369: - Status: Patch Available (was: Open) > Change exception message in StorageLocationChecker > -- > > Key: HDFS-11369 > URL: https://issues.apache.org/jira/browse/HDFS-11369 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Minor > Attachments: HDFS-11369.01.patch > > > Change an exception message in StorageLocationChecker.java to use the same > format that was used by the DataNode before HDFS-9.
[jira] [Updated] (HDFS-11369) Change exception message in StorageLocationChecker
[ https://issues.apache.org/jira/browse/HDFS-11369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11369: - Attachment: HDFS-11369.01.patch Restore the exception message used before HDFS-11182 and update the test case. > Change exception message in StorageLocationChecker > -- > > Key: HDFS-11369 > URL: https://issues.apache.org/jira/browse/HDFS-11369 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Minor > Attachments: HDFS-11369.01.patch > > > Change an exception message in StorageLocationChecker.java to use the same > format that was used by the DataNode before HDFS-9.
[jira] [Created] (HDFS-11369) Change exception message in StorageLocationChecker
Arpit Agarwal created HDFS-11369: Summary: Change exception message in StorageLocationChecker Key: HDFS-11369 URL: https://issues.apache.org/jira/browse/HDFS-11369 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.9.0 Reporter: Arpit Agarwal Assignee: Arpit Agarwal Priority: Minor Change an exception message in StorageLocationChecker.java to use the same format that was used by the DataNode before HDFS-9.
[jira] [Updated] (HDFS-11369) Change exception message in StorageLocationChecker
[ https://issues.apache.org/jira/browse/HDFS-11369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11369: - Affects Version/s: (was: 2.9.0) > Change exception message in StorageLocationChecker > -- > > Key: HDFS-11369 > URL: https://issues.apache.org/jira/browse/HDFS-11369 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Minor > > Change an exception message in StorageLocationChecker.java to use the same > format that was used by the DataNode before HDFS-9.
[jira] [Comment Edited] (HDFS-11243) [SPS]: Add a protocol command from NN to DN for dropping the SPS work and queues
[ https://issues.apache.org/jira/browse/HDFS-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838754#comment-15838754 ] Uma Maheswara Rao G edited comment on HDFS-11243 at 1/25/17 10:58 PM: -- Thank you [~rakeshr] for the review. Attaching new patch with fixed review comments. For #1: I did that intentionally because after NN restart, it will not have any info related to already scheduled once. It will have to reschedule. So, I thought we can just send to clean in progress stuff. But anyway we can handle this in HDFS-11334. So, now I made to send only in reconfig case. was (Author: umamaheswararao): Thank you [~rakeshr] for the review. Attaching new patch with fixed review comments. > [SPS]: Add a protocol command from NN to DN for dropping the SPS work and > queues > - > > Key: HDFS-11243 > URL: https://issues.apache.org/jira/browse/HDFS-11243 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Attachments: HDFS-11243-HDFS-10285-00.patch, > HDFS-11243-HDFS-10285-01.patch > > > This JIRA is for adding a protocol command from Namenode to Datanode for > dropping SPS work. and Also for dropping in progress queues. > Use case is: when admin deactivated SPS at NN, then internally NN should > issue a command to DNs for dropping in progress queues as well. This command > can be packed via heartbeat.
[jira] [Updated] (HDFS-11243) [SPS]: Add a protocol command from NN to DN for dropping the SPS work and queues
[ https://issues.apache.org/jira/browse/HDFS-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-11243: --- Attachment: HDFS-11243-HDFS-10285-01.patch Thank you [~rakeshr] for the review. Attaching new patch with fixed review comments. > [SPS]: Add a protocol command from NN to DN for dropping the SPS work and > queues > - > > Key: HDFS-11243 > URL: https://issues.apache.org/jira/browse/HDFS-11243 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G > Attachments: HDFS-11243-HDFS-10285-00.patch, > HDFS-11243-HDFS-10285-01.patch > > > This JIRA is for adding a protocol command from Namenode to Datanode for > dropping SPS work. and Also for dropping in progress queues. > Use case is: when admin deactivated SPS at NN, then internally NN should > issue a command to DNs for dropping in progress queues as well. This command > can be packed via heartbeat.
[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.
[ https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-10899: - Attachment: HDFS-10899.06.patch Patch 6 fixes the failed tests. Had a bug that retry cache rpc Id isn't handled correctly on edits. > Add functionality to re-encrypt EDEKs. > -- > > Key: HDFS-10899 > URL: https://issues.apache.org/jira/browse/HDFS-10899 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption, kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, > HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, > HDFS-10899.06.patch, HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt > edek design doc.pdf > > > Currently when an encryption zone (EZ) key is rotated, it only takes effect > on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key > rotation, for improved security.
[jira] [Created] (HDFS-11368) LocalFS does not allow setting storage policy so spew running in local mode
stack created HDFS-11368: Summary: LocalFS does not allow setting storage policy so spew running in local mode Key: HDFS-11368 URL: https://issues.apache.org/jira/browse/HDFS-11368 Project: Hadoop HDFS Issue Type: Bug Reporter: stack Assignee: stack Priority: Minor commit f92a14ade635e4b081f3938620979b5864ac261f Author: Yu Li Date: Mon Jan 9 09:52:58 2017 +0800 HBASE-14061 Support CF-level Storage Policy ...added setting storage policy which is nice. Being able to set storage policy came in in hdfs 2.6.0 (HDFS-6584 Support Archival Storage) but you can only do this for DFS, not for local FS. Upshot is that starting up hbase in standalone mode, which uses localfs, you get this exception every time: {code} 2017-01-25 12:26:53,400 WARN [StoreOpener-93375c645ef2e649620b5d8ed9375985-1] fs.HFileSystem: Failed to set storage policy of [file:/var/folders/d8/8lyxycpd129d4fj7lb684dwhgp/T/hbase-stack/hbase/data/hbase/namespace/93375c645ef2e649620b5d8ed9375985/info] to [HOT] java.lang.UnsupportedOperationException: Cannot find specified method setStoragePolicy at org.apache.hadoop.hbase.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:209) at org.apache.hadoop.hbase.fs.HFileSystem.setStoragePolicy(HFileSystem.java:161) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:207) at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.setStoragePolicy(HRegionFileSystem.java:198) at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:237) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5265) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:988) at
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:985) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy(org.apache.hadoop.fs.Path, java.lang.String) at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:205) ... {code} It is distracting at the least. Let me fix.
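The trace above shows the cost of attempting the reflective `setStoragePolicy` call on every store open: `LocalFileSystem` simply does not declare the method, so each attempt throws and logs. One common way to quiet this, sketched below, is to resolve the optional method once and remember the miss. This is only an illustration of that pattern, not HBase's actual `ReflectionUtils`; the class name and demo targets here are hypothetical (`String.length()` stands in for a method that exists, `setStoragePolicy` for one that does not):

```java
import java.lang.reflect.Method;

public class OptionalMethod {
    // Cache the reflective lookup so a missing method is detected once,
    // instead of throwing (and warning) on every caller.
    private final Method method;

    OptionalMethod(Class<?> target, String name, Class<?>... params) {
        Method m = null;
        try {
            m = target.getMethod(name, params);
        } catch (NoSuchMethodException e) {
            // Target class does not support the operation; treat as a no-op.
        }
        this.method = m;
    }

    boolean isSupported() {
        return method != null;
    }

    Object invoke(Object receiver, Object... args) throws Exception {
        if (method == null) {
            return null; // graceful fallback, no exception spew
        }
        return method.invoke(receiver, args);
    }

    public static void main(String[] args) throws Exception {
        // String has length(); it does not have setStoragePolicy(String).
        OptionalMethod present = new OptionalMethod(String.class, "length");
        OptionalMethod absent =
            new OptionalMethod(String.class, "setStoragePolicy", String.class);
        System.out.println(present.invoke("hello"));  // prints 5
        System.out.println(absent.isSupported());     // prints false
    }
}
```

With the lookup cached, unsupported filesystems cost one failed `getMethod` at startup rather than a stack trace per region.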
[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838573#comment-15838573 ] Hadoop QA commented on HDFS-11124: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 222 unchanged - 2 fixed = 223 total (was 224) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}112m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11124 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849254/HDFS-11124.3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 9bd125b8863e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 18e1d68 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18258/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18258/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18258/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18258/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically
[jira] [Updated] (HDFS-10620) StringBuilder created and appended even if logging is disabled
[ https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10620: --- Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 > StringBuilder created and appended even if logging is disabled > -- > > Key: HDFS-10620 > URL: https://issues.apache.org/jira/browse/HDFS-10620 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.4 >Reporter: Staffan Friberg >Assignee: Staffan Friberg > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10620.001.patch, HDFS-10620.002.patch, > HDFS-10620-branch-2.01.patch > > > In BlockManager.addToInvalidates the StringBuilder is appended to during the > delete even if logging isn't active. > Could avoid allocating the StringBuilder as well, but not sure if it is > really worth it to add null handling in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
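The usual fix for the pattern HDFS-10620 describes is to guard the `StringBuilder` behind a log-level check so nothing is allocated or appended when the message will never be emitted. A minimal sketch of that guard, with null handling for the disabled case (a stand-in `Log` interface and a counter for observability, not the actual `BlockManager` code):

```java
public class GuardedLogging {
    // Minimal stand-in for the logger; the real code uses SLF4J, where
    // isInfoEnabled() plays the same role.
    interface Log {
        boolean isInfoEnabled();
        void info(String msg);
    }

    // Counts append operations so the test can verify no string work happens
    // when logging is disabled (the allocation the JIRA complains about).
    static int appendsPerformed = 0;

    static void addToInvalidates(Log log, String[] blockIds) {
        // Only allocate the builder when the message will actually be logged.
        StringBuilder datanodes = log.isInfoEnabled() ? new StringBuilder() : null;
        for (String b : blockIds) {
            // ...queue the block for deletion here...
            if (datanodes != null) {
                datanodes.append(b).append(' ');
                appendsPerformed++;
            }
        }
        if (datanodes != null && datanodes.length() > 0) {
            log.info("BLOCK* addToInvalidates: " + datanodes);
        }
    }

    public static void main(String[] args) {
        Log disabled = new Log() {
            public boolean isInfoEnabled() { return false; }
            public void info(String msg) { throw new AssertionError(msg); }
        };
        addToInvalidates(disabled, new String[]{"blk_1", "blk_2"});
        System.out.println(appendsPerformed); // prints 0
    }
}
```

The null checks are exactly the cost the reporter weighs in the description; the alternative is SLF4J-style parameterized logging, which defers formatting but still needs the joined string to be built somewhere.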
[jira] [Updated] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages
[ https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11201: --- Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 > Spelling errors in the logging, help, assertions and exception messages > --- > > Key: HDFS-11201 > URL: https://issues.apache.org/jira/browse/HDFS-11201 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, diskbalancer, httpfs, namenode, nfs >Affects Versions: 3.0.0-alpha1 >Reporter: Grant Sohn >Priority: Trivial > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, > HDFS-11201.3.patch, HDFS-11201.4.patch > > > Found a set of spelling errors in the user-facing code. > Examples are: > odlest -> oldest > Illagal -> Illegal > bounday -> boundary
[jira] [Updated] (HDFS-11167) IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server"
[ https://issues.apache.org/jira/browse/HDFS-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11167: --- Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 > IBM java can not start secure datanode with HDFS_DATANODE_SECURE_EXTRA_OPTS > "-jvm server" > - > > Key: HDFS-11167 > URL: https://issues.apache.org/jira/browse/HDFS-11167 > Project: Hadoop HDFS > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 > Environment: IBM PowerPC >Reporter: Pan Yuxuan >Assignee: Pan Yuxuan > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11167-0001.patch, HDFS-11167-0002.patch > > > When we run secure datanode on IBM PowerPC with IBM JDK, the jsvc wrong with > error > {noformat} > jsvc error: Invalid JVM name specified server > {noformat} > This is because of the HDFS_DATANODE_SECURE_EXTRA_OPTS "-jvm server". > For IBM JDK it is enough to run it without -jvm server. > So I think we can check if IBM jdk in hdfs-config.sh before setting > HDFS_DATANODE_SECURE_EXTRA_OPTS.
[jira] [Updated] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf
[ https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11026: --- Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 > Convert BlockTokenIdentifier to use Protobuf > > > Key: HDFS-11026 > URL: https://issues.apache.org/jira/browse/HDFS-11026 > Project: Hadoop HDFS > Issue Type: Task > Components: hdfs, hdfs-client >Affects Versions: 2.9.0, 3.0.0-alpha1 >Reporter: Ewan Higgs > Fix For: 3.0.0-alpha3 > > Attachments: blocktokenidentifier-protobuf.patch, > HDFS-11026.002.patch, HDFS-11026.003.patch > > > {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} > (basically a {{byte[]}}) and manual serialization to get data into and out of > the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. > {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The > {{BlockTokenIdenfitier}} should use Protobuf as well so it can be expanded > more easily and will be consistent with the rest of the system. > NB: Release of this will require a version update since 2.8.x won't be able > to decipher {{BlockKeyProto.keyBytes}} from 2.8.y. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
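To see why the manual scheme described above is brittle, here is a toy round-trip in the current DataInput/DataOutput style. The fields are hypothetical stand-ins, not BlockTokenIdentifier's real layout:

```java
import java.io.*;

// Toy illustration of manual DataInput/DataOutput serialization of the
// kind BlockTokenIdentifier uses today: reader and writer must agree on
// field order implicitly, which is what makes evolving the format hard
// and what the move to Protobuf addresses. Field names are hypothetical.
public class ManualTokenSerde {
    static byte[] write(long expiry, long blockId, String user) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeLong(expiry);   // any change in field order or width
        out.writeLong(blockId);  // silently breaks older readers
        out.writeUTF(user);
        out.flush();
        return bos.toByteArray();
    }

    static String readUser(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        in.readLong(); // skip expiry
        in.readLong(); // skip blockId
        return in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = write(1234L, 42L, "hdfs");
        System.out.println(readUser(buf)); // prints "hdfs"
    }
}
```

With Protobuf, fields are tagged by number instead of position, so new optional fields can be added without breaking old readers (which is also why the issue notes 2.8.x cannot decipher the new bytes).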
[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838479#comment-15838479 ] Hudson commented on HDFS-11124: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11171 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11171/]) HDFS-11124. Report blockIds of internal blocks for EC files in Fsck. (jing9: rev b782bf2156dd9d43610c0bc47d458b2db297589f) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java > Report blockIds of internal blocks for EC files in Fsck > --- > > Key: HDFS-11124 > URL: https://issues.apache.org/jira/browse/HDFS-11124 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11124.1.patch, HDFS-11124.2.patch, > HDFS-11124.3.patch > > > At the moment, when we do fsck for an EC file which has corrupt blocks and > missing blocks, the result of fsck is like this: > {quote} > /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 > block(s): > /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 > block blk_-9223372036854775792 > CORRUPT 1 blocks of total size 393216 B > 0. 
BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 > len=393216 Live_repl=4 > [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)] > {quote} > It would be useful for admins if it reports the blockIds of the internal > blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
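For context on what "blockIds of the internal blocks" means here: in the striped layout, HDFS reserves the low bits of a block group id for the replica index, so an internal block's id can be derived by adding its index to the group id. Treat this as a sketch to be checked against BlockIdManager, not authoritative:

```java
// Sketch: deriving internal block ids of an EC block group by adding the
// index in the group to the group id (HDFS reserves the low 4 bits of a
// striped block group id for this). Illustrative only; the authoritative
// logic lives in BlockIdManager.
public class InternalBlockIds {
    static long internalBlockId(long groupId, int indexInGroup) {
        return groupId + indexInGroup;
    }

    public static void main(String[] args) {
        long groupId = -9223372036854775792L; // group id from the fsck output above
        for (int i = 0; i < 9; i++) {         // RS-6-3: 6 data + 3 parity blocks
            System.out.println("blk_" + internalBlockId(groupId, i));
        }
    }
}
```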
[jira] [Commented] (HDFS-11345) Document the configuration key for FSNamesystem lock fairness
[ https://issues.apache.org/jira/browse/HDFS-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838449#comment-15838449 ] Hadoop QA commented on HDFS-11345: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | | | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11345 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849323/HADOOP-11345.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux d3f68412d55d 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5a56520 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18256/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18256/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18256/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was
[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-11124: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 Target Version/s: (was: 3.0.0-alpha3) Status: Resolved (was: Patch Available) The failed tests also passed in my local machine. The patch looks good to me. +1. I've committed it to trunk. Thanks a lot for the contribution, [~tasanuma0829]! > Report blockIds of internal blocks for EC files in Fsck > --- > > Key: HDFS-11124 > URL: https://issues.apache.org/jira/browse/HDFS-11124 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11124.1.patch, HDFS-11124.2.patch, > HDFS-11124.3.patch > > > At the moment, when we do fsck for an EC file which has corrupt blocks and > missing blocks, the result of fsck is like this: > {quote} > /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 > block(s): > /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 > block blk_-9223372036854775792 > CORRUPT 1 blocks of total size 393216 B > 0. 
BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 > len=393216 Live_repl=4 > [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT), > > DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE), > > DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)] > {quote} > It would be useful for admins if it reports the blockIds of the internal > blocks. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838402#comment-15838402 ] Hudson commented on HDFS-10534: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11170 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11170/]) HDFS-10534. NameNode WebUI should display DataNode usage histogram. (zhz: rev 18e1d6820926646999e7ec248c504b4145cf1a76) * (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/d3-v4.1.1.min.js > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838362#comment-15838362 ] Zhe Zhang commented on HDFS-10534: -- I committed the patch to trunk, branch-2, and branch-2.8. Backport to branch-2.7 has conflicts due to HDFS-6407. I'm trying to figure out whether that is a valid improvement for branch-2.7. > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage histogram
[ https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10534: - Fix Version/s: 2.8.1 3.0.0-alpha3 2.9.0 > NameNode WebUI should display DataNode usage histogram > -- > > Key: HDFS-10534 > URL: https://issues.apache.org/jira/browse/HDFS-10534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, ui >Reporter: Zhe Zhang >Assignee: Kai Sasaki > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, > HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, > HDFS-10534.06.patch, HDFS-10534.07.patch, HDFS-10534.08.patch, > HDFS-10534.09.patch, HDFS-10534.10.patch, HDFS-10534.11.patch, Screen Shot > 2016-06-23 at 6.25.50 AM.png, Screen Shot 2016-07-07 at 23.29.14.png, Screen > Shot 2016-11-14 at 4.27.15 PM.png, Screen Shot 2016-11-17 at 0.14.06.png, > table_histogram.html > > > In addition of *Min/Median/Max*, another meaningful metric for cluster > balance is DN usage in histogram style. > Since NN already has provided necessary information to calculate histogram of > DN usage, it can be done in JS side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
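The bucketing that the issue description defers to the JS side amounts to counting DataNodes per fixed-width usage bin. A sketch, written in Java for illustration; the committed patch actually does this in dfshealth.js with d3:

```java
import java.util.Arrays;

// Sketch of the histogram bucketing: given per-DataNode usage percentages
// (information the NameNode already provides), count nodes per bin of
// `width` percent covering [0, 100]. Method names are illustrative.
public class UsageHistogram {
    static int[] histogram(double[] usagePercents, int width) {
        int[] counts = new int[100 / width];
        for (double u : usagePercents) {
            // clamp 100% into the last bin
            int bin = Math.min((int) (u / width), counts.length - 1);
            counts[bin]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        double[] usage = {3.0, 12.5, 14.9, 55.0, 99.2, 100.0};
        System.out.println(Arrays.toString(histogram(usage, 10)));
        // -> [1, 2, 0, 0, 0, 1, 0, 0, 0, 2]
    }
}
```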
[jira] [Commented] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()
[ https://issues.apache.org/jira/browse/HDFS-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838195#comment-15838195 ] Elek, Marton commented on HDFS-11295: - I am not sure what the problem was:
{code}
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.671 sec - in org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.081 sec - in org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.439 sec - in org.apache.hadoop.fs.TestUnbuffer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 24.75 sec - in org.apache.hadoop.fs.TestEnhancedByteBufferAccess

Results :

Tests run: 4967, Failures: 0, Errors: 0, Skipped: 48

[INFO]
[INFO] BUILD FAILURE
[INFO]
[INFO] Total time: 1:32:00.004s
[INFO] Finished at: Sun Jan 22 22:26:18 UTC 2017
[INFO] Final Memory: 29M/243M
[INFO]
[WARNING] The requested profile "native" could not be activated because it does not exist.
[WARNING] The requested profile "yarn-ui" could not be activated because it does not exist.
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
{code}
I will upload the same patch again to trigger a new Jenkins build.
> Check storage remaining instead of node remaining in > BlockPlacementPolicyDefault.chooseReplicaToDelete() > > > Key: HDFS-11295 > URL: https://issues.apache.org/jira/browse/HDFS-11295 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.1 >Reporter: Xiao Liang >Assignee: Elek, Marton > Attachments: HDFS-11295.001.patch, HDFS-11295.002.patch > > > Currently in BlockPlacementPolicyDefault.chooseReplicaToDelete() the logic > for choosing replica to delete is to pick the node with the least free > space(node.getRemaining()), if all hearbeats are within the tolerable > heartbeat interval. > However, a node may have multiple storages and node.getRemaining() is a sum > of the remainings of them, if free space of the storage with the block to be > delete is low, free space of the node could still be high due to other > storages of the node, finally the storage chosen may not be the storage with > least free space. > So using storage.getRemaining() to choose a storage with least free space for > choosing replica to delete may be a better way to balance storage usage. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
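The change proposed above can be sketched: choose among the replicas by the remaining space of the *storage* holding each replica, rather than the node-level sum. Illustrative Java only; Storage stands in for DatanodeStorageInfo and nothing here is the real BlockPlacementPolicyDefault code:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of HDFS-11295's proposal: pick the candidate whose storage (not
// whole node) has the least remaining space. The old logic compared
// node.getRemaining(), a sum over all of a node's storages, which can hide
// a nearly-full disk behind other roomy disks on the same node.
public class ChooseReplicaToDelete {
    static class Storage {
        final String id;
        final long remaining; // bytes free on this particular storage
        Storage(String id, long remaining) { this.id = id; this.remaining = remaining; }
    }

    static Storage chooseLeastRemaining(List<Storage> candidates) {
        return Collections.min(candidates,
            Comparator.comparingLong((Storage s) -> s.remaining));
    }

    public static void main(String[] args) {
        List<Storage> replicas = Arrays.asList(
            new Storage("dn1-disk0", 10L << 30),  // 10 GiB free
            new Storage("dn2-disk3", 2L << 30),   // 2 GiB free: chosen for deletion
            new Storage("dn3-disk1", 50L << 30)); // 50 GiB free
        System.out.println(chooseLeastRemaining(replicas).id); // dn2-disk3
    }
}
```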
[jira] [Updated] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI
[ https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-11265: Attachment: ex.png x.png > Extend visualization for Maintenance Mode under Datanode tab in the NameNode > UI > --- > > Key: HDFS-11265 > URL: https://issues.apache.org/jira/browse/HDFS-11265 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Elek, Marton > Attachments: ex.png, HDFS-11265.001.patch, icons.png, x.png > > > With HDFS-9391, DataNodes in MaintenanceModes states are shown under DataNode > page in NameNode UI, but they are lacking icon visualization like the ones > shown for other node states. Need to extend the icon visualization to cover > Maintenance Mode. > {code} >
[jira] [Commented] (HDFS-11265) Extend visualization for Maintenance Mode under Datanode tab in the NameNode UI
[ https://issues.apache.org/jira/browse/HDFS-11265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838190#comment-15838190 ] Elek, Marton commented on HDFS-11265: - Ok. But as I understand it, the "decommissioned" state is when the datanode is still running but has been prepared for shutdown and no further blocks will be saved to the node. When it is turned off, it becomes "Decommissioned and dead". I uploaded two other possible icons (from the Glyphicons Halflings set, which is used in the frontend). One is an x in a circle, which is a little more neutral; the other is just an exclamation mark. > Extend visualization for Maintenance Mode under Datanode tab in the NameNode > UI > --- > > Key: HDFS-11265 > URL: https://issues.apache.org/jira/browse/HDFS-11265 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: 3.0.0-alpha1 >Reporter: Manoj Govindassamy >Assignee: Elek, Marton > Attachments: HDFS-11265.001.patch, icons.png > > > With HDFS-9391, DataNodes in MaintenanceModes states are shown under DataNode > page in NameNode UI, but they are lacking icon visualization like the ones > shown for other node states. Need to extend the icon visualization to cover > Maintenance Mode. > {code} >
[jira] [Updated] (HDFS-11345) Document the configuration key for FSNamesystem lock fairness
[ https://issues.apache.org/jira/browse/HDFS-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-11345: --- Attachment: HADOOP-11345.001.patch v000 patch carelessly only updated the {{hdfs-default.xml}} without updating the code paths... Attaching v001 patch which actually updates it everywhere. > Document the configuration key for FSNamesystem lock fairness > - > > Key: HDFS-11345 > URL: https://issues.apache.org/jira/browse/HDFS-11345 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation, namenode >Reporter: Zhe Zhang >Assignee: Erik Krogen >Priority: Minor > Attachments: HADOOP-11345.000.patch, HADOOP-11345.001.patch > > > Per [earlier | > https://issues.apache.org/jira/browse/HDFS-5239?focusedCommentId=15536471=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15536471] > discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
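For context, the configuration key being documented above toggles the fair mode of FSNamesystem's ReentrantReadWriteLock. A minimal illustration of what the flag changes; the class below is a stand-in, not the real FSNamesystem code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// In FSNamesystem the boolean comes from the configuration key this JIRA
// documents; here it is just a parameter. Fair mode grants the lock to the
// longest-waiting thread, protecting writers from starvation at some cost
// in raw throughput.
public class FSNamesystemLockDemo {
    static ReentrantReadWriteLock createLock(boolean fair) {
        return new ReentrantReadWriteLock(fair);
    }

    public static void main(String[] args) {
        System.out.println(createLock(true).isFair());  // true
        System.out.println(createLock(false).isFair()); // false
    }
}
```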
[jira] [Updated] (HDFS-11106) libhdfs++: Some refactoring to better organize files
[ https://issues.apache.org/jira/browse/HDFS-11106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] James Clampffer updated HDFS-11106: --- Description: I propose splitting some of the files that have grown wild over time into files that align with more specific functionality. It's probably best to do this in a few pieces so it doesn't invalidate anyone's patches in progress. Here's what I have in mind, looking for feedback if 1) it's not worth doing for some reason 2) it will break your patch and you'd like this to wait. I'd also like to consolidate related functions, mostly protobuf helpers, that are spread around the library into dedicated files. Targets (can split each into a separate patch): * (done in patch 000, committed) separate the implementation of operations from async shim code in files like filesystem.cc (make a filesystem_shims.cc). The shims are just boilerplate code that only need to change if the signature of their async counterparts change. * (done in patch 000, committed) merge base64.cc into util.cc; base64.cc only contains a single utility function. * (done in patch 000, committed) rename hdfs_public_api.h/cc to hdfs_ioservice.h/cc. Originally all of the implementation declarations of the public API classes like FileSystemImpl were going to live in here. Currently only the hdfs::IoServiceImpl lives in there and the other Impl classes have their own dedicated files. * split hdfs.cc into hdfs.cc and hdfs_ext.cc. Already have a separate hdfs_ext.h for C bindings for libhdfs++ specific extensions so implementations of those that live in hdfs.cc would be moved out. Just makes things a little cleaner. * split apart various RPC code based on classes. Things like Request and RpcConnection get defined in rpc_engine.h and then implemented in a handful of files which get confusing to navigate e.g. why would one expect Request's implementation to be in rpc_connection.cc. 
* Move all of the protobuf<->C++ struct conversion helpers and protobuf wire serialization/deserialization functions into a single file. Gives us less protobuf header includes and less accidental duplication of these sorts of functions. Like any refactoring some of it comes down to personal preferences. My hope is that by breaking these into smaller patches/commits relatively fast forward progress can be made on stuff everyone agrees while things that people are concerned about can be worked out in a way that satisfies everyone. was: I propose splitting some of the files that have grown wild over time into files that align with more specific functionality. It's probably best to do this in a few pieces so it doesn't invalidate anyone's patches in progress. Here's what I have in mind, looking for feedback if 1) it's not worth doing for some reason 2) it will break your patch and you'd like this to wait. I'd also like to consolidate related functions, mostly protobuf helpers, that are spread around the library into dedicated files. Targets (can split each into a separate patch): * split hdfs.cc into hdfs.cc and hdfs_ext.cc. Already have a separate hdfs_ext.h for C bindings for libhdfs++ specific extensions so implementations of those that live in hdfs.cc would be moved out. Just makes things a little cleaner. * separate the implementation of operations from async shim code in files like filesystem.cc (make a filesystem_shims.cc). The shims are just boilerplate code that only need to change if the signature of their async counterparts change. * split apart various RPC code based on classes. Things like Request and RpcConnection get defined in rpc_engine.h and then implemented in a handful of files which get confusing to navigate e.g. why would one expect Request's implementation to be in rpc_connection.cc. * Move all of the protobuf<->C++ struct conversion helpers and protobuf wire serialization/deserialization functions into a single file. 
Gives us less protobuf header includes and less accidental duplication of these sorts of functions. * merge base64.cc into util.cc; base64.cc only contains a single utility function. * rename hdfs_public_api.h/cc to hdfs_ioservice.h/cc. Originally all of the implementation declarations of the public API classes like FileSystemImpl were going to live in here. Currently only the hdfs::IoServiceImpl lives in there and the other Impl classes have their own dedicated files. Like any refactoring some of it comes down to personal preferences. My hope is that by breaking these into smaller patches/commits relatively fast forward progress can be made on stuff everyone agrees while things that people are concerned about can be worked out in a way that satisfies everyone. > libhdfs++: Some refactoring to better organize files > > > Key: HDFS-11106 > URL:
[jira] [Updated] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pieter Reuse updated HDFS-6708: --- Assignee: Ewan Higgs (was: Pieter Reuse) > StorageType should be encoded in the block token > > > Key: HDFS-6708 > URL: https://issues.apache.org/jira/browse/HDFS-6708 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: 2.4.1 >Reporter: Arpit Agarwal >Assignee: Ewan Higgs > > HDFS-6702 is adding support for file creation based on StorageType. > The block token is used as a tamper-proof channel for communicating block > parameters from the NN to the DN during block creation. The StorageType > should be included in this block token. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-6708) StorageType should be encoded in the block token
[ https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837893#comment-15837893 ] Ewan Higgs commented on HDFS-6708: -- Hi all. I took this work over from Pieter. I've got some questions about how we want this to work. First, if I understand correctly, we want to check the token mainly in the DataXceiver, passing the token and the locally available StorageTypes whenever we call {{DataXceiver.checkAccess}} (which ends up calling {{BlockTokenSecretManager.checkAccess}}) for writes only (i.e. {{replaceBlock}}, {{transferBlock}}, {{writeBlock}} - and *not* {{blockChecksum}}, {{blockGroupChecksum}}, or {{requestShortCircuitFds}}). If we're only checking the storage type when we write data, then we use the passed-in {{StorageType}}. However, if we want to do this on reads as well, we need to gather the available {{StorageType[]}}. We can get the locally available storage types as follows:
{code}
// only needed if we want to use StorageType for reads as well...
private static StorageType[] getStorageTypes(DataNode datanode) {
  final FsDatasetSpi<?> dataset = datanode.getFSDataset();
  final FsDatasetSpi.FsVolumeReferences vols = dataset.getFsVolumeReferences();
  List<StorageType> storageTypes = new ArrayList<>(vols.size());
  Iterator<FsVolumeSpi> iter = vols.iterator();
  while (iter.hasNext()) {
    FsVolumeSpi vol = iter.next();
    StorageType storageType = vol.getStorageType();
    if (storageType != null) {
      storageTypes.add(storageType);
    }
  }
  return storageTypes.toArray(new StorageType[0]);
}
{code}
The resulting check uses the Token {{StorageType[]}} and compares it to the {{StorageType[]}} passed in by the Protobuf request operation. I think the rules should be as follows: ||{{Token StorageType[]}}||{{Node StorageType[]}}|| Result || |\* | null| Error| |null | \* | Error| | {{\[\]}} | {{\[\]}} | Not OK (maybe Error?)
| | {{\[DISK\]}} | {{\[DISK\]}} | OK | | {{\[DISK\]}} | {{\[\]}} | Not OK | | {{\[SSD, DISK\]}} | {{\[DISK\]}} | Not OK| | {{\[SSD, DISK\]}} | {{\[SSD, DISK\]}} | OK | | {{\[\]}} | {{\[SSD, DISK\]}} | OK | Finally, I found that {{TestBalancer#testBalancerWithKeytabs}} and {{TestMover#testMoverWithKeytabs}} fail because they create a cluster with {{\[DISK, ARCHIVE\]}} storage; set the StoragePolicy to {{COLD}} (which has no {{DISK}}) and then try to run the Balancer or Mover. This fails since the token has {{\[DISK\]}} as the {{StorageType}} and the request has {{\[ARCHIVE\]}}. Perhaps the token is stale. > StorageType should be encoded in the block token > > > Key: HDFS-6708 > URL: https://issues.apache.org/jira/browse/HDFS-6708 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Affects Versions: 2.4.1 >Reporter: Arpit Agarwal >Assignee: Pieter Reuse > > HDFS-6702 is adding support for file creation based on StorageType. > The block token is used as a tamper-proof channel for communicating block > parameters from the NN to the DN during block creation. The StorageType > should be included in this block token. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
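The rule table above condenses to: error on a null array on either side, reject an empty node-side array, and otherwise require every token type to be present on the node side. A sketch of that reading follows; the names are illustrative and this is not the real {{BlockTokenSecretManager.checkAccess}} signature, and the enum merely mirrors org.apache.hadoop.fs.StorageType:

```java
import java.util.Arrays;
import java.util.HashSet;

// Sketch of the proposed check: every storage type carried in the token
// must also be available on the node side. Null arrays are errors; an
// empty node-side array never matches (covering the [] vs [] row).
public class StorageTypeCheck {
    enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

    static boolean checkStorageTypes(StorageType[] token, StorageType[] node) {
        if (token == null || node == null) {
            throw new IllegalArgumentException("storage types must not be null");
        }
        if (node.length == 0) {
            return false; // [] vs [] and [DISK] vs []: Not OK
        }
        // [SSD, DISK] vs [DISK] -> false; [] vs [SSD, DISK] -> true
        return new HashSet<>(Arrays.asList(node)).containsAll(Arrays.asList(token));
    }

    public static void main(String[] args) {
        StorageType[] none = {};
        StorageType[] disk = { StorageType.DISK };
        StorageType[] ssdDisk = { StorageType.SSD, StorageType.DISK };
        System.out.println(checkStorageTypes(disk, disk));    // true  (OK)
        System.out.println(checkStorageTypes(ssdDisk, disk)); // false (Not OK)
        System.out.println(checkStorageTypes(none, ssdDisk)); // true  (OK)
    }
}
```

Under this reading, the {{TestBalancer}}/{{TestMover}} failures mentioned above fall out naturally: a token minted for {{\[DISK\]}} cannot authorize a move to {{\[ARCHIVE\]}}, consistent with the stale-token theory.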
[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837850#comment-15837850 ] Takanobu Asanuma commented on HDFS-11124:
-
The failed tests pass on my laptop. Most of the failures are timeout errors; I think they are not related to the patch.

> Report blockIds of internal blocks for EC files in Fsck > --- > > Key: HDFS-11124 > URL: https://issues.apache.org/jira/browse/HDFS-11124 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-11124.1.patch, HDFS-11124.2.patch, > HDFS-11124.3.patch > > > At the moment, when we do fsck for an EC file which has corrupt blocks and > missing blocks, the result of fsck is like this: > {quote} > /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 > block(s): > /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 > block blk_-9223372036854775792 > CORRUPT 1 blocks of total size 393216 B > 0.
> BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 len=393216 Live_repl=4
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if it reports the blockIds of the internal blocks.
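If I recall the striped-layout ID scheme correctly, the low bits of a block group ID are reserved for the block index, so the internal block IDs that fsck could report are derivable from the group ID alone. A sketch under that assumption (class and method names are hypothetical, not the actual fsck code):

```java
public class InternalBlockIds {
  // Assumption: in the HDFS striped layout, internal block i of a block
  // group has ID (groupId + i), since the group ID's low bits are zero.
  static long[] internalBlockIds(long groupId, int dataUnits, int parityUnits) {
    long[] ids = new long[dataUnits + parityUnits];
    for (int i = 0; i < ids.length; i++) {
      ids[i] = groupId + i;
    }
    return ids;
  }

  public static void main(String[] args) {
    // The block group from the fsck output above, RS-6-3 (6 data + 3 parity):
    long groupId = -9223372036854775792L;
    for (long id : internalBlockIds(groupId, 6, 3)) {
      System.out.println("blk_" + id);
    }
  }
}
```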
[jira] [Created] (HDFS-11367) AlreadyBeingCreatedException "current leaseholder is trying to recreate file" when trying to append to file
Dmitry Goldenberg created HDFS-11367:

Summary: AlreadyBeingCreatedException "current leaseholder is trying to recreate file" when trying to append to file
Key: HDFS-11367
URL: https://issues.apache.org/jira/browse/HDFS-11367
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs-client
Affects Versions: 2.5.0
Environment: Red Hat Enterprise Linux Server release 6.8
Reporter: Dmitry Goldenberg

We have code which creates a file in HDFS and continuously appends lines to the file, then closes the file at the end. This is done by a single dedicated thread. We specifically instrumented the code to make sure only one 'client'/thread ever writes to the file, because we were seeing "current leaseholder is trying to recreate file" errors. For some background, see for example: https://community.cloudera.com/t5/Storage-Random-Access-HDFS/How-to-append-files-to-HDFS-with-Java-quot-current-leaseholder/m-p/41369

This issue is very critical to us, as any error terminates a mission-critical application in production. Intermittently we see the exception below, even though our code does nothing more than create the file, keep appending to it, and then close it:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /data/records_20170125_1.txt for DFSClient_NONMAPREDUCE_-167421175_1 for client 1XX.2XX.1XX.XXX because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3075)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2905)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3189)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3153)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:612)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.append(AuthorizationProviderProxyClientProtocol.java:125)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:414)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy24.append(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy24.append(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:282)
at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1586)
at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1626)
at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1614)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:313)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:309)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:309)
at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1161)
at com.myco.MyAppender.getOutputStream(MyAppender.java:147)
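For reference, the single-dedicated-writer pattern the report describes (one thread holds the file open; all other threads hand lines to it) can be sketched as follows. This is a generic sketch against {{java.io}}, not the Hadoop {{FileSystem.append}} API, and the {{SingleWriter}} class is hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Funnels all appends through one dedicated writer thread so that only a
// single "client" ever writes to (and closes) the underlying stream.
public class SingleWriter implements AutoCloseable {
  private static final String POISON = "\u0000poison";  // shutdown marker
  private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
  private final Thread writer;

  public SingleWriter(final OutputStream out) {
    writer = new Thread(() -> {
      try {
        while (true) {
          final String line = queue.take();
          if (POISON.equals(line)) {
            break;  // close() was called; drain is complete
          }
          out.write((line + "\n").getBytes(StandardCharsets.UTF_8));
        }
        out.close();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    });
    writer.start();
  }

  // Safe to call from any thread; only the writer thread touches `out`.
  public void append(String line) {
    queue.add(line);
  }

  @Override
  public void close() {
    queue.add(POISON);
    try {
      writer.join();  // wait until every queued line has been written
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  public static void main(String[] args) {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    try (SingleWriter w = new SingleWriter(buf)) {
      w.append("record 1");
      w.append("record 2");
    }
    System.out.print(buf);  // both records, newline-terminated, in order
  }
}
```

With HDFS the {{OutputStream}} would come from a single {{FileSystem.append}} call; the point of the pattern is that open, append, and close all happen on one thread, so no second lease is ever requested.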
[jira] [Commented] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837661#comment-15837661 ] Hadoop QA commented on HDFS-11124: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch 
passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 222 unchanged - 2 fixed = 223 total (was 224) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}131m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
\\ \\ || Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11124 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12849254/HDFS-11124.3.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7882e9f201f6 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a56520 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18255/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18255/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18255/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18255/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
[jira] [Updated] (HDFS-11124) Report blockIds of internal blocks for EC files in Fsck
[ https://issues.apache.org/jira/browse/HDFS-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-11124:
Attachment: HDFS-11124.3.patch

Thank you very much for your kind review, [~jingzhao]! I updated the patch based on your comments.
bq. {{getReplicaInfo}} may be further optimized
I see. If we do that, I think the indices information is required here, so we may need to add a new API in {{BlockInfoStriped}}. I will create a separate jira.

> Report blockIds of internal blocks for EC files in Fsck > --- > > Key: HDFS-11124 > URL: https://issues.apache.org/jira/browse/HDFS-11124 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-11124.1.patch, HDFS-11124.2.patch, > HDFS-11124.3.patch > > > At the moment, when we do fsck for an EC file which has corrupt blocks and > missing blocks, the result of fsck is like this: > {quote} > /data/striped 393216 bytes, erasure-coded: policy=RS-DEFAULT-6-3-64k, 1 > block(s): > /data/striped: CORRUPT blockpool BP-1204772930-172.16.165.209-1478761131832 > block blk_-9223372036854775792 > CORRUPT 1 blocks of total size 393216 B > 0.
> BP-1204772930-172.16.165.209-1478761131832:blk_-9223372036854775792_1001 len=393216 Live_repl=4
> [DatanodeInfoWithStorage[127.0.0.1:61617,DS-bcfebe1f-ff54-4d57-9258-ff5bdfde01b5,DISK](CORRUPT),
> DatanodeInfoWithStorage[127.0.0.1:61601,DS-9abf64d0-bb6b-434c-8c5e-de8e3b278f91,DISK](CORRUPT),
> DatanodeInfoWithStorage[127.0.0.1:61596,DS-62698e61-c13f-44f2-9da5-614945960221,DISK](CORRUPT),
> DatanodeInfoWithStorage[127.0.0.1:61605,DS-bbce6708-16fe-44ca-9f1c-506cf00f7e0d,DISK](LIVE),
> DatanodeInfoWithStorage[127.0.0.1:61592,DS-9cdd4afd-2dc8-40da-8805-09712e2afcc4,DISK](LIVE),
> DatanodeInfoWithStorage[127.0.0.1:61621,DS-f2a72d28-c880-4ffe-a70f-0f403e374504,DISK](LIVE),
> DatanodeInfoWithStorage[127.0.0.1:61629,DS-fa6ac558-2c38-41fe-9ef8-222b3f6b2b3c,DISK](LIVE)]
> {quote}
> It would be useful for admins if it reports the blockIds of the internal blocks.
[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837416#comment-15837416 ] Duo Zhang commented on HDFS-9924:
-
Any updates here? There seem to have been no commits on the HDFS-9924 branch for a long time... I can help a bit here, as I still want to move the FanOutOneBlockAsyncDFSOutputStream into HDFS rather than maintain it in HBase...

I think the problem here is that our interface is blocking. It is really awkward to implement async stuff on top of a blocking interface, so I do not like the current approach. I think we can either:
1. Use grpc instead of the current rpc. Add a port unification service in front of the grpc server and the old rpc server to support both grpc clients and old clients. Yes, we would need to write lots of code if we choose this way, but I think most of it is just boilerplate. Another benefit is that multi-language support will be much easier if we use standard grpc.
2. Use grpc but not the HTTP/2 transport; implement our own transport instead. I haven't tried this yet, but grpc-java does support customized transports, so I think it is possible. The benefit is that we do not need a port unification service at the server side and do not need to maintain two server-side implementations.
3. Use the old protobuf rpc interface and implement a new rpc framework. The benefit is that we also do not need a port unification service at the server side and do not need to maintain two server-side implementations. One more thing: we would not need to upgrade protobuf to 3.x.
4. As said in the design doc above, generate new interfaces which return a CompletableFuture based on the old blocking interface, and add a new feature in the current rpc implementation to support the new interface.

I'm OK with any of the approaches above. I can start working on branch HDFS-9924 after we decide which one to use. Thanks.
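Approach 4, bridging the existing blocking interface to a {{CompletableFuture}}-returning one, might look roughly like the sketch below. All names here are hypothetical, and the bridge simply runs the blocking call on a pool; the real proposal would instead complete the future from the RPC response handler, with no thread parked per outstanding call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncFacade {
  // Stand-in for the existing blocking client interface.
  interface BlockingClient {
    boolean rename(String src, String dst);
  }

  // Generated-style async counterpart: same methods, future-returning.
  interface AsyncClient {
    CompletableFuture<Boolean> rename(String src, String dst);
  }

  // Simplest possible bridge: dispatch the blocking call to a pool.
  static AsyncClient wrap(BlockingClient client, ExecutorService pool) {
    return (src, dst) ->
        CompletableFuture.supplyAsync(() -> client.rename(src, dst), pool);
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    BlockingClient blocking = (src, dst) -> true;  // dummy implementation
    AsyncClient async = wrap(blocking, pool);
    System.out.println(async.rename("/a", "/b").join());  // prints: true
    pool.shutdown();
  }
}
```

The caller is never blocked at submission time and can compose calls with {{thenCompose}}/{{thenApply}}, which is the usability goal of this umbrella regardless of which transport option is chosen.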
> [umbrella] Nonblocking HDFS Access > -- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: AsyncHdfs20160510.pdf, Async-HDFS-Performance-Report.pdf > > > This is an umbrella JIRA for supporting Nonblocking HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support nonblocking calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method.