[jira] [Updated] (HDFS-9929) Duplicate keys in NAMENODE_SPECIFIC_KEYS
[ https://issues.apache.org/jira/browse/HDFS-9929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-9929: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.9.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-2. > Duplicate keys in NAMENODE_SPECIFIC_KEYS > > > Key: HDFS-9929 > URL: https://issues.apache.org/jira/browse/HDFS-9929 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HDFS-9929.01.patch > > > In NameNode.java, {{DFS_HA_FENCE_METHODS_KEY}} occurs twice in > {{NAMENODE_SPECIFIC_KEYS}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
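The bug fixed here, a key listed twice in the {{NAMENODE_SPECIFIC_KEYS}} array, is the kind of mistake that can be caught mechanically. A minimal, self-contained Java sketch of such a check (the key names below are illustrative stand-ins, not the actual contents of the array in NameNode.java):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DuplicateKeyCheck {

  // Returns the keys that occur more than once, in first-seen order.
  static Set<String> findDuplicates(List<String> keys) {
    Set<String> seen = new HashSet<>();
    Set<String> dups = new LinkedHashSet<>();
    for (String key : keys) {
      // HashSet.add returns false when the element is already present.
      if (!seen.add(key)) {
        dups.add(key);
      }
    }
    return dups;
  }

  public static void main(String[] args) {
    // Hypothetical stand-in for NAMENODE_SPECIFIC_KEYS with one repeated key.
    List<String> keys = Arrays.asList(
        "dfs.ha.fencing.methods",
        "dfs.namenode.rpc-address",
        "dfs.ha.fencing.methods");
    System.out.println(findDuplicates(keys)); // prints [dfs.ha.fencing.methods]
  }
}
```

A check like this in a unit test would flag a repeated entry such as {{DFS_HA_FENCE_METHODS_KEY}} before it reaches trunk.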
[jira] [Commented] (HDFS-8234) DistributedFileSystem and Globber should apply PathFilter early
[ https://issues.apache.org/jira/browse/HDFS-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610928#comment-15610928 ] Hadoop QA commented on HDFS-8234: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 51s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 36s{color} | {color:orange} root: The patch generated 3 new + 32 unchanged - 1 fixed = 35 total (was 33) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 1s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation | | | org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-8234 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835471/HDFS-8234.3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 8e8899409951 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f32364 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://
[jira] [Commented] (HDFS-10954) [SPS]: Provide mechanism to send blocks movement result back to NN from coordinator DN
[ https://issues.apache.org/jira/browse/HDFS-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610917#comment-15610917 ] Hadoop QA commented on HDFS-10954: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 54s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} 
| {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 783 unchanged - 4 fixed = 788 total (was 787) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestCrcCorruption | | | hadoop.hdfs.TestFileAppend3 | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10954 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835476/HDFS-10954-HDFS-10285-02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 1b1efe435976 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / f705de3 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17310/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17310/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17310/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCo
[jira] [Commented] (HDFS-9929) Duplicate keys in NAMENODE_SPECIFIC_KEYS
[ https://issues.apache.org/jira/browse/HDFS-9929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610904#comment-15610904 ] Vinayakumar B commented on HDFS-9929: - +1. Committing. > Duplicate keys in NAMENODE_SPECIFIC_KEYS > > > Key: HDFS-9929 > URL: https://issues.apache.org/jira/browse/HDFS-9929 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Attachments: HDFS-9929.01.patch > > > In NameNode.java, {{DFS_HA_FENCE_METHODS_KEY}} occurs twice in > {{NAMENODE_SPECIFIC_KEYS}}.
[jira] [Commented] (HDFS-9704) terminate progress after namenode recover finished
[ https://issues.apache.org/jira/browse/HDFS-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610898#comment-15610898 ] Vinayakumar B commented on HDFS-9704: - I think adding a terminate with exit code 0 makes no difference in this case. Do you have a specific need for it? > terminate progress after namenode recover finished > -- > > Key: HDFS-9704 > URL: https://issues.apache.org/jira/browse/HDFS-9704 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.3.0 >Reporter: Liao, Xiaoge >Priority: Minor > Attachments: HDFS-9704.001.patch > > > terminate progress after namenode recover finished
[jira] [Commented] (HDFS-8643) Add snapshot names list to SnapshottableDirectoryStatus
[ https://issues.apache.org/jira/browse/HDFS-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610872#comment-15610872 ] Hadoop QA commented on HDFS-8643: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HDFS-8643 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-8643 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12748900/HDFS-8643-01.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17314/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add snapshot names list to SnapshottableDirectoryStatus > --- > > Key: HDFS-8643 > URL: https://issues.apache.org/jira/browse/HDFS-8643 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-8643-00.patch, HDFS-8643-01.patch > > > The idea of this jira is to enhance {{SnapshottableDirectoryStatus}} by adding > a {{snapshotNames}} attribute to it; presently it has only the {{snapshotNumber}}. > IMHO this would help users get the list of snapshot names created. > Also, the snapshot names can be used while renaming or deleting snapshots. > {code} > org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus.java > /** >* @return Snapshot names for the directory.
>*/ > public List<String> getSnapshotNames() { > return snapshotNames; > } > {code}
[jira] [Commented] (HDFS-8648) Revisit FsDirectory#resolvePath() function usage to check the call is made under proper lock
[ https://issues.apache.org/jira/browse/HDFS-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610870#comment-15610870 ] Rakesh R commented on HDFS-8648: bq. looks like many of the changes related to this area are already done under uber HDFS-10616. Thanks [~vinayrpet] for pointing this out. Hi [~daryn], some time back I attempted to move all of the {{FsDirectory#resolvePath()}} resolution under the fsd lock. Many of the cases mentioned in this jira's description have since been taken care of by HDFS-10616, but I feel a few more cases still need to be changed. Do you have a jira addressing this? If not, should I move this jira under your umbrella jira and revisit the cases one by one? Does this make sense to you? > Revisit FsDirectory#resolvePath() function usage to check the call is made > under proper lock > > > Key: HDFS-8648 > URL: https://issues.apache.org/jira/browse/HDFS-8648 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-8648-00.patch > > > As per the > [discussion|https://issues.apache.org/jira/browse/HDFS-8493?focusedCommentId=14595735&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14595735] > in HDFS-8493 the function {{FsDirectory#resolvePath}} usage needs to be > reviewed. It seems there are many places it has done the resolution > {{fsd.resolvePath(pc, src, pathComponents);}} by acquiring only fsn lock and > not fsd lock. As per the initial analysis following are such cases, probably > it needs to filter out and fix wrong usage.
> # FsDirAclOp.java > -> getAclStatus() > -> modifyAclEntries() > -> removeAcl() > -> removeDefaultAcl() > -> setAcl() > -> getAclStatus() > # FsDirDeleteOp.java > -> delete(fsn, src, recursive, logRetryCache) > # FsDirRenameOp.java > -> renameToInt(fsd, srcArg, dstArg, logRetryCache) > -> renameToInt(fsd, srcArg, dstArg, logRetryCache, options) > # FsDirStatAndListingOp.java > -> getContentSummary(fsd, src) > -> getFileInfo(fsd, srcArg, resolveLink) > -> isFileClosed(fsd, src) > -> getListingInt(fsd, srcArg, startAfter, needLocation) > # FsDirWriteFileOp.java > -> abandonBlock() > -> completeFile(fsn, pc, srcArg, holder, last, fileId) > -> getEncryptionKeyInfo(fsn, pc, src, supportedVersions) > -> startFile() > -> validateAddBlock() > # FsDirXAttrOp.java > -> getXAttrs(fsd, srcArg, xAttrs) > -> listXAttrs(fsd, src) > -> setXAttr(fsd, src, xAttr, flag, logRetryCache) > # FSNamesystem.java > -> createEncryptionZoneInt() > -> getEZForPath() > Thanks [~wheat9], [~vinayrpet] for the advice.
[jira] [Commented] (HDFS-8643) Add snapshot names list to SnapshottableDirectoryStatus
[ https://issues.apache.org/jira/browse/HDFS-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610830#comment-15610830 ] Vinayakumar B commented on HDFS-8643: - Users can get the snapshot list by calling 'listStatusIterator()' on the '.snapshot' path of a snapshottable directory. Ex: if /test1 is the snapshottable directory, then an 'ls' on '/test1/.snapshot' will list all the snapshot names taken. 'listStatusIterator()' already uses a batched iterator internally, so having a huge number of snapshots should not be a problem. As for 'getSnapshottableDirListing', there will ideally never be a very large number of snapshottable directories, since they are all controlled by the admin. IMO, iteration on 'getSnapshottableDirListing' may not be helpful, at least for now. > Add snapshot names list to SnapshottableDirectoryStatus > --- > > Key: HDFS-8643 > URL: https://issues.apache.org/jira/browse/HDFS-8643 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-8643-00.patch, HDFS-8643-01.patch > > > The idea of this jira is to enhance {{SnapshottableDirectoryStatus}} by adding > a {{snapshotNames}} attribute to it; presently it has only the {{snapshotNumber}}. > IMHO this would help users get the list of snapshot names created. > Also, the snapshot names can be used while renaming or deleting snapshots. > {code} > org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus.java > /** >* @return Snapshot names for the directory. >*/ > public List<String> getSnapshotNames() { > return snapshotNames; > } > {code}
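The approach described in the comment above maps directly onto the public {{FileSystem}} API. A hedged Java sketch (it assumes a reachable cluster in the client configuration and that /test1, the example directory from the comment, has already been made snapshottable by the admin; it is illustrative, not runnable standalone):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListSnapshotNames {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at the target HDFS cluster.
    FileSystem fs = FileSystem.get(new Configuration());
    // Each child of <dir>/.snapshot is one snapshot, and its last path
    // component is the snapshot name. listStatusIterator() fetches the
    // entries in batches, so a large number of snapshots is not a problem.
    RemoteIterator<FileStatus> it =
        fs.listStatusIterator(new Path("/test1/.snapshot"));
    while (it.hasNext()) {
      System.out.println(it.next().getPath().getName());
    }
  }
}
```

This is the client-side equivalent of the 'ls' on '/test1/.snapshot' mentioned in the comment.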
[jira] [Commented] (HDFS-8648) Revisit FsDirectory#resolvePath() function usage to check the call is made under proper lock
[ https://issues.apache.org/jira/browse/HDFS-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610820#comment-15610820 ] Hadoop QA commented on HDFS-8648: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-8648 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-8648 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12774471/HDFS-8648-00.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17312/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Revisit FsDirectory#resolvePath() function usage to check the call is made > under proper lock > > > Key: HDFS-8648 > URL: https://issues.apache.org/jira/browse/HDFS-8648 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-8648-00.patch > > > As per the > [discussion|https://issues.apache.org/jira/browse/HDFS-8493?focusedCommentId=14595735&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14595735] > in HDFS-8493 the function {{FsDirectory#resolvePath}} usage needs to be > reviewed. It seems there are many places it has done the resolution > {{fsd.resolvePath(pc, src, pathComponents);}} by acquiring only fsn lock and > not fsd lock. As per the initial analysis following are such cases, probably > it needs to filter out and fix wrong usage. 
> # FsDirAclOp.java > -> getAclStatus() > -> modifyAclEntries() > -> removeAcl() > -> removeDefaultAcl() > -> setAcl() > -> getAclStatus() > # FsDirDeleteOp.java > -> delete(fsn, src, recursive, logRetryCache) > # FsDirRenameOp.java > -> renameToInt(fsd, srcArg, dstArg, logRetryCache) > -> renameToInt(fsd, srcArg, dstArg, logRetryCache, options) > # FsDirStatAndListingOp.java > -> getContentSummary(fsd, src) > -> getFileInfo(fsd, srcArg, resolveLink) > -> isFileClosed(fsd, src) > -> getListingInt(fsd, srcArg, startAfter, needLocation) > # FsDirWriteFileOp.java > -> abandonBlock() > -> completeFile(fsn, pc, srcArg, holder, last, fileId) > -> getEncryptionKeyInfo(fsn, pc, src, supportedVersions) > -> startFile() > -> validateAddBlock() > # FsDirXAttrOp.java > -> getXAttrs(fsd, srcArg, xAttrs) > -> listXAttrs(fsd, src) > -> setXAttr(fsd, src, xAttr, flag, logRetryCache) > # FSNamesystem.java > -> createEncryptionZoneInt() > -> getEZForPath() > Thanks [~wheat9], [~vinayrpet] for the advice.
[jira] [Commented] (HDFS-11067) DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries
[ https://issues.apache.org/jira/browse/HDFS-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610819#comment-15610819 ] Hadoop QA commented on HDFS-11067: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager | | | org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | | | org.apache.hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover | | | org.apache.hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand | | | org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | org.apache.hadoop.hdfs.server.blockmanagement.TestSortLocatedStripedBlock | | | org.apache.hadoop.hdfs.server.balancer.TestBalancer | | | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode | | | org.apache.hadoop.hdfs.server.diskbalancer.TestConnectors | | | org.apache.hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages | | | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11067 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835470/HDFS-11067-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | u
[jira] [Commented] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610807#comment-15610807 ] Hadoop QA commented on HDFS-11061: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 8s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 19s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11061 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835475/HDFS-11061.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fc44fc7d0a08 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f32364 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17309/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17309/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > Attachments: HDFS-11061.001.patch, HDFS-11061.002.patch > > > According to dfs -count command line help, -t option must be used along with > -q. > * However, the current behavior i
[jira] [Commented] (HDFS-8648) Revisit FsDirectory#resolvePath() function usage to check the call is made under proper lock
[ https://issues.apache.org/jira/browse/HDFS-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610804#comment-15610804 ] Vinayakumar B commented on HDFS-8648: - It looks like many of the changes related to this area have already been done under the umbrella JIRA HDFS-10616. > Revisit FsDirectory#resolvePath() function usage to check the call is made > under proper lock > > > Key: HDFS-8648 > URL: https://issues.apache.org/jira/browse/HDFS-8648 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-8648-00.patch > > > As per the > [discussion|https://issues.apache.org/jira/browse/HDFS-8493?focusedCommentId=14595735&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14595735] > in HDFS-8493, the usage of the function {{FsDirectory#resolvePath}} needs to be > reviewed. There seem to be many places that perform the resolution > {{fsd.resolvePath(pc, src, pathComponents);}} while acquiring only the fsn lock and > not the fsd lock. As per the initial analysis, the following are such cases; they > probably need to be filtered to find and fix the wrong usages. 
> # FsDirAclOp.java > -> getAclStatus() > -> modifyAclEntries() > -> removeAcl() > -> removeDefaultAcl() > -> setAcl() > -> getAclStatus() > # FsDirDeleteOp.java > -> delete(fsn, src, recursive, logRetryCache) > # FsDirRenameOp.java > -> renameToInt(fsd, srcArg, dstArg, logRetryCache) > -> renameToInt(fsd, srcArg, dstArg, logRetryCache, options) > # FsDirStatAndListingOp.java > -> getContentSummary(fsd, src) > -> getFileInfo(fsd, srcArg, resolveLink) > -> isFileClosed(fsd, src) > -> getListingInt(fsd, srcArg, startAfter, needLocation) > # FsDirWriteFileOp.java > -> abandonBlock() > -> completeFile(fsn, pc, srcArg, holder, last, fileId) > -> getEncryptionKeyInfo(fsn, pc, src, supportedVersions) > -> startFile() > -> validateAddBlock() > # FsDirXAttrOp.java > -> getXAttrs(fsd, srcArg, xAttrs) > -> listXAttrs(fsd, src) > -> setXAttr(fsd, src, xAttr, flag, logRetryCache) > # FSNamesystem.java > -> createEncryptionZoneInt() > -> getEZForPath() > Thanks [~wheat9], [~vinayrpet] for the advice. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
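The lock-audit idea above can be sketched in plain Java (illustrative only — names like `LockCheckedResolver` and the trivial resolution are assumptions, not the actual `FSDirectory` code): make a resolvePath-style call assert that the caller already holds the directory lock, so a wrong usage fails loudly instead of racing.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch, not Hadoop code: guard a resolvePath-style call
// with a check that the directory lock is held by the calling thread.
public class LockCheckedResolver {
    private final ReentrantReadWriteLock dirLock = new ReentrantReadWriteLock();

    public void readLock() { dirLock.readLock().lock(); }
    public void readUnlock() { dirLock.readLock().unlock(); }

    // Mirrors the style of FSDirectory#hasReadLock(): true if this thread
    // holds the read lock or the write lock.
    public boolean hasReadLock() {
        return dirLock.getReadHoldCount() > 0
            || dirLock.isWriteLockedByCurrentThread();
    }

    // Refuses to resolve when called without the lock, instead of
    // silently returning a possibly-stale resolution.
    public String resolvePath(String src) {
        if (!hasReadLock()) {
            throw new IllegalStateException(
                "resolvePath called without dir lock: " + src);
        }
        return src.startsWith("/") ? src : "/" + src; // trivial stand-in
    }
}
```

A check of this shape would let the cases listed above be flushed out by tests rather than by manual review.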
[jira] [Commented] (HDFS-10756) Expose getTrashRoot to HTTPFS and WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610779#comment-15610779 ] Yuanbo Liu commented on HDFS-10756: --- Those test cases fail because of "Java heap space" errors; I don't think the failures are related to my code change. > Expose getTrashRoot to HTTPFS and WebHDFS > - > > Key: HDFS-10756 > URL: https://issues.apache.org/jira/browse/HDFS-10756 > Project: Hadoop HDFS > Issue Type: Improvement > Components: encryption, httpfs, webhdfs >Reporter: Xiao Chen >Assignee: Yuanbo Liu > Attachments: HDFS-10756.001.patch, HDFS-10756.002.patch, > HDFS-10756.003.patch, HDFS-10756.004.patch > > > Currently, the Hadoop FileSystem API has > [getTrashRoot|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2708] > to determine the trash directory at run time. The default trash dir is under > {{/user/$USER}}. > For an encrypted file, since moving files between/in/out of EZs is not > allowed, when an EZ file is deleted via the CLI, it calls into the [DFS > implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L2485] > to move the file to a trash directory under the same EZ. > This works fine for CLI users or Java users who call the FileSystem API, > but for users going through httpfs/webhdfs there is currently no way to figure > out what the trash root would be. This jira proposes adding such an > interface to httpfs and webhdfs.
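The trash-root rule described in the issue can be sketched as follows (an assumption-laden illustration, not the real `DistributedFileSystem` implementation — `TrashRootSketch` and its string-based paths are invented for this example): a path inside an encryption zone gets a per-user trash root under that zone, everything else falls back to {{/user/$USER/.Trash}}.

```java
import java.util.List;

// Illustrative sketch of the trash-root selection described above.
public class TrashRootSketch {
    public static String getTrashRoot(String path, String user,
                                      List<String> ezRoots) {
        for (String ez : ezRoots) {
            // A path is "inside" the zone if it is the zone root itself
            // or a descendant of it.
            String prefix = ez.endsWith("/") ? ez : ez + "/";
            if (path.equals(ez) || path.startsWith(prefix)) {
                // Trash for an EZ file stays inside the zone, per user,
                // because moving out of the EZ is not allowed.
                return ez + "/.Trash/" + user;
            }
        }
        return "/user/" + user + "/.Trash";
    }
}
```

Exposing the same decision over httpfs/webhdfs is exactly what this jira asks for, so remote callers can compute the destination without replicating EZ knowledge client-side.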
[jira] [Updated] (HDFS-10954) [SPS]: Provide mechanism to send blocks movement result back to NN from coordinator DN
[ https://issues.apache.org/jira/browse/HDFS-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-10954: Attachment: HDFS-10954-HDFS-10285-02.patch Attached a new patch to fix {{TestBPOfferService#testBPInitErrorHandling()}}. > [SPS]: Provide mechanism to send blocks movement result back to NN from > coordinator DN > -- > > Key: HDFS-10954 > URL: https://issues.apache.org/jira/browse/HDFS-10954 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10954-HDFS-10285-00.patch, > HDFS-10954-HDFS-10285-01.patch, HDFS-10954-HDFS-10285-02.patch > > > This jira is a follow-up task of HDFS-10884, which provides a mechanism to > collect all the successful/failed block movement results on the > {{co-ordinator datanode}} side. The idea of this jira is to discuss an > efficient way to report these results back to the namenode, so that the NN > can take the necessary action based on this information.
[jira] [Updated] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11061: - Attachment: HDFS-11061.002.patch The failed test is related; posting a new patch to fix it. > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > Attachments: HDFS-11061.001.patch, HDFS-11061.002.patch > > > According to the dfs -count command-line help, the -t option must be used > along with -q. > * However, the current behavior is that -t can be used without -q; it is just > silently ignored. > * In addition, -t may also be used with -u. > * The FileSystemShell doc does not state that -t must be used along with -q. This > should either be enforced in the code, or the doc/command-line help should be > updated. > * Also, the list of possible parameters for the -t option is not described in the > doc. Looking at the code (Count.java), the possible parameters are > either the empty string (= "all"), "all", "ram_disk", "ssd", "disk" or "archive" > (case-insensitive)
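The -t parameter list quoted from Count.java can be sketched as a small validator (illustrative only — `StorageTypeOption` and `normalize` are invented names, not the actual shell code): accepted values are the empty string (shorthand for "all"), "all", "ram_disk", "ssd", "disk" and "archive", matched case-insensitively.

```java
import java.util.Locale;
import java.util.Set;

// Sketch of the -t argument check described in the issue text above.
public class StorageTypeOption {
    private static final Set<String> ALLOWED =
        Set.of("", "all", "ram_disk", "ssd", "disk", "archive");

    public static String normalize(String arg) {
        // Matching is caseless, per the Count.java behavior quoted above.
        String t = arg.toLowerCase(Locale.ROOT);
        if (!ALLOWED.contains(t)) {
            throw new IllegalArgumentException("Unknown storage type: " + arg);
        }
        return t.isEmpty() ? "all" : t; // empty string is shorthand for "all"
    }
}
```

Documenting this list in the FileSystemShell page, or rejecting anything outside it as above, would resolve the ambiguity the issue describes.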
[jira] [Updated] (HDFS-8234) DistributedFileSystem and Globber should apply PathFilter early
[ https://issues.apache.org/jira/browse/HDFS-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.Andreina updated HDFS-8234: - Attachment: HDFS-8234.3.patch Attaching rebased patch. Please review. > DistributedFileSystem and Globber should apply PathFilter early > --- > > Key: HDFS-8234 > URL: https://issues.apache.org/jira/browse/HDFS-8234 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Rohini Palaniswamy >Assignee: J.Andreina > Labels: newbie > Attachments: HDFS-8234.1.patch, HDFS-8234.2.patch, HDFS-8234.3.patch > > > HDFS-985 added partial listing in listStatus to avoid listing entries of > large directory in one go. If listStatus(Path p, PathFilter f) call is made, > filter is applied after fetching all the entries resulting in a big list > being constructed on the client side. If the > DistributedFileSystem.listStatusInternal() applied the PathFilter it would be > more efficient. So DistributedFileSystem should override listStatus(Path f, > PathFilter filter) and apply PathFilter early. > Globber.java also applies filter after calling listStatus. It should call > listStatus with the PathFilter. > {code} > FileStatus[] children = listStatus(candidate.getPath()); >. > for (FileStatus child : children) { > // Set the child path based on the parent path. > child.setPath(new Path(candidate.getPath(), > child.getPath().getName())); > if (globFilter.accept(child.getPath())) { > newCandidates.add(child); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
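The early-filtering optimization proposed above can be sketched as follows (assumed names, not the real `DistributedFileSystem.listStatusInternal()` — `BatchedLister` stands in for the HDFS-985 partial-listing RPC): apply the `PathFilter` while consuming each batch, so unmatched entries never accumulate on the client.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch: filter during paged listing instead of after it.
public class EarlyFilterListing {
    // Stand-in for a partial-listing source that returns entries in batches.
    interface BatchedLister {
        List<String> nextBatch();   // empty list means no more entries
    }

    public static List<String> listStatus(BatchedLister lister,
                                          Predicate<String> filter) {
        List<String> result = new ArrayList<>();
        for (List<String> batch = lister.nextBatch();
             !batch.isEmpty();
             batch = lister.nextBatch()) {
            for (String entry : batch) {
                if (filter.test(entry)) {   // applied per batch, not at the end
                    result.add(entry);
                }
            }
        }
        return result;
    }
}
```

With a selective filter, only the matching entries are ever held client-side, which is the efficiency gain the issue is after.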
[jira] [Updated] (HDFS-11067) DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries
[ https://issues.apache.org/jira/browse/HDFS-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-11067: - Attachment: HDFS-11067-01.patch Attaching the patch > DFS#listStatusIterator(..) should throw FileNotFoundException if the > directory deleted before fetching next batch of entries > > > Key: HDFS-11067 > URL: https://issues.apache.org/jira/browse/HDFS-11067 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Labels: incompatible > Attachments: HDFS-11067-01.patch > > > DFS#listStatusIterator(..) currently stops iterating silently when the > directory gets deleted before fetching the next batch of entries. > It should throw FileNotFoundException() and let user know that file is > deleted in the middle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11067) DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries
[ https://issues.apache.org/jira/browse/HDFS-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-11067: - Labels: (was: incompatible) > DFS#listStatusIterator(..) should throw FileNotFoundException if the > directory deleted before fetching next batch of entries > > > Key: HDFS-11067 > URL: https://issues.apache.org/jira/browse/HDFS-11067 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HDFS-11067-01.patch > > > DFS#listStatusIterator(..) currently stops iterating silently when the > directory gets deleted before fetching the next batch of entries. > It should throw FileNotFoundException() and let user know that file is > deleted in the middle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11067) DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries
[ https://issues.apache.org/jira/browse/HDFS-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-11067: - Hadoop Flags: Incompatible change Target Version/s: 3.0.0-alpha2 Status: Patch Available (was: Open) > DFS#listStatusIterator(..) should throw FileNotFoundException if the > directory deleted before fetching next batch of entries > > > Key: HDFS-11067 > URL: https://issues.apache.org/jira/browse/HDFS-11067 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Labels: incompatible > Attachments: HDFS-11067-01.patch > > > DFS#listStatusIterator(..) currently stops iterating silently when the > directory gets deleted before fetching the next batch of entries. > It should throw FileNotFoundException() and let user know that file is > deleted in the middle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11067) DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries
[ https://issues.apache.org/jira/browse/HDFS-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HDFS-11067: - Labels: incompatible (was: ) > DFS#listStatusIterator(..) should throw FileNotFoundException if the > directory deleted before fetching next batch of entries > > > Key: HDFS-11067 > URL: https://issues.apache.org/jira/browse/HDFS-11067 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Labels: incompatible > > DFS#listStatusIterator(..) currently stops iterating silently when the > directory gets deleted before fetching the next batch of entries. > It should throw FileNotFoundException() and let user know that file is > deleted in the middle. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11067) DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries
Vinayakumar B created HDFS-11067: Summary: DFS#listStatusIterator(..) should throw FileNotFoundException if the directory deleted before fetching next batch of entries Key: HDFS-11067 URL: https://issues.apache.org/jira/browse/HDFS-11067 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Reporter: Vinayakumar B Assignee: Vinayakumar B DFS#listStatusIterator(..) currently stops iterating silently when the directory gets deleted before fetching the next batch of entries. It should throw a FileNotFoundException to let the user know that the directory was deleted mid-iteration.
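The proposed behavior can be sketched with a batched iterator (illustrative only — `BatchedListingIterator` and the null-batch convention are assumptions, not the real `DFS#listStatusIterator` code): when fetching the next batch discovers the directory is gone, surface a `FileNotFoundException` instead of quietly ending the iteration.

```java
import java.io.FileNotFoundException;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

// Sketch: a batch-fetching iterator that fails loudly when the listed
// directory disappears between batches (null batch = directory deleted).
public class BatchedListingIterator implements Iterator<String> {
    private final Supplier<List<String>> fetchNextBatch;
    private Iterator<String> current = Collections.emptyIterator();
    private boolean exhausted;

    public BatchedListingIterator(Supplier<List<String>> fetchNextBatch) {
        this.fetchNextBatch = fetchNextBatch;
    }

    @Override
    public boolean hasNext() {
        while (!exhausted && !current.hasNext()) {
            List<String> batch = fetchNextBatch.get();
            if (batch == null) {
                // Directory vanished mid-listing: report it, don't stop silently.
                throw new java.io.UncheckedIOException(
                    new FileNotFoundException("Directory deleted during iteration"));
            }
            if (batch.isEmpty()) { exhausted = true; } else { current = batch.iterator(); }
        }
        return current.hasNext();
    }

    @Override
    public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return current.next();
    }
}
```

The wrapper exception is only for the sketch; the real iterator API can throw the checked `FileNotFoundException` directly from its fetch path.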
[jira] [Commented] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610537#comment-15610537 ] Hadoop QA commented on HDFS-11061: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 8s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 20s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.shell.TestCount | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11061 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835462/HDFS-11061.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 58b5fa982a7a 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f32364 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17306/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17306/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17306/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Compo
[jira] [Commented] (HDFS-11031) Add additional unit test for DataNode startup behavior when volumes fail
[ https://issues.apache.org/jira/browse/HDFS-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610517#comment-15610517 ] Brahma Reddy Battula commented on HDFS-11031: - Test failures are unrelated. [~liuml07], thanks for updating the patch. LGTM; I will hold off committing until [~jnp] looks into this issue. bq. Perhaps this came up before, FWIW should we set up Jenkins builds for that? I think this is a good idea; we can start a discussion on it and see the response from others. > Add additional unit test for DataNode startup behavior when volumes fail > > > Key: HDFS-11031 > URL: https://issues.apache.org/jira/browse/HDFS-11031 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-11031-branch-2.001.patch, > HDFS-11031-branch-2.002.patch, HDFS-11031-branch-2.003.patch, > HDFS-11031-branch-2.004.patch, HDFS-11031.000.patch, HDFS-11031.001.patch, > HDFS-11031.002.patch, HDFS-11031.003.patch, HDFS-11031.004.patch > > > There are several cases to add in {{TestDataNodeVolumeFailure}}: > - DataNode should not start in case of volumes failure > - DataNode should not start in case of lacking data dir read/write permission > - ...
[jira] [Commented] (HDFS-10954) [SPS]: Provide mechanism to send blocks movement result back to NN from coordinator DN
[ https://issues.apache.org/jira/browse/HDFS-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610508#comment-15610508 ] Hadoop QA commented on HDFS-10954: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 40s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} 
| {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 784 unchanged - 4 fixed = 789 total (was 788) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 86m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.datanode.TestBPOfferService | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.TestCrcCorruption | | | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10954 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835454/HDFS-10954-HDFS-10285-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 58e98b7b1469 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / f705de3 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17304/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17304/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17304/testReport/ | | modules | C: hadoo
[jira] [Commented] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610510#comment-15610510 ] Hadoop QA commented on HDFS-11061: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 42s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.shell.TestCount | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11061 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835460/HDFS-11061.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f27ec5338e68 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 22ff0ef | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17305/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17305/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17305/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Compo
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands in single test
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610461#comment-15610461 ] Hudson commented on HDFS-11038: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10693 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10693/]) HDFS-11038. DiskBalancer: support running multiple commands in single (aengineer: rev 9f32364d283dec47dd07490e253d477a0d14ac71) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/command/TestDiskBalancerCommand.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DiskBalancerCLI.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerTestUtil.java > DiskBalancer: support running multiple commands in single test > -- > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer reuses a rule designed for the HDFS balancer: only one > instance is allowed to run at a time. This is correct in a production system > to avoid inconsistencies, but it is not ideal for writing and running unit > tests. For example, it should be possible to run the plan, execute, and scan > commands under one setup of the disk balancer. The one-instance rule throws > an exception complaining 'Another instance is running'. In such a case, there > is no way to do a full life-cycle test that involves a sequence of commands. 
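The one-instance rule discussed in HDFS-11038 can be sketched as a simple guard: the first command to start wins, and a concurrent second command is rejected with the error the ticket quotes. All class and method names below are hypothetical illustrations, not the actual DiskBalancer API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Minimal sketch of a single-instance guard like the one the disk
 * balancer enforces. Names are hypothetical; the real DiskBalancer
 * wires this differently (and HDFS-11038 relaxes it for tests).
 */
public class SingleInstanceGuard {
    private final AtomicBoolean running = new AtomicBoolean(false);

    /** Acquire the guard; fails if another run is already in progress. */
    public void acquire() {
        if (!running.compareAndSet(false, true)) {
            throw new IllegalStateException("Another instance is running");
        }
    }

    /** Release the guard so the next command can run. */
    public void release() {
        running.set(false);
    }

    public static void main(String[] args) {
        SingleInstanceGuard guard = new SingleInstanceGuard();
        guard.acquire();          // first command succeeds
        boolean rejected = false;
        try {
            guard.acquire();      // concurrent second command is rejected
        } catch (IllegalStateException e) {
            rejected = true;
        }
        guard.release();          // after release, a new command may run
        guard.acquire();
        System.out.println("second acquire rejected: " + rejected);
    }
}
```

The test-only relaxation the patch asks for amounts to releasing (or bypassing) such a guard between the plan, execute, and scan steps of one test setup.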
[jira] [Updated] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11061: - Attachment: (was: HDFS-11061.001.patch) > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > Attachments: HDFS-11061.001.patch > > > According to the dfs -count command-line help, the -t option must be used along with -q. > * However, the current behavior is that -t can be used without -q; it is just silently ignored. > * In addition, -t may also be used with -u. > * The FileSystemShell doc does not state that -t must be used along with -q. This should either be enforced in the code, or the doc/command-line help should be updated. > * Also, the list of possible parameters for the -t option is not described in the doc. Looking at the code (Count.java), the list of possible parameters is either the empty string (= "all"), "all", "ram_disk", "ssd", "disk" or "archive" (caseless)
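The -t argument handling that HDFS-11061 describes (accepted values matched caselessly, empty value meaning "all") can be sketched as follows. This is an illustration of the behavior the ticket documents, not the actual Count.java implementation:

```java
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of the -t storage-type argument described in HDFS-11061:
 * matching is caseless and an empty value means "all".
 * Hypothetical class; not the real shell Count command.
 */
public class StorageTypeOption {
    private static final List<String> TYPES =
        Arrays.asList("all", "ram_disk", "ssd", "disk", "archive");

    /** Normalize a -t value: empty means "all", matching is caseless. */
    public static String normalize(String value) {
        String v = value.isEmpty() ? "all" : value.toLowerCase();
        if (!TYPES.contains(v)) {
            throw new IllegalArgumentException("Unknown storage type: " + value);
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(normalize(""));      // empty string means "all"
        System.out.println(normalize("SSD"));   // caseless match
    }
}
```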
[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations
[ https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-10930: -- Status: Patch Available (was: Open) > Refactor: Wrap Datanode IO related operations > - > > Key: HDFS-10930 > URL: https://issues.apache.org/jira/browse/HDFS-10930 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, HDFS-10930.03.patch > > > Datanode IO (Disk/Network) related operations and instrumentation are currently scattered across many classes such as DataNode.java, BlockReceiver.java, > BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, > LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. > This ticket is opened to consolidate IO related operations for easy instrumentation, metrics collection, logging and troubleshooting.
[jira] [Updated] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11061: - Attachment: HDFS-11061.001.patch > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > Attachments: HDFS-11061.001.patch > > > According to the dfs -count command-line help, the -t option must be used along with -q. > * However, the current behavior is that -t can be used without -q; it is just silently ignored. > * In addition, -t may also be used with -u. > * The FileSystemShell doc does not state that -t must be used along with -q. This should either be enforced in the code, or the doc/command-line help should be updated. > * Also, the list of possible parameters for the -t option is not described in the doc. Looking at the code (Count.java), the list of possible parameters is either the empty string (= "all"), "all", "ram_disk", "ssd", "disk" or "archive" (caseless)
[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610408#comment-15610408 ] Jingcheng Du commented on HDFS-9668: The test failures are due to OOM and should not be related to this patch. Re-submitting the patch to run the tests again. > Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, HDFS-9668-19.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, > HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in SSD/RAMDISK, and all other files are stored in HDD), we observe many long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at > 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > {noformat} > We measured the execution of some operations in FsDatasetImpl during the test. The following is the result. > !execution_time.png! > The operations of finalizeBlock, addBlock and createRbw on HDD under heavy load take a really long time. > It means one slow finalizeBlock, addBlock or createRbw operation on a slow storage can block all other such operations in the same DataNode, > especially in HBase when many WAL/flusher/compactor threads are configured. > We need a finer-grained lock mechanism in a new FsDatasetImpl implementation > and users can choose the
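The finer-grained scheme HDFS-9668 argues for can be sketched with one lock per volume instead of synchronizing on the whole FsDatasetImpl object, so a slow operation on an HDD no longer blocks a write to the SSD holding the WAL. Names below are hypothetical illustrations, not the actual FsDatasetImpl code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Sketch of per-volume (storage-level) locking: each volume gets its
 * own lock, so createRbw/finalizeBlock on one volume does not block
 * the same operations on another volume. Hypothetical names.
 */
public class PerVolumeLocks {
    private final ConcurrentMap<String, ReentrantLock> locks =
        new ConcurrentHashMap<>();

    /** One lock per volume root instead of one dataset-wide lock. */
    public ReentrantLock lockFor(String volume) {
        return locks.computeIfAbsent(volume, v -> new ReentrantLock());
    }

    public static void main(String[] args) {
        PerVolumeLocks dataset = new PerVolumeLocks();
        ReentrantLock hdd = dataset.lockFor("/data/hdd0");
        ReentrantLock ssd = dataset.lockFor("/data/ssd0");
        hdd.lock();                      // slow HDD operation in progress
        // An SSD (WAL) write is not blocked by the HDD lock:
        System.out.println("ssd free: " + ssd.tryLock());
        ssd.unlock();
        hdd.unlock();
    }
}
```

With the coarse dataset-wide lock shown in the jstack above, the equivalent of `ssd.tryLock()` would fail whenever any other volume's operation held the lock.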
[jira] [Commented] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610406#comment-15610406 ] Yiqun Lin commented on HDFS-11061: -- Thanks [~jojochuang] for reporting this, it's a good catch. I prefer to update the related documentation in this JIRA. Attaching an initial patch. Thanks for reviewing. > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > > According to the dfs -count command-line help, the -t option must be used along with -q. > * However, the current behavior is that -t can be used without -q; it is just silently ignored. > * In addition, -t may also be used with -u. > * The FileSystemShell doc does not state that -t must be used along with -q. This should either be enforced in the code, or the doc/command-line help should be updated. > * Also, the list of possible parameters for the -t option is not described in the doc. Looking at the code (Count.java), the list of possible parameters is either the empty string (= "all"), "all", "ram_disk", "ssd", "disk" or "archive" (caseless)
[jira] [Updated] (HDFS-11038) DiskBalancer: support running multiple commands in single test
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11038: Resolution: Fixed Hadoop Flags: Reviewed Target Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) [~xiaobingo] Thank you for the contribution. I have committed this to trunk. > DiskBalancer: support running multiple commands in single test > -- > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer follows/reuses one rule designed by the HDFS balancer: only one instance is allowed to run at a time. > This is correct in a production system to avoid inconsistencies, but it is not ideal for writing and running unit tests. > For example, it should be possible to run the plan, execute, and scan commands under one setup of the disk balancer. > The one-instance rule will throw an exception complaining 'Another instance is running'. In such a case, > there is no way to do a full life-cycle test that involves a sequence of commands.
[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingcheng Du updated HDFS-9668: --- Status: Patch Available (was: Open) > Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, > HDFS-9668-19.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, > HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, > HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at > 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > {noformat} > We measured the execution of some operations in FsDatasetImpl during the test. The following is the result. > !execution_time.png! > The operations of finalizeBlock, addBlock and createRbw on HDD under heavy load take a really long time. > It means one slow finalizeBlock, addBlock or createRbw operation on a slow storage can block all other such operations in the same DataNode, > especially in HBase when many WAL/flusher/compactor threads are configured. > We need a finer-grained lock mechanism in a new FsDatasetImpl implementation > and users can choose the implementation by configuring > "dfs.datanode.fsdataset.factory" in DataNode. > We can implement the lock by either storage level or
[jira] [Updated] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11061: - Status: Patch Available (was: Open) > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > > According to the dfs -count command-line help, the -t option must be used along with -q. > * However, the current behavior is that -t can be used without -q; it is just silently ignored. > * In addition, -t may also be used with -u. > * The FileSystemShell doc does not state that -t must be used along with -q. This should either be enforced in the code, or the doc/command-line help should be updated. > * Also, the list of possible parameters for the -t option is not described in the doc. Looking at the code (Count.java), the list of possible parameters is either the empty string (= "all"), "all", "ram_disk", "ssd", "disk" or "archive" (caseless)
[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl
[ https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingcheng Du updated HDFS-9668: --- Status: Open (was: Patch Available) > Optimize the locking in FsDatasetImpl > - > > Key: HDFS-9668 > URL: https://issues.apache.org/jira/browse/HDFS-9668 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HDFS-9668-1.patch, HDFS-9668-10.patch, > HDFS-9668-11.patch, HDFS-9668-12.patch, HDFS-9668-13.patch, > HDFS-9668-14.patch, HDFS-9668-14.patch, HDFS-9668-15.patch, > HDFS-9668-16.patch, HDFS-9668-17.patch, HDFS-9668-18.patch, > HDFS-9668-19.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, HDFS-9668-4.patch, > HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, HDFS-9668-8.patch, > HDFS-9668-9.patch, execution_time.png > > > During the HBase test on a tiered storage of HDFS (WAL is stored in > SSD/RAMDISK, and all other files are stored in HDD), we observe many > long-time BLOCKED threads on FsDatasetImpl in DataNode. 
The following is part > of the jstack result: > {noformat} > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48521 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread > t@93336 >java.lang.Thread.State: BLOCKED > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:) > - waiting to lock <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335 > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > > "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at > /192.168.50.16:48520 [Receiving block > BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread > t@93335 >java.lang.Thread.State: RUNNABLE > at java.io.UnixFileSystem.createFileExclusively(Native Method) > at java.io.File.createNewFile(File.java:1012) > at > org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286) > at > 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140) > - locked <18324c9> (a > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) > at > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.(BlockReceiver.java:183) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - None > {noformat} > We measured the execution of some operations in FsDatasetImpl during the test. The following is the result. > !execution_time.png! > The operations of finalizeBlock, addBlock and createRbw on HDD under heavy load take a really long time. > It means one slow finalizeBlock, addBlock or createRbw operation on a slow storage can block all other such operations in the same DataNode, > especially in HBase when many WAL/flusher/compactor threads are configured. > We need a finer-grained lock mechanism in a new FsDatasetImpl implementation > and users can choose the implementation by configuring > "dfs.datanode.fsdataset.factory" in DataNode. > We can implement the lock by either storage level or
[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations
[ https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-10930: -- Status: Open (was: Patch Available) > Refactor: Wrap Datanode IO related operations > - > > Key: HDFS-10930 > URL: https://issues.apache.org/jira/browse/HDFS-10930 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-10930.01.patch, HDFS-10930.02.patch, HDFS-10930.03.patch > > > Datanode IO (Disk/Network) related operations and instrumentation are currently scattered across many classes such as DataNode.java, BlockReceiver.java, > BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, > LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. > This ticket is opened to consolidate IO related operations for easy instrumentation, metrics collection, logging and troubleshooting.
[jira] [Assigned] (HDFS-11061) Fix dfs -count -t or update its documentation
[ https://issues.apache.org/jira/browse/HDFS-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin reassigned HDFS-11061: Assignee: Yiqun Lin > Fix dfs -count -t or update its documentation > - > > Key: HDFS-11061 > URL: https://issues.apache.org/jira/browse/HDFS-11061 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Yiqun Lin >Priority: Minor > Labels: supportability > > According to the dfs -count command-line help, the -t option must be used along with -q. > * However, the current behavior is that -t can be used without -q; it is just silently ignored. > * In addition, -t may also be used with -u. > * The FileSystemShell doc does not state that -t must be used along with -q. This should either be enforced in the code, or the doc/command-line help should be updated. > * Also, the list of possible parameters for the -t option is not described in the doc. Looking at the code (Count.java), the list of possible parameters is either the empty string (= "all"), "all", "ram_disk", "ssd", "disk" or "archive" (caseless)
[jira] [Commented] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610391#comment-15610391 ] Hadoop QA commented on HDFS-11064: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 74m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes | | Timed out junit tests | org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend | | | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement | | | org.apache.hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy | | | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics | | | org.apache.hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled | | | org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing | | | org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11064 | | JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835442/HDFS-11064.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 0ab0aaff6736 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 22ff0ef | | Default Java | 1.8.0_101 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17302/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17302/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17302/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus
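HDFS-11064 above proposes mentioning the NameNode's default RPC port in hdfs-default.xml. A sketch of the shape such an entry could take (illustrative wording, not the committed patch text; the 8020 port mentioned is the customary Hadoop 2.x default):

```xml
<property>
  <name>dfs.namenode.rpc-address</name>
  <value></value>
  <description>
    RPC address that handles all client requests, in the form host:port,
    e.g. nn1.example.com:8020. If empty, the authority of fs.defaultFS is
    used; when no port is given there, the default NN RPC port applies
    (8020 on Hadoop 2.x).
  </description>
</property>
```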
[jira] [Commented] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610388#comment-15610388 ] Hadoop QA commented on HDFS-11064: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hdfs.TestFileChecksum | | | org.apache.hadoop.hdfs.TestWriteReadStripedFile | | | org.apache.hadoop.hdfs.TestDFSPermission | | | org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter | | | org.apache.hadoop.hdfs.TestLeaseRecovery | | | org.apache.hadoop.tools.TestJMXGet | | | org.apache.hadoop.hdfs.server.balancer.TestBalancer | | | org.apache.hadoop.hdfs.web.TestWebHDFSForHA | | | org.apache.hadoop.hdfs.TestHFlush | | | org.apache.hadoop.hdfs.TestParallelUnixDomainRead | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11064 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835445/HDFS-11064.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux db6b4f89ef9e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build 
tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 22ff0ef | | Default Java | 1.8.0_101 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17303/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17303/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17303/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/brow
[jira] [Updated] (HDFS-10954) [SPS]: Provide mechanism to send blocks movement result back to NN from coordinator DN
[ https://issues.apache.org/jira/browse/HDFS-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-10954: Attachment: HDFS-10954-HDFS-10285-01.patch Attached a new patch fixing a few checkstyle warnings and the {{TestBPOfferService}} test case failures. The other test case failures are unrelated to the patch. > [SPS]: Provide mechanism to send blocks movement result back to NN from > coordinator DN > -- > > Key: HDFS-10954 > URL: https://issues.apache.org/jira/browse/HDFS-10954 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10954-HDFS-10285-00.patch, > HDFS-10954-HDFS-10285-01.patch > > > This jira is a follow-up task of HDFS-10884, which provides a mechanism to > collect all the success/failed block movement results on the > {{co-ordinator datanode}} side. The idea of this jira is to discuss an > efficient way to report these success/failed block movement results to the > namenode, so that the NN can take necessary action based on this > information. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11038) DiskBalancer: support running multiple commands in single test
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11038: Summary: DiskBalancer: support running multiple commands in single test (was: DiskBalancer: support running multiple commands under one setup of disk balancer) > DiskBalancer: support running multiple commands in single test > -- > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer follows/reuses one rule designed by the HDFS balancer: only > one instance is allowed to run at a time. This is correct in a production > system to avoid inconsistencies, but it's not ideal for writing and running > unit tests. For example, it should be possible to run the plan, execute, and > scan commands under one setup of disk balancer. The one-instance rule will > throw an exception complaining 'Another instance is running'. In such a case, > there's no way to do a full life-cycle test which involves a sequence of > commands.
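The one-instance rule described above can be sketched as a compare-and-set guard. This is an illustrative stand-in, not the actual DiskBalancer code; the class and method names are hypothetical:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of a one-instance rule (not the real DiskBalancer
// implementation): the first caller wins the running slot; any concurrent
// caller gets the "Another instance is running" complaint described above.
class SingleInstanceGuard {
    private final AtomicBoolean running = new AtomicBoolean(false);

    /** Claim the single running slot, or fail if another instance holds it. */
    void acquire() {
        if (!running.compareAndSet(false, true)) {
            throw new IllegalStateException("Another instance is running");
        }
    }

    /** Release the slot. A test-only hook like this is what would let a
     *  unit test run plan, execute, and scan sequentially under one setup. */
    void release() {
        running.set(false);
    }
}
```

With a release between commands, a full life-cycle test can drive a sequence of commands without tripping the guard, while production callers never release until the run completes.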
[jira] [Updated] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11064: - Attachment: HDFS-11064.001.patch > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/browse/HDFS-11064 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11064.001.patch > > > We updated the default ports at HDFS-9427. However, the NN's default RPC > ports aren't mentioned in hdfs-site.xml. It'd be more user-friendly if we > added them, maybe in the description.
[jira] [Updated] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11064: - Attachment: (was: HDFS-11064.001.patch) > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/browse/HDFS-11064 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Yiqun Lin >Priority: Minor > > We updated the default ports at HDFS-9427. However, the NN's default RPC > ports aren't mentioned in hdfs-site.xml. It'd be more user-friendly if we > added them, maybe in the description.
[jira] [Updated] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11064: - Attachment: HDFS-11064.001.patch > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/browse/HDFS-11064 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11064.001.patch > > > We updated the default ports at HDFS-9427. However, the NN's default RPC > ports aren't mentioned in hdfs-site.xml. It'd be more user-friendly if we > added them, maybe in the description.
[jira] [Updated] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-11064: - Status: Patch Available (was: Open) Attached a simple patch to make the change. > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/browse/HDFS-11064 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-11064.001.patch > > > We updated the default ports at HDFS-9427. However, the NN's default RPC > ports aren't mentioned in hdfs-site.xml. It'd be more user-friendly if we > added them, maybe in the description.
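The kind of change being proposed could look roughly like the following hdfs-default.xml fragment. This is a hypothetical sketch only, not the attached patch text; the description wording and the 9820 port value are assumptions (HDFS-9427 moved the NameNode RPC default off 8020 in 3.0.0-alpha1):

```xml
<!-- Hypothetical sketch; not the committed HDFS-11064 patch. -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value></value>
  <description>
    RPC address that handles all clients requests. If the port is
    omitted, the NameNode's default RPC port is used (9820 as of
    3.0.0-alpha1; see HDFS-9427).
  </description>
</property>
```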
[jira] [Updated] (HDFS-11001) Ozone:SCM: Add support for registerNode in SCM
[ https://issues.apache.org/jira/browse/HDFS-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11001: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [~xyao] Thanks for the code review. I have committed this to the feature branch. > Ozone:SCM: Add support for registerNode in SCM > -- > > Key: HDFS-11001 > URL: https://issues.apache.org/jira/browse/HDFS-11001 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11001-HDFS-7240.001.patch, > HDFS-11001-HDFS-7240.002.patch, HDFS-11001-HDFS-7240.003.patch, > HDFS-11001-HDFS-7240.004.patch > > > Adds support for a datanode registration. Right now SCM relies on Namenode > for the datanode registration. With this change we will be able to run SCM > independently if needed.
[jira] [Commented] (HDFS-11045) TestDirectoryScanner#testThrottling fails: Throttle is too permissive
[ https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610183#comment-15610183 ] Xiao Chen commented on HDFS-11045: -- Thanks Daniel, LGTM. Could you address Mr. Jenkins' 81-char complaint? +1 pending that. > TestDirectoryScanner#testThrottling fails: Throttle is too permissive > - > > Key: HDFS-11045 > URL: https://issues.apache.org/jira/browse/HDFS-11045 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.0.0-alpha2 >Reporter: John Zhuge >Assignee: Daniel Templeton >Priority: Minor > Attachments: HDFS-11045.001.patch, HDFS-11045.002.patch > > > TestDirectoryScanner.testThrottling:709 Throttle is too permissive > https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Commented] (HDFS-11047) Remove deep copies of FinalizedReplica to alleviate heap consumption on DataNode
[ https://issues.apache.org/jira/browse/HDFS-11047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610179#comment-15610179 ] Arpit Agarwal commented on HDFS-11047: -- Nice catch [~xiaobingo]. Thanks for reporting and fixing this. I agree with [~jpallas] that we can just change the behavior of getFinalizedBlocks as it is a private interface. We can document the requirement that the caller of {{getFinalizedBlocks}} first get the dataset lock via {{FsDatasetSpi#acquireDatasetLock}}. In addition to the deep copy there is an apparently unnecessary list to array conversion that you removed. I wasn't able to follow the source history past 2011 to see why it was introduced. IAC I can't think of any reason to retain it. > Remove deep copies of FinalizedReplica to alleviate heap consumption on > DataNode > > > Key: HDFS-11047 > URL: https://issues.apache.org/jira/browse/HDFS-11047 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, fs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11047.000.patch > > > DirectoryScanner does scan by deep copying FinalizedReplica. In a deployment > with 500,000+ blocks, we've seen the DN heap usage being accumulated to high > peaks very quickly. Deep copies of FinalizedReplica will make DN heap usage > even worse if directory scans are scheduled more frequently. This proposes > removing unnecessary deep copies since DirectoryScanner#scan already holds > lock of dataset. 
The sibling work is tracked by AMBARI-18694 > DirectoryScanner#scan > {code} > try (AutoCloseableLock lock = dataset.acquireDatasetLock()) { > for (Entry<String, ScanInfo[]> entry : diskReport.entrySet()) { > String bpid = entry.getKey(); > ScanInfo[] blockpoolReport = entry.getValue(); > > Stats statsRecord = new Stats(bpid); > stats.put(bpid, statsRecord); > LinkedList<ScanInfo> diffRecord = new LinkedList<ScanInfo>(); > diffs.put(bpid, diffRecord); > > statsRecord.totalBlocks = blockpoolReport.length; > List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid); /* deep copies here */ > {code} > FsDatasetImpl#getFinalizedBlocks > {code} > public List<ReplicaInfo> getFinalizedBlocks(String bpid) { > try (AutoCloseableLock lock = datasetLock.acquire()) { > ArrayList<ReplicaInfo> finalized = > new ArrayList<ReplicaInfo>(volumeMap.size(bpid)); > for (ReplicaInfo b : volumeMap.replicas(bpid)) { > if (b.getState() == ReplicaState.FINALIZED) { > finalized.add(new ReplicaBuilder(ReplicaState.FINALIZED) > .from(b).build()); /* deep copies here */ > } > } > return finalized; > } > } > {code}
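The fix argued for above can be sketched in isolation: while the caller holds the dataset lock, the getter can hand back the existing replica references instead of building a per-element copy. The classes below are illustrative stand-ins for ReplicaInfo/FsDatasetImpl, not the Hadoop code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative stand-in for ReplicaInfo (hypothetical, not a Hadoop class).
class Replica {
    final long blockId;
    Replica(long blockId) { this.blockId = blockId; }
}

// Illustrative stand-in for FsDatasetImpl. Returning references is safe only
// because the caller contract requires holding datasetLock for the scan.
class Dataset {
    final ReentrantLock datasetLock = new ReentrantLock();
    private final List<Replica> replicas = new ArrayList<>();

    void addFinalized(Replica r) {
        datasetLock.lock();
        try { replicas.add(r); } finally { datasetLock.unlock(); }
    }

    /**
     * Caller must already hold datasetLock for the duration of its scan
     * (mirroring the documented FsDatasetSpi#acquireDatasetLock requirement).
     * The returned list is a new container, but the elements are the live
     * Replica objects -- no per-element deep copy.
     */
    List<Replica> getFinalizedReferences() {
        List<Replica> out = new ArrayList<>(replicas.size());
        out.addAll(replicas);  // copies references only, not replica state
        return out;
    }
}
```

The heap win is that each scan now allocates only one list of pointers rather than a full clone of every finalized replica's metadata.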
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610176#comment-15610176 ] Xiaobing Zhou commented on HDFS-11038: -- The test failure is unrelated to the patch. I marked it in HDFS-10406. > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer follows/reuses one rule designed by the HDFS balancer: only > one instance is allowed to run at a time. This is correct in a production > system to avoid inconsistencies, but it's not ideal for writing and running > unit tests. For example, it should be possible to run the plan, execute, and > scan commands under one setup of disk balancer. The one-instance rule will > throw an exception complaining 'Another instance is running'. In such a case, > there's no way to do a full life-cycle test which involves a sequence of > commands.
[jira] [Commented] (HDFS-10406) Test failure on trunk: TestReconstructStripedBlocks
[ https://issues.apache.org/jira/browse/HDFS-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610173#comment-15610173 ] Xiaobing Zhou commented on HDFS-10406: -- Another failure with different output: {noformat} Stacktrace java.lang.AssertionError at org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks.testCountLiveReplicas(TestReconstructStripedBlocks.java:326) {noformat} > Test failure on trunk: TestReconstructStripedBlocks > --- > > Key: HDFS-10406 > URL: https://issues.apache.org/jira/browse/HDFS-10406 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou > > It's been noticed there are some test failures: TestEditLog and > TestReconstructStripedBlocks
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610164#comment-15610164 ] Kai Zheng commented on HDFS-10996: -- A question maybe not closely related to this, but one that may affect the APIs: would we allow setting an erasure coding policy on an existing file? That is: if the file was just created and hasn't been written yet, setting the EC policy would make it be written in striping mode; if the file already has content, setting the EC policy would transform it into striping mode (for a replica file) or into the target striping mode (for a file that is already striped, moving from one EC policy to another). The transformation itself could be done elsewhere. [~andrew.wang] and [~zhz], any thought here? Thanks! > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies.
[jira] [Commented] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610159#comment-15610159 ] Xiaobing Zhou commented on HDFS-11064: -- [~andrew.wang] thanks for reporting this. Are you referring to hdfs-default.xml? +1 for the proposal, since the NN RPC port is the one most frequently used from the CLI. > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/browse/HDFS-11064 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Yiqun Lin >Priority: Minor > > We updated the default ports at HDFS-9427. However, the NN's default RPC > ports aren't mentioned in hdfs-site.xml. It'd be more user-friendly if we > added them, maybe in the description.
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610134#comment-15610134 ] Hadoop QA commented on HDFS-11038: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 35s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 77m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | Timed out junit tests | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement | | | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | org.apache.hadoop.fs.TestEnhancedByteBufferAccess | | | org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM | | | org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | | | org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | | org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA | | | org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA | | | org.apache.hadoop.hdfs.server.namenode.TestAuditLogs | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11038 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835426/HDFS-11038.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0590dcfda5a2 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 22ff0ef | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-H
[jira] [Assigned] (HDFS-11064) Mention the default NN rpc ports in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin reassigned HDFS-11064: Assignee: Yiqun Lin > Mention the default NN rpc ports in hdfs-default.xml > > > Key: HDFS-11064 > URL: https://issues.apache.org/jira/browse/HDFS-11064 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Yiqun Lin >Priority: Minor > > We updated the default ports at HDFS-9427. However, the NN's default RPC > ports aren't mentioned in hdfs-site.xml. It'd be more user-friendly if we > added them, maybe in the description.
[jira] [Commented] (HDFS-11031) Add additional unit test for DataNode startup behavior when volumes fail
[ https://issues.apache.org/jira/browse/HDFS-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610111#comment-15610111 ] Hadoop QA commented on HDFS-11031: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 48m 48s{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_101 Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeLifeline | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:b59b8b7 | | JIRA Issue | HDFS-11031 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835419/HDFS-11031-branch-2.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0da42ef2bc94 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 2aed61d | | Default Java | 1.7.0_111 | | Multi-JDK versions | /usr/lib/jvm
[jira] [Commented] (HDFS-11055) Update log4j.properties for httpfs to improve test logging
[ https://issues.apache.org/jira/browse/HDFS-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610076#comment-15610076 ] Xiao Chen commented on HDFS-11055: -- Thanks [~jojochuang] for the patch! The change LGTM. Having more logs in tests won't hurt. I'm slightly uncomfortable about the pre-commit though, since it didn't run any actual tests. I'm +1 if you can confirm that, with this change, all existing tests pass. (I remember tests can sometimes depend on stdout, like shell tests / log tests, etc.) > Update log4j.properties for httpfs to improve test logging > -- > > Key: HDFS-11055 > URL: https://issues.apache.org/jira/browse/HDFS-11055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs, test >Affects Versions: 0.23.1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11055.001.patch > > > I am debugging an httpfs issue, but the existing log4j.properties does not > show execution logs in editors such as IntelliJ or in Jenkins. This makes > debugging impossible. > Filing this jira to improve this.
[jira] [Updated] (HDFS-11055) Update log4j.properties for httpfs to improve test logging
[ https://issues.apache.org/jira/browse/HDFS-11055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-11055: - Component/s: test > Update log4j.properties for httpfs to improve test logging > -- > > Key: HDFS-11055 > URL: https://issues.apache.org/jira/browse/HDFS-11055 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs, test >Affects Versions: 0.23.1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11055.001.patch > > > I am debugging an httpfs issue but the existing log4j.properties does not show > execution logs in editors such as IntelliJ or in Jenkins. This makes > debugging impossible. > File this jira to improve this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11034) Provide a command line tool to clear decommissioned DataNode information from the NameNode without restarting.
[ https://issues.apache.org/jira/browse/HDFS-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610068#comment-15610068 ] Chris Nauroth commented on HDFS-11034: -- Hello [~GergelyNovak]. If the decommissioned host is removed from the {{dfs.hosts.exclude}} file, followed by running {{hdfs dfsadmin -refreshNodes}}, then the host is no longer considered to be excluded. If the DataNode process is still running, or if it's restarted accidentally, then that DataNode will re-register with the NameNode, come back into service and become a candidate for writing new blocks. I was imagining a new workflow, where the host remains decommissioned, but the administrator has a way to clear out the in-memory tracked state about that node. It's interesting that you brought up the exclude file. Since that's already the existing mechanism for inclusion/exclusion of hosts, I wonder if there is a way to enhance it to cover this use case, so that administrators wouldn't need to learn a new command. I'll think about it more (and comments are welcome from others who have ideas too). > Provide a command line tool to clear decommissioned DataNode information from > the NameNode without restarting. > -- > > Key: HDFS-11034 > URL: https://issues.apache.org/jira/browse/HDFS-11034 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Chris Nauroth >Assignee: Gergely Novák > > Information about decommissioned DataNodes remains tracked in the NameNode > for the entire NameNode process lifetime. Currently, the only way to clear > this information is to restart the NameNode. This issue proposes to add a > way to clear this information online, without requiring a process restart. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
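The exclude-file workflow described in the comment above can be sketched as follows. This is a minimal sketch assuming a typical configuration; the exclude-file path and hostname (`/etc/hadoop/conf/dfs.exclude`, `dn1.example.com`) are hypothetical placeholders, not values from this issue.

```shell
# Hypothetical host and path; the dfs.hosts.exclude property in
# hdfs-site.xml names the real exclude file on your cluster.

# 1. Drop the decommissioned host from the exclude file.
sed -i '/^dn1\.example\.com$/d' /etc/hadoop/conf/dfs.exclude

# 2. Ask the NameNode to re-read the include/exclude files.
hdfs dfsadmin -refreshNodes

# Caveat from the comment above: if the DataNode process on that host is
# still running (or is restarted), it re-registers with the NameNode and
# becomes a candidate for writing new blocks again.
```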
[jira] [Commented] (HDFS-7343) HDFS smart storage management
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610058#comment-15610058 ] Anu Engineer commented on HDFS-7343: [~drankye] Thanks for the confirmation on Kafka. It does address large part of my concern. I look forward to seeing an updated design doc that contains the final design and what we are solving in the first iteration and how. > HDFS smart storage management > - > > Key: HDFS-7343 > URL: https://issues.apache.org/jira/browse/HDFS-7343 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Wei Zhou > Attachments: HDFS-Smart-Storage-Management.pdf > > > As discussed in HDFS-7285, it would be better to have a comprehensive and > flexible storage policy engine considering file attributes, metadata, data > temperature, storage type, EC codec, available hardware capabilities, > user/application preference and etc. > Modified the title for re-purpose. > We'd extend this effort some bit and aim to work on a comprehensive solution > to provide smart storage management service in order for convenient, > intelligent and effective utilizing of erasure coding or replicas, HDFS cache > facility, HSM offering, and all kinds of tools (balancer, mover, disk > balancer and so on) in a large cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7343) HDFS smart storage management
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610047#comment-15610047 ] Kai Zheng commented on HDFS-7343: - bq. I am still concerned with introduction of Kafka and injecting a cluster wide dependency graph. Hi Anu, looks like according to latest discussion with Andrew, the {{KafkaService}} isn't a must or depended, so hopefully this solves your concern. > HDFS smart storage management > - > > Key: HDFS-7343 > URL: https://issues.apache.org/jira/browse/HDFS-7343 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Wei Zhou > Attachments: HDFS-Smart-Storage-Management.pdf > > > As discussed in HDFS-7285, it would be better to have a comprehensive and > flexible storage policy engine considering file attributes, metadata, data > temperature, storage type, EC codec, available hardware capabilities, > user/application preference and etc. > Modified the title for re-purpose. > We'd extend this effort some bit and aim to work on a comprehensive solution > to provide smart storage management service in order for convenient, > intelligent and effective utilizing of erasure coding or replicas, HDFS cache > facility, HSM offering, and all kinds of tools (balancer, mover, disk > balancer and so on) in a large cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11031) Add additional unit test for DataNode startup behavior when volumes fail
[ https://issues.apache.org/jira/browse/HDFS-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610009#comment-15610009 ] Hadoop QA commented on HDFS-11031: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade | | | org.apache.hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter | | | org.apache.hadoop.hdfs.server.datanode.TestDiskError | | | org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache | | | org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | | | org.apache.hadoop.cli.TestHDFSCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11031 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835422/HDFS-11031.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 00a607281cc0 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 22ff0ef | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17300/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17300/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17300/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatic
[jira] [Commented] (HDFS-7343) HDFS smart storage management
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610007#comment-15610007 ] Kai Zheng commented on HDFS-7343: - Hi [~andrew.wang], About SSD cases, I thought of an original input for the motivation from a large Hadoop deployment in China. The user evaluated how to deploy certain amounts of SSDs via HSM to see if any help to speed up some workloads. The overall pain mentioned was they don't want to maintain by their operators manually what data should be kept in SSDs and then when to move out as needed or better according to some condition change. I agree fixed SLOs are important but I'm not sure that's all the cases. In interactive queries, for example, data miners may try different queries adjusting the conditions, combinations or the like, against some same data sets. We would expect the later runnings should be faster though understand earlier runnings are slow. For repeatedly running queries like daily jobs, it may be natural to expect them to be faster given there are enough SSDs during that time. > HDFS smart storage management > - > > Key: HDFS-7343 > URL: https://issues.apache.org/jira/browse/HDFS-7343 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Wei Zhou > Attachments: HDFS-Smart-Storage-Management.pdf > > > As discussed in HDFS-7285, it would be better to have a comprehensive and > flexible storage policy engine considering file attributes, metadata, data > temperature, storage type, EC codec, available hardware capabilities, > user/application preference and etc. > Modified the title for re-purpose. 
> We'd extend this effort some bit and aim to work on a comprehensive solution > to provide smart storage management service in order for convenient, > intelligent and effective utilizing of erasure coding or replicas, HDFS cache > facility, HSM offering, and all kinds of tools (balancer, mover, disk > balancer and so on) in a large cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
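For context, the manual SSD workflow described in the comment above (placing hot data on SSD via HSM, then remembering to move it out later) is what operators do today with the storage-policy CLI; the proposal aims to automate it. A hedged sketch, with `/warehouse/hot` as a hypothetical path:

```shell
# Pin a hot data set to SSD, then later demote it; the path is illustrative.
hdfs storagepolicies -setStoragePolicy -path /warehouse/hot -policy ALL_SSD
hdfs mover -p /warehouse/hot   # migrate existing blocks to match the policy

# When the data cools, operators must remember to move it back out:
hdfs storagepolicies -setStoragePolicy -path /warehouse/hot -policy HOT
hdfs mover -p /warehouse/hot
```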
[jira] [Commented] (HDFS-11001) Ozone:SCM: Add support for registerNode in SCM
[ https://issues.apache.org/jira/browse/HDFS-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609977#comment-15609977 ] Xiaoyu Yao commented on HDFS-11001: --- +1 for the latest patch. > Ozone:SCM: Add support for registerNode in SCM > -- > > Key: HDFS-11001 > URL: https://issues.apache.org/jira/browse/HDFS-11001 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11001-HDFS-7240.001.patch, > HDFS-11001-HDFS-7240.002.patch, HDFS-11001-HDFS-7240.003.patch, > HDFS-11001-HDFS-7240.004.patch > > > Adds support for a datanode registration. Right now SCM relies on Namenode > for the datanode registration. With this change we will be able to run SCM > independently if needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609967#comment-15609967 ] Anu Engineer commented on HDFS-11038: - +1, pending Jenkins. LGTM. Thanks for updating the patch. > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer follows/reuses one rule designed by HDFS balancer, that is, > only one instance is allowed to run at the same time. This is correct in > production system to avoid any inconsistencies, but it's not ideal to write > and run unit tests. For example, it should be allowed run plan, execute, scan > commands under one setup of disk balancer. One instance rule will throw > exception by complaining 'Another instance is running'. In such a case, > there's no way to do a full life cycle tests which involves a sequence of > commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11001) Ozone:SCM: Add support for registerNode in SCM
[ https://issues.apache.org/jira/browse/HDFS-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609949#comment-15609949 ] Anu Engineer commented on HDFS-11001: - Test failures are not related to this patch. Verified that these tests work correct on the local machine. > Ozone:SCM: Add support for registerNode in SCM > -- > > Key: HDFS-11001 > URL: https://issues.apache.org/jira/browse/HDFS-11001 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11001-HDFS-7240.001.patch, > HDFS-11001-HDFS-7240.002.patch, HDFS-11001-HDFS-7240.003.patch, > HDFS-11001-HDFS-7240.004.patch > > > Adds support for a datanode registration. Right now SCM relies on Namenode > for the datanode registration. With this change we will be able to run SCM > independently if needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609941#comment-15609941 ] Xiaobing Zhou commented on HDFS-11038: -- Posted a patch v001 to cover the changes aforementioned. > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer follows/reuses one rule designed by HDFS balancer, that is, > only one instance is allowed to run at the same time. This is correct in > production system to avoid any inconsistencies, but it's not ideal to write > and run unit tests. For example, it should be allowed run plan, execute, scan > commands under one setup of disk balancer. One instance rule will throw > exception by complaining 'Another instance is running'. In such a case, > there's no way to do a full life cycle tests which involves a sequence of > commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11038: - Attachment: HDFS-11038.001.patch > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch, HDFS-11038.001.patch > > > Disk balancer follows/reuses one rule designed by HDFS balancer, that is, > only one instance is allowed to run at the same time. This is correct in > production system to avoid any inconsistencies, but it's not ideal to write > and run unit tests. For example, it should be allowed run plan, execute, scan > commands under one setup of disk balancer. One instance rule will throw > exception by complaining 'Another instance is running'. In such a case, > there's no way to do a full life cycle tests which involves a sequence of > commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609920#comment-15609920 ] Xiaobing Zhou commented on HDFS-11038: -- Sure, we can merge these two, for example Writing plan to: /system/diskbalancer/2016-Oct-26-15-29-58 /system/diskbalancer/2016-Oct-26-15-29-58/dbe1178e-a4fc-4cb7-9419-5fbf6e0f67a3.plan.json will be like Writing plan to: /system/diskbalancer/2016-Oct-26-15-29-58/dbe1178e-a4fc-4cb7-9419-5fbf6e0f67a3.plan.json Thanks [~anu] > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer follows/reuses one rule designed by HDFS balancer, that is, > only one instance is allowed to run at the same time. This is correct in > production system to avoid any inconsistencies, but it's not ideal to write > and run unit tests. For example, it should be allowed run plan, execute, scan > commands under one setup of disk balancer. One instance rule will throw > exception by complaining 'Another instance is running'. In such a case, > there's no way to do a full life cycle tests which involves a sequence of > commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
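For reference, the full command sequence constrained by the one-instance rule (plan, then execute, then query) looks roughly like this; the hostname is a hypothetical placeholder, and the plan path is the one quoted in the comment above:

```shell
# Generate a plan for a datanode; this writes <uuid>.plan.json under a
# timestamped /system/diskbalancer/ directory, as in the console output above.
hdfs diskbalancer -plan dn1.example.com

# Execute the generated plan on that datanode.
hdfs diskbalancer -execute /system/diskbalancer/2016-Oct-26-15-29-58/dbe1178e-a4fc-4cb7-9419-5fbf6e0f67a3.plan.json

# Check progress of the running plan.
hdfs diskbalancer -query dn1.example.com
```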
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609877#comment-15609877 ] Anu Engineer commented on HDFS-11038: - Just wondering if it makes sense to print both of this to console ? Should we modify the writing file info trace to have the full path if needed .. > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer follows/reuses one rule designed by HDFS balancer, that is, > only one instance is allowed to run at the same time. This is correct in > production system to avoid any inconsistencies, but it's not ideal to write > and run unit tests. For example, it should be allowed run plan, execute, scan > commands under one setup of disk balancer. One instance rule will throw > exception by complaining 'Another instance is running'. In such a case, > there's no way to do a full life cycle tests which involves a sequence of > commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10909) De-duplicate code in ErasureCodingWorker#initializeStripedReadThreadPool and DFSClient#initThreadsNumForStripedReads
[ https://issues.apache.org/jira/browse/HDFS-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609873#comment-15609873 ] Manoj Govindassamy commented on HDFS-10909: --- [~eddyxu], [~jojochuang], can you please review the patch ? > De-duplicate code in ErasureCodingWorker#initializeStripedReadThreadPool and > DFSClient#initThreadsNumForStripedReads > > > Key: HDFS-10909 > URL: https://issues.apache.org/jira/browse/HDFS-10909 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HDFS-10909.01.patch > > > The two methods are mostly the same. Maybe it make sense to deduplicate the > code. A good place to place the code is DFSUtilClient -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
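One way to de-duplicate, sketched with JDK types only: hoist the pool construction into a shared static helper (e.g. in DFSUtilClient, as the description suggests) that both ErasureCodingWorker and DFSClient call. The class and method names below are illustrative, and the constructor arguments mirror the general shape of the existing striped-read pools rather than their exact values.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical shared helper: both call sites would delegate here instead
 * of each building its own striped-read thread pool.
 */
public class StripedReadPoolUtil {
  public static ThreadPoolExecutor newStripedReadPool(int numThreads) {
    final AtomicInteger count = new AtomicInteger(1);
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        1, numThreads, 60, TimeUnit.SECONDS,
        new SynchronousQueue<>(),                 // hand off directly to a thread
        r -> {
          Thread t = new Thread(r, "stripedRead-" + count.getAndIncrement());
          t.setDaemon(true);                      // don't block JVM shutdown
          return t;
        },
        new ThreadPoolExecutor.CallerRunsPolicy() // degrade gracefully when full
    );
    // Let idle threads time out instead of lingering.
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }
}
```

With a helper like this, each caller keeps only its own configuration lookup (thread-count key and default) and passes the resolved value in.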
[jira] [Closed] (HDFS-10950) Add unit tests to verify ACLs in safemode
[ https://issues.apache.org/jira/browse/HDFS-10950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu closed HDFS-10950. > Add unit tests to verify ACLs in safemode > - > > Key: HDFS-10950 > URL: https://issues.apache.org/jira/browse/HDFS-10950 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs, test >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > This proposes adding unit tests to validate that getting Acls works when > the namenode is in safemode, while setting Acls fails. Specifically, the following > needs to be covered in newly added tests. > test_getfacl_recursive > test_resetacl > test_setfacl_default -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
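The behavior these tests verify can also be reproduced manually. A minimal sketch, assuming a hypothetical directory `/data` that already carries an ACL:

```shell
hdfs dfsadmin -safemode enter

# Read-only ACL operations should succeed while in safemode:
hdfs dfs -getfacl -R /data

# Mutations should be rejected while the namenode is in safemode:
hdfs dfs -setfacl -m user:alice:rwx /data   # expected to fail

hdfs dfsadmin -safemode leave
```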
[jira] [Closed] (HDFS-11016) Add unit tests for HDFS command 'dfsadmin -set/clrQuota'
[ https://issues.apache.org/jira/browse/HDFS-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu closed HDFS-11016. > Add unit tests for HDFS command 'dfsadmin -set/clrQuota' > > > Key: HDFS-11016 > URL: https://issues.apache.org/jira/browse/HDFS-11016 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: fs, shell, test > Attachments: HDFS-11016.000.patch > > > This proposes adding a bunch of unit tests for command 'dfsadmin setQuota' > and 'dfsadmin clrQuota'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
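For reference, the command pair under test; a sketch with hypothetical paths and limits:

```shell
# Name quota: cap the number of files and directories under the path.
hdfs dfsadmin -setQuota 100 /user/alice

# Space quota: cap the raw bytes consumed (replication counts against it).
hdfs dfsadmin -setSpaceQuota 10g /user/alice

# Inspect the current quotas and usage.
hdfs dfs -count -q /user/alice

# Clear both quotas.
hdfs dfsadmin -clrQuota /user/alice
hdfs dfsadmin -clrSpaceQuota /user/alice
```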
[jira] [Created] (HDFS-11066) Improve test coverage for Java coder/ISA-L native coder
Wei-Chiu Chuang created HDFS-11066: -- Summary: Improve test coverage for Java coder/ISA-L native coder Key: HDFS-11066 URL: https://issues.apache.org/jira/browse/HDFS-11066 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 3.0.0-alpha1 Reporter: Wei-Chiu Chuang HDFS-10935 found and fixed a bug that was only exposed without using the native ISA-L library. We should improve test coverage for both the Java and native coders, and even for mixed scenarios (e.g. some nodes use the Java coder while others use the native coder) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11065) Add space quota tests for heterogenous storages
[ https://issues.apache.org/jira/browse/HDFS-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11065: - Description: There aren't many tests to verify space quota for heterogenous storages. It's quite important to ensure space quota works well in modern clusters typically built out of diversified storage structures. This proposes adding new tests to cover the following scenarios. # Tests space quota for storage policy = ALL_SSD # Tests if overall space quota exceeds even if particular storage space quota is available # Tests spaceQuota for storage policy = Cold # Tests space quota for storage policy = WARM # Tests if quota exceeds for DISK storage even if overall space quota is available # Tests if changing replication factor results in copying file as quota doesn't exceed # Tests space quota with append operation # Sanity Test : Checks if copy command fails if quota is exceeded # Tests if clear quota per heterogenous storage doesn't result in clearing quota for another storage # Tests space quota with remove operation # Tests space quota with Snapshot operation # Tests space quota with truncate operation # Tests space quota remains valid even with Namenode restart > Add space quota tests for heterogenous storages > --- > > Key: HDFS-11065 > URL: https://issues.apache.org/jira/browse/HDFS-11065 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: fs, test > > There aren't many tests to verify space quota for heterogenous storages. It's > quite important to ensure space quota works well in modern clusters typically > built out of diversified storage structures. This proposes adding new tests > to cover the following scenarios.
> # Tests space quota for storage policy = ALL_SSD > # Tests if overall space quota exceeds even if particular storage space quota > is available > # Tests spaceQuota for storage policy = Cold > # Tests space quota for storage policy = WARM > # Tests if quota exceeds for DISK storage even if overall space quota is > available > # Tests if changing replication factor results in copying file as quota > doesn't exceed > # Tests space quota with append operation > # Sanity Test : Checks if copy command fails if quota is exceeded > # Tests if clear quota per heterogenous storage doesn't result in clearing > quota for another storage > # Tests space quota with remove operation > # Tests space quota with Snapshot operation > # Tests space quota with truncate operation > # Tests space quota remains valid even with Namenode restart -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
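Several of the scenarios above exercise per-storage-type quotas, which are set with the `-storageType` option of `setSpaceQuota`/`clrSpaceQuota`. A sketch with a hypothetical path:

```shell
# Route new data under /hot to SSD, then cap how much SSD it may use.
hdfs storagepolicies -setStoragePolicy -path /hot -policy ALL_SSD
hdfs dfsadmin -setSpaceQuota 100g /hot                   # overall space quota
hdfs dfsadmin -setSpaceQuota 20g -storageType SSD /hot   # SSD-specific quota

# A write fails if either the overall quota or the SSD quota is exceeded.

# Clearing the SSD quota leaves the overall quota (and any other
# storage-type quotas) in place.
hdfs dfsadmin -clrSpaceQuota -storageType SSD /hot
```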
[jira] [Commented] (HDFS-10935) TestFileChecksum fails in some cases
[ https://issues.apache.org/jira/browse/HDFS-10935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609840#comment-15609840 ] Wei-Chiu Chuang commented on HDFS-10935: Correct, Andrew. I forgot to address that concern. Filing a follow-up jira now. > TestFileChecksum fails in some cases > > > Key: HDFS-10935 > URL: https://issues.apache.org/jira/browse/HDFS-10935 > Project: Hadoop HDFS > Issue Type: Bug > Environment: JDK 1.8.0_91 on Mac OS X Yosemite 10.10.5 >Reporter: Wei-Chiu Chuang >Assignee: SammiChen > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10935-v1.patch > > > On my Mac, TestFileChecksum has been been failing since HDFS-10460. However, > the jenkins jobs have not reported the failures. Maybe it's an issue with my > Mac or JDK. > 9 out of 21 tests failed. > {noformat} > java.lang.AssertionError: Checksum mismatches! > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:227) > at > org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery10(TestFileChecksum.java:336) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > 
{noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11065) Add space quota tests for heterogenous storages
[ https://issues.apache.org/jira/browse/HDFS-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11065: - Component/s: (was: test) hdfs > Add space quota tests for heterogenous storages > --- > > Key: HDFS-11065 > URL: https://issues.apache.org/jira/browse/HDFS-11065 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: fs, test > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11065) Add space quota tests for heterogenous storages
[ https://issues.apache.org/jira/browse/HDFS-11065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11065: - Labels: fs test (was: ) > Add space quota tests for heterogenous storages > --- > > Key: HDFS-11065 > URL: https://issues.apache.org/jira/browse/HDFS-11065 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: fs, test > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11065) Add space quota tests for heterogenous storages
Xiaobing Zhou created HDFS-11065: Summary: Add space quota tests for heterogenous storages Key: HDFS-11065 URL: https://issues.apache.org/jira/browse/HDFS-11065 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11031) Add additional unit test for DataNode startup behavior when volumes fail
[ https://issues.apache.org/jira/browse/HDFS-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11031: - Attachment: HDFS-11031.004.patch Thank you [~brahmareddy] for your testing on Windows. Unfortunately I don't have a Windows dev machine for testing, and was not aware of the problem. {{assumeNotWindows()}} works well to skip the newly added tests. Perhaps this has come up before, but FWIW, should we set up Jenkins builds for Windows? > Add additional unit test for DataNode startup behavior when volumes fail > > > Key: HDFS-11031 > URL: https://issues.apache.org/jira/browse/HDFS-11031 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-11031-branch-2.001.patch, > HDFS-11031-branch-2.002.patch, HDFS-11031-branch-2.003.patch, > HDFS-11031-branch-2.004.patch, HDFS-11031.000.patch, HDFS-11031.001.patch, > HDFS-11031.002.patch, HDFS-11031.003.patch, HDFS-11031.004.patch > > > There are several cases to add in {{TestDataNodeVolumeFailure}}: > - DataNode should not start in case of volume failure > - DataNode should not start in case of lacking data dir read/write permission > - ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
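The Windows-skip guard discussed above ultimately reduces to an os.name check. A minimal, hedged sketch in plain Java (the class and method names are invented here; in a real JUnit test the boolean would be fed to Assume.assumeFalse so the test is skipped rather than failed on Windows):

```java
// Minimal sketch of the OS detection behind a Windows-skip guard like
// assumeNotWindows(). In a JUnit test, the result would be passed to
// org.junit.Assume.assumeFalse(...) so the newly added volume-failure
// tests are skipped on Windows instead of failing. Names are illustrative.
class OsCheck {
    // Take the os.name value as a parameter so the check is easy to test.
    static boolean isWindows(String osName) {
        return osName.toLowerCase().startsWith("windows");
    }

    static boolean isWindows() {
        return isWindows(System.getProperty("os.name"));
    }
}
```

Skipping (rather than failing) keeps the test report honest on platforms where the behavior under test, such as POSIX data-dir permissions, does not apply.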
[jira] [Commented] (HDFS-7343) HDFS smart storage management
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609821#comment-15609821 ] Anu Engineer commented on HDFS-7343: +1 on [~andrew.wang]'s comment. I am still concerned about the introduction of Kafka and the cluster-wide dependency it injects. As I said earlier, I do think this is a good effort, but I concur with Andrew that we should focus our efforts, prove that it is useful, and then make overarching changes. > HDFS smart storage management > - > > Key: HDFS-7343 > URL: https://issues.apache.org/jira/browse/HDFS-7343 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Wei Zhou > Attachments: HDFS-Smart-Storage-Management.pdf > > > As discussed in HDFS-7285, it would be better to have a comprehensive and > flexible storage policy engine considering file attributes, metadata, data > temperature, storage type, EC codec, available hardware capabilities, > user/application preference, etc. > Modified the title to re-purpose the issue. > We'd extend this effort a bit and aim to work on a comprehensive solution > providing a smart storage management service for convenient, > intelligent and effective use of erasure coding or replicas, the HDFS cache > facility, HSM offerings, and all kinds of tools (balancer, mover, disk > balancer and so on) in a large cluster. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609816#comment-15609816 ] Xiaobing Zhou commented on HDFS-11038: -- [~anu], planFileFullName is the prefix path plus planFileName. For example: planFileName: 1b7b7765-2555-4c6b-ace1-f13321dae758.plan.json planFileFullName: /system/diskbalancer/2016-Oct-26-14-57-10/1b7b7765-2555-4c6b-ace1-f13321dae758.plan.json > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer reuses a rule from the HDFS balancer: only one instance is > allowed to run at a time. This is correct in a production system to avoid > inconsistencies, but it's not ideal for writing and running unit tests. For > example, it should be possible to run the plan, execute, and scan commands > under one setup of the disk balancer. The one-instance rule will throw an > exception complaining 'Another instance is running'. In such a case, there's > no way to do a full life-cycle test that involves a sequence of commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10935) TestFileChecksum fails in some cases
[ https://issues.apache.org/jira/browse/HDFS-10935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609815#comment-15609815 ] Andrew Wang commented on HDFS-10935: Did we file a follow-on JIRA to test both the Java coder as well as ISA-L in the unit tests? This is important for coverage. > TestFileChecksum fails in some cases > > > Key: HDFS-10935 > URL: https://issues.apache.org/jira/browse/HDFS-10935 > Project: Hadoop HDFS > Issue Type: Bug > Environment: JDK 1.8.0_91 on Mac OS X Yosemite 10.10.5 >Reporter: Wei-Chiu Chuang >Assignee: SammiChen > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-10935-v1.patch > > > On my Mac, TestFileChecksum has been failing since HDFS-10460. However, > the Jenkins jobs have not reported the failures. Maybe it's an issue with my > Mac or JDK. > 9 out of 21 tests failed. > {noformat} > java.lang.AssertionError: Checksum mismatches! > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:227) > at > org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery10(TestFileChecksum.java:336) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11060) make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
[ https://issues.apache.org/jira/browse/HDFS-11060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609793#comment-15609793 ] Andrew Wang commented on HDFS-11060: Let's make it paginated rather than just allowing a larger batch. > make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable > - > > Key: HDFS-11060 > URL: https://issues.apache.org/jira/browse/HDFS-11060 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Lantao Jin >Priority: Minor > > Currently, the easiest way to determine which blocks are missing is the NN web > UI or JMX. Unfortunately, because > DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED=100 is hard-coded in FSNamesystem, > only 100 missing blocks can be returned by the UI and JMX. Even the result of the URL > "https://nn:50070/fsck?listcorruptfileblocks=1&path=%2F"; is limited by this > hard-coded value. > I know fsck can return more than 100 results, but for security reasons > (Kerberos) it is very hard to integrate into customer programs and > scripts. > So I think we should add a configurable variable "maxCorruptFileBlocksReturned" to > fix the above case. > If the community thinks it's worth doing, I will patch this. If not, please > feel free to tell me why. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
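Andrew's pagination idea can be sketched generically: rather than raising the single-response cap, callers pass a cursor and a page size and iterate until the page comes back empty. This is an illustrative sketch only; the class, method, and cursor representation are invented here, not the eventual HDFS API.

```java
import java.util.List;

// Illustrative cursor-based pagination over a listing of corrupt file
// blocks: return the page starting at 'cursor'; the caller advances the
// cursor by the page size until an empty page comes back. All names here
// are hypothetical.
class CorruptBlockPager {
    static List<String> page(List<String> all, int cursor, int pageSize) {
        int from = Math.min(Math.max(cursor, 0), all.size());
        int to = Math.min(from + pageSize, all.size());
        return all.subList(from, to);
    }
}
```

Pagination bounds the size of any single RPC/HTTP response regardless of how many corrupt blocks exist, which is why it scales better than a configurable (but still single-shot) batch limit.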
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609790#comment-15609790 ] Hadoop QA commented on HDFS-11038: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 70m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade | | | org.apache.hadoop.hdfs.TestFileCreation | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter | | | org.apache.hadoop.hdfs.TestFileAppend3 | | | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer | | | org.apache.hadoop.hdfs.TestLeaseRecovery | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation | | | org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11038 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835415/HDFS-11038.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 74f3221344b9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f511cc8 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17298/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17298/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apac
[jira] [Updated] (HDFS-11031) Add additional unit test for DataNode startup behavior when volumes fail
[ https://issues.apache.org/jira/browse/HDFS-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11031: - Attachment: HDFS-11031-branch-2.004.patch > Add additional unit test for DataNode startup behavior when volumes fail > > > Key: HDFS-11031 > URL: https://issues.apache.org/jira/browse/HDFS-11031 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, test >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-11031-branch-2.001.patch, > HDFS-11031-branch-2.002.patch, HDFS-11031-branch-2.003.patch, > HDFS-11031-branch-2.004.patch, HDFS-11031.000.patch, HDFS-11031.001.patch, > HDFS-11031.002.patch, HDFS-11031.003.patch > > > There are several cases to add in {{TestDataNodeVolumeFailure}}: > - DataNode should not start in case of volumes failure > - DataNode should not start in case of lacking data dir read/write permission > - ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7343) HDFS smart storage management
[ https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609772#comment-15609772 ] Andrew Wang commented on HDFS-7343: --- I'm not opposed to a branch, but I'd like to see a design doc rev that clarifies what use cases are being targeted for the first iteration of this work, and the corresponding implementation plan. Like I said in an earlier comment, I think archival use cases are the most important for end users, and can be handled with a pretty simple system by looking at atimes / ctimes for paths. The SSD stuff I'm less convinced of, due to the difficulty of providing reliable application-level SLOs. I think the best solutions here need to leverage application-level information about working sets and priorities from YARN and YARN apps. This is much more accurate than trying to determine the working sets via HDFS or OS-level information. > HDFS smart storage management > - > > Key: HDFS-7343 > URL: https://issues.apache.org/jira/browse/HDFS-7343 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Wei Zhou > Attachments: HDFS-Smart-Storage-Management.pdf > > > As discussed in HDFS-7285, it would be better to have a comprehensive and > flexible storage policy engine considering file attributes, metadata, data > temperature, storage type, EC codec, available hardware capabilities, > user/application preference, etc. > Modified the title to re-purpose the issue. > We'd extend this effort a bit and aim to work on a comprehensive solution > providing a smart storage management service for convenient, > intelligent and effective use of erasure coding or replicas, the HDFS cache > facility, HSM offerings, and all kinds of tools (balancer, mover, disk > balancer and so on) in a large cluster. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11001) Ozone:SCM: Add support for registerNode in SCM
[ https://issues.apache.org/jira/browse/HDFS-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609749#comment-15609749 ] Hadoop QA commented on HDFS-11001: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | 
{color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 86m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.web.TestOzoneRestWithMiniCluster | | | hadoop.hdfs.server.datanode.TestBlockPoolManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11001 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835408/HDFS-11001-HDFS-7240.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 2087fafda1ea 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 11ec1c6 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17297/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17297/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17297/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone:SCM: Add support for registerNode in SCM > -- > > Key: HDFS-11001 > URL: https://issues.a
[jira] [Commented] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609762#comment-15609762 ] Hudson commented on HDFS-10921: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10692 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10692/]) HDFS-10921. TestDiskspaceQuotaUpdate doesn't wait for NN to get out of (liuml07: rev 55e1fb8e3221941321e6f5e04b334246c5f23027) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Fix For: 2.7.4, 3.0.0-alpha2 > > Attachments: HDFS-10921.001.patch, HDFS-10921.002.patch, > HDFS-10921.003.patch, HDFS-10921.004.patch > > > Test fails intermittently because the NN is still in safe mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609747#comment-15609747 ] Anu Engineer commented on HDFS-11038: - Thank you for the patch; just a quick question: in {{PlanCommand.java#execute}}, are you sure that planFileName and planFileFullName are different? > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer reuses a rule from the HDFS balancer: only one instance is > allowed to run at a time. This is correct in a production system to avoid > inconsistencies, but it's not ideal for writing and running unit tests. For > example, it should be possible to run the plan, execute, and scan commands > under one setup of the disk balancer. The one-instance rule will throw an > exception complaining 'Another instance is running'. In such a case, there's > no way to do a full life-cycle test that involves a sequence of commands. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11016) Add unit tests for HDFS command 'dfsadmin -set/clrQuota'
[ https://issues.apache.org/jira/browse/HDFS-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11016: - Resolution: Implemented Status: Resolved (was: Patch Available) Resolved this as implemented. I checked that all cases (e.g. set/clrQuota on file/non-existent directory, positive/negative quota, non-admin access AND set/clrQuota with HA) are covered in TestQuota and TestQuotaWithHA. > Add unit tests for HDFS command 'dfsadmin -set/clrQuota' > > > Key: HDFS-11016 > URL: https://issues.apache.org/jira/browse/HDFS-11016 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: fs, shell, test > Attachments: HDFS-11016.000.patch > > > This proposes adding a bunch of unit tests for command 'dfsadmin setQuota' > and 'dfsadmin clrQuota'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-10921: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.7.4 Status: Resolved (was: Patch Available) Generally I'd prefer a brand-new mini-cluster for each test as it's simpler and clearer. However, in this case the overhead of creating/destroying a cluster for each case is obvious and is a concern. The total runtime at my local test machine for individual vs. shared cluster is 58 seconds vs. 37 seconds. However, using a shared cluster we may have to deal with problems like this. I think the solution is pretty good. Thanks for the good discussion guys. > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Fix For: 2.7.4, 3.0.0-alpha2 > > Attachments: HDFS-10921.001.patch, HDFS-10921.002.patch, > HDFS-10921.003.patch, HDFS-10921.004.patch > > > Test fails intermittently because the NN is still in safe mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11048) Audit Log should escape control characters
[ https://issues.apache.org/jira/browse/HDFS-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609704#comment-15609704 ] Mingliang Liu commented on HDFS-11048: -- +1 for the proposal.
{code}
public static boolean containsNonPrintableChar(String s1) {
  Pattern regex = Pattern.compile("\\p{C}");
  return regex.matcher(s1).find();
}
{code}
Shouldn't we pre-compile this pattern? > Audit Log should escape control characters > -- > > Key: HDFS-11048 > URL: https://issues.apache.org/jira/browse/HDFS-11048 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11048.001.patch > > > Allowing control characters without escaping them allows for spoofing audit > log entries at worst and accidentally breaking log parsing at best. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
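The pre-compilation the review asks for can be sketched as follows. This is an illustrative rewrite of the patch snippet, not the committed code; the class name is invented here. The \p{C} pattern is compiled once as a static constant so it is not recompiled on every audit-log call:

```java
import java.util.regex.Pattern;

// Hedged sketch: pre-compile the \p{C} pattern (the Unicode "other"
// category, which includes control characters) once at class-load time
// instead of inside the method. The class name AuditLogEscaper is
// illustrative only.
class AuditLogEscaper {
    private static final Pattern NON_PRINTABLE = Pattern.compile("\\p{C}");

    static boolean containsNonPrintableChar(String s) {
        return NON_PRINTABLE.matcher(s).find();
    }
}
```

Pattern.compile is comparatively expensive while Matcher creation is cheap, and compiled Pattern instances are thread-safe, so a shared static constant avoids redundant work on a hot audit-log path.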
[jira] [Commented] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
[ https://issues.apache.org/jira/browse/HDFS-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609657#comment-15609657 ] Mingliang Liu commented on HDFS-10921: -- +1 Will commit shortly. > TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode > > > Key: HDFS-10921 > URL: https://issues.apache.org/jira/browse/HDFS-10921 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10921.001.patch, HDFS-10921.002.patch, > HDFS-10921.003.patch, HDFS-10921.004.patch > > > Test fails intermittently because the NN is still in safe mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10954) [SPS]: Provide mechanism to send blocks movement result back to NN from coordinator DN
[ https://issues.apache.org/jira/browse/HDFS-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609562#comment-15609562 ] Hadoop QA commented on HDFS-10954: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 48s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 22s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} 
| {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 12 new + 783 unchanged - 4 fixed = 795 total (was 787) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 7s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.datanode.TestBPOfferService | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-10954 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12835393/HDFS-10954-HDFS-10285-00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 84ae361da621 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / f705de3 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17296/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17296/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17296/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/1
[jira] [Updated] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11038: - Attachment: HDFS-11038.000.patch > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer follows/reuses a rule designed by the HDFS balancer: only one > instance is allowed to run at a time. This is correct in a production system > to avoid inconsistencies, but it is not ideal for writing and running unit > tests. For example, it should be possible to run the plan, execute, and scan > commands under one setup of the disk balancer. The one-instance rule throws > an exception complaining 'Another instance is running'. In such a case, > there is no way to do full life-cycle tests that involve a sequence of > commands.
[jira] [Commented] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15609581#comment-15609581 ] Xiaobing Zhou commented on HDFS-11038: -- I posted the initial patch v000 for review. > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer follows/reuses a rule designed by the HDFS balancer: only one > instance is allowed to run at a time. This is correct in a production system > to avoid inconsistencies, but it is not ideal for writing and running unit > tests. For example, it should be possible to run the plan, execute, and scan > commands under one setup of the disk balancer. The one-instance rule throws > an exception complaining 'Another instance is running'. In such a case, > there is no way to do full life-cycle tests that involve a sequence of > commands.
[jira] [Updated] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer
[ https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11038: - Status: Patch Available (was: Reopened) > DiskBalancer: support running multiple commands under one setup of disk > balancer > > > Key: HDFS-11038 > URL: https://issues.apache.org/jira/browse/HDFS-11038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11038.000.patch > > > Disk balancer follows/reuses a rule designed by the HDFS balancer: only one > instance is allowed to run at a time. This is correct in a production system > to avoid inconsistencies, but it is not ideal for writing and running unit > tests. For example, it should be possible to run the plan, execute, and scan > commands under one setup of the disk balancer. The one-instance rule throws > an exception complaining 'Another instance is running'. In such a case, > there is no way to do full life-cycle tests that involve a sequence of > commands.
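The one-instance rule discussed in this issue can be pictured as a small guard object with a test-only escape hatch. Below is a minimal sketch; {{SingleInstanceGuard}} and its {{allowMultiple}} flag are illustrative names, not the actual DiskBalancer API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a one-instance guard. A test-only flag would let a
// sequence of plan/execute/scan commands run under one setup, while
// production keeps the fail-fast behavior.
class SingleInstanceGuard {
    private final AtomicBoolean running = new AtomicBoolean(false);
    private final boolean allowMultiple; // would be true only in unit tests

    SingleInstanceGuard(boolean allowMultiple) {
        this.allowMultiple = allowMultiple;
    }

    void acquire() {
        // In production mode a second caller fails fast, mirroring the
        // "Another instance is running" behavior described in the issue.
        if (!allowMultiple && !running.compareAndSet(false, true)) {
            throw new IllegalStateException("Another instance is running");
        }
    }

    void release() {
        running.set(false);
    }
}
```

With {{allowMultiple=false}} a second acquire() throws; with {{allowMultiple=true}} a test can drive several commands back to back through one setup.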
[jira] [Updated] (HDFS-11001) Ozone:SCM: Add support for registerNode in SCM
[ https://issues.apache.org/jira/browse/HDFS-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11001: Attachment: HDFS-11001-HDFS-7240.004.patch * Updated the license in most files to remove * Fixed a test bug that was introduced in the last patch > Ozone:SCM: Add support for registerNode in SCM > -- > > Key: HDFS-11001 > URL: https://issues.apache.org/jira/browse/HDFS-11001 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-7240 > > Attachments: HDFS-11001-HDFS-7240.001.patch, > HDFS-11001-HDFS-7240.002.patch, HDFS-11001-HDFS-7240.003.patch, > HDFS-11001-HDFS-7240.004.patch > > > Adds support for datanode registration. Right now SCM relies on the Namenode > for datanode registration. With this change we will be able to run SCM > independently if needed.
[jira] [Updated] (HDFS-10941) Improve BlockManager#processMisReplicatesAsync log
[ https://issues.apache.org/jira/browse/HDFS-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-10941: -- Assignee: Chen Liang (was: Xiaoyu Yao) > Improve BlockManager#processMisReplicatesAsync log > -- > > Key: HDFS-10941 > URL: https://issues.apache.org/jira/browse/HDFS-10941 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Xiaoyu Yao >Assignee: Chen Liang > > BlockManager#processMisReplicatesAsync is the daemon thread running inside > the namenode to handle misreplicated blocks. As shown below, it has a trace > log for each block in the cluster being processed (1 blocks per > iteration after sleep 10s). > {code} > MisReplicationResult res = processMisReplicatedBlock(block); > if (LOG.isTraceEnabled()) { >   LOG.trace("block " + block + ": " + res); > } > {code} > However, this is not very useful: dumping every block in the cluster will > overwhelm the namenode log with little useful information, since the > majority of blocks are neither over- nor under-replicated. This ticket is > opened to improve the log for easier troubleshooting of block-replication > issues by: > 1) adding a debug log for blocks that get an under/over-replicated result > during {{processMisReplicatedBlock()}}, > 2) or changing to a trace log only for blocks that get a non-OK result > during {{processMisReplicatedBlock()}}
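The second option in the issue above (log only blocks with a non-OK result) can be sketched as follows. The {{MisReplicationResult}} values and the helper method are illustrative, not the exact BlockManager code:

```java
// Illustrative sketch of option 2: produce a log line only when the
// processing result is not OK, so healthy blocks no longer flood the
// namenode log.
enum MisReplicationResult { OK, UNDER_REPLICATED, OVER_REPLICATED, INVALID }

class MisReplicationLogSketch {
    // Returns the message to log at trace level, or null to skip logging.
    static String logLine(String block, MisReplicationResult res) {
        if (res == MisReplicationResult.OK) {
            return null;
        }
        return "block " + block + ": " + res;
    }
}
```

The same shape works for option 1 by checking specifically for the under/over-replicated results instead of "anything but OK".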