[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178569#comment-16178569 ] Hadoop QA commented on HDFS-12506:
--
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| 0 | mvndep | 0m 40s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 30s | HDFS-7240 passed |
| +1 | compile | 1m 41s | HDFS-7240 passed |
| +1 | checkstyle | 0m 46s | HDFS-7240 passed |
| +1 | mvnsite | 1m 42s | HDFS-7240 passed |
| -1 | findbugs | 1m 52s | hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 1 extant Findbugs warning. |
| +1 | javadoc | 1m 45s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 34s | the patch passed |
| +1 | compile | 1m 36s | the patch passed |
| +1 | javac | 1m 36s | the patch passed |
| +1 | checkstyle | 0m 42s | the patch passed |
| +1 | mvnsite | 1m 35s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 0s | the patch passed |
| -1 | javadoc | 0m 51s | hadoop-hdfs-project_hadoop-hdfs generated 2 new + 10 unchanged - 0 fixed = 12 total (was 10) |
|| || || || Other Tests ||
| +1 | unit | 1m 32s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 94m 50s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 135m 59s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12506 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888770/HDFS-12506-HDFS-7240.007.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7e1a77b55c73 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 97ff55e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/21332/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html |
| javadoc |
[jira] [Commented] (HDFS-12525) Ozone: OzoneClient: Verify bucket/volume name in create calls
[ https://issues.apache.org/jira/browse/HDFS-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178557#comment-16178557 ] Nandakumar commented on HDFS-12525: --- Thanks [~linyiqun] and [~anu] for the review. I agree that we have to have one function to verify the names. The one in hadoop-hdfs {{OzoneUtils}} is used by the old {{OzoneRestClient}}, which we are planning to remove once the new REST-based {{OzoneClient}} is ready (based on HDFS-12385). The plan is to remove {{OzoneUtils#verifyResourceName}} while removing the old {{OzoneRestClient}}. Removing it now would result in a bigger patch for this jira, with changes in all files inside the old REST client. > Ozone: OzoneClient: Verify bucket/volume name in create calls > - > > Key: HDFS-12525 > URL: https://issues.apache.org/jira/browse/HDFS-12525 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12525-HDFS-7240.000.patch, > HDFS-12525-HDFS-7240.000.patch > > > The new OzoneClient API has to verify the bucket/volume name during the creation > call. Volume/Bucket names shouldn't support any special characters other than {{.}} > and {{-}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-12291: -- Attachment: HDFS-12291-HDFS-10285-06.patch Missed a newly added file in the last patch. Attached updated patch v6. > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch, > HDFS-12291-HDFS-10285-06.patch > > > For the given source path directory, presently SPS considers only the files > immediately under the directory (only one level of scanning) for satisfying > the policy. It WON’T do recursive directory scanning and schedule SPS > tasks to satisfy the storage policy of all the files down to the leaf nodes. > The idea of this jira is to discuss & implement an efficient recursive > directory iteration mechanism that satisfies the storage policy for all the files > under the given directory.
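The one-level vs. recursive scanning difference discussed in this issue can be sketched with a small iterative traversal. This is an illustrative toy model only: the dict-based namespace and the idea of "collecting" file paths in place of scheduling SPS tasks are assumptions, not the actual SPS implementation.

```python
from collections import deque

def satisfy_recursively(tree, root):
    """Collect every file under `root`, at any depth, using an
    iterative BFS queue (avoids deep recursion on wide namespaces).
    Each collected path stands in for one scheduled SPS task."""
    satisfied = []
    queue = deque([root])
    while queue:
        path = queue.popleft()
        node = tree[path]
        if node["type"] == "file":
            satisfied.append(path)
        else:
            queue.extend(node["children"])
    return satisfied

# A toy namespace: one-level scanning of /a would only see /a/f1,
# while the recursive walk also reaches the nested /a/sub/f2.
tree = {
    "/a":        {"type": "dir",  "children": ["/a/f1", "/a/sub"]},
    "/a/f1":     {"type": "file"},
    "/a/sub":    {"type": "dir",  "children": ["/a/sub/f2"]},
    "/a/sub/f2": {"type": "file"},
}
```

An explicit queue rather than recursion keeps the traversal bounded in stack depth regardless of how deep the directory tree is.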
[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components
[ https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12464: Labels: ozoneDoc (was: ozoneMerge) > Ozone: More detailed documentation about the ozone components > - > > Key: HDFS-12464 > URL: https://issues.apache.org/jira/browse/HDFS-12464 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Labels: ozoneDoc > Attachments: HDFS-12464-HDFS-7240.001.patch, > HDFS-7240-HDFS-12464.001.patch > > > I started to write a more detailed introduction about the Ozone components. > The goal is to explain the basic responsibility of the components and the > basic network topology (which components send messages, and to where?). >
[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters
[ https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178526#comment-16178526 ] Anu Engineer commented on HDFS-12469: - [~elek] Looks like you posted the {{HDFS-12477-HDFS-7240.000.patch}} accidentally. Could you please repost {{HDFS-12469.001.patch}}? > Ozone: Create docker-compose definition to easily test real clusters > > > Key: HDFS-12469 > URL: https://issues.apache.org/jira/browse/HDFS-12469 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Elek, Marton >Assignee: Elek, Marton > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12469-HDFS-7240.WIP1.patch, > HDFS-12469-HDFS-7240.WIP2.patch, HDFS-12477-HDFS-7240.000.patch > > > The goal here is to create a docker-compose definition for an ozone > pseudo-cluster with docker (one component per container). > Ideally, after a full build the ozone cluster could be started easily with > a simple docker-compose up command.
[jira] [Commented] (HDFS-11563) Ozone: enforce DependencyConvergence uniqueVersions
[ https://issues.apache.org/jira/browse/HDFS-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178523#comment-16178523 ] Anu Engineer commented on HDFS-11563: - [~szetszwo] Very nice, +1, Thanks for fixing this. Really appreciate root causing and fixing this. I am sure that single line of change must have taken hours of work. > Ozone: enforce DependencyConvergence uniqueVersions > --- > > Key: HDFS-11563 > URL: https://issues.apache.org/jira/browse/HDFS-11563 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build, ozone >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Blocker > Labels: ozoneMerge, tocheck > Attachments: HDFS-11563-HDFS-7240.20170923b.patch, > HDFS-11563-HDFS-7240.20170923.patch > > > In HDFS-11519, we disable DependencyConvergence uniqueVersions so that > Jenkins can test the branch with public maven repo. We should re-enable it > before merging the branch. > {code} > // hadoop-project/pom.xml > @@ -1505,7 +1545,9 @@ > > > > - true + > > > > {code}
[jira] [Commented] (HDFS-12525) Ozone: OzoneClient: Verify bucket/volume name in create calls
[ https://issues.apache.org/jira/browse/HDFS-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178520#comment-16178520 ] Anu Engineer commented on HDFS-12525: - I am also inclined to have one function to verify the name, as [~linyiqun] suggested. The issue is that if we have two different verifications and they go out of sync, debugging will be hard. Why don't we define this function on the client side and use it on the server side as well for verification?
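The single shared verifier discussed here could look roughly like the sketch below. The function name and regex are assumptions for illustration: they capture only the constraint stated in the issue description (alphanumerics plus {{.}} and {{-}}), while the real {{OzoneUtils#verifyResourceName}} enforces additional rules (length bounds, leading/trailing character restrictions, and so on).

```python
import re

# Hypothetical shared verifier: lowercase alphanumerics plus '.' and '-'
# only, per the constraint in the issue description. NOT the actual
# OzoneUtils#verifyResourceName logic, which is stricter.
_NAME_RE = re.compile(r"^[a-z0-9.-]+$")

def verify_resource_name(name: str) -> None:
    """Raise ValueError if `name` is not an acceptable bucket/volume name."""
    if not _NAME_RE.match(name):
        raise ValueError("Invalid bucket/volume name: %r" % name)
```

Defining this once and calling it from both the client and the server, as suggested above, keeps the two sides from drifting out of sync.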
[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178519#comment-16178519 ] Hudson commented on HDFS-12516: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12963 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12963/]) HDFS-12516. Suppress the fsnamesystem lock warning on nn startup. (aengineer: rev d0b2c5850b523a3888b2fadcfcdf6edbed33f221) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemLock.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystemLock.java > Suppress the fsnamesystem lock warning on nn startup > > > Key: HDFS-12516 > URL: https://issues.apache.org/jira/browse/HDFS-12516 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch, > HDFS-12516.03.patch > > > Whenever FsNameSystemLock is held for more than configured value of > {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an > entry in metrics. Loading FSImage from disk will usually cross this > threshold. We can suppress this FsNamesystem lock warning on NameNode startup. 
> {code} > 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held > for 7159 ms via > java.lang.Thread.getStackTrace(Thread.java:1552) > org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703) > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688) > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976) > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701) > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769) > Number of suppressed write-lock reports: 0 > Longest write-lock held interval: 7159 > {code}
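The suppression described in this issue can be modeled as a flag consulted before a long lock hold is reported: stay quiet while the FSImage is loading, then report normally once startup completes. The class and method names below are hypothetical stand-ins, not the actual {{FSNamesystemLock}} API.

```python
class WriteLockReporter:
    """Toy model of write-lock-held reporting with startup suppression.
    A warning is recorded only when the hold time crosses the threshold
    AND the startup phase (e.g. FSImage load) has already finished."""

    def __init__(self, threshold_ms):
        self.threshold_ms = threshold_ms
        self.suppressed = True    # suppressed until startup completes
        self.warnings = []

    def startup_complete(self):
        """Called once the NameNode finishes loading; enables reporting."""
        self.suppressed = False

    def report(self, held_ms):
        """Invoked on write-lock release with the hold duration."""
        if held_ms >= self.threshold_ms and not self.suppressed:
            self.warnings.append("FSNamesystem write lock held for %d ms"
                                 % held_ms)
```

With this shape, the 7159 ms hold from loadFSImage in the stack trace above would be swallowed, while the same hold after startup would still be logged.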
[jira] [Commented] (HDFS-12540) Ozone: node status text reported by SCM is confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178517#comment-16178517 ] Anu Engineer commented on HDFS-12540: - I know what is happening, but you are right. That statement is weird. bq. 15 of out of total 1 nodes have reported in. If we are out of Chill mode, how about we write {{Node Manager: Chill mode status: Out of chill mode. 15 nodes have reported in.}} However, if we are not out of Chill mode, we can keep the same line as today. That is, we need an {{if}} check to print this statement correctly. > Ozone: node status text reported by SCM is confusing > -- > > Key: HDFS-12540 > URL: https://issues.apache.org/jira/browse/HDFS-12540 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Trivial > > At present SCM UI displays node status like the following > {noformat} > Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 > nodes have reported in. > {noformat} > this text is a bit confusing. UI retrieves status from > {{SCMNodeManager#getNodeStatus}}, related call is {{#getChillModeStatus}}.
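The {{if}} check proposed above can be sketched as follows; the function name and exact strings are illustrative assumptions, not the actual {{SCMNodeManager#getChillModeStatus}} code. The point is that once the cluster is out of chill mode, the "out of total N" fraction is no longer meaningful, so only the reported count is printed.

```python
def chill_mode_status(out_of_chill_mode, reported, required):
    """Hypothetical rewording of SCM's node status line.

    out_of_chill_mode -- whether SCM has left chill mode
    reported          -- number of datanodes that have reported in
    required          -- nodes required to exit chill mode
    """
    if out_of_chill_mode:
        return "Out of chill mode. %d nodes have reported in." % reported
    return ("In chill mode. %d out of %d required nodes have reported in."
            % (reported, required))
```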
[jira] [Updated] (HDFS-12531) Fix conflict in the javadoc of UnderReplicatedBlocks.java in branch-2
[ https://issues.apache.org/jira/browse/HDFS-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-12531: - Fix Version/s: 2.8.3 2.9.0 Thanks [~anu] for the review & commit. Cherry-picked this to branch-2.8. > Fix conflict in the javadoc of UnderReplicatedBlocks.java in branch-2 > - > > Key: HDFS-12531 > URL: https://issues.apache.org/jira/browse/HDFS-12531 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.0 >Reporter: Akira Ajisaka >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie > Fix For: 2.9.0, 2.8.3 > > Attachments: HDFS-12531-branch-2.01.patch > > > In HDFS-9205, the following was pushed without fixing conflicts. > {noformat} > * > * The policy for choosing which priority to give added blocks > <<<<<<< HEAD > * is implemented in {@link #getPriority(int, int, int)}. > ======= > * is implemented in {@link #getPriority(BlockInfo, int, int, int, int)}. > >>>>>>> 5411dc5... HDFS-9205. Do not schedule corrupt blocks for replication. > (szetszwo) > * > {noformat}
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178512#comment-16178512 ] Anu Engineer commented on HDFS-12506: - bq. JIRA HDFS-12539 to get these stuff fixed. Does that sound good to you? Perfect, +1 on this change. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge, performance > Attachments: HDFS-12506-HDFS-7240.001.patch, > HDFS-12506-HDFS-7240.002.patch, HDFS-12506-HDFS-7240.003.patch, > HDFS-12506-HDFS-7240.004.patch, HDFS-12506-HDFS-7240.005.patch, > HDFS-12506-HDFS-7240.006.patch, HDFS-12506-HDFS-7240.007.patch > > > Generated 3 million keys in ozone, and ran the {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like the > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in natural order, so when we list buckets under a volume, e.g. > /v1, we need to seek to the /v1 point and start to iterate and filter keys; this > ends up with scanning all keys under volume /v1. The problem with this design > is we don't have an efficient approach to locate all buckets without scanning > the keys.
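The scanning cost described in this issue can be illustrated with a toy model of the flat sorted keyspace: because bucket entries and key entries interleave under the same volume prefix, a prefix seek still has to touch every key in the volume to find the buckets. This is a simulation only, not the KSM/LevelDB code; names here are illustrative.

```python
import bisect

# A sorted flat keyspace mixing bucket and key entries, mirroring the
# ksm.db layout quoted in the issue description.
db = sorted([
    "/v1/b1", "/v1/b1/k1", "/v1/b1/k2", "/v1/b1/k3",
    "/v1/b2", "/v1/b2/k1", "/v1/b2/k2", "/v1/b2/k3",
    "/v1/b3", "/v1/b4",
])

def list_buckets_flat(db, volume):
    """Seek to the volume prefix, then iterate and filter.
    Returns (buckets, entries_scanned): every entry under the volume is
    touched, not just the bucket entries -- O(total keys in volume)."""
    prefix = volume + "/"
    start = bisect.bisect_left(db, prefix)
    buckets, scanned = [], 0
    for entry in db[start:]:
        if not entry.startswith(prefix):
            break
        scanned += 1
        # a bucket entry has exactly one path component after the volume
        if "/" not in entry[len(prefix):]:
            buckets.append(entry)
    return buckets, scanned
```

Here listing 4 buckets scans all 10 entries; with 3 million keys per volume the same walk explains the 15-second listBucket, which is why the patch moves toward a layout (or a range-KV call like {{getSequentialRangeKVs}}) that can locate buckets without touching every key.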
[jira] [Commented] (HDFS-12529) get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178511#comment-16178511 ] Anu Engineer commented on HDFS-12529: - [~ajayydv] Can you please look at the ASF license warning? Are our tests generating a core-site or something? v3 patch tests seem to have a failure with ASF license and it points to {{/testptch/hadoop/hadoop-common-project/hadoop-common/core-site.xml}}, but patch v2 did not have that failure. Not committing till we get a chance to check. > get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch > > > For tagging related properties together, use the resource name as the source. > Currently it assumes the source is configured in the XML itself.
[jira] [Updated] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12516: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Target Version/s: 3.1.0 Status: Resolved (was: Patch Available) [~arpitagarwal] Thanks for the review comments. [~ajayydv] Thank you for the contribution. I have committed this to trunk.
[jira] [Commented] (HDFS-12533) NNThroughputBenchmark threads get stuck on UGI.getCurrentUser()
[ https://issues.apache.org/jira/browse/HDFS-12533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178502#comment-16178502 ] Xiao Chen commented on HDFS-12533: -- I think HADOOP-9747 would get this solved. (I have always wanted to look at that jira, but haven't managed to do so beyond the jira title. :) ) > NNThroughputBenchmark threads get stuck on UGI.getCurrentUser() > --- > > Key: HDFS-12533 > URL: https://issues.apache.org/jira/browse/HDFS-12533 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Erik Krogen > > In {{NameNode#getRemoteUser()}}, it first attempts to fetch from the RPC user > (not a synchronized operation), and if there is no RPC call, it will call > {{UserGroupInformation#getCurrentUser()}} (which is {{synchronized}}). This > makes it efficient for RPC operations (the bulk) so that there is not too > much contention. > In NNThroughputBenchmark, however, there is no RPC call since we bypass that > layer, so with a high thread count many of the threads are getting stuck. At > one point I attached a profiler and found that quite a few threads had been > waiting for {{#getCurrentUser()}} for 2 minutes ( ! ). When taking this away > I found some improvement in the throughput numbers I was seeing. To more > closely emulate a real NN we should improve this issue.
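The contention pattern described above -- many benchmark threads funneling through one synchronized lookup -- and one hypothetical mitigation (resolving the user once per thread and reusing it) can be sketched as follows. This is a simplified model in Python, not the UGI code; whether caching is acceptable depends on UGI semantics that this sketch ignores.

```python
import threading

_lock = threading.Lock()
_login_user = "benchmark-user"

def get_current_user_synchronized():
    """Models UGI.getCurrentUser(): a lock-guarded lookup on every call,
    which becomes a bottleneck when many threads call it in a hot loop."""
    with _lock:
        return _login_user

_cache = threading.local()

def get_current_user_cached():
    """Hypothetical benchmark-side fix: resolve once per thread, then
    reuse the cached value so the hot path skips the contended lock."""
    user = getattr(_cache, "user", None)
    if user is None:
        user = get_current_user_synchronized()
        _cache.user = user
    return user
```

Under this model only the first call per thread touches the lock, which is the kind of change that would explain the throughput improvement reported when the synchronized lookup was taken off the benchmark's hot path.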
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178500#comment-16178500 ] Yiqun Lin commented on HDFS-12506: -- I'm okay with your comment. +1, pending Jenkins.
[jira] [Commented] (HDFS-12540) Ozone: node status text reported by SCM is confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178499#comment-16178499 ] Weiwei Yang commented on HDFS-12540: Hi [~anu], any suggestion on the text? :)
[jira] [Created] (HDFS-12540) Ozone: node status text reported by SCM is confusing
Weiwei Yang created HDFS-12540: -- Summary: Ozone: node status text reported by SCM is confusing Key: HDFS-12540 URL: https://issues.apache.org/jira/browse/HDFS-12540 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Weiwei Yang Priority: Trivial At present SCM UI displays node status like the following {noformat} Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 nodes have reported in. {noformat} This text is a bit confusing. UI retrieves status from {{SCMNodeManager#getNodeStatus}}, related call is {{#getChillModeStatus}}.
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178495#comment-16178495 ] Weiwei Yang commented on HDFS-12506: Hi [~linyiqun] I just uploaded a v7 patch that hopefully fixed the javadoc warnings. And regarding your comment bq. getSequentialRangeKVs can also make sense in listKeys Actually there are more places that should be replaced with {{getSequentialRangeKVs}}; I did not include them in this patch because I haven't tested them all. I will open another JIRA to track this issue, and make sure they get fixed with sufficient testing. Let's keep this JIRA focused on fixing the {{listBucket}} issue. Does that sound good to you? [~anu], thanks for reviewing this patch. Since your comments are not about changes introduced by this patch, I have opened another lower-priority cleanup JIRA HDFS-12539 to get these stuff fixed. Does that sound good to you? Thanks
[jira] [Created] (HDFS-12539) Ozone: refactor some functions in KSMMetadataManagerImpl to be more readable and reusable
Weiwei Yang created HDFS-12539: -- Summary: Ozone: refactor some functions in KSMMetadataManagerImpl to be more readable and reusable Key: HDFS-12539 URL: https://issues.apache.org/jira/browse/HDFS-12539 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Weiwei Yang Priority: Minor This is from [~anu]'s review comment in HDFS-12506, [https://issues.apache.org/jira/browse/HDFS-12506?focusedCommentId=16178356=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16178356]. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12506: --- Attachment: HDFS-12506-HDFS-7240.007.patch > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge, performance > Attachments: HDFS-12506-HDFS-7240.001.patch, > HDFS-12506-HDFS-7240.002.patch, HDFS-12506-HDFS-7240.003.patch, > HDFS-12506-HDFS-7240.004.patch, HDFS-12506-HDFS-7240.005.patch, > HDFS-12506-HDFS-7240.006.patch, HDFS-12506-HDFS-7240.007.patch > > > Generated 3 million keys in Ozone, and ran the {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call took over *15 seconds* to finish. The problem was caused by the > inflexible structure of the KSM DB. Right now {{ksm.db}} stores keys like > the following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in natural order, so when we list buckets under a volume, e.g. > /v1, we need to seek to the /v1 position and then iterate and filter keys; this > ends up scanning all keys under volume /v1. The problem with this design > is that we don't have an efficient way to locate all buckets without scanning > the keys.
[jira] [Assigned] (HDFS-12538) TestInstrumentationService should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HDFS-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HDFS-12538: - Assignee: Ajay Kumar > TestInstrumentationService should use Time.monotonicNow > --- > > Key: HDFS-12538 > URL: https://issues.apache.org/jira/browse/HDFS-12538 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chetna Chaudhari >Assignee: Ajay Kumar >Priority: Minor > Attachments: HDFS-12538-1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12538) TestInstrumentationService should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HDFS-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HDFS-12538: - Assignee: (was: Ajay Kumar) > TestInstrumentationService should use Time.monotonicNow > --- > > Key: HDFS-12538 > URL: https://issues.apache.org/jira/browse/HDFS-12538 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chetna Chaudhari >Priority: Minor > Attachments: HDFS-12538-1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation
[ https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178479#comment-16178479 ] Huafeng Wang commented on HDFS-12534: - Hi [~andrew.wang], I have a question here. {quote} Applications depend on HDFS BlockLocation to understand where the split points are. {quote} I think the currently returned logical BlockLocation per block group already contains the locations of all the data and parity blocks. Isn't that information enough? What's the difference between splitting on a single block group and on multiple logical block locations here? > Provide logical BlockLocations for EC files for better split calculation > > > Key: HDFS-12534 > URL: https://issues.apache.org/jira/browse/HDFS-12534 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang > Labels: hdfs-ec-3.0-must-do > > I talked to [~vanzin] and [~alex.behm] some more about split calculation with > EC. It turns out HDFS-1 was resolved prematurely. Applications depend on > HDFS BlockLocation to understand where the split points are. The current > scheme of returning one BlockLocation per block group loses this information. > We should change this to provide logical blocks. Divide the file length by > the block size and provide suitable BlockLocations to match, with virtual > offsets and lengths too. > I'm not marking this as incompatible, since changing it this way would in > fact make it more compatible from the perspective of applications that are > scheduling against replicated files. Thus, it'd be good for beta1 if > possible, but okay for later too.
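For background, the split arithmetic the issue describes (divide the file length by the block size and provide matching BlockLocations with virtual offsets and lengths) can be sketched as follows. This is a hypothetical Python illustration of the arithmetic only, not the actual HDFS implementation:

```python
# Hypothetical sketch: derive logical (offset, length) block boundaries for
# a file, which is the shape applications scheduling splits expect for
# replicated files. Not actual HDFS code.

def logical_blocks(file_length, block_size):
    blocks = []
    offset = 0
    while offset < file_length:
        # the last block may be shorter than a full block
        length = min(block_size, file_length - offset)
        blocks.append((offset, length))
        offset += length
    return blocks
```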
[jira] [Updated] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12495: - Status: Patch Available (was: Open) > TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently > -- > > Key: HDFS-12495 > URL: https://issues.apache.org/jira/browse/HDFS-12495 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2 >Reporter: Eric Badger >Assignee: Eric Badger > Labels: flaky-test > Attachments: HDFS-12495.001.patch > > > {noformat} > java.net.BindException: Problem binding to [localhost:36701] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at org.apache.hadoop.ipc.Server.bind(Server.java:546) > at org.apache.hadoop.ipc.Server$Listener.(Server.java:955) > at org.apache.hadoop.ipc.Server.(Server.java:2655) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546) > at > 
org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152) > at > org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175) > {noformat}
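As an aside on the {{BindException}} in the trace above: a common way for tests to avoid "Address already in use" is to bind to port 0 so the OS assigns a free ephemeral port. A small, hypothetical Python illustration of the idea (the actual fix in the patch may differ):

```python
# Hypothetical illustration: binding to port 0 asks the OS for a free
# ephemeral port, avoiding hard-coded ports that may already be in use
# when a test restarts a server quickly.
import socket

def free_port():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))        # port 0 => OS picks a free port
    port = s.getsockname()[1]
    s.close()
    return port
```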
[jira] [Commented] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178474#comment-16178474 ] Yiqun Lin commented on HDFS-12495: -- +1, retrigger Jenkins. > TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently > -- > > Key: HDFS-12495 > URL: https://issues.apache.org/jira/browse/HDFS-12495 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2 >Reporter: Eric Badger >Assignee: Eric Badger > Labels: flaky-test > Attachments: HDFS-12495.001.patch > > > {noformat} > java.net.BindException: Problem binding to [localhost:36701] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at org.apache.hadoop.ipc.Server.bind(Server.java:546) > at org.apache.hadoop.ipc.Server$Listener.(Server.java:955) > at org.apache.hadoop.ipc.Server.(Server.java:2655) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499) > at > 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152) > at > org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175) > {noformat}
[jira] [Updated] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12495: - Status: Open (was: Patch Available) > TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently > -- > > Key: HDFS-12495 > URL: https://issues.apache.org/jira/browse/HDFS-12495 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2 >Reporter: Eric Badger >Assignee: Eric Badger > Labels: flaky-test > Attachments: HDFS-12495.001.patch > > > {noformat} > java.net.BindException: Problem binding to [localhost:36701] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at org.apache.hadoop.ipc.Server.bind(Server.java:546) > at org.apache.hadoop.ipc.Server$Listener.(Server.java:955) > at org.apache.hadoop.ipc.Server.(Server.java:2655) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546) > at > 
org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152) > at > org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175) > {noformat}
[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin
[ https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178467#comment-16178467 ] Huafeng Wang commented on HDFS-12257: - Hi [~andrew.wang], could you take a look at this one? > Expose getSnapshottableDirListing as a public API in HdfsAdmin > -- > > Key: HDFS-12257 > URL: https://issues.apache.org/jira/browse/HDFS-12257 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch > > > Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no > programmatic API. Other snapshot APIs are exposed in HdfsAdmin; I think we > should expose listing there as well.
[jira] [Commented] (HDFS-12525) Ozone: OzoneClient: Verify bucket/volume name in create calls
[ https://issues.apache.org/jira/browse/HDFS-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178453#comment-16178453 ] Yiqun Lin commented on HDFS-12525: -- Thanks for fixing this, [~nandakumar131]. I'm +1 for the fix. But one thing we could improve: we should reuse the verification method in Ozone. With this change, we have two identical copies of the verification logic in the hadoop-hdfs and hadoop-hdfs-client projects. > Ozone: OzoneClient: Verify bucket/volume name in create calls > - > > Key: HDFS-12525 > URL: https://issues.apache.org/jira/browse/HDFS-12525 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12525-HDFS-7240.000.patch, > HDFS-12525-HDFS-7240.000.patch > > > The new OzoneClient API has to verify the bucket/volume name during the creation > call. Volume/Bucket names shouldn't support any special characters other than {{.}} > and {{-}}.
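As a rough illustration of the kind of check described (the exact rules are defined by the patch; {{is_valid_name}} below is a hypothetical helper, not the Ozone API):

```python
# Hypothetical sketch of bucket/volume name verification: only lowercase
# letters, digits, '.' and '-' pass here. The real validation rules live
# in the patch, not in this example.
import re

_NAME_RE = re.compile(r"^[a-z0-9.-]+$")

def is_valid_name(name):
    # re.match anchors at the start; the trailing $ forces a full match
    return bool(_NAME_RE.match(name))
```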
[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism
[ https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178391#comment-16178391 ] Hadoop QA commented on HDFS-12387: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 74 new + 3 unchanged - 1 fixed = 77 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 15s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | org.apache.hadoop.scm.container.common.helpers.BlockContainerInfo implements Comparator but not Serializable At BlockContainerInfo.java:Serializable At BlockContainerInfo.java:[lines 30-135] | | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Dead store to results in org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(int, OzoneProtos$ReplicationType, OzoneProtos$ReplicationFactor) At
[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178378#comment-16178378 ] Hadoop QA commented on HDFS-12516: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 
17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.TestReencryption | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12516 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888758/HDFS-12516.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 027765252d8b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 415e5a1 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21331/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21331/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21331/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Suppress the fsnamesystem lock warning on nn startup > > > Key: HDFS-12516 > URL: https://issues.apache.org/jira/browse/HDFS-12516 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >
[jira] [Commented] (HDFS-12529) get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178359#comment-16178359 ] Hadoop QA commented on HDFS-12529: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
10m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 10s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.sink.TestFileSink | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12529 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888756/HDFS-12529.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fbb05a6ca4ed 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 415e5a1 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21329/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21329/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/21329/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21329/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments:
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178356#comment-16178356 ] Anu Engineer commented on HDFS-12506: - [~cheersyang], Thanks for finding and fixing this issue. I am +1 overall on this change, I have some minor comments. 1. nit: Can you please file a later clean up JIRA to rename these functions? I know these are not related to your patch, but I noticed these while reading code. * getBucketKeyPrefix --> getBucketWithDBPrefix * getKeyKeyPrefix --> getKeyWithDBPrefix * getDBKeyForKey-> getDBKeyBytes and it is possible to rewrite getDBKeyForKey as {{return(DFSUtil.string2Bytes(getKeyWithDBPrefix()))}} 2. nit: Again not related to your change, instead of doing this in many places of code {{OzoneConsts.KSM_KEY_PREFIX + volume + OzoneConsts.KSM_KEY_PREFIX + bucket + OzoneConsts.KSM_KEY_PREFIX;}} it might be a good idea to have 3 functions. * getBucketWithDBPrefix * getKeyWithDBPrefix * getVolumeWithDBPrefix and just reuse that everywhere. It will avoid mistakes when we edit code later since all of these places need to be edited for any change in future. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge, performance > Attachments: HDFS-12506-HDFS-7240.001.patch, > HDFS-12506-HDFS-7240.002.patch, HDFS-12506-HDFS-7240.003.patch, > HDFS-12506-HDFS-7240.004.patch, HDFS-12506-HDFS-7240.005.patch, > HDFS-12506-HDFS-7240.006.patch > > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. 
Right now {{ksm.db}} stores keys like > the following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in natural order, so when we list buckets under a volume, e.g. > /v1, we need to seek to the /v1 prefix and then iterate and filter keys; this > ends up scanning all keys under volume /v1. The problem with this design > is that we don't have an efficient way to locate all buckets without scanning > the keys. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
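The prefix helpers Anu suggests, and the full-volume scan the flat layout forces, can be sketched as follows. This is a minimal illustration modeled on the review comment; the class name, constants, and helper names are assumptions, not the actual KSM code.

```java
import java.util.TreeMap;

// Sketch of the flat ksm.db key layout and the suggested prefix helpers.
// All names (KsmKeySketch, KSM_KEY_PREFIX, getVolumeWithDBPrefix, ...) are
// hypothetical, modeled on the review comment rather than the real code.
public class KsmKeySketch {
    static final String KSM_KEY_PREFIX = "/";

    // Build the volume seek prefix in one place instead of concatenating
    // the prefix constant inline at every call site.
    static String getVolumeWithDBPrefix(String volume) {
        return KSM_KEY_PREFIX + volume + KSM_KEY_PREFIX;
    }

    static String getBucketWithDBPrefix(String volume, String bucket) {
        return getVolumeWithDBPrefix(volume) + bucket + KSM_KEY_PREFIX;
    }

    public static void main(String[] args) {
        // ksm.db keeps bucket entries and key entries interleaved in one
        // sorted keyspace, as in the listing quoted above.
        TreeMap<String, String> db = new TreeMap<>();
        for (String k : new String[] {"/v1/b1", "/v1/b1/k1", "/v1/b1/k2",
                "/v1/b2", "/v1/b2/k1", "/v1/b3", "/v1/b4"}) {
            db.put(k, "");
        }
        // Listing buckets under /v1 seeks to the volume prefix, then has to
        // iterate and filter EVERY key under the volume -- the scan that
        // makes listBucket slow when a volume holds millions of keys.
        String prefix = getVolumeWithDBPrefix("v1");
        int visited = 0, buckets = 0;
        for (String key : db.tailMap(prefix).keySet()) {
            if (!key.startsWith(prefix)) break;
            visited++;
            // A bucket entry has no further separator after the volume prefix.
            if (key.substring(prefix.length()).indexOf(KSM_KEY_PREFIX) < 0) {
                buckets++;
            }
        }
        System.out.println("visited=" + visited + ", buckets=" + buckets);
    }
}
```

For the sample data the loop visits all 7 keys to find 4 buckets, which is exactly the inefficiency the issue describes.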
[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178343#comment-16178343 ] Anu Engineer commented on HDFS-12516: - [~ajayydv] Thanks for updating the patch. +1, v3 patch pending jenkins. > Suppress the fsnamesystem lock warning on nn startup > > > Key: HDFS-12516 > URL: https://issues.apache.org/jira/browse/HDFS-12516 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch, > HDFS-12516.03.patch > > > Whenever FsNameSystemLock is held for more than configured value of > {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an > entry in metrics. Loading FSImage from disk will usually cross this > threshold. We can suppress this FsNamesystem lock warning on NameNode startup. > {code} > 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held > for 7159 ms via > java.lang.Thread.getStackTrace(Thread.java:1552) > org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703) > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688) > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976) > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701) > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769) > Number of suppressed write-lock reports: 0 > Longest write-lock held interval: 7159 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To 
unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
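The fix under review amounts to a simple guard: skip the long-held-lock report while the namesystem is still starting up, since loading the FSImage routinely exceeds the threshold. A minimal sketch of that idea follows; the field and method names here are hypothetical, not the actual FSNamesystem code.

```java
// Sketch of suppressing a "write lock held too long" warning during
// NameNode startup. Names are hypothetical; the real change lives in
// FSNamesystem's writeUnlock path.
public class LockReportSketch {
    // Stands in for dfs.namenode.write-lock-reporting-threshold-ms.
    static final long THRESHOLD_MS = 5000;

    private boolean startupInProgress = true;

    void startupComplete() {
        startupInProgress = false;
    }

    // Returns true if a warning (stack trace + metrics entry) would be
    // logged for a lock held this long.
    boolean shouldReport(long lockHeldMs) {
        // Loading the FSImage from disk usually crosses the threshold, so
        // the report is suppressed until startup finishes.
        return lockHeldMs > THRESHOLD_MS && !startupInProgress;
    }

    public static void main(String[] args) {
        LockReportSketch ns = new LockReportSketch();
        // 7159 ms is the hold time from the quoted startup log.
        System.out.println(ns.shouldReport(7159)); // suppressed during startup
        ns.startupComplete();
        System.out.println(ns.shouldReport(7159)); // reported in normal operation
    }
}
```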
[jira] [Commented] (HDFS-12529) get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178341#comment-16178341 ] Anu Engineer commented on HDFS-12529: - [~ajayydv] Thanks for updating the patch. +1, pending jenkins. > get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12516: -- Attachment: HDFS-12516.03.patch > Suppress the fsnamesystem lock warning on nn startup > > > Key: HDFS-12516 > URL: https://issues.apache.org/jira/browse/HDFS-12516 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch, > HDFS-12516.03.patch > > > Whenever FsNameSystemLock is held for more than configured value of > {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an > entry in metrics. Loading FSImage from disk will usually cross this > threshold. We can suppress this FsNamesystem lock warning on NameNode startup. > {code} > 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held > for 7159 ms via > java.lang.Thread.getStackTrace(Thread.java:1552) > org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703) > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688) > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976) > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701) > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769) > Number of suppressed write-lock reports: 0 > Longest write-lock held interval: 7159 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional 
commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178339#comment-16178339 ] Ajay Kumar commented on HDFS-12516: --- [~anu], you are right. I think this is what [~arpitagarwal] suggested initially. Made the change in patch v3. > Suppress the fsnamesystem lock warning on nn startup > > > Key: HDFS-12516 > URL: https://issues.apache.org/jira/browse/HDFS-12516 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch, > HDFS-12516.03.patch > > > Whenever FsNameSystemLock is held for more than configured value of > {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an > entry in metrics. Loading FSImage from disk will usually cross this > threshold. We can suppress this FsNamesystem lock warning on NameNode startup. > {code} > 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held > for 7159 ms via > java.lang.Thread.getStackTrace(Thread.java:1552) > org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703) > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688) > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992) > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976) > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701) > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769) > Number of suppressed write-lock reports: 0 > Longest write-lock held interval: 7159 > {code} -- This message was sent by 
Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12538) TestInstrumentationService should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HDFS-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178337#comment-16178337 ] Hadoop QA commented on HDFS-12538: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 37s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.lib.service.instrumentation.TestInstrumentationService | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12538 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888755/HDFS-12538-1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6715892829dd 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 415e5a1 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21328/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21328/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: hadoop-hdfs-project/hadoop-hdfs-httpfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21328/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestInstrumentationService should use Time.monotonicNow > --- > > Key: HDFS-12538 > URL: https://issues.apache.org/jira/browse/HDFS-12538 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chetna Chaudhari >Priority: Minor > Attachments: HDFS-12538-1.patch > > -- This message was
[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism
[ https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12387: Attachment: HDFS-12387-HDFS-7240.003.patch Attaching patch v3 to trigger jenkins and fixing a compiler issue. > Ozone: Support Ratis as a first class replication mechanism > --- > > Key: HDFS-12387 > URL: https://issues.apache.org/jira/browse/HDFS-12387 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Critical > Labels: ozoneMerge > Attachments: HDFS-12387-HDFS-7240.001.patch, > HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch > > > Ozone container layer supports pluggable replication policies. This JIRA > brings Apache Ratis based replication to Ozone. Apache Ratis is a java > implementation of Raft protocol. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12529) get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12529: -- Attachment: HDFS-12529.03.patch [~anu], thanks for review. Updated test case to check for source file as well. > get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12538) TestInstrumentationService should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HDFS-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chetna Chaudhari updated HDFS-12538: Status: Patch Available (was: Open) [~jlowe] [~templedf] Please review the patch. > TestInstrumentationService should use Time.monotonicNow > --- > > Key: HDFS-12538 > URL: https://issues.apache.org/jira/browse/HDFS-12538 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chetna Chaudhari >Priority: Minor > Attachments: HDFS-12538-1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12538) TestInstrumentationService should use Time.monotonicNow
[ https://issues.apache.org/jira/browse/HDFS-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chetna Chaudhari updated HDFS-12538: Attachment: HDFS-12538-1.patch > TestInstrumentationService should use Time.monotonicNow > --- > > Key: HDFS-12538 > URL: https://issues.apache.org/jira/browse/HDFS-12538 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Chetna Chaudhari >Priority: Minor > Attachments: HDFS-12538-1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12538) TestInstrumentationService should use Time.monotonicNow
Chetna Chaudhari created HDFS-12538: --- Summary: TestInstrumentationService should use Time.monotonicNow Key: HDFS-12538 URL: https://issues.apache.org/jira/browse/HDFS-12538 Project: Hadoop HDFS Issue Type: Bug Reporter: Chetna Chaudhari Priority: Minor -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
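The motivation behind this change: elapsed-time measurements in tests should come from a monotonic clock, because the wall clock ({{System.currentTimeMillis}}) can jump backwards under NTP adjustment and make a measured interval negative. A sketch of the pattern, using {{System.nanoTime}} in the same spirit as Hadoop's {{Time.monotonicNow}}:

```java
public class MonotonicSketch {
    // In the spirit of org.apache.hadoop.util.Time.monotonicNow():
    // milliseconds from a clock that never goes backwards. Only suitable
    // for measuring intervals, not for wall-clock timestamps.
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNow();
        Thread.sleep(50);
        long elapsed = monotonicNow() - start;
        // With currentTimeMillis this difference could be negative if the
        // system clock were adjusted mid-test; with a monotonic clock it
        // cannot be.
        System.out.println(elapsed >= 0);
    }
}
```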
[jira] [Commented] (HDFS-12504) Ozone: Improve SQLCLI performance
[ https://issues.apache.org/jira/browse/HDFS-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178233#comment-16178233 ] Yuanbo Liu commented on HDFS-12504: --- Sorry for the late response; it took a lot of time to set up a Linux development env since the network is horrible here. I've discussed this JIRA with Weiwei; I will take it over and provide a patch for it. > Ozone: Improve SQLCLI performance > - > > Key: HDFS-12504 > URL: https://issues.apache.org/jira/browse/HDFS-12504 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Yuanbo Liu > Labels: performance > > In my test, my {{ksm.db}} has *3017660* entries with a total size of *128mb*; the > SQLCLI tool ran for over *2 hours* but still did not finish exporting the DB. This > is because it iterates over each entry and inserts it into another sqlite DB > file, which is not efficient. We need to improve this to run more > efficiently on large DB files. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
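The usual cure for per-row insert cost is batching: accumulate rows and flush them in groups (with JDBC and SQLite this would be `PreparedStatement.addBatch`/`executeBatch` inside a single transaction, instead of one commit per entry). A driver-free sketch of the batching pattern, with hypothetical names; the flush counter stands in for "transactions committed":

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of batched writes: instead of committing each of the ~3 million
// entries individually, group them and flush once per batch. Names here
// are hypothetical, not the actual SQLCLI code.
public class BatchWriterSketch {
    static final int BATCH_SIZE = 1000;

    private final List<String> pending = new ArrayList<>();
    int flushes = 0; // stands in for "transactions committed"

    void insert(String row) {
        pending.add(row);
        if (pending.size() >= BATCH_SIZE) {
            flush();
        }
    }

    void flush() {
        if (pending.isEmpty()) return;
        // With SQLite, executeBatch() + commit() would run here.
        flushes++;
        pending.clear();
    }

    public static void main(String[] args) {
        BatchWriterSketch w = new BatchWriterSketch();
        // 3017660 is the entry count from the issue description.
        for (int i = 0; i < 3_017_660; i++) {
            w.insert("key-" + i);
        }
        w.flush(); // final partial batch
        // 3,017,660 rows land in 3,018 commits instead of 3,017,660.
        System.out.println(w.flushes);
    }
}
```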
[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178228#comment-16178228 ] Hadoop QA commented on HDFS-12291: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 31s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 25s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 18 new + 470 unchanged - 0 fixed = 488 total (was 470) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 39s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 8 new + 9 unchanged - 0 fixed = 17 total (was 9) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12291 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888742/HDFS-12291-HDFS-10285-05.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1b179df8d606 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 4a2c50b | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/21327/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/21327/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21327/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21327/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/21327/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt | | whitespace |
[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178198#comment-16178198 ] Surendra Singh Lilhore commented on HDFS-12291: --- Thanks [~xiaochen] and [~umamaheswararao] for the review. Attached the v5 patch; fixed all the above comments. > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch > > > For the given source path directory, presently SPS considers only the files > immediately under the directory (only one level of scanning) for satisfying > the policy. It WON'T do recursive directory scanning and then schedule SPS > tasks to satisfy the storage policy of all the files down to the leaf nodes. > The idea of this JIRA is to discuss & implement an efficient recursive > directory iteration mechanism that satisfies the storage policy for all the files > under the given directory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-12291: -- Attachment: HDFS-12291-HDFS-10285-05.patch > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch > > > For the given source path directory, presently SPS considers only the files > immediately under the directory (only one level of scanning) for satisfying > the policy. It WON'T do recursive directory scanning and then schedule SPS > tasks to satisfy the storage policy of all the files down to the leaf nodes. > The idea of this JIRA is to discuss & implement an efficient recursive > directory iteration mechanism that satisfies the storage policy for all the files > under the given directory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
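The recursive iteration this sub-task describes can be pictured as a queue-based (breadth-first) walk that schedules a satisfy task for every file below the given directory, instead of stopping after one level. The sketch below uses an in-memory map as a stand-in namespace; all names are hypothetical, not the actual SPS implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// Breadth-first traversal sketch: collect every file under a directory,
// not just the immediate children. The in-memory "namespace" map and the
// method name are hypothetical.
public class SpsTraversalSketch {
    // directory path -> immediate children; paths absent from the map are files
    static List<String> satisfyRecursively(Map<String, List<String>> tree,
                                           String root) {
        List<String> scheduled = new ArrayList<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            String path = queue.poll();
            List<String> children = tree.get(path);
            if (children == null) {
                // Leaf: a file whose storage policy needs satisfying.
                scheduled.add(path);
            } else {
                // Directory: descend instead of stopping at one level.
                queue.addAll(children);
            }
        }
        return scheduled;
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
            "/dir", List.of("/dir/a", "/dir/sub"),
            "/dir/sub", List.of("/dir/sub/b", "/dir/sub/c"));
        // One-level scanning would find only /dir/a; the recursive walk
        // reaches the files under /dir/sub as well.
        System.out.println(satisfyRecursively(tree, "/dir"));
    }
}
```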