[jira] [Commented] (HDFS-13165) [SPS]: Collects successfully moved block details via IBR
[ https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451722#comment-16451722 ] Rakesh R commented on HDFS-13165: - Based on [~surendrasingh]'s vote, and since I believe there is no objection to the IBR-related changes, I am planning to commit the latest IBR patch (without the datanode block move changes) to the branch. I'd like to finish the bulk changes asap and will simultaneously try to reach consensus on the datanode protocol part. > [SPS]: Collects successfully moved block details via IBR > > > Key: HDFS-13165 > URL: https://issues.apache.org/jira/browse/HDFS-13165 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R >Priority: Major > Attachments: HDFS-13165-HDFS-10285-00.patch, > HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, > HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, > HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, > HDFS-13165-HDFS-10285-07.patch, HDFS-13165-HDFS-10285-08.patch, > HDFS-13165-HDFS-10285-09.patch > > > This task is to make use of the existing IBR to get moved-block details and > remove the unwanted future-tracking logic that exists in the BlockStorageMovementTracker > code; this is no longer needed, as the file-level tracking is maintained at the NN > itself. > The following comments are taken from HDFS-10285, > [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472] > Comment-3) > {quote}BPServiceActor > Is it actually sending back the moved blocks? Aren’t IBRs sufficient?{quote} > Comment-21) > {quote} > BlockStorageMovementTracker > Many data structures are riddled with non-threadsafe race conditions and risk > of CMEs. > Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's > list of futures is synchronized. 
However the run loop does an unsynchronized > block get, unsynchronized future remove, unsynchronized isEmpty, possibly > another unsynchronized get, only then does it do a synchronized remove of the > block. The whole chunk of code should be synchronized. > Is the problematic moverTaskFutures even needed? It's aggregating futures > per-block for seemingly no reason. Why track all the futures at all instead > of just relying on the completion service? As best I can tell: > It's only used to determine if a future from the completion service should be > ignored during shutdown. Shutdown sets the running boolean to false and > clears the entire datastructure so why not use the running boolean like a > check just a little further down? > As synchronization to sleep up to 2 seconds before performing a blocking > moverCompletionService.take, but only when it thinks there are no active > futures. I'll ignore the missed notify race that the bounded wait masks, but > the real question is why not just do the blocking take? > Why all the complexity? Am I missing something? > BlocksMovementsStatusHandler > Suffers same type of thread safety issues as StoragePolicySatisfyWorker. Ex. > blockIdVsMovementStatus is inconsistent synchronized. Does synchronize to > return an unmodifiable list which sadly does nothing to protect the caller > from CME. > handle is iterating over a non-thread safe list. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
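The simplification suggested in Comment-21 above (drop the per-block futures map, rely on a single blocking take() from the completion service, and check a running flag on shutdown) can be sketched roughly as follows. This is an illustrative sketch, not the actual BlockStorageMovementTracker code; the class and method names are invented.

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical tracker: one blocking take() replaces the timed wait and
// the shared per-block futures map criticized in the review comment.
public class MoveResultTracker {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final CompletionService<Long> moves =
        new ExecutorCompletionService<>(pool);
    private volatile boolean running = true;

    // Submit a block-move task; here the "move" just returns the block id.
    void submit(long blockId) {
        moves.submit(() -> blockId);
    }

    // Consumer loop body: blocks until some task finishes, then consults the
    // running flag once instead of a futures map that needs extra locking.
    Long takeCompleted() throws InterruptedException, ExecutionException {
        Future<Long> done = moves.take();
        if (!running) {
            return null; // shutting down: ignore stale results
        }
        return done.get();
    }

    void shutdown() {
        running = false;
        pool.shutdownNow();
    }
}
```

Because the only mutable shared state is the volatile flag, the unsynchronized get/remove/isEmpty sequence the reviewer flagged disappears entirely.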
[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451664#comment-16451664 ] genericqa commented on HDFS-13443: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13443 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920576/HDFS-13443.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml | | uname | Linux c4dec207891f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bb3c504 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/24063/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24063/t
[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2
[ https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451661#comment-16451661 ] genericqa commented on HDFS-13492: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 30m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 32s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 42s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 | | JIRA Issue | HDFS-13492 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920548/HDFS-13492.branch-2.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 48125ee21d1d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 2b48854 | | maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) | | Default Java | 1.7.0_171 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24062/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt | | Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24062/testReport/ | | Max. process+thread count | 330 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: hadoop-hdfs-project/hadoop-hdfs-httpfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24062/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Limit httpfs binds to certain IP addresses in branch-2 > -- > > Key: HDFS-13492 > URL: https://issues.apache.org/jira/browse/HDFS-13492 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-
[jira] [Commented] (HDFS-6110) adding more slow action log in critical write path
[ https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451645#comment-16451645 ] Lars Francke commented on HDFS-6110: I know this is old, but in case someone else stumbles across it: this was accidentally committed with a commit message pointing to HBASE-6110 instead. > adding more slow action log in critical write path > -- > > Key: HDFS-6110 > URL: https://issues.apache.org/jira/browse/HDFS-6110 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.3.0, 3.0.0-alpha1 >Reporter: Liang Xie >Assignee: Liang Xie >Priority: Major > Fix For: 2.5.0 > > Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, > HDFS-6110v4.txt, HDFS-6110v5.txt, HDFS-6110v6.txt > > > After digging into an HBase write spike issue caused by slow buffer IO in our > cluster, we realized we'd better add more abnormal-latency warning logs in the > write flow, so that if others hit an HLog sync spike, we can get more detailed > info from the HDFS side at the same time. > A patch will be uploaded soon. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
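The kind of slow-action warning the HDFS-6110 description asks for can be illustrated with a small latency guard. The threshold value and names below are invented for the example; the actual patch wires such thresholds into DataNode configuration.

```java
// Illustrative latency guard in the spirit of this change: time a critical
// write-path action and produce a warning message when it exceeds a threshold.
public class SlowActionGuard {
    // Assumed threshold for the example; the real patch makes this configurable.
    static final long SLOW_WARN_THRESHOLD_MS = 300;

    // Runs the action and returns a warning string if it was slow, else null.
    static String timeAction(String action, Runnable body) {
        long begin = System.nanoTime();
        body.run();
        long elapsedMs = (System.nanoTime() - begin) / 1_000_000;
        if (elapsedMs > SLOW_WARN_THRESHOLD_MS) {
            return "Slow " + action + " took " + elapsedMs
                + "ms (threshold=" + SLOW_WARN_THRESHOLD_MS + "ms)";
        }
        return null; // fast path: nothing to log
    }
}
```

In the real write path the returned message would go to the DataNode log at WARN level, so an HLog sync spike on the HBase side can be correlated with a slow disk or network hop on the HDFS side.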
[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451611#comment-16451611 ] Mohammad Arshad commented on HDFS-13443: Addressed all the comments and submitted a new patch, HDFS-13443.005.patch > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. > - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > Attachments: HDFS-13443-branch-2.001.patch, > HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, > HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch > > > Currently the mount table cache is updated periodically; by default the cache is > updated every minute. After a change in the mount table, user operations may still > use the old mount table, which is wrong. > To update the mount table cache immediately, we can do the following: > * *Add a refresh API in MountTableManager which will update the mount table cache.* > * *When there is a change in the mount table entries, the router admin server can > update its cache and ask the other routers to update their caches*. For example, if > there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, > on the admin server side, will perform the following sequence of actions: > ## the user submits an add mount table entry request on R1 > ## R1 adds the mount table entry to the state store > ## R1 calls the refresh API on R2 > ## R1 calls the refresh API on R3 > ## R1 directly refreshes its own cache > ## the add mount table entry response is sent back to the user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
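The refresh sequence proposed in the HDFS-13443 description can be sketched as a parallel fan-out from the router that handled the change to its peers. RouterClient and refreshMountTableEntries are invented names for illustration, not the patch's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical fan-out: after a mount table change is persisted to the state
// store, the handling router asks every peer to reload its cache before replying.
public class MountTableRefresher {
    interface RouterClient {
        boolean refreshMountTableEntries(); // ask one router to reload its cache
    }

    private final List<RouterClient> peers;

    MountTableRefresher(List<RouterClient> peers) {
        this.peers = peers;
    }

    // Refresh all routers in parallel so one slow peer does not serialize
    // the rest; returns how many peers confirmed the refresh.
    int refreshAll() throws InterruptedException {
        ExecutorService pool =
            Executors.newFixedThreadPool(Math.max(1, peers.size()));
        List<Future<Boolean>> results = new ArrayList<>();
        for (RouterClient peer : peers) {
            results.add(pool.submit(peer::refreshMountTableEntries));
        }
        int ok = 0;
        for (Future<Boolean> r : results) {
            try {
                if (r.get(10, TimeUnit.SECONDS)) ok++; // bounded wait per peer
            } catch (ExecutionException | TimeoutException e) {
                // an unreachable peer falls back to its periodic refresh cycle
            }
        }
        pool.shutdown();
        return ok;
    }
}
```

The bounded per-peer wait keeps the admin operation from hanging on a dead router; a peer that misses the push still converges via the existing one-minute periodic refresh.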
[jira] [Updated] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohammad Arshad updated HDFS-13443: --- Attachment: HDFS-13443.005.patch > RBF: Update mount table cache immediately after changing (add/update/remove) > mount table entries. > - > > Key: HDFS-13443 > URL: https://issues.apache.org/jira/browse/HDFS-13443 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Mohammad Arshad >Assignee: Mohammad Arshad >Priority: Major > Labels: RBF > Attachments: HDFS-13443-branch-2.001.patch, > HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, > HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch > > > Currently the mount table cache is updated periodically; by default the cache is > updated every minute. After a change in the mount table, user operations may still > use the old mount table, which is wrong. > To update the mount table cache immediately, we can do the following: > * *Add a refresh API in MountTableManager which will update the mount table cache.* > * *When there is a change in the mount table entries, the router admin server can > update its cache and ask the other routers to update their caches*. For example, if > there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, > on the admin server side, will perform the following sequence of actions: > ## the user submits an add mount table entry request on R1 > ## R1 adds the mount table entry to the state store > ## R1 calls the refresh API on R2 > ## R1 calls the refresh API on R3 > ## R1 directly refreshes its own cache > ## the add mount table entry response is sent back to the user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: HDFS-13286-HDFS-12943.002.patch > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: (was: HDFS-13286-HDFS-12943.002.patch) > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: HDFS-13286-HDFS-12943.002.patch > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: (was: HDFS-13286-HDFS-12943.002.patch) > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: (was: HDFS-13286-HDFS-12943.002.patch) > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: HDFS-13286-HDFS-12943.002.patch > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451588#comment-16451588 ] Chao Sun commented on HDFS-13286: - Submitted patch v2. [~xkrogen]: I left a TODO to use an {{ObserverState}} class instead of the flag. Will revisit this. Can you take another look? Thanks. > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13286) Add haadmin commands to transition between standby and observer
[ https://issues.apache.org/jira/browse/HDFS-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HDFS-13286: Attachment: HDFS-13286-HDFS-12943.002.patch > Add haadmin commands to transition between standby and observer > --- > > Key: HDFS-13286 > URL: https://issues.apache.org/jira/browse/HDFS-13286 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-13286-HDFS-12943.000.patch, > HDFS-13286-HDFS-12943.001.patch, HDFS-13286-HDFS-12943.002.patch > > > As discussed in HDFS-12975, we should allow explicit transition between > standby and observer through haadmin command, such as: > {code} > haadmin -transitionToObserver > {code} > Initially we should support transition from observer to standby, and standby > to observer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13468) Add erasure coding metrics into ReadStatistics
[ https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451493#comment-16451493 ] genericqa commented on HDFS-13468: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 40s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}217m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.namenode.TestReencryption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13468 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920522/HDFS-13468.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e73fc590aa4c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d6befb | | mav
[jira] [Commented] (HDFS-13484) RBF: Disable Nameservices from the federation
[ https://issues.apache.org/jira/browse/HDFS-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451489#comment-16451489 ] genericqa commented on HDFS-13484: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 52s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 24s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}209m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13484 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920499/HDFS-13484.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 42b10d4afb2b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d6befb | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8
[jira] [Created] (HDFS-13503) Fix TestFsck test failures on Windows
Xiao Liang created HDFS-13503: - Summary: Fix TestFsck test failures on Windows Key: HDFS-13503 URL: https://issues.apache.org/jira/browse/HDFS-13503 Project: Hadoop HDFS Issue Type: Test Components: hdfs Reporter: Xiao Liang Assignee: Xiao Liang The test failures on Windows are caused by the same issue as HDFS-13336; a similar fix is needed for TestFsck, based on HDFS-13408. MiniDFSCluster also needs a small fix in the getStorageDir() interface, which should use determineDfsBaseDir() to get the correct path of the data directory. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
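The MiniDFSCluster change described above can be sketched as follows. This is a simplified stand-in class, not the real MiniDFSCluster; the directory naming scheme and the determineDfsBaseDir() behavior are illustrative assumptions, the point being only that getStorageDir() roots the data directory at the per-test base dir instead of a fixed build path.

```java
import java.io.File;

// Simplified sketch (not the actual MiniDFSCluster internals): getStorageDir()
// builds the datanode data directory from determineDfsBaseDir(), so per-test
// randomized base directories are honored on Windows as well.
public class StorageDirSketch {
    private final String baseDir;

    StorageDirSketch(String baseDir) {
        this.baseDir = baseDir;
    }

    // Stand-in for MiniDFSCluster#determineDfsBaseDir().
    String determineDfsBaseDir() {
        return baseDir;
    }

    // Data dir dirIndex of datanode dnIndex, rooted at the per-test base dir.
    File getStorageDir(int dnIndex, int dirIndex) {
        return new File(determineDfsBaseDir(),
                "data" + File.separator + "data" + (2 * dnIndex + dirIndex + 1));
    }

    public static void main(String[] args) {
        StorageDirSketch cluster = new StorageDirSketch("test-base");
        System.out.println(cluster.getStorageDir(0, 0).getPath());
        System.out.println(cluster.getStorageDir(1, 1).getPath());
    }
}
```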
[jira] [Created] (HDFS-13502) Utility to resolve NameServiceId in federated cluster
Apoorv Naik created HDFS-13502: -- Summary: Utility to resolve NameServiceId in federated cluster Key: HDFS-13502 URL: https://issues.apache.org/jira/browse/HDFS-13502 Project: Hadoop HDFS Issue Type: Improvement Reporter: Apoorv Naik A utility class within hadoop-common that acts as a reverse lookup for HDFS URLs would be beneficial for deployments with multiple namenodes and nameservices. Consumers would benefit from a unified namespace across the federated cluster.
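A minimal sketch of such a reverse-lookup utility follows. The class and method names are hypothetical; a real implementation would populate the mapping from dfs.nameservices and dfs.namenode.rpc-address.* in the cluster configuration rather than from hard-coded entries.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed utility: reverse-lookup from an HDFS
// URI's authority (host:port) to a nameservice ID. In practice the map would
// be built from hdfs-site.xml, not from manual addMapping() calls.
public class NameServiceResolver {
    private final Map<String, String> authorityToNameService = new HashMap<>();

    void addMapping(String authority, String nameServiceId) {
        authorityToNameService.put(authority, nameServiceId);
    }

    // Returns the nameservice ID serving this URI, or null if unknown.
    String resolve(URI uri) {
        return authorityToNameService.get(uri.getAuthority());
    }

    public static void main(String[] args) {
        NameServiceResolver resolver = new NameServiceResolver();
        resolver.addMapping("nn1.example.com:8020", "ns1");
        resolver.addMapping("nn2.example.com:8020", "ns2");
        System.out.println(resolver.resolve(URI.create("hdfs://nn1.example.com:8020/user/foo")));
        System.out.println(resolver.resolve(URI.create("hdfs://nn2.example.com:8020/data")));
    }
}
```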
[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451440#comment-16451440 ] genericqa commented on HDFS-13399: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 21s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 2s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 51s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 4s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 54s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 44s{color} | {color:orange} root: The patch generated 2 new + 544 unchanged - 0 fixed = 546 total (was 544) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 52s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}141m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13399 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920515/HDFS-13399-HDFS-12943.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c3ae63177aae 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12943 / f8ee212 | | maven
[jira] [Commented] (HDFS-13500) Ozone:Chill Mode to consider percentage of container reports
[ https://issues.apache.org/jira/browse/HDFS-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451415#comment-16451415 ] genericqa commented on HDFS-13500: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 52s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} server-scm in trunk failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 21s{color} | {color:red} integration-test in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 28s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} server-scm in trunk failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s{color} | {color:red} server-scm in trunk failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s{color} | {color:red} integration-test in trunk failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 9s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s{color} | {color:red} integration-test in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 17s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 16s{color} | {color:red} integration-test in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s{color} | {color:red} integration-test in the patch failed. {color} | || || || || {color:brown} Other Tests {col
[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2
[ https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451408#comment-16451408 ] Xiao Chen commented on HDFS-13492: -- +1 pending pre-commit. Thanks Wei-Chiu! > Limit httpfs binds to certain IP addresses in branch-2 > -- > > Key: HDFS-13492 > URL: https://issues.apache.org/jira/browse/HDFS-13492 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-13492.branch-2.001.patch, > HDFS-13492.branch-2.002.patch > > > Currently httpfs binds to all IP addresses of the host by default. Some > operators want to limit httpfs to accept only local connections. > We should provide that option, and it's pretty doable in Hadoop 2.x. > Note that the underlying httpfs implementation changed in Hadoop 3, and the > Jetty-based httpfs implementation already supports that, I believe.
[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2
[ https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451402#comment-16451402 ] Wei-Chiu Chuang commented on HDFS-13492: Thanks for the review and good catch! Uploaded rev002 to address the comments. > Limit httpfs binds to certain IP addresses in branch-2 > -- > > Key: HDFS-13492 > URL: https://issues.apache.org/jira/browse/HDFS-13492 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-13492.branch-2.001.patch, > HDFS-13492.branch-2.002.patch > > > Currently httpfs binds to all IP addresses of the host by default. Some > operators want to limit httpfs to accept only local connections. > We should provide that option, and it's pretty doable in Hadoop 2.x. > Note that the underlying httpfs implementation changed in Hadoop 3, and the > Jetty-based httpfs implementation already supports that, I believe.
[jira] [Updated] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2
[ https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-13492: --- Attachment: HDFS-13492.branch-2.002.patch > Limit httpfs binds to certain IP addresses in branch-2 > -- > > Key: HDFS-13492 > URL: https://issues.apache.org/jira/browse/HDFS-13492 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-13492.branch-2.001.patch, > HDFS-13492.branch-2.002.patch > > > Currently httpfs binds to all IP addresses of the host by default. Some > operators want to limit httpfs to accept only local connections. > We should provide that option, and it's pretty doable in Hadoop 2.x. > Note that the underlying httpfs implementation changed in Hadoop 3, and the > Jetty-based httpfs implementation already supports that, I believe.
[jira] [Updated] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set
[ https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13501: -- Description: Secure Datanode start/stop from cli does not throw a valid error if HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected to fail (when privileged ports are used), but it should show some valid message. (was: Secure Datanode start/stop from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set. If HADOOP_SECURE_DN_USER and JSVC_HOME is not set start/stop is expected to fail (when privilege ports are used) but it should some valid message.) > Secure Datanode stop/start from cli does not throw a valid error if > HADOOP_SECURE_DN_USER is not set > > > Key: HDFS-13501 > URL: https://issues.apache.org/jira/browse/HDFS-13501 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > > Secure Datanode start/stop from cli does not throw a valid error if > HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If > HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected to > fail (when privileged ports are used), but it should show some valid message.
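The fail-fast behavior the description asks for could look like the sketch below. This is illustrative only, not the actual Hadoop startup code (which lives in the shell scripts and SecureDataNodeStarter); the method names are hypothetical, and the string arguments mirror the environment variables discussed above.

```java
// Illustrative fail-fast validation: produce a clear error message when the
// prerequisites for a secure DataNode start/stop are missing, instead of
// failing with no useful output. Sketch only; names are hypothetical.
public class SecureDnPrereqCheck {
    static String validate(String secureUser, String jsvcHome) {
        if (secureUser == null || secureUser.isEmpty()) {
            return "ERROR: HDFS_DATANODE_SECURE_USER is not set;"
                    + " it is required to start/stop a secure DataNode";
        }
        if (jsvcHome == null || jsvcHome.isEmpty()) {
            return "ERROR: JSVC_HOME is not set;"
                    + " jsvc is required when privileged ports are used";
        }
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(validate(null, null));       // neither variable set
        System.out.println(validate("hdfs", null));     // jsvc location missing
        System.out.println(validate("hdfs", "/usr/libexec/jsvc"));
    }
}
```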
[jira] [Updated] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HDFS_DATANODE_SECURE_USER is not set
[ https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13501: -- Summary: Secure Datanode stop/start from cli does not throw a valid error if HDFS_DATANODE_SECURE_USER is not set (was: Secure Datanode stop/start from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set) > Secure Datanode stop/start from cli does not throw a valid error if > HDFS_DATANODE_SECURE_USER is not set > > > Key: HDFS-13501 > URL: https://issues.apache.org/jira/browse/HDFS-13501 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > > Secure Datanode start/stop from cli does not throw a valid error if > HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If > HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected to > fail (when privileged ports are used), but it should show some valid message.
[jira] [Updated] (HDFS-13500) Ozone:Chill Mode to consider percentage of container reports
[ https://issues.apache.org/jira/browse/HDFS-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13500: -- Attachment: Chill Mode.pdf > Ozone:Chill Mode to consider percentage of container reports > > > Key: HDFS-13500 > URL: https://issues.apache.org/jira/browse/HDFS-13500 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: Chill Mode.pdf, HDFS-13500.00.patch > > > Currently, SCM comes out of chill mode as soon as one datanode is registered. > This needs to be changed to consider the percentage of container reports.
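The proposed rule can be sketched as follows, with hypothetical names rather than the actual SCM API: track what fraction of the containers known to SCM have been covered by container reports, and allow exiting chill mode only once that fraction passes a configured threshold (instead of exiting on the first datanode registration).

```java
// Sketch of the proposed chill-mode exit rule (illustrative names, not the
// actual SCM code): exit chill mode only when a configured fraction of the
// containers known to SCM have been covered by container reports.
public class ChillModeRule {
    private final double threshold;      // e.g. 0.99 of known containers
    private final long totalContainers;  // containers known to SCM
    private long reportedContainers;     // containers seen in reports so far

    public ChillModeRule(double threshold, long totalContainers) {
        this.threshold = threshold;
        this.totalContainers = totalContainers;
    }

    public void onContainerReport(long containersInReport) {
        // Cap at the known total; duplicate handling is elided in this sketch.
        reportedContainers = Math.min(totalContainers,
                reportedContainers + containersInReport);
    }

    public boolean canExitChillMode() {
        if (totalContainers == 0) {
            return true; // nothing to wait for on a fresh cluster
        }
        return (double) reportedContainers / totalContainers >= threshold;
    }

    public static void main(String[] args) {
        ChillModeRule rule = new ChillModeRule(0.99, 100);
        rule.onContainerReport(50);
        System.out.println(rule.canExitChillMode()); // only 50% reported
        rule.onContainerReport(49);
        System.out.println(rule.canExitChillMode()); // 99/100 >= 0.99
    }
}
```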
[jira] [Updated] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set
[ https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13501: -- Description: Secure Datanode start/stop from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set. If HADOOP_SECURE_DN_USER and JSVC_HOME are not set, start/stop is expected to fail (when privileged ports are used), but it should show some valid message. (was: Secure Datanode start/stop from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set. ) > Secure Datanode stop/start from cli does not throw a valid error if > HADOOP_SECURE_DN_USER is not set > > > Key: HDFS-13501 > URL: https://issues.apache.org/jira/browse/HDFS-13501 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > > Secure Datanode start/stop from cli does not throw a valid error if > HADOOP_SECURE_DN_USER is not set. If HADOOP_SECURE_DN_USER and JSVC_HOME are > not set, start/stop is expected to fail (when privileged ports are used), but it > should show some valid message.
[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.
[ https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451371#comment-16451371 ] genericqa commented on HDFS-13443: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 48s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 21s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13443 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12920464/HDFS-13443.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml | | uname | Linux 857329dc0e66 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d6befb | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/24058/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24058/t
[jira] [Commented] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set
[ https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451365#comment-16451365 ] Ajay Kumar commented on HDFS-13501: --- {code} [root@n002 hadoop-3.2.0-SNAPSHOT]# hdfs --daemon stop datanode [root@n002 hadoop-3.2.0-SNAPSHOT]# ps -ef|grep datanode root 4046 32214 0 22:47 pts/000:00:00 grep --color=auto datanode root 30589 1 0 21:23 ?00:00:00 jsvc.exec -Dproc_datanode -outfile /opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs/hadoop-hdfs-root-datanode-n002.hdfs.example.com.out -errfile /opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs/privileged-root-datanode-n002.hdfs.example.com.err -pidfile /tmp/hadoop-hdfs-root-datanode.pid -nodetach -user hdfs -cp /opt/hadoop/hadoop-3.2.0-SNAPSHOT/etc/hadoop:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/common/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/common/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/mapreduce/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/yarn/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/yarn/* -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=n002.hdfs.example.com -Dcom.sun.management.jmxremote.port=1035 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jvm server -Dyarn.log.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs -Dyarn.log.file=hadoop-hdfs-root-datanode-n002.hdfs.example.com.log -Dyarn.home.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT -Dyarn.root.logger=INFO,console -Dhadoop.log.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs -Dhadoop.log.file=hadoop-hdfs-root-datanode-n002.hdfs.example.com.log -Dhadoop.home.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter hdfs 30593 30589 0 21:23 ?00:00:16 jsvc.exec -Dproc_datanode -outfile /opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs/hadoop-hdfs-root-datanode-n002.hdfs.example.com.out -errfile /opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs/privileged-root-datanode-n002.hdfs.example.com.err -pidfile /tmp/hadoop-hdfs-root-datanode.pid -nodetach -user hdfs -cp /opt/hadoop/hadoop-3.2.0-SNAPSHOT/etc/hadoop:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/common/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/common/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdfs/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/mapreduce/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/yarn/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/yarn/* -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=n002.hdfs.example.com -Dcom.sun.management.jmxremote.port=1035 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jvm server -Dyarn.log.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs -Dyarn.log.file=hadoop-hdfs-root-datanode-n002.hdfs.example.com.log -Dyarn.home.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT -Dyarn.root.logger=INFO,console -Dhadoop.log.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT/logs -Dhadoop.log.file=hadoop-hdfs-root-datanode-n002.hdfs.example.com.log -Dhadoop.home.dir=/opt/hadoop/hadoop-3.2.0-SNAPSHOT -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter {code} > Secure Datanode stop/start from cli does not throw a valid error if > HADOOP_SECURE_DN_USER is not set > > > Key: HDFS-13501 > URL: https://issues.apache.org/jira/browse/HDFS-13501 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > > Secure 
Datanode start/stop from cli does not throw a valid error if > HADOOP_SECURE_DN_USER is not set. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set
Ajay Kumar created HDFS-13501: - Summary: Secure Datanode stop/start from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set Key: HDFS-13501 URL: https://issues.apache.org/jira/browse/HDFS-13501 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ajay Kumar Assignee: Ajay Kumar Secure Datanode start/stop from cli does not throw a valid error if HADOOP_SECURE_DN_USER is not set.
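The missing validation could be sketched as a small guard in the launcher script. The function name and error message below are illustrative only, not the actual patch:

```shell
# Hypothetical sketch of the missing check: fail fast with a clear error
# when HADOOP_SECURE_DN_USER is unset, instead of silently misbehaving.
check_secure_dn_user() {
  if [ -z "${HADOOP_SECURE_DN_USER:-}" ]; then
    echo "ERROR: HADOOP_SECURE_DN_USER must be set to start/stop a secure datanode" >&2
    return 1
  fi
  return 0
}
```

A launcher could run this guard before handing off to jsvc, so that `hdfs --daemon stop datanode` surfaces a clear error instead of returning silently while the jsvc processes keep running, as in the `ps` output above.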
[jira] [Comment Edited] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451323#comment-16451323 ] Konstantin Shvachko edited comment on HDFS-13399 at 4/24/18 10:42 PM: -- There is some misunderstanding. Why do you still need {{createNonHAProxyWithClientProtocol()}} and {{createNonHAProxy()}} with {{AlignmentContext}} as an argument? I thought you would revert this. was (Author: shv): There is some misunderstanding. Why do you still need {{createNonHAProxyWithClientProtocol()}} with {{AlignmentContext}} as an argument? I thought you would revert this. > Make Client field AlignmentContext non-static. > -- > > Key: HDFS-13399 > URL: https://issues.apache.org/jira/browse/HDFS-13399 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13399-HDFS-12943.000.patch, > HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, > HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch > > > In HDFS-12977, DFSClient's constructor was altered to make use of a new > static method in Client that allowed one to set an AlignmentContext. This > work is to remove that static field and make each DFSClient pass its > AlignmentContext down to the proxy Call level.
[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451323#comment-16451323 ] Konstantin Shvachko commented on HDFS-13399: There is some misunderstanding. Why do you still need {{createNonHAProxyWithClientProtocol()}} with {{AlignmentContext}} as an argument? I thought you would revert this. > Make Client field AlignmentContext non-static. > -- > > Key: HDFS-13399 > URL: https://issues.apache.org/jira/browse/HDFS-13399 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13399-HDFS-12943.000.patch, > HDFS-13399-HDFS-12943.001.patch, HDFS-13399-HDFS-12943.002.patch, > HDFS-13399-HDFS-12943.003.patch, HDFS-13399-HDFS-12943.004.patch > > > In HDFS-12977, DFSClient's constructor was altered to make use of a new > static method in Client that allowed one to set an AlignmentContext. This > work is to remove that static field and make each DFSClient pass its > AlignmentContext down to the proxy Call level.
[jira] [Commented] (HDFS-13492) Limit httpfs binds to certain IP addresses in branch-2
[ https://issues.apache.org/jira/browse/HDFS-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451232#comment-16451232 ] Xiao Chen commented on HDFS-13492: -- Thanks Wei-Chiu for the fix, good find. I think this is more of a bug, because it exists in httpfs-config.sh and httpfs.sh but not in the tomcat xml. I think John fixed this during the jetty migration. Could you please add the param to {{ServerSetup.md.vm}}? +1 once that's done. > Limit httpfs binds to certain IP addresses in branch-2 > -- > > Key: HDFS-13492 > URL: https://issues.apache.org/jira/browse/HDFS-13492 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HDFS-13492.branch-2.001.patch > > > Currently httpfs binds to all IP addresses of the host by default. Some > operators want to limit httpfs to accept only local connections. > We should provide that option, and it's pretty doable in Hadoop 2.x. > Note that the httpfs underlying implementation changed in Hadoop 3, and the Jetty > based httpfs implementation already supports this, I believe.
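For background on the issue above: binding to the wildcard address (0.0.0.0) accepts connections on every interface, while binding to a specific address limits the server to that interface. This sketch shows only the general mechanism with a plain `ServerSocket`; it is not the httpfs patch itself, and the class name is ours:

```java
import java.net.InetAddress;
import java.net.ServerSocket;

// Demonstrates the mechanism behind "limit binds to certain IP addresses":
// a server socket bound to 127.0.0.1 only accepts local connections, unlike
// the default wildcard (0.0.0.0) bind that listens on all interfaces.
public class LoopbackBindDemo {
    public static ServerSocket bindLoopback(int port) throws Exception {
        // backlog of 50; third argument is the local bind address
        return new ServerSocket(port, 50, InetAddress.getLoopbackAddress());
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket ss = bindLoopback(0)) { // port 0 = any free port
            System.out.println(ss.getInetAddress().isLoopbackAddress());
        }
    }
}
```

In httpfs the bind address ultimately flows into the embedded web server's connector configuration; the exact parameter name is what the review asks to be documented in {{ServerSetup.md.vm}}.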
[jira] [Commented] (HDFS-13468) Add erasure coding metrics into ReadStatistics
[ https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451233#comment-16451233 ] Xiao Chen commented on HDFS-13468: -- +1. Thanks Eddy! > Add erasure coding metrics into ReadStatistics > -- > > Key: HDFS-13468 > URL: https://issues.apache.org/jira/browse/HDFS-13468 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.1.0, 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13468.00.patch, HDFS-13468.01.patch, > HDFS-13468.02.patch > > > Expose Erasure Coding related metrics for InputStream in ReadStatistics.
[jira] [Commented] (HDFS-13468) Add erasure coding metrics into ReadStatistics
[ https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451230#comment-16451230 ] Lei (Eddy) Xu commented on HDFS-13468: -- Thanks for the review, [~xiaochen]. Uploaded a new patch to fix the typo. > Add erasure coding metrics into ReadStatistics > -- > > Key: HDFS-13468 > URL: https://issues.apache.org/jira/browse/HDFS-13468 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.1.0, 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13468.00.patch, HDFS-13468.01.patch, > HDFS-13468.02.patch > > > Expose Erasure Coding related metrics for InputStream in ReadStatistics.
[jira] [Updated] (HDFS-13468) Add erasure coding metrics into ReadStatistics
[ https://issues.apache.org/jira/browse/HDFS-13468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13468: - Attachment: HDFS-13468.02.patch > Add erasure coding metrics into ReadStatistics > -- > > Key: HDFS-13468 > URL: https://issues.apache.org/jira/browse/HDFS-13468 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.1.0, 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13468.00.patch, HDFS-13468.01.patch, > HDFS-13468.02.patch > > > Expose Erasure Coding related metrics for InputStream in ReadStatistics.
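For context, per-stream read statistics are exposed to clients through `HdfsDataInputStream.getReadStatistics()`, which HDFS-13468 extends with erasure-coding counters. The HDFS calls in this sketch are left as comments (they need a live cluster), and `localFraction()` is an illustrative helper of ours, not a Hadoop API:

```java
// Hedged sketch: the commented HdfsDataInputStream calls require a running
// HDFS cluster; localFraction() is our own helper, not part of Hadoop.
public class ReadStatsDemo {
    // With a cluster, the counters would come from the stream itself, e.g.:
    //   HdfsDataInputStream in = (HdfsDataInputStream) fs.open(path);
    //   ReadStatistics stats = in.getReadStatistics();
    //   long total = stats.getTotalBytesRead();
    //   long local = stats.getTotalLocalBytesRead();

    // Turn two byte counters into a locality ratio for monitoring.
    public static double localFraction(long localBytes, long totalBytes) {
        return totalBytes == 0 ? 0.0 : (double) localBytes / totalBytes;
    }
}
```

A monitoring hook could poll the statistics after each read batch and report, for instance, what fraction of bytes were served from local replicas versus reconstructed remotely.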
[jira] [Updated] (HDFS-13500) Ozone:Chill Mode to consider percentage of container reports
[ https://issues.apache.org/jira/browse/HDFS-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13500: -- Status: Patch Available (was: Open) > Ozone:Chill Mode to consider percentage of container reports > > > Key: HDFS-13500 > URL: https://issues.apache.org/jira/browse/HDFS-13500 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13500.00.patch > > > Currently, SCM comes out of chill mode as soon as one datanode is registered. > This needs to be changed to consider the percentage of container reports.
[jira] [Updated] (HDFS-13500) Ozone:Chill Mode to consider percentage of container reports
[ https://issues.apache.org/jira/browse/HDFS-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13500: -- Attachment: HDFS-13500.00.patch > Ozone:Chill Mode to consider percentage of container reports > > > Key: HDFS-13500 > URL: https://issues.apache.org/jira/browse/HDFS-13500 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13500.00.patch > > > Currently, SCM comes out of chill mode as soon as one datanode is registered. > This needs to be changed to consider the percentage of container reports.
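The proposed exit condition can be sketched as a simple percentage threshold. The class and method names here are illustrative, not taken from the actual patch:

```java
// Illustrative sketch of the proposed rule: SCM leaves chill mode only once
// a configured fraction of expected container reports has been received,
// instead of exiting as soon as a single datanode registers.
public class ChillModeCheck {
    public static boolean canExitChillMode(long reportsReceived,
                                           long reportsExpected,
                                           double threshold) {
        if (reportsExpected <= 0) {
            return false; // nothing registered yet; stay in chill mode
        }
        return (double) reportsReceived / reportsExpected >= threshold;
    }
}
```

With a threshold of 0.9, for example, SCM would stay in chill mode until 90% of the expected container reports have arrived.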
[jira] [Commented] (HDFS-13415) Ozone: Remove cblock code from HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451206#comment-16451206 ] Hudson commented on HDFS-13415: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13415. Ozone: Remove cblock code from HDFS-7240. Contributed by (msingh: rev ea85801ce32eeccaac2f6c0726024c37ee1fe192) * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/meta/VolumeDescriptor.java * (delete) hadoop-cblock/server/src/main/resources/cblock-default.xml * (edit) hadoop-project/pom.xml * (edit) hadoop-ozone/common/src/main/shellprofile.d/hadoop-ozone.sh * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/exception/package-info.java * (delete) hadoop-cblock/server/pom.xml * (delete) hadoop-cblock/tools/src/main/java/org/apache/hadoop/cblock/cli/package-info.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/kubernetes/package-info.java * (edit) hadoop-dist/pom.xml * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/LogicalBlock.java * (delete) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestBufferManager.java * (delete) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/DiskBlock.java * (delete) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/ContainerLookUpService.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockTargetServer.java * (delete) hadoop-cblock/tools/src/test/org/apache/hadoop/cblock/TestCBlockCLI.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/util/package-info.java * (delete) 
hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockClientProtocolClientSideTranslatorPB.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/package-info.java * (delete) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/util/MockStorageClient.java * (delete) hadoop-cblock/tools/pom.xml * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/proto/MountVolumeResponse.java * (delete) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockServerPersistence.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/BlockBufferManager.java * (edit) hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml * (delete) hadoop-cblock/server/dev-support/findbugsExcludeFile.xml * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/package-info.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/meta/package-info.java * (delete) hadoop-cblock/pom.xml * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockServiceProtocolServerSideTranslatorPB.java * (edit) hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/storage/StorageManager.java * (delete) hadoop-cblock/tools/src/main/java/org/apache/hadoop/cblock/cli/CBlockCli.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/client/CBlockServiceProtocolClientSideTranslatorPB.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/client/CBlockVolumeClient.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockTargetMetrics.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockIStorageImpl.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/BlockBufferFlushTask.java * 
(delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockClientServerProtocolServerSideTranslatorPB.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockClientServerProtocolPB.java * (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/proto/CBlockServiceProtocol.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/client/package-info.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/impl/SyncBlockReader.java * (delete) hadoop-dist/src/main/compose/cblock/docker-compose.yaml * (edit) hadoop-ozone/common/src/main/bin/ozone * (delete) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/package-info.java * (delete) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java * (delete) hadoop-cblock/server/src/main/java/org/apache/hado
[jira] [Commented] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes
[ https://issues.apache.org/jira/browse/HDFS-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451208#comment-16451208 ] Hudson commented on HDFS-13414: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13414. Ozone: Update existing Ozone documentation according to the (msingh: rev dd43835b3644aab7266718213e6323f38b8ea1bb) * (edit) hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneRest.md * (edit) hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneGettingStarted.md.vm * (edit) hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md > Ozone: Update existing Ozone documentation according to the recent changes > -- > > Key: HDFS-13414 > URL: https://issues.apache.org/jira/browse/HDFS-13414 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13414-HDFS-7240.001.patch, > HDFS-13414-HDFS-7240.002.patch, HDFS-13414-HDFS-7240.003.patch > > > 1. Datanode port has been changed > 2. remove the references to the branch (prepare to merge) > 3. CLI commands are changed (eg. ozone scm)
[jira] [Commented] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects
[ https://issues.apache.org/jira/browse/HDFS-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451216#comment-16451216 ] Hudson commented on HDFS-13407: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13407. Ozone: Use separated version schema for Hdds/Ozone projects. (xyao: rev eea3128fdbdb8553dd6f8a4d20de62cb130c6e39) * (edit) hadoop-hdds/pom.xml * (edit) hadoop-hdds/framework/pom.xml * (edit) hadoop-ozone/common/pom.xml * (edit) hadoop-hdds/container-service/pom.xml * (edit) hadoop-dist/pom.xml * (edit) hadoop-hdds/common/pom.xml * (edit) hadoop-hdds/server-scm/pom.xml * (edit) hadoop-ozone/tools/pom.xml * (edit) hadoop-ozone/integration-test/pom.xml * (edit) hadoop-ozone/ozone-manager/pom.xml * (edit) hadoop-hdds/tools/pom.xml * (edit) hadoop-hdds/client/pom.xml * (edit) dev-support/bin/dist-layout-stitching * (edit) hadoop-ozone/client/pom.xml * (edit) hadoop-ozone/objectstore-service/pom.xml * (edit) hadoop-project/pom.xml * (edit) pom.xml * (edit) hadoop-ozone/pom.xml > Ozone: Use separated version schema for Hdds/Ozone projects > --- > > Key: HDFS-13407 > URL: https://issues.apache.org/jira/browse/HDFS-13407 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13407-HDFS-7240.001.patch, > HDFS-13407-HDFS-7240.002.patch > > > The community has voted to manage Hdds/Ozone in-tree but with a different > release cycle. To achieve this we need to separate the versions of > hdds/ozone projects from the mainline hadoop version (currently 3.2.0).
[jira] [Commented] (HDFS-13416) Ozone: TestNodeManager tests fail
[ https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451204#comment-16451204 ] Hudson commented on HDFS-13416: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13416. Ozone: TestNodeManager tests fail. Contributed by Bharat (nanda: rev e6da4d8da4f55f3b26842f7e92935957aec83160) * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java > Ozone: TestNodeManager tests fail > - > > Key: HDFS-13416 > URL: https://issues.apache.org/jira/browse/HDFS-13416 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13416-HDFS-7240.00.patch, > HDFS-13416-HDFS-7240.01.patch > > > java.lang.IllegalArgumentException: Invalid UUID string: h0 > at java.util.UUID.fromString(UUID.java:194) > at > org.apache.hadoop.hdds.protocol.DatanodeDetails.(DatanodeDetails.java:68) > at > org.apache.hadoop.hdds.protocol.DatanodeDetails.(DatanodeDetails.java:36) > at > org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416) > at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95) > at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48) > at > org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719) > at > org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) > at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > > This is happening after this change HDFS-13300 > cc [~nandakumar131]
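The root cause in the trace above is easy to reproduce in isolation: `UUID.fromString` rejects any string that is not in canonical UUID form, so a hostname-style id like "h0" throws `IllegalArgumentException`. A minimal illustration (the helper class is ours, not test code from the patch):

```java
import java.util.UUID;

// Reproduces the failure mode from the stack trace: UUID.fromString("h0")
// throws IllegalArgumentException ("Invalid UUID string: h0") because "h0"
// is not a canonical 8-4-4-4-12 UUID.
public class UuidCheckDemo {
    public static boolean isValidUuid(String s) {
        try {
            UUID.fromString(s);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidUuid("h0"));                         // false
        System.out.println(isValidUuid(UUID.randomUUID().toString())); // true
    }
}
```

This is why the fix after HDFS-13300 needed the test helpers to generate real UUID strings for `DatanodeDetails` instead of short names like "h0".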
[jira] [Commented] (HDFS-13423) Ozone: Clean-up of ozone related change from hadoop-hdfs-project
[ https://issues.apache.org/jira/browse/HDFS-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451210#comment-16451210 ] Hudson commented on HDFS-13423: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13423. Ozone: Clean-up of ozone related change from (msingh: rev 584c573a5604d49522c4b7766fc52f4d3eb92496) * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManagerRestInterface.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneTestHelper.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestRatisManager.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneWebAccess.java * (edit) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/netty/ObjectStoreRestHttpServer.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestLocalOzoneVolumes.java * 
(edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainerRatis.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/RatisTestHelper.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/RestCsrfPreventionFilterHandler.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneHelper.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClassicCluster.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMetrics.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestOzoneClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java > Ozone: Clean-up of ozone related change from hadoop-hdfs-project > > > Key: HDFS-13423 > URL: https://issues.apache.org/jira/browse/HDFS-13423 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13423-HDFS-7240.000.patch, > HDFS-13423-HDFS-7240.001.patch, HDFS-13423-HDFS-7240.002.patch, > 
HDFS-13423-HDFS-7240.003.patch > > > This jira tracks the clean-up and revert of the ozone-related changes made in > hadoop-hdfs-project.
[jira] [Commented] (HDFS-13424) Ozone: Refactor MiniOzoneClassicCluster
[ https://issues.apache.org/jira/browse/HDFS-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451213#comment-16451213 ] Hudson commented on HDFS-13424: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13424. Ozone: Refactor MiniOzoneClassicCluster. Contributed by (msingh: rev 06d228a354b130c8a04c86a6647b52b24c886281) * (edit) hadoop-ozone/integration-test/pom.xml * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestContainerOperations.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMXBean.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestLocalOzoneVolumes.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManagerRestInterface.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMetrics.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java * (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneTestHelper.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKsmBlockVersioning.java * (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClassicCluster.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestContainerReportWithKeys.java * (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainerRatis.java * (add) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeysRatis.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneRestWithMiniCluster.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKSMSQLCli.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestQueryNode.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestAllocateContainer.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestOzoneWebAccess.java * (add) hadoop-ozone/integration-test/src/test/resources/log4j.properties * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestRatisManager.java * (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestMultipleContainerReadWrite.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestFreon.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolumeRatis.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKSMMetrcis.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes
[jira] [Commented] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch
[ https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451217#comment-16451217 ] Hudson commented on HDFS-13422: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13422. Ozone: Fix whitespaces and license issues in HDFS-7240 (xyao: rev 94cb164dece9d63bc2ac0f62c3bd5f3713d65235) * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java * (edit) hadoop-ozone/acceptance-test/README.md * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java * (edit) pom.xml * (edit) hadoop-hdds/framework/pom.xml * (edit) hadoop-ozone/acceptance-test/pom.xml * (edit) hadoop-hdds/framework/README.md * (edit) hadoop-hdds/pom.xml * (edit) hadoop-hdds/tools/pom.xml * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java * (edit) hadoop-ozone/pom.xml > Ozone: Fix whitespaces and license issues in HDFS-7240 branch > - > > Key: HDFS-13422 > URL: https://issues.apache.org/jira/browse/HDFS-13422 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13422-HDFS-7240.001.patch, > HDFS-13422-HDFS-7240.002.patch > > > This jira will be used to fix various findbugs, javac and license > issues in the HDFS-7240 branch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13444) Ozone: Fix checkstyle issues in HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451215#comment-16451215 ] Hudson commented on HDFS-13444: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13444. Ozone: Fix checkstyle issues in HDFS-7240. Contributed by (nanda: rev c10788ec8fe31754cec5c39623ffbf979ca14c3b) * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/web/utils/package-info.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkContainerStateMap.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java * (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/KsmUtils.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkManagerImpl.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java * (edit) hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/package-info.java * (edit) hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java * (edit) hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/CloseContainerHandler.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java * (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/package-info.java * (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/OzoneGetConf.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestQueryNode.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java * (edit) hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ContainerTestUtils.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/Dispatcher.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java * (edit) hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/OzoneCommandHandler.java * (edit) hadoop-ozone/client/src/test/java/org/apache/hadoop/ozone/client/TestHddsClientUtils.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestContainerReportWithKeys.java * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java * (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java > Ozone: Fix checkstyle issues in HDFS-7240 > - > > Key: HDFS-13444 > URL: https://issues.apache.org/jira/browse/HDFS-13444 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13444-HDFS-7240.001.patch, > HDFS-13444-HDFS-7240.002.patch, HDFS-13444-HDFS-7240.003.patch, > HDFS-13444-HDFS-7240.004.patch, HDFS-7240.007.patch > >
[jira] [Commented] (HDFS-13413) Ozone: ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451209#comment-16451209 ] Hudson commented on HDFS-13413: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13413. Ozone: ClusterId and DatanodeUuid should be marked mandatory (nanda: rev c36a850af5f554f210010e7fb8039953de283746) * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java * (edit) hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java > Ozone: ClusterId and DatanodeUuid should be marked mandatory fields in > SCMRegisteredCmdResponseProto > > > Key: HDFS-13413 > URL: https://issues.apache.org/jira/browse/HDFS-13413 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13413-HDFS-7240.000.patch, > HDFS-13413-HDFS-7240.001.patch, HDFS-13413-HDFS-7240.002.patch > > > ClusterId as well as the DatanodeUuid are optional fields in > {{SCMRegisteredCmdResponseProto}} > currently. We have to make both clusterId and DatanodeUuid required fields > and handle them properly. As of now, we don't do anything with the response of > datanode registration. We should validate the clusterId and also the > datanodeUuid.
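The fix described above amounts to tightening the field rules in the .proto file. The fragment below is an illustrative proto2 sketch; the field names and numbers are assumptions, not copied from the actual StorageContainerDatanodeProtocol.proto:

```proto
// Illustrative only: the real message in StorageContainerDatanodeProtocol.proto
// may use different field names and numbers.
message SCMRegisteredCmdResponseProto {
  // Before the change these were "optional"; marking them "required" makes
  // the protobuf layer reject any register response missing either identity.
  required string clusterID = 1;
  required string datanodeUUID = 2;
}
```

With proto2 `required` fields, a response that omits either value fails to parse on the datanode side, which forces the validation the description asks for.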
[jira] [Commented] (HDFS-13459) Ozone: Clean-up of ozone related change from MiniDFSCluste
[ https://issues.apache.org/jira/browse/HDFS-13459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451214#comment-16451214 ] Hudson commented on HDFS-13459: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13459. Ozone: Clean-up of ozone related change from MiniDFSCluste. (aengineer: rev 1e0507ac56a517437066ac580e1cc775e22dc314) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSClusterWithNodeGroup.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz > Ozone: Clean-up of ozone related change from MiniDFSCluste > -- > > Key: HDFS-13459 > URL: https://issues.apache.org/jira/browse/HDFS-13459 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13459-HDFS-7240.000.patch, > HDFS-13459-HDFS-7240.001.patch > > > After HDFS-13424, {{MiniOzoneCluster}} doesn't depend on {{MiniDFSCluster}}, so the > ozone-related changes made in MiniDFSCluster can be reverted.
[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call
[ https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451205#comment-16451205 ] Hudson commented on HDFS-13348: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13348. Ozone: Update IP and hostname in Datanode from SCM's (nanda: rev 4a8aa0e1c85944342a80b0a2110fd6210853b0b7) * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/RegisteredCommand.java * (edit) hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java > Ozone: Update IP and hostname in Datanode from SCM's response to the register > call > -- > > Key: HDFS-13348 > URL: https://issues.apache.org/jira/browse/HDFS-13348 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-13348-HDFS-7240.000.patch, > HDFS-13348-HDFS-7240.001.patch, HDFS-13348-HDFS-7240.002.patch > > > Whenever a Datanode registers with SCM, the SCM resolves the IP address and > hostname of the Datanode from the RPC call. This IP address and hostname > should be sent back to the Datanode in the response to the register call, and the > Datanode has to update the values from the response into its > {{DatanodeDetails}}.
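The register flow in the description above can be sketched with simplified stand-in types. Everything below is illustrative: the real `DatanodeDetails` and register-response classes in Ozone have different names and APIs.

```java
// Minimal, self-contained sketch of the HDFS-13348 behavior: SCM resolves the
// caller's IP/hostname from the RPC and returns them; the datanode overwrites
// its own view with the values SCM resolved. Simplified stand-in classes only.
public class RegisterResponseSketch {

    /** Simplified stand-in for the datanode's self-description. */
    static class DatanodeDetails {
        String ipAddress;
        String hostName;
    }

    /** Simplified stand-in for SCM's response to the register call. */
    static class RegisteredResponse {
        final String ipAddress;  // IP as resolved by SCM from the RPC call
        final String hostName;   // hostname as resolved by SCM

        RegisteredResponse(String ipAddress, String hostName) {
            this.ipAddress = ipAddress;
            this.hostName = hostName;
        }
    }

    /** The datanode replaces its own values with what SCM resolved. */
    static void updateFromResponse(DatanodeDetails dn, RegisteredResponse rsp) {
        dn.ipAddress = rsp.ipAddress;
        dn.hostName = rsp.hostName;
    }

    public static void main(String[] args) {
        DatanodeDetails dn = new DatanodeDetails();
        dn.ipAddress = "0.0.0.0";  // placeholder before registration
        RegisteredResponse rsp = new RegisteredResponse("10.0.0.7", "dn1.example.com");
        updateFromResponse(dn, rsp);
        System.out.println(dn.ipAddress + " " + dn.hostName);
    }
}
```

The point of the design is that SCM's view (taken from the actual RPC connection) wins over whatever the datanode believes locally, so both sides agree on the address used for later commands.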
[jira] [Commented] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project
[ https://issues.apache.org/jira/browse/HDFS-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451207#comment-16451207 ] Hudson commented on HDFS-13425: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13425. Ozone: Clean-up of ozone related change from (msingh: rev 40398d357b97ce26d0b347ad7d78df3188eab44a) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java * (delete) hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/scm/TestArchive.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/HadoopExecutors.java * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ContainerSupervisor.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Time.java > Ozone: Clean-up of ozone related change from hadoop-common-project > -- > > Key: HDFS-13425 > URL: https://issues.apache.org/jira/browse/HDFS-13425 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nanda kumar >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13425-HDFS-7240.001.patch, > HDFS-13425-HDFS-7240.002.patch, HDFS-13425-HDFS-7240.003.patch > > > This jira is for tracking the clean-up and revert of the ozone-related changes made in > hadoop-common-project.
[jira] [Commented] (HDFS-13197) Ozone: Fix ConfServlet#getOzoneTags cmd
[ https://issues.apache.org/jira/browse/HDFS-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451211#comment-16451211 ] Hudson commented on HDFS-13197: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13197. Ozone: Fix ConfServlet#getOzoneTags cmd. Contributed by Ajay (xyao: rev 66610b5fd5dc29c1bff006874bf46d426d3a9dfa) * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/HddsConfServlet.java * (edit) hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java * (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml * (edit) hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js * (edit) hadoop-hdds/framework/src/main/resources/webapps/static/templates/config.html > Ozone: Fix ConfServlet#getOzoneTags cmd > --- > > Key: HDFS-13197 > URL: https://issues.apache.org/jira/browse/HDFS-13197 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13197-HDFS-7240.000.patch, > HDFS-13197-HDFS-7240.001.patch, HDFS-13197-HDFS-7240.002.patch, > HDFS-13197-HDFS-7240.003.patch, Screen Shot 2018-04-10 at 2.05.35 PM.png > > > This is broken after merging trunk change HADOOP-15007 into the HDFS-7240 branch. > I removed the cmd and related test to have a clean merge. [~ajakumar], please > fix the cmd and bring back the related test.
[jira] [Commented] (HDFS-13446) Ozone: Fix OzoneFileSystem contract test failures
[ https://issues.apache.org/jira/browse/HDFS-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451212#comment-16451212 ] Hudson commented on HDFS-13446: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13446. Ozone: Fix OzoneFileSystem contract test failures. (nanda: rev fd84dea03e9aa3f8d98247c8663def31823d614f) * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractCreate.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractMkdir.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRootDir.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDelete.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRootDir.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractSeek.java * (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDistCp.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractOpen.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDelete.java * (edit) hadoop-hdds/container-service/pom.xml * (edit) hadoop-tools/hadoop-ozone/pom.xml * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractCreate.java * (add) hadoop-tools/hadoop-ozone/src/test/resources/log4j.properties * (delete) 
hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractGetFileStatus.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRename.java * (edit) hadoop-hdds/server-scm/pom.xml * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractGetFileStatus.java * (add) hadoop-tools/hadoop-ozone/src/test/resources/contract/ozone.xml * (add) hadoop-hdds/container-service/src/test/resources/log4j.properties * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestDeletedBlockLog.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractOpen.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java * (delete) hadoop-tools/hadoop-ozone/src/todo/resources/log4j.properties * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractMkdir.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractSeek.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractDistCp.java * (delete) hadoop-tools/hadoop-ozone/src/todo/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractRename.java * (delete) hadoop-tools/hadoop-ozone/src/todo/resources/contract/ozone.xml * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestStorageContainerManagerHttpServer.java * (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java > Ozone: Fix OzoneFileSystem contract test failures > - > > Key: HDFS-13446 > URL: https://issues.apache.org/jira/browse/HDFS-13446 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13446-HDFS-7240.001.patch > > > This jira moves the contract tests to the src/test directory and also fixes > the ozone filesystem contract tests.
[jira] [Commented] (HDFS-13258) Ozone: restructure Hdsl/Ozone code to separated maven subprojects
[ https://issues.apache.org/jira/browse/HDFS-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451183#comment-16451183 ] Hudson commented on HDFS-13258: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13258. Ozone: restructure Hdsl/Ozone code to separated maven (aengineer: rev ce23d9adf004358013825f2a1ec684f35a953b4a) * (add) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/cli/container/CloseContainerHandler.java * (add) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeysRatis.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java * (add) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/BucketManager.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/handlers/BucketProcessTemplate.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandDispatcher.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/ksm/VolumeManager.java * (add) hadoop-cblock/server/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java * (add) hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/client/OzoneQuota.java * (add) hadoop-cblock/tools/src/main/java/org/apache/hadoop/cblock/cli/package-info.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java * (add) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/ObjectStoreApplication.java * (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/cli/container/ContainerCommandHandler.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/DeleteVolumeHandler.java * (add) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/scm/node/TestContainerPlacement.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/closer/package-info.java * (add) hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/client/ContainerOperationClient.java * (add) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/SCMStorage.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/package-info.java * (add) hadoop-cblock/server/src/main/proto/CBlockClientServerProtocol.proto * (add) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/placement/algorithms/SCMContainerPlacementRandom.java * (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/ksm/helpers/KsmBucketArgs.java * (add) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneKey.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/localstorage/LocalStorageHandler.java * (add) hadoop-hdsl/framework/src/main/webapps/static/angular-route-1.6.4.min.js * (add) hadoop-hdsl/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/container/InfoContainerHandler.java * (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/headers/package-info.java * (add) hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/InconsistentStorageStateException.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/ratis/package-info.java * (add) hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/XceiverClientManager.java * (add) 
hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/cache/LogicalBlock.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManagerRestInterface.java * (add) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/messages/StringMessageBodyWriter.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/ksm/exceptions/package-info.java * (delete) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/ksm/protocolPB/KeySpaceManagerProtocolPB.java * (add) hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/statemachine/StateMachine.java * (add) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/jscsiHelper/package-info.java * (add) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/SCMNodePoolManager.java * (delete)
[jira] [Commented] (HDFS-13127) Fix TestContainerStateManager and TestOzoneConfigurationFields
[ https://issues.apache.org/jira/browse/HDFS-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451167#comment-16451167 ] Hudson commented on HDFS-13127: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13127. Fix TestContainerStateManager and (aengineer: rev 3c9a9a117d6a12220752a52d68cc7e41ca816631) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/TestContainerStateManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStateManager.java > Fix TestContainerStateManager and TestOzoneConfigurationFields > -- > > Key: HDFS-13127 > URL: https://issues.apache.org/jira/browse/HDFS-13127 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13127-HDFS-7240.001.patch, > HDFS-13127-HDFS-7240.002.patch > > > TestContainerStateManager is failing because SCM is unable to find a > container with enough free space to allocate a new block in the container. > TestOzoneConfigurationFields is failing because configs "ozone.rest.servers" > and "ozone.rest.client.port" are added in ozone-default.xml but > aren't specified as any of the config keys.
[jira] [Commented] (HDFS-13008) Ozone: Add DN container open/close state to container report
[ https://issues.apache.org/jira/browse/HDFS-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451180#comment-16451180 ] Hudson commented on HDFS-13008: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13008. Ozone: Add DN container open/close state to container (nanda: rev 4c03c58452aee728941931d305168ba96349e4c8) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/cli/container/InfoContainerHandler.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/DatanodeContainerProtocol.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerData.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/StorageContainerDatanodeProtocol.proto * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestContainerReportWithKeys.java > Ozone: Add DN container open/close state to container report > > > Key: HDFS-13008 > URL: https://issues.apache.org/jira/browse/HDFS-13008 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HDFS-13008-HDFS-7240.001.patch, > HDFS-13008-HDFS-7240.002.patch, 
HDFS-13008-HDFS-7240.003.patch, > HDFS-13008-HDFS-7240.004.patch, HDFS-13008-HDFS-7240.005.patch > > > HDFS-12799 added support to allow SCM to send closeContainerCommand to DNs. This > ticket is opened to add the DN container close state to the container report so > that the SCM container state manager can update the state from closing to closed when the > DN-side container is fully closed.
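The SCM-side transition the description asks for, from closing to closed once a container report confirms the datanode-side close, can be sketched as a tiny state function. The enum values and method name below are illustrative, not the real ContainerStateManager API.

```java
// Hand-rolled illustration of the HDFS-13008 transition: when SCM has a
// container in CLOSING and the datanode's container report says the container
// is fully closed, SCM moves it to CLOSED; otherwise the state is unchanged.
public class ContainerCloseSketch {

    enum LifeCycleState { OPEN, CLOSING, CLOSED }

    /** Apply one reported DN-side close flag to the SCM-side state. */
    static LifeCycleState onContainerReport(LifeCycleState scmState,
                                            boolean dnReportsClosed) {
        if (scmState == LifeCycleState.CLOSING && dnReportsClosed) {
            return LifeCycleState.CLOSED;  // report confirms the close completed
        }
        return scmState;                   // no transition otherwise
    }

    public static void main(String[] args) {
        LifeCycleState s = LifeCycleState.CLOSING;
        s = onContainerReport(s, true);
        System.out.println(s);             // prints CLOSED
    }
}
```

This is why the report needs the open/close flag at all: without it, SCM can issue closeContainerCommand but has no signal telling it when the transition out of CLOSING is safe.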
[jira] [Commented] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
[ https://issues.apache.org/jira/browse/HDFS-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451194#comment-16451194 ] Hudson commented on HDFS-13341: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13341. Ozone: Move ozone specific ServiceRuntimeInfo utility to the (aengineer: rev ac77b180375651dfb1d8cf2be7d15f39fe706aee) * (delete) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/ozone/web/package-info.java * (add) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/hdsl/server/ServerUtils.java * (add) hadoop-hdsl/framework/src/test/java/org/apache/hadoop/hdsl/server/TestBaseHttpServer.java * (delete) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/ozone/web/OzoneHttpServer.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManager.java * (delete) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/ozone/web/util/ServerUtils.java * (add) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/hdsl/server/ServiceRuntimeInfoImpl.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManagerHttpServer.java * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KSMStorage.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/SCMMXBean.java * (edit) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/scm/HdslServerUtil.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/SCMNodePoolManager.java * (delete) hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/web/TestOzoneHttpServer.java * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManagerHttpServer.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/block/BlockManagerImpl.java * (add) hadoop-hdsl/framework/README.md * (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneVolume.java * (delete) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/ServiceRuntimeInfo.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/SCMStorage.java * (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneBucket.java * (delete) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/ServiceRuntimeInfoImpl.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/block/DeletedBlockLogImpl.java * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KSMMXBean.java * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KSMMetadataManagerImpl.java * (add) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/hdsl/server/package-info.java * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManager.java * (delete) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/ozone/web/util/package-info.java * (edit) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/CBlockManager.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerMapping.java * (add) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/hdsl/server/ServiceRuntimeInfo.java * (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/web/client/OzoneRestClient.java * (add) hadoop-hdsl/framework/src/main/java/org/apache/hadoop/hdsl/server/BaseHttpServer.java > Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework > -- > > Key: HDFS-13341 > URL: https://issues.apache.org/jira/browse/HDFS-13341 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13341-HDFS-7240.001.patch, > HDFS-13341-HDFS-7240.002.patch, 
HDFS-13341-HDFS-7240.003.patch > > > ServiceRuntimeInfo is a generic interface to provide common information via > JMX beans (such as build version, compile info, started time). > Currently it is used only by KSM/SCM; I suggest moving it to the > hadoop-hdsl/framework project from hadoop-commons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13022) Block Storage: Kubernetes dynamic persistent volume provisioner
[ https://issues.apache.org/jira/browse/HDFS-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451169#comment-16451169 ] Hudson commented on HDFS-13022: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13022. Block Storage: Kubernetes dynamic persistent volume (msingh: rev eb5e66a1c46f3b542b22ee1e2046d1c728abc479) * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/kubernetes/package-info.java * (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/dynamicprovisioner/expected1-pv.json * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml * (edit) hadoop-minicluster/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/cli/CBlockCli.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cblock/kubernetes/TestDynamicProvisioner.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/kubernetes/DynamicProvisioner.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/dynamicprovisioner/input1-pvc.json * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/cblock/CBlockConfigKeys.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/OzonePropertyTag.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/storage/StorageManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/CBlockManager.java > Block Storage: Kubernetes dynamic persistent volume provisioner > --- > > Key: HDFS-13022 > URL: https://issues.apache.org/jira/browse/HDFS-13022 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > 
Attachments: HDFS-13022-HDFS-7240.001.patch, > HDFS-13022-HDFS-7240.002.patch, HDFS-13022-HDFS-7240.003.patch, > HDFS-13022-HDFS-7240.004.patch, HDFS-13022-HDFS-7240.005.patch, > HDFS-13022-HDFS-7240.006.patch, HDFS-13022-HDFS-7240.007.patch > > > With HDFS-13017 and HDFS-13018 the cblock/jscsi server could be used in a > kubernetes cluster as the backend for iscsi persistent volumes. > Unfortunately we need to create all the required cblocks manually with 'hdfs > cblock -c user volume...' for all the Persistent Volumes. > > But it could be handled with a simple optional component. An additional > service could listen on the kubernetes event stream. In case of a new > PersistentVolumeClaim (where the storageClassName is cblock) the cblock > server could create the cblock in advance AND create the persistent volume. > > The code is very simple, and this additional component could be optional in > the cblock server.
[jira] [Commented] (HDFS-13317) Ozone: docker-compose should only be copied to hadoop-dist if Phdsl is enabled.
[ https://issues.apache.org/jira/browse/HDFS-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451187#comment-16451187 ] Hudson commented on HDFS-13317: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13317. Ozone: docker-compose should only be copied to hadoop-dist (xyao: rev b49ec73204e764516bc104a83ac9ec7e1ba2ce20) * (edit) hadoop-dist/pom.xml > Ozone: docker-compose should only be copied to hadoop-dist if Phdsl is > enabled. > --- > > Key: HDFS-13317 > URL: https://issues.apache.org/jira/browse/HDFS-13317 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13317-HDFS-7240.001.patch > >
[jira] [Commented] (HDFS-13316) Ozone: Update the ksm/scm CLI usage info
[ https://issues.apache.org/jira/browse/HDFS-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451197#comment-16451197 ] Hudson commented on HDFS-13316: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13316. Ozone: Update the ksm/scm CLI usage info. Contributed by (nanda: rev 9a177ff5ccffc87da42ff6b94bf8a2d553d3a198) * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManager.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManager.java > Ozone: Update the ksm/scm CLI usage info > > > Key: HDFS-13316 > URL: https://issues.apache.org/jira/browse/HDFS-13316 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HDFS-13315-HDFS-7240.001.patch > > > This should be oz instead of hdfs.
[jira] [Commented] (HDFS-13395) Ozone: Plugins support in HDSL Datanode Service
[ https://issues.apache.org/jira/browse/HDFS-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451202#comment-16451202 ] Hudson commented on HDFS-13395: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13395. Ozone: Plugins support in HDSL Datanode Service. Contributed (xyao: rev bb3c07fa3e4f5b5c38c251e882a357eddab0957f) * (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml * (delete) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/ObjectStoreRestPlugin.java * (edit) hadoop-dist/src/main/compose/ozone/docker-config * (add) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/OzoneHddsDatanodeService.java * (edit) hadoop-dist/src/main/compose/cblock/docker-config * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeServicePlugin.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClassicCluster.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneTestHelper.java * (edit) hadoop-ozone/acceptance-test/src/test/compose/docker-config * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java > Ozone: Plugins support in HDSL Datanode Service > --- > > Key: HDFS-13395 > URL: https://issues.apache.org/jira/browse/HDFS-13395 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: 
HDFS-13395-HDFS-7240.000.patch, > HDFS-13395-HDFS-7240.001.patch, HDFS-13395-HDFS-7240.002.patch > > > As part of Datanode, we start {{HdslDatanodeService}} if {{ozone}} is > enabled. We need a mechanism to load plugins like {{Ozone Rest Service}} as > part of {{HdslDatanodeService}} start.
[jira] [Commented] (HDFS-13302) Ozone: fix classpath of yarn components
[ https://issues.apache.org/jira/browse/HDFS-13302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451185#comment-16451185 ] Hudson commented on HDFS-13302: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13302. Ozone: fix classpath of yarn components. Contributed by (aengineer: rev fb9ba120ce9a36fc7ad0b88257464c3884bdb957) * (edit) dev-support/bin/dist-layout-stitching > Ozone: fix classpath of yarn components > --- > > Key: HDFS-13302 > URL: https://issues.apache.org/jira/browse/HDFS-13302 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13302-HDFS-7240.001.patch > > > HDFS-13258 introduced a separated classpath for hdsl/cblock/ozone components. > The reason is the behaviour of the dist-layout-stitching. The internal copy > command in the dist-layout-stitching copies the jar files only if they do > not exist in any of the existing subfolders of the share/hadoop directory. > With the new separated classpath, if some of the dependencies are already > copied to share/hadoop/hdsl or share/hadoop/ozone, they won't be copied to > the share/hadoop/yarn directory and won't be added to the classpath (as the > hdsl copy runs before the yarn copy commands). > Could be fixed easily by moving the ozone/cblock/hdsl related stuff to the > end of the dist-layout-stitching.
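The copy-if-not-exists behaviour described in HDFS-13302 can be illustrated with a toy routine (purely illustrative Java; the real logic is shell code in dev-support/bin/dist-layout-stitching, and the jar name here is an assumption):

```java
import java.util.LinkedHashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class CopyOrderSketch {
    // Copies a jar into share/hadoop/<component> only if no component
    // directory already holds it -- the dist-layout-stitching behaviour.
    static void copyIfNotExists(Map<String, Set<String>> share,
                                String component, String jar) {
        boolean alreadyPresent =
            share.values().stream().anyMatch(dir -> dir.contains(jar));
        if (!alreadyPresent) {
            share.computeIfAbsent(component, k -> new HashSet<>()).add(jar);
        }
    }

    public static void main(String[] args) {
        Map<String, Set<String>> share = new LinkedHashMap<>();
        // hdsl is stitched before yarn, so yarn's copy of a shared
        // dependency is skipped and never lands on yarn's classpath.
        copyIfNotExists(share, "hdsl", "guava.jar");
        copyIfNotExists(share, "yarn", "guava.jar");
        System.out.println(share);
    }
}
```

Moving the hdsl/ozone/cblock copy commands to the end of the script inverts the order, so the shared dependency lands under share/hadoop/yarn first.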
[jira] [Commented] (HDFS-13298) Ozone: Make ozone/hdsl/cblock modules turned off by default
[ https://issues.apache.org/jira/browse/HDFS-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451184#comment-16451184 ] Hudson commented on HDFS-13298: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13298. Ozone: Make ozone/hdsl/cblock modules turned off by default. (aengineer: rev 542e0d34234d4ea47d7e3bf96da3b2dc7de931c6) * (edit) hadoop-ozone/ozone-manager/pom.xml * (edit) hadoop-tools/hadoop-tools-dist/pom.xml * (delete) hadoop-hdsl/framework/src/main/webapps/static/angular-nvd3-1.0.9.min.js * (delete) hadoop-hdsl/framework/src/main/webapps/static/nvd3-1.8.5.min.js * (delete) hadoop-hdsl/framework/src/main/webapps/static/angular-route-1.6.4.min.js * (edit) hadoop-dist/pom.xml * (delete) hadoop-hdsl/framework/src/main/webapps/static/nvd3-1.8.5.min.css.map * (delete) hadoop-hdsl/framework/src/main/webapps/static/templates/overview.html * (delete) hadoop-hdsl/framework/src/main/webapps/static/templates/config.html * (delete) hadoop-hdsl/framework/src/main/webapps/static/d3-3.5.17.min.js * (delete) hadoop-hdsl/framework/src/main/webapps/static/ozone.js * (edit) hadoop-hdsl/framework/pom.xml * (delete) hadoop-hdsl/framework/src/main/webapps/static/templates/rpc-metrics.html * (delete) hadoop-hdsl/framework/src/main/webapps/datanode/dn.js * (delete) hadoop-hdsl/framework/src/main/webapps/static/dfs-dust.js * (edit) hadoop-hdsl/server-scm/pom.xml * (delete) hadoop-hdsl/framework/src/main/webapps/static/nvd3-1.8.5.min.css * (delete) hadoop-hdsl/framework/src/main/webapps/static/nvd3-1.8.5.min.js.map * (edit) hadoop-tools/pom.xml * (delete) hadoop-hdsl/framework/src/main/webapps/static/templates/jvm.html * (edit) dev-support/bin/dist-layout-stitching * (delete) hadoop-hdsl/framework/src/main/webapps/static/angular-1.6.4.min.js * (delete) hadoop-hdsl/framework/src/main/webapps/static/templates/menu.html * (delete) 
hadoop-hdsl/framework/src/main/webapps/static/ozone.css * (edit) pom.xml HDFS-13298. Addendum: Ozone: Make ozone/hdsl/cblock modules turned off (aengineer: rev 7ace05b3fe35f7774f66651db025bbbfb9498b8e) * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/ozone.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/angular-1.6.4.min.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css.map * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/angular-route-1.6.4.min.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/dfs-dust.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/templates/menu.html * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.css * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/ozone.css * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/templates/overview.html * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/templates/jvm.html * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/nvd3-1.8.5.min.js.map * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/templates/rpc-metrics.html * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/d3-3.5.17.min.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/datanode/dn.js * (add) hadoop-hdsl/framework/src/main/resources/webapps/static/templates/config.html > Ozone: Make ozone/hdsl/cblock modules turned off by default > --- > > Key: HDFS-13298 > URL: https://issues.apache.org/jira/browse/HDFS-13298 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13298-HDFS-7240.001.patch, > HDFS-13298-HDFS-7240.missing.patch > > > According to the 
> [proposal|https://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201803.mbox/%3CCAHfHakEoHTVFo9R3FoNTbYF-ovEEaCExtPqxhxv0UV0HXjhrhw%40mail.gmail.com%3E] > from [~owen.omalley], the Hdsl/Ozone/Cblock projects could be activated by an > optional maven profile. > In HDFS-13258 we moved the Hdsl/Ozone/Cblock code out of the hdfs/common > projects; this issue is about introducing the new profile to turn the > hdsl compilation/packaging on/off.
[jira] [Commented] (HDFS-11699) Ozone:SCM: Add support for close containers in SCM
[ https://issues.apache.org/jira/browse/HDFS-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451176#comment-16451176 ] Hudson commented on HDFS-11699: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-11699. Ozone:SCM: Add support for close containers in SCM. (nanda: rev d985f735c62d52538d505128a1d90313574de155) * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/closer/package-info.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/closer/ContainerCloser.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/node/NodeManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerMapping.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/MockNodeManager.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/closer/TestContainerCloser.java > Ozone:SCM: Add support for close containers in SCM > -- > > Key: HDFS-11699 > URL: https://issues.apache.org/jira/browse/HDFS-11699 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-11699-HDFS-7240.001.patch, > HDFS-11699-HDFS-7240.002.patch, HDFS-11699-HDFS-7240.003.patch, > HDFS-11699-HDFS-7240.004.patch > > > Add support for closed containers in SCM. When a container is closed, SCM > needs to make a set of decisions like which pool and which machines are > expected to have this container. 
SCM also needs to issue a copyContainer > command to the target datanodes so that these nodes can replicate data from > the original locations.
[jira] [Commented] (HDFS-13309) Ozone: Improve error message in case of missing nodes
[ https://issues.apache.org/jira/browse/HDFS-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451198#comment-16451198 ] Hudson commented on HDFS-13309: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13309. Ozone: Improve error message in case of missing nodes. (nanda: rev b2974fff0677d1aee86bc93baaa169276729a2bc) * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStateManager.java > Ozone: Improve error message in case of missing nodes > - > > Key: HDFS-13309 > URL: https://issues.apache.org/jira/browse/HDFS-13309 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Minor > Attachments: HDFS-13309-HDFS-7240.001.patch, > HDFS-13309-HDFS-7240.002.patch > > > During testing ozonefs with spark I found multiple error messages in the log: > {code} > scm_1 | java.lang.NullPointerException > scm_1 | at > org.apache.hadoop.ozone.scm.container.ContainerStates.ContainerStateMap.addContainer(ContainerStateMap.java:129) > scm_1 | at > org.apache.hadoop.ozone.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:308) > scm_1 | at > org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:244) > scm_1 | at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:189) > scm_1 | at > org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:291) > scm_1 | at > org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1131) > scm_1 | at > org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:109) > scm_1 | at > 
org.apache.hadoop.hdsl.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:8038) > scm_1 | at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) > scm_1 | at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1007) > scm_1 | at > org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:873) > scm_1 | at > org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:819) > scm_1 | at java.security.AccessController.doPrivileged(Native > Method) > scm_1 | at javax.security.auth.Subject.doAs(Subject.java:422) > scm_1 | at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682) > scm_1 | at > org.apache.hadoop.ipc.Server$Handler.run(Server.java:2679) > {code} > The problem is that PipelineManager.getPipeline() may return null if a > pipeline couldn't be found/established (for example, if there are not enough nodes > for a ratis ring). > In ContainerStateMap.addContainer this pipeline is expected to be non-null. > I suggest adding an additional check in > ContainerStateManager.allocateContainer and returning a more meaningful > error message.
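The guard suggested in HDFS-13309 could look roughly like the sketch below (illustrative only: `Pipeline` and the lookup are stand-ins for the branch's real ContainerStateManager/pipeline-selection code, and the message text is an assumption):

```java
import java.io.IOException;

public class AllocateContainerSketch {
    // Stand-in for the real Pipeline type.
    static class Pipeline {
        final String name;
        Pipeline(String name) { this.name = name; }
    }

    // Stand-in pipeline lookup that may return null when no ratis ring
    // can be established (e.g. not enough datanodes).
    static Pipeline getPipeline(int availableNodes, int required) {
        return availableNodes >= required ? new Pipeline("ratis-" + required) : null;
    }

    // Proposed shape of the check: fail fast with a meaningful message
    // instead of letting a NullPointerException escape from addContainer.
    static Pipeline allocateContainer(int availableNodes, int required)
            throws IOException {
        Pipeline pipeline = getPipeline(availableNodes, required);
        if (pipeline == null) {
            throw new IOException("Unable to allocate container: no pipeline could "
                + "be established (required nodes: " + required
                + ", available: " + availableNodes + ")");
        }
        return pipeline;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(allocateContainer(3, 3).name);
        try {
            allocateContainer(1, 3);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```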
[jira] [Commented] (HDFS-12965) Ozone: Documentation : Add ksm -createObjectStore command documentation.
[ https://issues.apache.org/jira/browse/HDFS-12965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451177#comment-16451177 ] Hudson commented on HDFS-12965: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12965. Ozone: Documentation : Add ksm -createObjectStore command (nanda: rev 4ebce9ce95eb6e916db36c6b19a58e0d919ac9f6) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneGettingStarted.md.vm > Ozone: Documentation : Add ksm -createObjectStore command documentation. > > > Key: HDFS-12965 > URL: https://issues.apache.org/jira/browse/HDFS-12965 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12965-HDFS-7240.001.patch > > > The ksm -createObjectStore command, once executed, gets the cluster id and scm id > from the running scm instance and persists them locally. Once ksm starts, it > verifies whether the scm instance it's connecting to has the same cluster id > and scm id as present in the version file in KSM, and fails in case the info > does not match.
[jira] [Commented] (HDFS-13391) Ozone: Make dependency of internal sub-module scope as provided in maven.
[ https://issues.apache.org/jira/browse/HDFS-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451195#comment-16451195 ] Hudson commented on HDFS-13391: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13391. Ozone: Make dependency of internal sub-module scope as (nanda: rev f3447873c2b6befeee7a98c7cfead3b6d7acd5bb) * (edit) hadoop-hdsl/common/pom.xml * (edit) hadoop-ozone/ozone-manager/pom.xml * (edit) hadoop-ozone/pom.xml * (edit) hadoop-ozone/objectstore-service/pom.xml * (edit) dev-support/bin/dist-layout-stitching * (edit) hadoop-hdsl/server-scm/pom.xml * (edit) hadoop-ozone/common/pom.xml * (edit) hadoop-ozone/integration-test/pom.xml * (edit) hadoop-ozone/client/pom.xml * (edit) hadoop-cblock/pom.xml * (edit) hadoop-hdsl/framework/pom.xml * (edit) hadoop-ozone/tools/pom.xml * (edit) hadoop-hdsl/pom.xml * (edit) hadoop-hdsl/container-service/pom.xml * (edit) hadoop-hdsl/client/pom.xml * (edit) hadoop-hdsl/tools/pom.xml * (edit) hadoop-tools/hadoop-ozone/pom.xml > Ozone: Make dependency of internal sub-module scope as provided in maven. > - > > Key: HDFS-13391 > URL: https://issues.apache.org/jira/browse/HDFS-13391 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13391-HDFS-7240.000.patch > > > Whenever an internal sub-module is added as a dependency, the scope has to be > set to {{provided}}. > If the scope is not mentioned it falls back to the default scope, which is > {{compile}}; this causes the dependency jar (sub-module jar) to be copied to the > {{share//lib}} directory while packaging. Since we use > {{copyifnotexists}} logic, the binary jar of the actual sub-module will not > be copied. This will result in the jar being placed in the wrong location > inside the distribution.
[jira] [Commented] (HDFS-13196) Ozone: dozone: make example docker-compose files version independent
[ https://issues.apache.org/jira/browse/HDFS-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451178#comment-16451178 ] Hudson commented on HDFS-13196: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13196. Ozone: dozone: make example docker-compose files version (aengineer: rev 966705ad8638c8089ce699af0d5aec640470d4d3) * (edit) hadoop-dist/pom.xml * (delete) dev-support/compose/ozone/docker-config * (add) hadoop-dist/src/main/compose/cblock/README.md * (add) hadoop-dist/src/main/compose/ozone/docker-compose.yaml * (add) hadoop-dist/src/main/compose/cblock/.env * (add) hadoop-dist/src/main/compose/cblock/docker-config * (delete) dev-support/compose/ozone/docker-compose.yaml * (delete) dev-support/compose/cblock/docker-compose.yaml * (delete) dev-support/compose/cblock/docker-config * (add) hadoop-dist/src/main/compose/cblock/docker-compose.yaml * (add) hadoop-dist/src/main/compose/ozone/docker-config * (delete) dev-support/compose/cblock/.env * (delete) dev-support/compose/ozone/.env * (add) hadoop-dist/src/main/compose/ozone/.env * (delete) dev-support/compose/cblock/README.md > Ozone: dozone: make example docker-compose files version independent > > > Key: HDFS-13196 > URL: https://issues.apache.org/jira/browse/HDFS-13196 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13196-HDFS-7240.001.patch > > > The current version of the docker-compose files at dev-support/compose/ are > version dependent as the path of the dist folder should be added to the > docker-compose file. > In this patch I propose to add these compose files to the dist project and > replace the version number in the docker-compose files during the build. > Please note: it doesn't mean that the docker-compose files are part of the > distribution. 
The compose files are copied to hadoop-dist/target/compose, > which is not part of the hadoop distribution. > This patch is aligned with HADOOP-15257, which uses exactly the same approach > to provide an example docker-compose file for an hdfs cluster.
[jira] [Commented] (HDFS-13133) Ozone: OzoneFileSystem: Calling delete with non-existing path shouldn't be logged on ERROR level
[ https://issues.apache.org/jira/browse/HDFS-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451171#comment-16451171 ] Hudson commented on HDFS-13133: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13133. Ozone: OzoneFileSystem: Calling delete with non-existing (msingh: rev f3d07efac1a7007ed0493486a6aff26cbaa09b22) * (edit) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java > Ozone: OzoneFileSystem: Calling delete with non-existing path shouldn't be > logged on ERROR level > > > Key: HDFS-13133 > URL: https://issues.apache.org/jira/browse/HDFS-13133 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13133-HDFS-7240.001.patch > > > During the test of OzoneFileSystem with spark I noticed ERROR messages > multiple times: > Something like this: > {code} > 2018-02-11 15:54:54 ERROR OzoneFileSystem:409 - Couldn't delete > o3://bucket1.test/user/hadoop/.sparkStaging/application_1518349702045_0008 - > does not exist > {code} > I checked the other implementations, and they use DEBUG level. I think it's > expected that the path sometimes points to a non-existing dir/file. > To be consistent with the other implementations I propose lowering the log > level to debug.
> Examples from other file systems: > S3AFileSystem: > {code} > } catch (FileNotFoundException e) { > LOG.debug("Couldn't delete {} - does not exist", f); > instrumentation.errorIgnored(); > return false; > } catch (AmazonClientException e) { > throw translateException("delete", f, e); > } > {code} > Aliyun: > {code} >try { > return innerDelete(getFileStatus(path), recursive); > } catch (FileNotFoundException e) { > LOG.debug("Couldn't delete {} - does not exist", path); > return false; > } > {code} > SFTP: > {code} >} catch (FileNotFoundException e) { > // file not found, no need to delete, return true > return false; > } > {code} > SwiftNativeFileSystem: > {code} > try { > return store.delete(path, recursive); > } catch (FileNotFoundException e) { > //base path was not found. > return false; > } > {code}
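The HDFS-13133 change follows the same pattern as the examples above; a minimal self-contained sketch of that catch-clause shape (the logger stub and `innerDelete` here are stand-ins, not the actual OzoneFileSystem code):

```java
import java.io.FileNotFoundException;

public class DeleteLogLevelSketch {
    // Minimal stand-in logger; the real code uses SLF4J's Logger.debug.
    static void debug(String fmt, Object arg) {
        System.out.println("DEBUG " + fmt.replace("{}", String.valueOf(arg)));
    }

    // Stand-in for the underlying delete that throws when the path is missing.
    static boolean innerDelete(String path) throws FileNotFoundException {
        if (!path.equals("/exists")) {
            throw new FileNotFoundException(path);
        }
        return true;
    }

    // Pattern proposed in HDFS-13133: a missing path is an expected case,
    // so log at DEBUG and return false instead of logging an ERROR.
    static boolean delete(String path) {
        try {
            return innerDelete(path);
        } catch (FileNotFoundException e) {
            debug("Couldn't delete {} - does not exist", path);
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(delete("/exists"));
        System.out.println(delete("/missing"));
    }
}
```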
[jira] [Commented] (HDFS-13368) Ozone:TestEndPoint tests are failing consistently
[ https://issues.apache.org/jira/browse/HDFS-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451193#comment-16451193 ] Hudson commented on HDFS-13368: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13368. Ozone:TestEndPoint tests are failing consistently. (nanda: rev d0488c781ba50bc293f0c960efd1970dffa01731) * (edit) hadoop-hdsl/server-scm/pom.xml * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java > Ozone:TestEndPoint tests are failing consistently > - > > Key: HDFS-13368 > URL: https://issues.apache.org/jira/browse/HDFS-13368 > Project: Hadoop HDFS > Issue Type: Bug > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13368-HDFS-7240.000.patch, > HDFS-13368-HDFS-7240.001.patch > > > With HDFS-13300, the hostName and ipAddress fields in the DatanodeDetails.proto file > were made required fields. These parameters are not set in TestEndPoint, which > leads these tests to fail consistently.
> TestEndPoint#testRegisterToInvalidEndpoint > {code:java} > com.google.protobuf.UninitializedMessageException: Message missing required > fields: ipAddress, hostName > at > com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770) > at > org.apache.hadoop.hdsl.protocol.proto.HdslProtos$DatanodeDetailsProto$Builder.build(HdslProtos.java:1756) > at > org.apache.hadoop.ozone.container.common.TestEndPoint.registerTaskHelper(TestEndPoint.java:236) > at > org.apache.hadoop.ozone.container.common.TestEndPoint.testRegisterToInvalidEndpoint(TestEndPoint.java:257) > {code} > TestEndPoint#testHeartbeatTaskToInvalidNode > {code:java} > 2018-03-29 18:14:54,140 WARN impl.RaftServerProxy: FAILED new RaftServerProxy > attempt #1/5: java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: > com/codahale/metrics/Timer, sleep 500ms and then retry. > java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: > com/codahale/metrics/Timer > at > org.apache.ratis.server.storage.RaftLogWorker.(RaftLogWorker.java:104) > at > org.apache.ratis.server.storage.SegmentedRaftLog.(SegmentedRaftLog.java:113) > at org.apache.ratis.server.impl.ServerState.initLog(ServerState.java:151) > at org.apache.ratis.server.impl.ServerState.(ServerState.java:101){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
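The UninitializedMessageException above comes from protobuf's required-field check in build(). A minimal sketch of that failure mode (hypothetical class names, not the generated HdslProtos code) and of the fix, which is to always populate the required fields before building:

```java
// Sketch of protobuf2 required-field semantics: build() must fail fast when
// a required field was never set, which is what broke TestEndPoint.
public class DatanodeDetailsSketch {
    private final String ipAddress;
    private final String hostName;

    private DatanodeDetailsSketch(String ipAddress, String hostName) {
        this.ipAddress = ipAddress;
        this.hostName = hostName;
    }

    public String getIpAddress() { return ipAddress; }
    public String getHostName() { return hostName; }

    static class Builder {
        private String ipAddress;
        private String hostName;

        Builder setIpAddress(String ip) { this.ipAddress = ip; return this; }
        Builder setHostName(String host) { this.hostName = host; return this; }

        // Like the generated protobuf build(): reject missing required fields.
        DatanodeDetailsSketch build() {
            if (ipAddress == null || hostName == null) {
                throw new IllegalStateException(
                    "Message missing required fields: ipAddress, hostName");
            }
            return new DatanodeDetailsSketch(ipAddress, hostName);
        }
    }

    public static void main(String[] args) {
        boolean failed = false;
        try {
            new Builder().build(); // no required fields set, as in the failing test
        } catch (IllegalStateException e) {
            failed = true;
        }
        // The fix: set both required fields before calling build().
        DatanodeDetailsSketch ok = new Builder()
            .setIpAddress("127.0.0.1")
            .setHostName("localhost")
            .build();
        System.out.println(failed + " " + ok.getHostName());
    }
}
```

The patch applies the same idea to the test helpers: every DatanodeDetails built in TestEndPoint must carry ipAddress and hostName.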
[jira] [Commented] (HDFS-13405) Ozone: Rename HDSL to HDDS
[ https://issues.apache.org/jira/browse/HDFS-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451199#comment-16451199 ] Hudson commented on HDFS-13405: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13405. Ozone: Rename HDSL to HDDS. Contributed by Ajay Kumar, Elek (aengineer: rev 651a05a18135ee39b6640f7e386acb086be1cf51) * (delete) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/replication/InProgressPool.java * (delete) hadoop-hdsl/common/pom.xml * (delete) hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/common/statemachine/StateMachine.java * (edit) hadoop-cblock/server/src/main/java/org/apache/hadoop/cblock/proto/MountVolumeResponse.java * (delete) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/package-info.java * (delete) hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/container/common/helpers/PipelineChannel.java * (edit) hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/package-info.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/NodeStat.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/metrics/SCMNodeStat.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/replication/ContainerSupervisor.java * (add) hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/OzoneCommandHandler.java * (delete) hadoop-hdsl/common/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java * (delete) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/HeartbeatQueueItem.java * (edit) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/OzoneRestUtils.java * (delete) 
hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManagerHttpServer.java * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/package-info.java * (delete) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/SCMNodeManager.java * (delete) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/protocol/package-info.java * (delete) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerMapping.java * (delete) hadoop-hdsl/server-scm/src/main/webapps/scm/scm-overview.html * (delete) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/RegisterEndpointTask.java * (delete) hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/protocol/ScmBlockLocationProtocol.java * (delete) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/package-info.java * (add) hadoop-hdds/common/src/main/resources/ozone-default.xml * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java * (edit) hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java * (delete) hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/container/common/helpers/DeleteBlockResult.java * (add) hadoop-hdds/framework/src/main/resources/webapps/static/angular-nvd3-1.0.9.min.js * (add) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/replication/package-info.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestAllocateContainer.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/package-info.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/ContainerCache.java * (delete) 
hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java * (delete) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/pipelines/ratis/package-info.java * (add) hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java * (add) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/VersionResponse.java * (add) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicy.java * (edit) hadoop-ozone/ozone-manager/pom.xml * (edit) hadoop-dist/src/main/compose/ozone/docker-config * (delete) hadoop-hdsl/common/src/main/java/org/apache/hadoop/ozone/protocolPB/package-info.java * (add) hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java * (delete) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/contain
[jira] [Commented] (HDFS-12986) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-0f7169d-SNAPSHOT)
[ https://issues.apache.org/jira/browse/HDFS-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451158#comment-16451158 ] Hudson commented on HDFS-12986: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12986. Ozone: Update ozone to latest ratis snapshot build (szetszwo: rev 65b90385fdf78aeef7067cd55b41177df3782a8b) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/XceiverClientRatis.java * (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java * (edit) hadoop-project/pom.xml > Ozone: Update ozone to latest ratis snapshot build > (0.1.1-alpha-0f7169d-SNAPSHOT) > - > > Key: HDFS-12986 > URL: https://issues.apache.org/jira/browse/HDFS-12986 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12986-HDFS-7240.001.patch, > HDFS-12986-HDFS-7240.002.patch, HDFS-12986-HDFS-7240.003.patch, > HDFS-12986-HDFS-7240.004.patch, HDFS-12986-HDFS-7240.005.patch > > > This jira will update ozone to latest snapshot > release-0.1.1-alpha-0f7169d-SNAPSHOT -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13343) Ozone: Provide docker based acceptance testing on pseudo cluster
[ https://issues.apache.org/jira/browse/HDFS-13343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451190#comment-16451190 ] Hudson commented on HDFS-13343: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13343. Ozone: Provide docker based acceptance testing on pseudo (aengineer: rev eabea3a559337b7f9a0d7a6a18f1682fd80f709b) * (add) hadoop-ozone/acceptance-test/src/test/compose/.env * (add) hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot * (add) hadoop-ozone/acceptance-test/pom.xml * (add) hadoop-ozone/acceptance-test/src/test/compose/docker-compose.yaml * (add) hadoop-ozone/acceptance-test/src/test/compose/docker-config * (edit) pom.xml * (add) hadoop-ozone/acceptance-test/README.md > Ozone: Provide docker based acceptance testing on pseudo cluster > > > Key: HDFS-13343 > URL: https://issues.apache.org/jira/browse/HDFS-13343 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13343-HDFS-7240.001.patch > > > As a complement to the existing MiniOzoneCluster-based integration tests, we > need to somehow test the final artifacts. > I propose to create an additional maven project which could contain simple > test scenarios to start/stop the cluster, use the cli, etc. > This could be done with the declarative approach of Robot Framework. It could > be integrated into maven and could start and stop docker based pseudo clusters > similar to the existing dev docker-compose approach. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13080) Ozone: Make finalhash in ContainerInfo of StorageContainerDatanodeProtocol.proto optional
[ https://issues.apache.org/jira/browse/HDFS-13080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451165#comment-16451165 ] Hudson commented on HDFS-13080: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13080. Ozone: Make finalhash in ContainerInfo of (nanda: rev c17782456b4be8abb21868ea4bfb259eda1f1bed) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/StorageContainerDatanodeProtocol.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java > Ozone: Make finalhash in ContainerInfo of > StorageContainerDatanodeProtocol.proto optional > - > > Key: HDFS-13080 > URL: https://issues.apache.org/jira/browse/HDFS-13080 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Elek, Marton >Priority: Major > Attachments: HDFS-13080-HDFS-7240.000.patch, > HDFS-13080-HDFS-7240.001.patch > > > ContainerInfo in StorageContainerDatanodeProtocol.proto has a required field, > {{finalhash}} which will be null for an open container, this has to be made > as an optional field. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13018) Block Storage: make the iscsi target address configurable for discovery
[ https://issues.apache.org/jira/browse/HDFS-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451160#comment-16451160 ] Hudson commented on HDFS-13018: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13018. Block Storage: make the iscsi target address configurable for (msingh: rev 7ea3a3aa5d575b6783353b517e08d4fa6595b0d9) * (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/SCSITargetDaemon.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/cblock/CBlockConfigKeys.java > Block Storage: make the iscsi target address configurable for discovery > -- > > Key: HDFS-13018 > URL: https://issues.apache.org/jira/browse/HDFS-13018 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13018-HDFS-7240.001.patch, > HDFS-13018-HDFS-7240.002.patch > > > The current jscsi server responds with the targetAddress (as ip) and 3260 (as > port) during iscsi discovery. But in some cases we need to configure > these values. > For example, in kubernetes the iscsi server could run behind a service where > the address (where the jscsi server is available from the cluster) could be > different from the targetAddress where the server is listening. > I propose to add two more configuration keys to override the default > address/port, but this also requires modification in the jscsi > project. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
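The override described above can be sketched as a simple fallback lookup. This is a hedged illustration only: the key names below are hypothetical, not the actual constants added to CBlockConfigKeys, and plain java.util.Properties stands in for Hadoop's Configuration.

```java
import java.util.Properties;

// Sketch: the address advertised during iscsi discovery defaults to the
// server's bind address, but can be overridden by configuration (useful when
// the jscsi server sits behind a Kubernetes service).
public class IscsiDiscoveryAddress {
    // Hypothetical key names for illustration.
    static final String KEY_ADVERTISED_IP = "dfs.cblock.iscsi.advertised.ip";
    static final String KEY_ADVERTISED_PORT = "dfs.cblock.iscsi.advertised.port";

    static String discoveryAddress(Properties conf, String bindIp, int bindPort) {
        // Fall back to the bind address/port when no override is configured.
        String ip = conf.getProperty(KEY_ADVERTISED_IP, bindIp);
        String port = conf.getProperty(KEY_ADVERTISED_PORT, String.valueOf(bindPort));
        return ip + ":" + port;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Without overrides, discovery advertises the bind address.
        System.out.println(discoveryAddress(conf, "10.0.0.5", 3260));
        // With overrides, the externally reachable service address is advertised.
        conf.setProperty(KEY_ADVERTISED_IP, "iscsi.example.com");
        conf.setProperty(KEY_ADVERTISED_PORT, "30260");
        System.out.println(discoveryAddress(conf, "10.0.0.5", 3260));
    }
}
```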
[jira] [Commented] (HDFS-12522) Ozone: Remove the Priority Queues used in the Container State Manager
[ https://issues.apache.org/jira/browse/HDFS-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451166#comment-16451166 ] Hudson commented on HDFS-12522: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12522. Ozone: Remove the Priority Queues used in the Container (aengineer: rev 7955ee40457fbbe7453071e3aed6d12601fd7953) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/container/common/helpers/ContainerInfo.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerReport.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/Ozone.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/StorageContainerDatanodeProtocol.proto * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/ContainerStates/TestContainerStateMap.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/ContainerStates/BenchmarkContainerStateMap.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/package-info.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStateManager.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/ContainerAttribute.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/TestContainerMapping.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/TestContainerStateManager.java * (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/ContainerStates/TestContainerAttribute.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/package-info.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/ContainerState.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/ContainerStateMap.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerMapping.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/TestUtils/ReplicationDatanodeStateManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/ContainerID.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/cli/container/ContainerCommandHandler.java * (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml > Ozone: Remove the Priority Queues used in the Container State Manager > - > > Key: HDFS-12522 > URL: https://issues.apache.org/jira/browse/HDFS-12522 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12522-HDFS-7240.001.patch, > HDFS-12522-HDFS-7240.002.patch, HDFS-12522-HDFS-7240.003.patch, > HDFS-12522-HDFS-7240.004.patch > > > During code review of HDFS-12387, it was suggested that we remove the > priority queues that were used in ContainerStateManager. This JIRA tracks that > issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
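The patch above replaces priority queues with an attribute-indexed state map (ContainerStateMap/ContainerAttribute). A minimal sketch of that idea, with simplified names that are not the actual SCM classes: keep a per-state index so that "all containers in state X" is a direct map lookup rather than a queue scan.

```java
import java.util.Collections;
import java.util.EnumMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Sketch of an attribute-indexed container state map. Real SCM code also
// handles locking, invalid transitions, and more attributes than state.
public class ContainerStateIndex {
    enum State { OPEN, CLOSED }

    private final Map<State, Set<Long>> index = new EnumMap<>(State.class);

    // Register a container under its current state.
    void insert(State s, long containerId) {
        index.computeIfAbsent(s, k -> new LinkedHashSet<>()).add(containerId);
    }

    // A state transition updates both buckets.
    void update(State from, State to, long containerId) {
        index.getOrDefault(from, Collections.emptySet()).remove(containerId);
        insert(to, containerId);
    }

    // O(1) lookup of all containers in a given state.
    Set<Long> get(State s) {
        return index.getOrDefault(s, Collections.emptySet());
    }

    public static void main(String[] args) {
        ContainerStateIndex idx = new ContainerStateIndex();
        idx.insert(State.OPEN, 1L);
        idx.insert(State.OPEN, 2L);
        idx.update(State.OPEN, State.CLOSED, 1L);
        System.out.println(idx.get(State.OPEN) + " " + idx.get(State.CLOSED));
    }
}
```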
[jira] [Commented] (HDFS-13078) Ozone: Update Ratis on Ozone to 0.1.1-alpha-8fd74ed-SNAPSHOT, to fix large chunk reads (>4M) from Datanodes
[ https://issues.apache.org/jira/browse/HDFS-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451174#comment-16451174 ] Hudson commented on HDFS-13078: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13078. Ozone: Update Ratis on Ozone to (aengineer: rev c8a8ee5000692feb2354edb42d86578155b9c6d2) * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/tools/TestDataValidate.java * (edit) hadoop-project/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/ratis/RatisHelper.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/tools/TestFreon.java > Ozone: Update Ratis on Ozone to 0.1.1-alpha-8fd74ed-SNAPSHOT, to fix large > chunk reads (>4M) from Datanodes > --- > > Key: HDFS-13078 > URL: https://issues.apache.org/jira/browse/HDFS-13078 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13078-HDFS-7240.001.patch, > HDFS-13078-HDFS-7240.002.patch, HDFS-13078-HDFS-7240.003.patch, > HDFS-13078-HDFS-7240.004.patch, HDFS-13078-HDFS-7240.005.patch > > > In Ozone, reads over Ratis fail because the stream is closed before the > reply is received. 
> {code} > Jan 23, 2018 1:27:14 PM > org.apache.ratis.shaded.io.grpc.netty.NettyServerHandler onStreamError > WARNING: Stream Error > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2Exception$StreamException: > Stream closed before write could take place > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:149) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$FlowState.cancel(DefaultHttp2RemoteFlowController.java:499) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$FlowState.cancel(DefaultHttp2RemoteFlowController.java:480) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$1.onStreamClosed(DefaultHttp2RemoteFlowController.java:105) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection.notifyClosed(DefaultHttp2Connection.java:349) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.removeFromActiveStreams(DefaultHttp2Connection.java:985) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.deactivate(DefaultHttp2Connection.java:941) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultStream.close(DefaultHttp2Connection.java:497) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultStream.close(DefaultHttp2Connection.java:503) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.closeStream(Http2ConnectionHandler.java:587) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onRstStreamRead(DefaultHttp2ConnectionDecoder.java:356) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onRstStreamRead(Http2InboundFrameLogger.java:80) > at > 
org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2FrameReader.readRstStreamFrame(DefaultHttp2FrameReader.java:516) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:260) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:388) > at > org.apache.ratis.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:448) > at > org.apache.ratis.shaded.io.netty.handler.codec.ByteToMessageDecode
[jira] [Commented] (HDFS-13320) Ozone:Support for MicrobenchMarking Tool
[ https://issues.apache.org/jira/browse/HDFS-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451196#comment-16451196 ] Hudson commented on HDFS-13320: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13320. Ozone: Support for MicrobenchMarking Tool. Contributed by (nanda: rev d34288ae34142670f2b8569f63926b86ccc0ff2b) * (edit) hadoop-ozone/common/src/main/bin/oz * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java * (delete) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/tools/Freon.java * (add) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestDataValidate.java * (edit) hadoop-project/pom.xml * (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/container/ContainerStates/BenchmarkContainerStateMap.java * (delete) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/tools/OzoneGetConf.java * (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/package-info.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/package-info.java * (add) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/OzoneGetConf.java * (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/tools/package-info.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerStates/ContainerStateMap.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java * (add) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestFreon.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java * (edit) hadoop-ozone/tools/pom.xml * (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/tools/TestDataValidate.java * (delete) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/tools/package-info.java * (add) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/package-info.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/package-info.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/Freon.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java * (delete) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/tools/package-info.java * (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/container/ContainerStates/TestContainerStateMap.java * (delete) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/tools/TestFreon.java * (edit) hadoop-ozone/tools/src/test/java/org/apache/hadoop/test/OzoneTestDriver.java * (add) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkContainerStateMap.java > Ozone:Support for MicrobenchMarking Tool > > > Key: HDFS-13320 > URL: https://issues.apache.org/jira/browse/HDFS-13320 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-13320-HDFS-7240.001.patch, > HDFS-13320-HDFS-7240.002.patch, HDFS-13320-HDFS-7240.003.patch, > HDFS-13320-HDFS-7240.004.patch > > > This Jira proposes to add a micro benchmarking tool called Genesis which > executes a set of HDSL/Ozone benchmarks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13149) Ozone: Rename Corona to Freon
[ https://issues.apache.org/jira/browse/HDFS-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451172#comment-16451172 ] Hudson commented on HDFS-13149: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13149. Ozone: Rename Corona to Freon. Contributed by Anu Engineer. (aengineer: rev fc84744f757992b4a1dfdd41bc7a6303f17d0406) * (delete) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/tools/TestCorona.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/tools/Freon.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/test/OzoneTestDriver.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/tools/Corona.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneGettingStarted.md.vm * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneOverview.md * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/tools/TestFreon.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java > Ozone: Rename Corona to Freon > - > > Key: HDFS-13149 > URL: https://issues.apache.org/jira/browse/HDFS-13149 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Trivial > Fix For: HDFS-7240 > > Attachments: HDFS-13149-HDFS-7240.001.patch > > > [~jghoman] (while reviewing Ozone) and [~chris.douglas] (in a comment on HDFS-12992) > both pointed out that Corona is a name used by a YARN project from Facebook. > This Jira proposes to rename Corona (a chemical process that produces Ozone) > to Freon (CFCs), something that stresses Ozone. Thanks to [~arpitagarwal] for > coming up with both names. 
> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
[ https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451203#comment-16451203 ] Hudson commented on HDFS-13324: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13324. Ozone: Remove InfoPort and InfoSecurePort from (nanda: rev d8fd9220dadcfac5e1168ebee7d5c1380646d419) * (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestDeletedBlockLog.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java * (edit) hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/package-info.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMMetrics.java * (edit) hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ksm/TestKSMSQLCli.java * (edit) hadoop-hdds/common/src/main/proto/hdds.proto * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManager.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java * (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java > Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails > -- > > Key: HDFS-13324 > URL: https://issues.apache.org/jira/browse/HDFS-13324 > Project: Hadoop HDFS > Issue Type: Sub-task > 
Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Fix For: HDFS-7240 > > Attachments: HDFS-13324-HDFS-7240.000.patch, > HDFS-13324-HDFS-7240.001.patch, HDFS-13324-HDFS-7240.002.patch, > HDFS-13324-HDFS-7240.003.patch > > > We have removed the dependency of DatanodeID in HDSL/Ozone and there is no > need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and > InfoSecurePort from DatanodeDetails. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13070) Ozone: SCM: Support for container replica reconciliation - 1
[ https://issues.apache.org/jira/browse/HDFS-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451175#comment-16451175 ] Hudson commented on HDFS-13070: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13070. Ozone: SCM: Support for container replica reconciliation - (nanda: rev 74484754ac30300fce5b5682ee4ba464dfe3108d) * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/replication/ContainerSupervisor.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/replication/InProgressPool.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/Mapping.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/node/NodeManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/TestUtils/ReplicationNodeManagerMock.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/replication/ContainerReplicationManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/node/SCMNodeManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java * (delete) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/MockNodeManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerMapping.java * (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerSupervisor.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/container/TestContainerMapping.java > Ozone: SCM: Support for container replica reconciliation - 1 > > > Key: HDFS-13070 > URL: https://issues.apache.org/jira/browse/HDFS-13070 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13070-HDFS-7240.000.patch, > HDFS-13070-HDFS-7240.001.patch > > > SCM should process container reports and identify under replicated containers > for re-replication. {{ContainerSupervisor}} should take one NodePool at a > time and start processing the container reports of datanodes in that > NodePool. In this jira we just integrate {{ContainerSupervisor}} into SCM, > actual reconciliation logic will be handled in follow-up jiras. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13137) Ozone: Ozonefs read fails because ChunkGroupInputStream#read does not iterate through all the blocks in the key
[ https://issues.apache.org/jira/browse/HDFS-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451182#comment-16451182 ] Hudson commented on HDFS-13137: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13137. Ozone: Ozonefs read fails because ChunkGroupInputStream#read (aengineer: rev faa01f32027de4593026941f0a36cae644f6d436) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/ratis/RatisManagerImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/protocolPB/KeySpaceManagerProtocolServerSideTranslatorPB.java * (add) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/ksm/KeyManagerImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/standalone/StandaloneManagerImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java * (edit) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java * (edit) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml > Ozone: Ozonefs read fails because ChunkGroupInputStream#read does not iterate > through all the blocks in the key > --- > > Key: HDFS-13137 > URL: https://issues.apache.org/jira/browse/HDFS-13137 > 
Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13137-HDFS-7240.001.patch, > HDFS-13137-HDFS-7240.002.patch, HDFS-13137-HDFS-7240.003.patch, > HDFS-13137-HDFS-7240.004.patch, HDFS-13137-HDFS-7240.005.patch > > > OzoneFilesystem put is failing with the following exception. This happens > because ChunkGroupInputStream#read does not iterate through all the blocks in > the key. > {code} > [hdfs@y129 ~]$ time /opt/hadoop/hadoop-3.1.0-SNAPSHOT/bin/hdfs dfs -put test3 > /test3a > 18/02/12 13:36:21 WARN util.NativeCodeLoader: Unable to load native-hadoop > library for your platform... using builtin-java classes where applicable > 2018-02-12 13:36:22,211 [main] INFO - Using > org.apache.hadoop.ozone.client.rpc.RpcClient as client protocol. > 18/02/12 13:37:25 WARN util.ShutdownHookManager: ShutdownHook > 'ClientFinalizer' timeout, java.util.concurrent.TimeoutException > java.util.concurrent.TimeoutException > at java.util.concurrent.FutureTask.get(FutureTask.java:205) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
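The root cause described above, ChunkGroupInputStream#read stopping at the first block instead of walking every block in the key, can be sketched with a simplified, illustrative read loop (class and field names are mine, not the actual Ozone client classes):

```java
import java.util.List;

/** Simplified model of a key split across several blocks: read() must
 *  keep advancing to the next block until the caller's buffer is full
 *  or the key is exhausted (the bug was stopping at the first block). */
class MultiBlockInputStream {
    private final List<byte[]> blocks; // one byte[] per block of the key
    private int blockIndex = 0;        // current block
    private int blockPos = 0;          // position inside current block

    MultiBlockInputStream(List<byte[]> blocks) {
        this.blocks = blocks;
    }

    int read(byte[] buf, int off, int len) {
        int copied = 0;
        while (copied < len && blockIndex < blocks.size()) {
            byte[] block = blocks.get(blockIndex);
            int available = block.length - blockPos;
            if (available == 0) {      // current block exhausted:
                blockIndex++;          // move on instead of returning
                blockPos = 0;
                continue;
            }
            int n = Math.min(available, len - copied);
            System.arraycopy(block, blockPos, buf, off + copied, n);
            blockPos += n;
            copied += n;
        }
        return copied == 0 ? -1 : copied; // -1 signals end of key
    }
}
```

The real fix has to do the same bookkeeping across the per-block chunk streams, plus seek and position tracking; the point is only that read() must loop across block boundaries.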
[jira] [Commented] (HDFS-13072) Ozone: DatanodeStateMachine: Handling Uncaught Exception in command handler thread
[ https://issues.apache.org/jira/browse/HDFS-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451163#comment-16451163 ] Hudson commented on HDFS-13072: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13072. Ozone: DatanodeStateMachine: Handling Uncaught Exception in (nanda: rev d069734a418f741f58d794014d65f9b6a3243235) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java > Ozone: DatanodeStateMachine: Handling Uncaught Exception in command handler > thread > -- > > Key: HDFS-13072 > URL: https://issues.apache.org/jira/browse/HDFS-13072 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13072-HDFS-7240.000.patch > > > Datanode has {{CommandHandlerThread}} which is responsible for handling the > commands that are sent by SCM. Whenever we encounter an unhandled exception > in this Thread, we have to restart the Thread after logging an error message.
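A hedged sketch of the requested behavior (log, then recreate the command handler thread on an uncaught exception) using Java's Thread.UncaughtExceptionHandler; the names are illustrative, not the DatanodeStateMachine code:

```java
/** Sketch: recreate the command handler thread when it dies from an
 *  uncaught exception, after logging the error (names illustrative;
 *  this is not the actual DatanodeStateMachine implementation). */
class RestartingCommandHandler {
    private final Runnable work;

    RestartingCommandHandler(Runnable work) {
        this.work = work;
        start();
    }

    private void start() {
        Thread t = new Thread(work, "CommandHandlerThread");
        t.setUncaughtExceptionHandler((thread, error) -> {
            // log first, then bring the handler thread back up
            System.err.println("Command handler thread died: " + error);
            start();
        });
        t.start();
    }
}
```

A production version would bound or back off the restarts so that a deterministic failure cannot spin forever.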
[jira] [Commented] (HDFS-13335) Ozone: remove hdsl/cblock/ozone source code from the official apache source release artifact
[ https://issues.apache.org/jira/browse/HDFS-13335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451189#comment-16451189 ] Hudson commented on HDFS-13335: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13335. Ozone: remove hdsl/cblock/ozone source code from the (aengineer: rev 14e268d5792c8bad4c37ed2793346e6c9b90e777) * (add) hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdsl.xml * (edit) pom.xml * (edit) hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml > Ozone: remove hdsl/cblock/ozone source code from the official apache source > release artifact > > > Key: HDFS-13335 > URL: https://issues.apache.org/jira/browse/HDFS-13335 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13335-HDFS-7240.001.patch > > > [~andrew.wang] pointed out in the [adoption > thread|https://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201803.mbox/%3CCAGB5D2a5qL%3DG266pqWm1_%2ByTvpPyjJ%2B_ebt0uf-bk9LHbnp1TA%40mail.gmail.com%3E] > that the official hadoop source release should not contain cblock/hdsl/ozone > code. > This patch contains a simple adjustment of the assembly descriptor to remove > them, and an additional separate profile (hdsl-src) to create a full-source > tar file (it won't be invoked by the create-release script).
[jira] [Commented] (HDFS-12879) Ozone : add scm init command to document.
[ https://issues.apache.org/jira/browse/HDFS-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451164#comment-16451164 ] Hudson commented on HDFS-12879: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12879. Ozone : add scm init command to document. Contributed by (xyao: rev 4f84fe1544f8961c28dc7a42c1189583d765c0c7) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneGettingStarted.md.vm * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/OzoneCommandShell.md > Ozone : add scm init command to document. > - > > Key: HDFS-12879 > URL: https://issues.apache.org/jira/browse/HDFS-12879 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Rahul Pathak >Priority: Minor > Labels: newbie > Fix For: HDFS-7240 > > Attachments: HDFS-12879-HDFS-7240.001.patch > > > When an Ozone cluster is initialized, before starting SCM through {{hdfs > --daemon start scm}}, the command {{hdfs scm -init}} needs to be called > first. But it seems this command is not documented. We should add this > note to the document.
[jira] [Commented] (HDFS-13301) Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto
[ https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451201#comment-16451201 ] Hudson commented on HDFS-13301: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13301. Ozone: Remove containerPort, ratisPort and ozoneRestPort (nanda: rev 8475d6bb55c9b3e78478d6b9d1e4be65e5b604cf) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto > Ozone: Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and > DatanodeIDProto > > > Key: HDFS-13301 > URL: https://issues.apache.org/jira/browse/HDFS-13301 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13301-HDFS-7240.000.patch, > HDFS-13301-HDFS-7240.001.patch > > > HDFS-13300 decouples DatanodeID from HDSL/Ozone, it's now safe to remove > {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} > and {{DatanodeIDProto}}. This jira is to track the removal of Ozone related > fields from {{DatanodeID}} and {{DatanodeIDProto}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13340) Ozone: Fix false positive RAT warning when project built without hds/cblock
[ https://issues.apache.org/jira/browse/HDFS-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451191#comment-16451191 ] Hudson commented on HDFS-13340: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13340. Ozone: Fix false positive RAT warning when project built (xyao: rev ab8fb012482c98f9c22b06769bfd6f859c3283d2) * (edit) hadoop-cblock/server/src/test/resources/dynamicprovisioner/expected1-pv.json * (edit) hadoop-cblock/server/pom.xml * (edit) hadoop-cblock/server/src/test/resources/dynamicprovisioner/input1-pvc.json * (edit) hadoop-ozone/common/dev-support/findbugsExcludeFile.xml * (edit) pom.xml * (edit) hadoop-ozone/pom.xml > Ozone: Fix false positive RAT warning when project built without hds/cblock > --- > > Key: HDFS-13340 > URL: https://issues.apache.org/jira/browse/HDFS-13340 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13340-HDFS-7240.001.patch > > > First of all: All the licence headers are handled well on this branch. > Unfortunately, maven doesn't know it. If the project is built *without* -P > hdsl, the rat exclude rules in the hdsl/cblock/ozone projects are not used, as > these projects are not built as maven projects; they are handled as static files. > The solution is: > 1. Instead of a proper exclude, I added the licence headers to some test files > 2. I added an additional exclude to the root pom.xml
[jira] [Commented] (HDFS-13024) Ozone: ContainerStateMachine should synchronize operations between createContainer and writeChunk
[ https://issues.apache.org/jira/browse/HDFS-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451157#comment-16451157 ] Hudson commented on HDFS-13024: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13024. Ozone: ContainerStateMachine should synchronize operations (msingh: rev a6c2b6694b6607bdff8465bf6641716711e166c6) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/ScmConfigKeys.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java > Ozone: ContainerStateMachine should synchronize operations between > createContainer and writeChunk > - > > Key: HDFS-13024 > URL: https://issues.apache.org/jira/browse/HDFS-13024 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13024-HDFS-7240.001.patch, > HDFS-13024-HDFS-7240.002.patch, HDFS-13024-HDFS-7240.003.patch, > HDFS-13024-HDFS-7240.004.patch, HDFS-13024-HDFS-7240.005.patch, > HDFS-13024-HDFS-7240.006.patch, HDFS-13024-HDFS-7240.007.patch, > HDFS-13024-HDFS-7240.008.patch > > > This issue happens after HDFS-12853. with HDFS-12853, writeChunk op has been > divided into two stages 1) the log write phase (here the state machine data > is written) 2) ApplyTransaction. > With a 3 node ratis ring, ratis leader will append the log entry to its log > and forward it to its followers. 
However there is no guarantee on when the > followers will apply the log to the state machine in {{applyTransaction}}. > This issue happens in the following order > 1) Leader accepts create container > 2) Leader add entries to its logs and forwards to followers > 3) Followers append the entry to its log and Ack to the raft leader (Please > note that the transaction still hasn't been applied) > 4) Leader applies the transaction and now replies > 5) write chunk call is sent to the Leader > 6) Leader now forwards the call to the followers > 7) Followers try to apply the log by calling {{Dispatcher#dispatch}} however > the create container call in 3) still hasn't been applied > 8) write chunk call on followers fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
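The eight-step sequence above boils down to this: a follower may try to apply writeChunk before it has applied createContainer for the same container. One way to impose the ordering in applyTransaction, shown here as a hedged sketch rather than the actual HDFS-13024 patch, is a per-container future that writeChunk waits on:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Sketch: a follower must not apply a writeChunk transaction before the
 *  createContainer transaction for the same container has been applied.
 *  A per-container future gives the dependent op something to wait on.
 *  (Illustrative only; not the actual ContainerStateMachine code.) */
class ContainerOpOrdering {
    private final ConcurrentMap<Long, CompletableFuture<Void>> created =
        new ConcurrentHashMap<>();

    private CompletableFuture<Void> createdFuture(long containerId) {
        return created.computeIfAbsent(containerId,
            id -> new CompletableFuture<>());
    }

    /** Invoked when the createContainer log entry is applied. */
    void applyCreateContainer(long containerId) {
        createdFuture(containerId).complete(null);
    }

    /** Invoked for a writeChunk entry: waits until create has applied. */
    void applyWriteChunk(long containerId, Runnable writeChunk) {
        createdFuture(containerId).join(); // blocks until create applied
        writeChunk.run();
    }
}
```

A real state machine would chain the dependent operation asynchronously (thenRun) instead of blocking the apply thread, and would time out rather than wait forever; the sketch only shows the ordering dependency.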
[jira] [Commented] (HDFS-12940) Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally
[ https://issues.apache.org/jira/browse/HDFS-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451156#comment-16451156 ] Hudson commented on HDFS-12940: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12940. Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails (nanda: rev b1c8c1de7c2ed74b9620a8ca0472cafcf80f6a76) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/ksm/KeyManagerImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/utils/BackgroundService.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java > Ozone: KSM: TestKeySpaceManager#testExpiredOpenKey fails occasionally > - > > Key: HDFS-12940 > URL: https://issues.apache.org/jira/browse/HDFS-12940 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-12940-HDFS-7240.000.patch, > HDFS-12940-HDFS-7240.001.patch, HDFS-12940-HDFS-7240.002.patch, > HDFS-12940-HDFS-7240.003.patch, HDFS-12940-HDFS-7240.004.patch > > > {{TestKeySpaceManager#testExpiredOpenKey}} is flaky. > In {{testExpiredOpenKey}} we are opening four keys for writing and wait for > them to expire (without committing). Verification/Assertion is done by > querying {{MiniOzoneCluster}} and matching the count. Since the {{cluster}} > instance of {{MiniOzoneCluster}} is shared between test-cases in > {{TestKeySpaceManager}}, we should not rely on the count. The verification > should only happen by matching the keyNames and not with the count. 
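The suggested fix above, matching key names rather than a count on the shared MiniOzoneCluster, can be illustrated with two assertion styles (a hypothetical helper, not the TestKeySpaceManager API):

```java
import java.util.Set;

/** Sketch: on a shared MiniOzoneCluster, other test cases may add their
 *  own expired open keys, so asserting on a count is flaky. Asserting
 *  membership of exactly the keys this test opened is stable.
 *  (Names are illustrative, not the actual KSM test API.) */
final class ExpiredKeyAssertions {
    static boolean countBased(Set<String> expired, int expectedCount) {
        return expired.size() == expectedCount; // fragile on shared cluster
    }

    static boolean nameBased(Set<String> expired, Set<String> ourKeys) {
        return expired.containsAll(ourKeys);    // robust: checks our keys only
    }
}
```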
[jira] [Commented] (HDFS-13304) Document: update the new ozone docker file location
[ https://issues.apache.org/jira/browse/HDFS-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451186#comment-16451186 ] Hudson commented on HDFS-13304: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13304. Document: update the new ozone docker file location. (aengineer: rev f76819c7c693f08e48fe3da6764bf8f815cbb68d) * (edit) hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneGettingStarted.md.vm > Document: update the new ozone docker file location > > > Key: HDFS-13304 > URL: https://issues.apache.org/jira/browse/HDFS-13304 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13304-HDFS-7240.001.patch > > > The docker compose file has been moved from dev-support/compose/ozone to > hadoop-dist/target/compose/ozone; we need to update the document in > OzoneGettingStarted.md.vm.
[jira] [Commented] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451188#comment-16451188 ] Hudson commented on HDFS-13319: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13319. Ozone: start-ozone.sh/stop-ozone.sh fail because of (aengineer: rev e0262147d60524d5718b2bfed48391a876dc3662) * (edit) hadoop-ozone/common/src/main/bin/oz > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fail because the shell script fails to resolve > the full path. Interestingly, the start-dfs.sh script works as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code}
[jira] [Commented] (HDFS-13108) Ozone: OzoneFileSystem: Simplified url schema for Ozone File System
[ https://issues.apache.org/jira/browse/HDFS-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451181#comment-16451181 ] Hudson commented on HDFS-13108: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13108. Ozone: OzoneFileSystem: Simplified url schema for Ozone File (aengineer: rev 6690ae7385359180330b92251493cadee91d2c56) * (edit) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/web/interfaces/StorageHandler.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java > Ozone: OzoneFileSystem: Simplified url schema for Ozone File System > --- > > Key: HDFS-13108 > URL: https://issues.apache.org/jira/browse/HDFS-13108 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13108-HDFS-7240.001.patch, > HDFS-13108-HDFS-7240.002.patch, HDFS-13108-HDFS-7240.003.patch, > HDFS-13108-HDFS-7240.005.patch, HDFS-13108-HDFS-7240.006.patch, > HDFS-13108-HDFS-7240.007.patch > > > A. Current state > > 1. The datanode host / bucket /volume should be defined in the defaultFS (eg. > o3://datanode:9864/test/bucket1) > 2. The root file system points to the bucket (eg. 'dfs -ls /' lists all the > keys from the bucket1) > It works very well, but there are some limitations. > B. Problem one > The current code doesn't support fully qualified locations. For example 'dfs > -ls o3://datanode:9864/test/bucket1/dir1' is not working. > C.) Problem two > I tried to fix the previous problem, but it's not trivial. 
The biggest > problem is that there is a Path.makeQualified call which can transform an > unqualified url into a qualified url. This is part of Path.java, so it's > common to all the Hadoop file systems. > In the current implementation it qualifies a url by keeping the schema > (eg. o3:// ) and authority (eg: datanode:9864) from the defaultfs and using > the relative path as the end of the qualified url. For example: > makeQualified(defaultUri=o3://datanode:9864/test/bucket1, path=dir1/file) will > return o3://datanode:9864/dir1/file, which is obviously wrong (the correct result would > be o3://datanode:9864/TEST/BUCKET1/dir1/file). I tried to work around this > with a custom makeQualified in the Ozone code and it worked from the > command line, but it couldn't work with Spark, which uses the Hadoop api and the > original makeQualified path. > D.) Solution > We should support makeQualified calls, so we can use any path in the > defaultFS. > > I propose to use a simplified schema such as o3://bucket.volume/ > This is similar to the s3a format, where the pattern is s3a://bucket.region/ > We don't need to set the hostname of the datanode (or ksm in case of service > discovery), but it would be configurable with additional hadoop configuration > values such as fs.o3.bucket.bucketname.volumename.address=http://datanode:9864 > (this is how s3a works today, as far as I know). > We also need to define restrictions for the volume names (in our case a volume name > must not include a dot any more). > ps: some spark output > 2018-02-03 18:43:04 WARN Client:66 - Neither spark.yarn.jars nor > spark.yarn.archive is set, falling back to uploading libraries under > SPARK_HOME.
> 2018-02-03 18:43:05 INFO Client:54 - Uploading resource > file:/tmp/spark-03119be0-9c3d-440c-8e9f-48c692412ab5/__spark_libs__244044896784490.zip > -> > o3://datanode:9864/user/hadoop/.sparkStaging/application_1517611085375_0001/__spark_libs__244044896784490.zip > My default fs was o3://datanode:9864/test/bucket1, but spark qualified the > name of the home directory. >
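The proposed o3://bucket.volume/ schema keeps bucket and volume inside the URI authority, so makeQualified can prepend scheme and authority without dropping them. A minimal sketch of parsing that authority follows (illustrative only, not the OzoneFileSystem implementation):

```java
import java.net.URI;

/** Sketch of the proposed o3://bucket.volume/ naming (mirroring
 *  s3a://bucket.region/): bucket and volume live in the URI authority,
 *  so qualifying a relative path can no longer drop them. Volume names
 *  must not contain '.', so the last dot is the separator. */
class O3Authority {
    final String bucket;
    final String volume;

    O3Authority(URI uri) {
        String auth = uri.getAuthority();       // e.g. "bucket1.test"
        int dot = auth == null ? -1 : auth.lastIndexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException(
                "expected o3://bucket.volume/..., got " + uri);
        }
        bucket = auth.substring(0, dot);
        volume = auth.substring(dot + 1);       // no '.' allowed in volume
    }
}
```

This also shows why the volume-name restriction matters: with volume names barred from containing a dot, the last dot in the authority is an unambiguous separator.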
[jira] [Commented] (HDFS-12868) Ozone: Service Discovery API
[ https://issues.apache.org/jira/browse/HDFS-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451161#comment-16451161 ] Hudson commented on HDFS-12868: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12868. Ozone: Service Discovery API. Contributed by Nanda Kumar. (xyao: rev 75d8dd0e698f95b0a6ddd4e09e748cd5285f3011) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/rest/RestServerSelector.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/client/rest/TestOzoneRestClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/OzoneClientFactory.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/rest/DefaultRestServerSelector.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/tools/Corona.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java > Ozone: Service Discovery API > > > Key: HDFS-12868 > URL: https://issues.apache.org/jira/browse/HDFS-12868 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: HFDS-7240 > > Attachments: HDFS-12868-HDFS-7240.000.patch, > HDFS-12868-HDFS-7240.001.patch, 
HDFS-12868-HDFS-7240.002.patch > > > Currently if a client wants to connect to Ozone cluster we need multiple > properties to be configured in the client. > For RPC based connection we need > {{ozone.ksm.address}} > {{ozone.scm.client.address}} > and the ports if something other than default is configured. > For REST based connection > {{ozone.rest.servers}} > and port if something other than default is configured. > With the introduction of Service Discovery API the client should be able to > discover all the configurations needed for the connection. Service discovery > calls will be handled by KSM, at the client side, we only need to configure > {{ozone.ksm.address}}. The client should first connect to KSM and get all the > required configurations. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
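The discovery flow above can be sketched as a client that bootstraps from the single configured KSM address; getServiceList and the returned property names below are hypothetical, standing in for whatever RPC the patch actually defines:

```java
import java.util.Map;

/** Sketch of the service-discovery idea from the description: configure
 *  only ozone.ksm.address on the client, then fetch every other endpoint
 *  from KSM at connect time. getServiceList is hypothetical here; the
 *  real RPC shape is defined by the patch, not by this sketch. */
interface KsmServiceDiscovery {
    Map<String, String> getServiceList();  // property name -> host:port
}

class DiscoveringOzoneClient {
    private final Map<String, String> endpoints;

    DiscoveringOzoneClient(KsmServiceDiscovery ksm) {
        this.endpoints = ksm.getServiceList(); // one bootstrap call to KSM
    }

    String scmClientAddress() {
        return endpoints.get("ozone.scm.client.address");
    }

    String restServers() {
        return endpoints.get("ozone.rest.servers");
    }
}
```

The design point is that the client keeps exactly one configured address (the KSM) and derives the rest, instead of duplicating SCM and REST settings in every client configuration.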
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451192#comment-16451192 ] Hudson commented on HDFS-13300: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13300. Ozone: Remove DatanodeID dependency from HDSL and Ozone. (aengineer: rev 3440ca6e0c76bd50854eb5b72fa1486cfe4b6575) * (edit) hadoop-ozone/objectstore-service/pom.xml * (edit) hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/XceiverClientHandler.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneTestHelper.java * (edit) hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/storage/ContainerProtocolCalls.java * (edit) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/block/SCMBlockDeletingService.java * (edit) hadoop-dist/src/main/compose/cblock/docker-config * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/replication/InProgressPool.java * (delete) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/HdslServerPlugin.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/StorageContainerManager.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/container/placement/algorithms/ContainerPlacementPolicy.java * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/scm/container/closer/TestContainerCloser.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestRatisManager.java * (edit) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServer.java * (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java * (edit) hadoop-hdsl/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerSupervisor.java * (edit) hadoop-hdsl/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationDatanodeStateManager.java * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/scm/container/TestContainerMapping.java * (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java * (edit) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerManagerImpl.java * (edit) hadoop-hdsl/client/src/main/java/org/apache/hadoop/scm/XceiverClientRatis.java * (add) hadoop-hdsl/common/src/main/java/org/apache/hadoop/hdsl/protocol/package-info.java * (edit) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerDatanodeProtocolServerSideTranslatorPB.java * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java * (edit) hadoop-hdsl/common/src/main/java/org/apache/ratis/RatisHelper.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/block/DatanodeDeletedBlockTransactions.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/TestLocalOzoneVolumes.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/CommandQueue.java * (edit) 
hadoop-hdsl/common/src/main/proto/hdsl.proto * (edit) hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java * (edit) hadoop-hdsl/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/node/NodePoolManager.java * (edit) hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java * (edit) hadoop-dist/src/main/compose/ozone/docker-config * (edit) hadoop-hdsl/common/src/main/java/org/apache/hadoop/scm/XceiverClientSpi.java * (edit) hadoop-hdsl/server-scm/src/main/java/org/apache/hadoop/ozone/scm/pipelines/standalone/StandaloneManagerImpl.java * (edit) hadoop-hdsl/server-scm/src/test/java/org/apache/hadoop/ozone/scm/node/TestContainerPlacement.java * (edit) hadoop-ozone/integrat
[jira] [Commented] (HDFS-13342) Ozone: Rename and fix ozone CLI scripts
[ https://issues.apache.org/jira/browse/HDFS-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451200#comment-16451200 ] Hudson commented on HDFS-13342: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13342. Ozone: Rename and fix ozone CLI scripts. Contributed by (msingh: rev 8658ed7dccd2e53aed55f0158293886e7a8a45c8) * (delete) hadoop-ozone/common/src/main/bin/oz * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java * (edit) hadoop-ozone/common/src/main/bin/start-ozone.sh * (edit) hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot * (edit) hadoop-dist/src/main/compose/ozone/docker-compose.yaml * (edit) hadoop-ozone/ozone-manager/src/main/site/markdown/OzoneCommandShell.md * (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/StorageContainerManager.java * (edit) hadoop-ozone/common/src/main/bin/stop-ozone.sh * (edit) hadoop-ozone/common/src/main/shellprofile.d/hadoop-ozone.sh * (edit) hadoop-ozone/acceptance-test/src/test/compose/docker-compose.yaml * (edit) hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManager.java * (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/freon/OzoneGetConf.java * (edit) hadoop-dist/src/main/compose/cblock/docker-compose.yaml * (add) hadoop-ozone/common/src/main/bin/ozone > Ozone: Rename and fix ozone CLI scripts > --- > > Key: HDFS-13342 > URL: https://issues.apache.org/jira/browse/HDFS-13342 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13342-HDFS-7240.001.patch, > HDFS-13342-HDFS-7240.002.patch, HDFS-13342-HDFS-7240.003.patch, > HDFS-13342-HDFS-7240.004.patch, HDFS-13342-HDFS-7240.005.patch > > > 
The Ozone (oz) script has wrong classnames for freon etc. As a result, > freon cannot be started from the command line. This Jira proposes to fix > all of these. The oz script will be renamed to ozone as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12983) Block Storage: provide docker-compose file for cblock clusters
[ https://issues.apache.org/jira/browse/HDFS-12983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451173#comment-16451173 ] Hudson commented on HDFS-12983: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12983. Block Storage: provide docker-compose file for cblock (aengineer: rev 9af91f5e03d069f93f9f581563aa6862a33e3b7b) * (add) dev-support/compose/cblock/README.md * (add) dev-support/compose/cblock/docker-compose.yaml * (add) dev-support/compose/cblock/docker-config * (add) dev-support/compose/cblock/.env > Block Storage: provide docker-compose file for cblock clusters > -- > > Key: HDFS-12983 > URL: https://issues.apache.org/jira/browse/HDFS-12983 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: ozone >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12983-HDFS-7240.001.patch, > HDFS-12983-HDFS-7240.002.patch, HDFS-12983-HDFS-7240.003.patch, > HDFS-12983-HDFS-7240.004.patch > > > Since HDFS-12469 we have a docker-compose file at dev-support/compose/ozone > which makes it easy to start local ozone clusters with multiple datanodes. > In this patch I propose a similar config file for the cblock/iscsi servers > (jscsi + cblock + scm + namenode + datanode) to make it easier to check the > latest state.
[jira] [Commented] (HDFS-12636) Ozone: OzoneFileSystem: Implement seek functionality for rpc client
[ https://issues.apache.org/jira/browse/HDFS-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451170#comment-16451170 ] Hudson commented on HDFS-12636: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12636. Ozone: OzoneFileSystem: Implement seek functionality for rpc (msingh: rev 8bbc28a974285ce725a248bc504e4defdeb7682e) * (delete) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneInputStream.java * (edit) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/MiniOzoneClassicCluster.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneOutputStream.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java * (edit) hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java * (add) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSOutputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneInputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/storage/ChunkInputStream.java * (add) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestChunkStreams.java * (delete) hadoop-tools/hadoop-ozone/src/main/java/org/apache/hadoop/fs/ozone/OzoneOutputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java > Ozone: 
OzoneFileSystem: Implement seek functionality for rpc client > --- > > Key: HDFS-12636 > URL: https://issues.apache.org/jira/browse/HDFS-12636 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Lokesh Jain >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12636-HDFS-7240.001.patch, > HDFS-12636-HDFS-7240.002.patch, HDFS-12636-HDFS-7240.003.patch, > HDFS-12636-HDFS-7240.004.patch, HDFS-12636-HDFS-7240.005.patch, > HDFS-12636-HDFS-7240.006.patch, HDFS-12636-HDFS-7240.007.patch > > > The OzoneClient library provides a method to invoke both RPC as well as REST > based methods to ozone. This API will help improve both the > performance and the interface management in OzoneFileSystem. > This jira will be used to convert the REST based calls to use this new > unified client.
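The core of seek support in a chunked input stream such as the ChunkGroupInputStream touched by this patch is mapping a byte offset onto the right chunk. The sketch below is an illustrative Python model only; the class name, fields, and lazy-resolution behavior are assumptions, not the patch's actual code:

```python
from bisect import bisect_right

class ChunkedInputStream:
    """Toy model of a seekable stream backed by a list of chunks,
    in the spirit of ChunkGroupInputStream (names are hypothetical)."""

    def __init__(self, chunks):
        self.chunks = chunks
        # Cumulative end offset of each chunk, so read() can locate
        # the chunk holding the current position with a binary search.
        self.ends = []
        total = 0
        for c in chunks:
            total += len(c)
            self.ends.append(total)
        self.length = total
        self.pos = 0

    def seek(self, target):
        if not 0 <= target <= self.length:
            raise ValueError("seek past end of stream")
        # Lazy: only record the position; the chunk index is
        # resolved on the next read.
        self.pos = target

    def read(self, n):
        out = bytearray()
        while n > 0 and self.pos < self.length:
            idx = bisect_right(self.ends, self.pos)          # chunk holding pos
            chunk_start = self.ends[idx] - len(self.chunks[idx])
            piece = self.chunks[idx][self.pos - chunk_start:][:n]
            out += piece
            self.pos += len(piece)
            n -= len(piece)
        return bytes(out)
```

Resolving the chunk lazily in read() rather than eagerly in seek() is one reasonable design choice, since repeated seeks then cost nothing until data is actually requested.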
[jira] [Commented] (HDFS-13017) Block Storage: implement simple iscsi discovery in jscsi server
[ https://issues.apache.org/jira/browse/HDFS-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451159#comment-16451159 ] Hudson commented on HDFS-13017: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13017. Block Storage: implement simple iscsi discovery in jscsi (msingh: rev c908b1ea2d5e2b7bca8c16449e8b93049773711b) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/CBlockManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockManagerHandler.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cblock/TestCBlockServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/protocolPB/CBlockClientServerProtocolServerSideTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockTargetServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/CBlockClientServerProtocol.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/jscsiHelper/CBlockClientProtocolClientSideTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/cblock/proto/CBlockClientProtocol.java > Block Storage: implement simple iscsi discovery in jscsi server > --- > > Key: HDFS-13017 > URL: https://issues.apache.org/jira/browse/HDFS-13017 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13017-HDFS-7240.001.patch, > HDFS-13017-HDFS-7240.002.patch, HDFS-13017-HDFS-7240.003.patch, > HDFS-13017-HDFS-7240.004.patch > > > The current jscsi server doesn't support iscsi discovery. > To use jscsi server as a kubernetes storage backend we need the discovery. 
> jScsi supports it; we just need to override a method and add an additional call > to the server protocol to get the list of the available cblocks.
[jira] [Commented] (HDFS-12812) Ozone: dozone: initialize scm and ksm directory on cluster startup
[ https://issues.apache.org/jira/browse/HDFS-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451162#comment-16451162 ] Hudson commented on HDFS-12812: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-12812. Ozone: dozone: initialize scm and ksm directory on cluster (xyao: rev 47149e7de86bf6545523e56da7bef015060f710a) * (edit) dev-support/compose/ozone/docker-compose.yaml > Ozone: dozone: initialize scm and ksm directory on cluster startup > -- > > Key: HDFS-12812 > URL: https://issues.apache.org/jira/browse/HDFS-12812 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HADOOP-12812-HDFS-7240.001.patch, > HDFS-12812-HDFS-7240.002.patch > > > HDFS-12739 fixed the scm, but after the patch it couldn't be started without > a separate `hdfs scm -init` anymore. Unfortunately it breaks the > docker-compose functionality. > This is handled in the starter script of the base runner docker image for > namenode. I also fixed this in the runner docker image > (https://github.com/elek/hadoop/commit/b347eb4bfca37d84dbcdd8c4bf353219d876a9b7) > and will upload the improved patch to HADOOP-14898. > In this patch I just add a new environment variable to init the scm if the > version file doesn't exist. > UPDATE: the patch also contains an environment variable to initialize the ksm.
> To test: > Do a full build and go to dev-support/compose/ozone: > ``` > docker-compose pull > docker-compose up -d > ``` > Note: the pull is important as the new docker image -- which could handle the > new environment variable -- should be used
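A docker-compose fragment of the kind described above might look like the following. The variable names ENSURE_SCM_INITIALIZED / ENSURE_KSM_INITIALIZED, the image name, and the paths are illustrative assumptions, not necessarily what the patch actually uses:

```yaml
services:
  scm:
    image: hadoop-runner            # hypothetical base runner image
    environment:
      # Ask the starter script to run `hdfs scm -init` only when this
      # version file does not exist yet.
      ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
  ksm:
    image: hadoop-runner
    environment:
      ENSURE_KSM_INITIALIZED: /data/metadata/ksm/current/VERSION
```

Keying initialization off the presence of a version file keeps cluster startup idempotent: restarting the compose cluster skips the init step instead of wiping existing metadata.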
[jira] [Commented] (HDFS-13116) Ozone: Refactor Pipeline to have transport and container specific information
[ https://issues.apache.org/jira/browse/HDFS-13116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451168#comment-16451168 ] Hudson commented on HDFS-13116: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13116. Ozone: Refactor Pipeline to have transport and container (aengineer: rev ea5c79285d6724ddcd0181e341759c3bdbd9e55c) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/PipelineManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/ratis/RatisManagerImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/container/common/helpers/Pipeline.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cblock/TestCBlockReadWrite.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/block/TestDeletedBlockLog.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/node/TestContainerPlacement.java * (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/scm/container/common/helpers/PipelineChannel.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/PipelineSelector.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/Ozone.proto * (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/pipelines/standalone/StandaloneManagerImpl.java > Ozone: Refactor Pipeline to have transport and container specific information > - > > Key: HDFS-13116 > URL: https://issues.apache.org/jira/browse/HDFS-13116 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13116-HDFS-7240.001.patch, > HDFS-13116-HDFS-7240.002.patch, HDFS-13116-HDFS-7240.003.patch, > HDFS-13116-HDFS-7240.004.patch, HDFS-13116-HDFS-7240.005.patch, > HDFS-13116-HDFS-7240.006.patch, HDFS-13116-HDFS-7240.007.patch, > HDFS-13116-HDFS-7240.008.patch > > > Currently the pipeline has information about both the container and the Transport > layer. This results in cases where new pipeline (i.e. transport) > information is allocated for each container creation. > This code can be refactored so that the Transport information is separated > from the container; the {{Transport}} can then be shared between multiple > pipelines/containers.
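The intended separation can be sketched as follows. This is a minimal Python illustration; the field names are assumptions loosely based on the class names in the file list above (e.g. PipelineChannel), not the actual Java API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineChannel:
    # Transport-specific state: which datanodes participate and how
    # they replicate. Frozen (immutable), so it is safe to share.
    leader_id: str
    members: tuple
    transport_type: str  # e.g. "STANDALONE" or "RATIS" (assumed values)

@dataclass
class Pipeline:
    # Container-specific state; it references a channel instead of
    # duplicating the transport information per container.
    container_name: str
    channel: PipelineChannel

# One channel (transport) can now back many containers.
channel = PipelineChannel("dn-1", ("dn-1", "dn-2", "dn-3"), "RATIS")
p1 = Pipeline("container-1", channel)
p2 = Pipeline("container-2", channel)
assert p1.channel is p2.channel  # shared, not re-allocated per container
```

The point of the refactor is visible in the last assertion: creating a second container no longer allocates new transport state, which is what the description above means by sharing the {{Transport}} between pipelines/containers.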
[jira] [Commented] (HDFS-13012) TestOzoneConfigurationFields fails due to missing configs in ozone-default.xml
[ https://issues.apache.org/jira/browse/HDFS-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451155#comment-16451155 ] Hudson commented on HDFS-13012: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14057/]) HDFS-13012. TestOzoneConfigurationFields fails due to missing configs in (msingh: rev d9e7678aa0203063d5074bdc77e580405c4fbe02) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/ozone-default.xml > TestOzoneConfigurationFields fails due to missing configs in ozone-default.xml > -- > > Key: HDFS-13012 > URL: https://issues.apache.org/jira/browse/HDFS-13012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13012-HDFS-7240.001.patch > > > "dfs.container.ratis.num.write.chunk.threads" and > "dfs.container.ratis.segment.size" were added with HDFS-12853; they need to > be added to ozone-default.xml to unblock test failures in > TestOzoneConfigurationFields. cc: [~msingh]