[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963918#comment-15963918 ]

Kai Zheng commented on HADOOP-13665:
------------------------------------
Thanks [~lewuathe] for the update! The latest patch LGTM. I guess [~jojochuang] will take another look and commit it. Thanks.

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: io
> Reporter: Wei-Chiu Chuang
> Assignee: Kai Sasaki
> Priority: Blocker
> Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch,
> HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch,
> HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch,
> HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch,
> HADOOP-13665.12.patch
>
> The current EC codec supports a single coder only (by default the pure Java
> implementation). If the native coder is specified but unavailable, the codec
> should fall back to the pure Java implementation.
> One possible solution is to follow the convention of the existing Hadoop
> native codecs, such as transport encryption (see {{CryptoCodec.java}}): it
> supports fallback by specifying two or more coders as the value of the
> property, and loading the coders in order.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
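A minimal sketch of the CryptoCodec-style fallback convention described in HADOOP-13665 above: a property lists coders in preference order, and the first one that loads wins. Class names, property shape, and the availability check are illustrative assumptions, not the actual Hadoop API.

```java
import java.util.Arrays;
import java.util.List;

// Sketch only: a configured list of coders is tried in order and the
// first usable one is selected, so a missing native coder falls back
// to the pure-Java implementation instead of failing.
public class CoderFactory {

    // Stand-in for the real availability check; an actual implementation
    // would try to instantiate the configured class and verify that the
    // required native libraries are loaded.
    static boolean isAvailable(String coderName) {
        // Assume the native coder is unavailable on this machine.
        return !coderName.contains("native");
    }

    /** Returns the first available coder from the configured list. */
    public static String createCoder(List<String> configured) {
        for (String name : configured) {
            if (isAvailable(name)) {
                return name;
            }
        }
        throw new IllegalStateException("No usable coder among " + configured);
    }

    public static void main(String[] args) {
        // Native coder listed first, pure-Java fallback second.
        String coder = createCoder(Arrays.asList("native-isal", "pure-java"));
        System.out.println("Selected coder: " + coder);
    }
}
```

With the hypothetical availability check above, the native coder is skipped and `pure-java` is selected.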
[jira] [Commented] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963903#comment-15963903 ]

Hadoop QA commented on HADOOP-14296:
------------------------------------
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 19s | trunk passed |
| +1 | compile | 1m 23s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 0m 42s | trunk passed |
| +1 | mvneclipse | 0m 32s | trunk passed |
| +1 | findbugs | 0m 57s | trunk passed |
| +1 | javadoc | 0m 28s | trunk passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 35s | the patch passed |
| +1 | compile | 1m 24s | the patch passed |
| +1 | javac | 1m 24s | the patch passed |
| -0 | checkstyle | 0m 25s | hadoop-tools: The patch generated 6 new + 92 unchanged - 7 fixed = 98 total (was 99) |
| +1 | mvnsite | 0m 40s | the patch passed |
| +1 | mvneclipse | 0m 32s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 10s | the patch passed |
| +1 | javadoc | 0m 26s | the patch passed |
| +1 | unit | 1m 23s | hadoop-azure in the patch passed. |
| -1 | unit | 1m 0s | hadoop-sls in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 33m 31s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HADOOP-14296 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862785/HADOOP-14296.02.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 5265c31bc678 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / aabf08d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12078/artifact/patchprocess/diff-checkstyle-hadoop-tools.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12078/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12078/testReport/ |
| modules | C: hadoop-tools/hadoop-azure hadoop-to
[jira] [Updated] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-14296:
-----------------------------------
    Attachment: HADOOP-14296.02.patch

02 patch
* Undo the change in hadoop-rumen

> Move logging APIs over to slf4j in hadoop-tools
> ---
>
> Key: HADOOP-14296
> URL: https://issues.apache.org/jira/browse/HADOOP-14296
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Attachments: HADOOP-14296.01.patch, HADOOP-14296.02.patch
>
[jira] [Commented] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963876#comment-15963876 ]

Akira Ajisaka commented on HADOOP-14296:
----------------------------------------
bq. can't rumen just leave everything alone and rely on people to edit their log4j settings?

I like the idea of having people edit their log4j settings instead of setting the log level programmatically. Can I remove the following code, or keep it as-is?

{code}
// turn off the warning w.r.t deprecated mapreduce keys
static {
  Logger.getLogger(Configuration.class).setLevel(Level.OFF);
}
{code}

> Move logging APIs over to slf4j in hadoop-tools
> ---
>
> Key: HADOOP-14296
> URL: https://issues.apache.org/jira/browse/HADOOP-14296
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Attachments: HADOOP-14296.01.patch
>
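If the programmatic override above were removed, a user could get the same effect from their own log4j configuration. A hedged sketch (the exact logger name to silence depends on which logger Rumen's static block targets; here it assumes the `Configuration` class logger, matching the quoted code):

```properties
# Hypothetical user-side equivalent of the removed static block:
# silence the deprecated-key warnings from Configuration in
# log4j.properties instead of calling Logger.setLevel() in code.
log4j.logger.org.apache.hadoop.conf.Configuration=OFF
```

This keeps the default behavior unchanged for everyone else while letting individual deployments opt out of the warnings.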
[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963872#comment-15963872 ]

Hudson commented on HADOOP-13545:
---------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11565 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11565/])
HADOOP-13545. Update HSQLDB to 2.3.4. Contributed by Giovanni Matteo (aajisaka: rev aabf08dd0707995e471d41ec4158c6af597c55dd)
* (edit) LICENSE.txt
* (edit) hadoop-project/pom.xml

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Assignee: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963863#comment-15963863 ]

Carlo Curino commented on HADOOP-13545:
---------------------------------------
Thanks [~ajisakaa] for reviewing and committing, and [~giovanni.fumarola] for the patch.

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Assignee: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Updated] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-13545:
-----------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.0.0-alpha3
           Status: Resolved (was: Patch Available)

Committed to trunk and branch-2.

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Assignee: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Assigned] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka reassigned HADOOP-13545:
--------------------------------------
    Assignee: Giovanni Matteo Fumarola (was: Akira Ajisaka)

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Assignee: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Assigned] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka reassigned HADOOP-13545:
--------------------------------------
    Assignee: Akira Ajisaka

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Assignee: Akira Ajisaka
> Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963847#comment-15963847 ]

Akira Ajisaka commented on HADOOP-13545:
----------------------------------------
+1, thanks [~giovanni.fumarola] for the update and thanks [~curino] for the review.

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Commented] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963775#comment-15963775 ]

Hadoop QA commented on HADOOP-14277:
------------------------------------
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 23s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 13m 26s | trunk passed |
| +1 | compile | 15m 52s | trunk passed |
| +1 | checkstyle | 0m 36s | trunk passed |
| +1 | mvnsite | 1m 10s | trunk passed |
| +1 | mvneclipse | 0m 21s | trunk passed |
| +1 | findbugs | 1m 26s | trunk passed |
| +1 | javadoc | 0m 50s | trunk passed |
| +1 | mvninstall | 0m 38s | the patch passed |
| +1 | compile | 14m 0s | the patch passed |
| +1 | javac | 14m 0s | the patch passed |
| +1 | checkstyle | 0m 37s | hadoop-common-project/hadoop-common: The patch generated 0 new + 67 unchanged - 1 fixed = 67 total (was 68) |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | mvneclipse | 0m 21s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 34s | the patch passed |
| +1 | javadoc | 0m 49s | the patch passed |
| -1 | unit | 7m 52s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 63m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HADOOP-14277 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862769/HADOOP-14277.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 37300289ae02 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7999318a |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12077/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12077/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12077/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> TestTrash.testTrashRestarts is flaky
>
> Key: HADOOP-14277
> URL: https://issues.apache.org/jira/browse/HADOOP-14277
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Eric Badger
>
[jira] [Updated] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HADOOP-14277:
---------------------------------
    Attachment: HADOOP-14277.002.patch

> TestTrash.testTrashRestarts is flaky
>
> Key: HADOOP-14277
> URL: https://issues.apache.org/jira/browse/HADOOP-14277
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Eric Badger
> Assignee: Weiwei Yang
> Attachments: HADOOP-14277.001.patch, HADOOP-14277.002.patch
>
> {noformat}
> junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but
> actual is 3 expected:<2> but was:<3>
> at junit.framework.Assert.fail(Assert.java:57)
> at junit.framework.Assert.failNotEquals(Assert.java:329)
> at junit.framework.Assert.assertEquals(Assert.java:78)
> at junit.framework.Assert.assertEquals(Assert.java:234)
> at junit.framework.TestCase.assertEquals(TestCase.java:401)
> at org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892)
> at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593)
> {noformat}
[jira] [Commented] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963728#comment-15963728 ]

Hadoop QA commented on HADOOP-14295:
------------------------------------
| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 13m 19s | trunk passed |
| +1 | compile | 15m 27s | trunk passed |
| +1 | checkstyle | 0m 35s | trunk passed |
| +1 | mvnsite | 1m 5s | trunk passed |
| +1 | mvneclipse | 0m 20s | trunk passed |
| +1 | findbugs | 1m 25s | trunk passed |
| +1 | javadoc | 0m 47s | trunk passed |
| +1 | mvninstall | 0m 37s | the patch passed |
| +1 | compile | 13m 51s | the patch passed |
| +1 | javac | 13m 51s | the patch passed |
| -0 | checkstyle | 0m 36s | hadoop-common-project/hadoop-common: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | mvneclipse | 0m 20s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 33s | the patch passed |
| +1 | javadoc | 0m 48s | the patch passed |
| -1 | unit | 7m 51s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 62m 30s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HADOOP-14295 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862762/hadoop-14295.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux d8ca685aa11a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7999318a |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12076/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12076/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12076/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12076/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Authenticatio
[jira] [Updated] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeffrey E Rodriguez updated HADOOP-14295:
-----------------------------------------
    Fix Version/s: 3.0.0-alpha2
           Status: Patch Available (was: Open)

> Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 3.0.0-alpha2
> Reporter: Jeffrey E Rodriguez
> Assignee: Jeffrey E Rodriguez
> Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: hadoop-14295.001.patch
>
> Many production environments use firewalls to protect network traffic. In the
> specific case of the DataNode UI and other Hadoop servers whose ports fall on
> the list of firewalled ports, org.apache.hadoop.security.AuthenticationWithProxyUserFilter
> uses getRemoteAddr(HttpServletRequest), which may return the firewall host, such as 127.0.0.1.
> This is unfortunate: if you are also using a proxy for perimeter protection,
> and you have added your proxy as a super user, authorization that checks the
> proxy IP would fail, since getRemoteAddr would return the IP of the firewall (127.0.0.1).
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> I propose to add a check for the x-forwarded-for header, since proxies usually
> inject that header, before we rely on getRemoteAddr.
[jira] [Updated] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeffrey E Rodriguez updated HADOOP-14295:
-----------------------------------------
    Attachment: hadoop-14295.001.patch

Patch for AuthenticationWithProxyUserFilter.java to use the "x-forwarded-server" header (set by the Knox proxy server) in case we get 127.0.0.1 when calling request.getRemoteAddr().

> Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 3.0.0-alpha2
> Reporter: Jeffrey E Rodriguez
> Assignee: Jeffrey E Rodriguez
> Priority: Critical
> Attachments: hadoop-14295.001.patch
>
> Many production environments use firewalls to protect network traffic. In the
> specific case of the DataNode UI and other Hadoop servers whose ports fall on
> the list of firewalled ports, org.apache.hadoop.security.AuthenticationWithProxyUserFilter
> uses getRemoteAddr(HttpServletRequest), which may return the firewall host, such as 127.0.0.1.
> This is unfortunate: if you are also using a proxy for perimeter protection,
> and you have added your proxy as a super user, authorization that checks the
> proxy IP would fail, since getRemoteAddr would return the IP of the firewall (127.0.0.1).
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> I propose to add a check for the x-forwarded-for header, since proxies usually
> inject that header, before we rely on getRemoteAddr.
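A minimal sketch of the header-based fallback the HADOOP-14295 description proposes, using the `x-forwarded-for` header. The class and method names are hypothetical and this is not the attached patch; it only illustrates the idea of preferring the proxy-injected header when `getRemoteAddr()` returns the loopback address of the firewall.

```java
// Illustrative sketch (not the actual patch): when the direct peer
// address is the local firewall (loopback), fall back to the
// proxy-injected x-forwarded-for header so proxy-user authorization
// sees the real client/proxy host instead of 127.0.0.1.
public class RemoteAddrResolver {

    /**
     * @param remoteAddr    value of request.getRemoteAddr()
     * @param xForwardedFor value of the x-forwarded-for header, or null
     */
    public static String resolve(String remoteAddr, String xForwardedFor) {
        boolean loopback = "127.0.0.1".equals(remoteAddr)
            || "::1".equals(remoteAddr);
        if (loopback && xForwardedFor != null && !xForwardedFor.isEmpty()) {
            // The header may carry a comma-separated chain; the first
            // entry is the original client, later entries are proxies.
            return xForwardedFor.split(",")[0].trim();
        }
        return remoteAddr;
    }

    public static void main(String[] args) {
        // Behind the firewall: the header wins.
        System.out.println(resolve("127.0.0.1", "192.168.1.10, 10.0.0.2"));
        // Direct connection: getRemoteAddr is kept.
        System.out.println(resolve("192.168.1.10", null));
    }
}
```

Note the security caveat implicit in the discussion: such a header is only trustworthy when the request demonstrably came through a trusted proxy, since clients can forge it on direct connections.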
[jira] [Commented] (HADOOP-14289) Move logging APIs over to slf4j in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963570#comment-15963570 ]

Allen Wittenauer commented on HADOOP-14289:
-------------------------------------------
This is a job for perl, not sed:

{code}
find . -name '*.java' | xargs perl -pi -e 's,Logger\.getLogger\(,LoggerFactory\.getLogger\(,g'
{code}

> Move logging APIs over to slf4j in hadoop-common
>
> Key: HADOOP-14289
> URL: https://issues.apache.org/jira/browse/HADOOP-14289
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Attachments: HADOOP-14289.sample.patch
>
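The substitution the perl one-liner above performs, demonstrated on a single sample line in Java (the sample line and class name are illustrative). Worth noting: the regex alone does not update the `org.apache.log4j` imports, which an slf4j migration must also replace with `org.slf4j.Logger`/`org.slf4j.LoggerFactory`.

```java
// Demonstrates the text rewrite done by the perl command:
//   s,Logger\.getLogger\(,LoggerFactory\.getLogger\(,g
public class MigrationDemo {

    public static String migrate(String line) {
        // Same pattern: escape the dots and the open paren.
        return line.replaceAll("Logger\\.getLogger\\(",
                               "LoggerFactory.getLogger(");
    }

    public static void main(String[] args) {
        String before =
            "static final Logger LOG = Logger.getLogger(FsShell.class);";
        // -> static final Logger LOG = LoggerFactory.getLogger(FsShell.class);
        System.out.println(migrate(before));
    }
}
```

This is why a follow-up pass (or manual review) over the import statements and the `Logger` field types is still needed after running the bulk rewrite.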
[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963569#comment-15963569 ]

Carlo Curino commented on HADOOP-13545:
---------------------------------------
[~steve_l] I don't see much evidence of production use in the codebase; the dependencies seem rather old and mostly test-only. Things should be fine committing this.

[~ajisakaa] thanks for the review. I checked the patch as well; it looks good to me, and I believe [~giovanni.fumarola] addressed your comments. Are you ok to commit the patch? If you don't have time to commit it right away (and are ok with me doing it), I will commit this tomorrow morning, since this is blocking YARN-3663.

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch, HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Comment Edited] (HADOOP-14285) Update minimum version of Maven from 3.0 to 3.3
[ https://issues.apache.org/jira/browse/HADOOP-14285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963564#comment-15963564 ] Allen Wittenauer edited comment on HADOOP-14285 at 4/10/17 10:10 PM: - Why was ant removed? It's needed to build the front end website. EDIT: Never mind. I see the other apt-get now. was (Author: aw): Why was ant removed? It's needed to build the front end website. > Update minimum version of Maven from 3.0 to 3.3 > --- > > Key: HADOOP-14285 > URL: https://issues.apache.org/jira/browse/HADOOP-14285 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14285.01.patch > > > YARN-6421 requires Apache Maven 3.1+ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14285) Update minimum version of Maven from 3.0 to 3.3
[ https://issues.apache.org/jira/browse/HADOOP-14285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963564#comment-15963564 ] Allen Wittenauer edited comment on HADOOP-14285 at 4/10/17 10:09 PM: - Why was ant removed? It's needed to build the front end website. was (Author: aw): Why was ant removed? It's needed to be the front end website. > Update minimum version of Maven from 3.0 to 3.3 > --- > > Key: HADOOP-14285 > URL: https://issues.apache.org/jira/browse/HADOOP-14285 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14285.01.patch > > > YARN-6421 requires Apache Maven 3.1+ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14285) Update minimum version of Maven from 3.0 to 3.3
[ https://issues.apache.org/jira/browse/HADOOP-14285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963564#comment-15963564 ] Allen Wittenauer commented on HADOOP-14285: --- Why was ant removed? It's needed to build the front end website. > Update minimum version of Maven from 3.0 to 3.3 > --- > > Key: HADOOP-14285 > URL: https://issues.apache.org/jira/browse/HADOOP-14285 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14285.01.patch > > > YARN-6421 requires Apache Maven 3.1+ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14248) Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963537#comment-15963537 ] Chris Nauroth commented on HADOOP-14248: Hello [~liuml07]. This looks good overall. I have a comment on the branch-2 patch. {code} private SharedInstanceProfileCredentialsProvider() { -super(); +InstanceProfileCredentialsProvider.getInstance(); } {code} I don't think this change is necessary. The call to {{InstanceProfileCredentialsProvider#getInstance()}} returns an instance (always the same one now that we've upgraded the AWS SDK), but then it never saves a reference to that instance or does anything else with it. > Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in > branch-2 > --- > > Key: HADOOP-14248 > URL: https://issues.apache.org/jira/browse/HADOOP-14248 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-alpha3 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14248.000.patch, HADOOP-14248.001.patch, > HADOOP-14248-branch-2.001.patch > > > This is from the discussion in [HADOOP-13050]. > So [HADOOP-13727] added the SharedInstanceProfileCredentialsProvider, which > effectively reduces high number of connections to EC2 Instance Metadata > Service caused by InstanceProfileCredentialsProvider. That patch, in order to > prevent the throttling problem, defined new class > {{SharedInstanceProfileCredentialsProvider}} as a subclass of > {{InstanceProfileCredentialsProvider}}, which enforces creation of only a > single instance. > Per [HADOOP-13050], we upgraded the AWS Java SDK. Since then, the > {{InstanceProfileCredentialsProvider}} in SDK code internally enforces a > singleton. That confirms that our effort in [HADOOP-13727] makes 100% sense. > Meanwhile, {{SharedInstanceProfileCredentialsProvider}} can retire gracefully > in trunk branch. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
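The review point above is that calling {{getInstance()}} without keeping the returned reference is a no-op. The enforced-singleton shape that both the SDK's InstanceProfileCredentialsProvider and the retiring SharedInstanceProfileCredentialsProvider rely on can be sketched as follows (simplified illustration, not the actual AWS SDK or Hadoop classes):

```java
// Simplified sketch of the enforced-singleton provider pattern discussed
// above; illustrative only, not the AWS SDK or Hadoop implementation.
public final class SingletonProviderSketch {
    private static final SingletonProviderSketch INSTANCE = new SingletonProviderSketch();

    // Private constructor so callers cannot create extra instances --
    // this is what keeps the EC2 Instance Metadata Service connection
    // count down to one provider's worth.
    private SingletonProviderSketch() { }

    public static SingletonProviderSketch getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        // Every call site shares the one instance.
        System.out.println(getInstance() == getInstance()); // prints "true"
    }
}
```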
[jira] [Commented] (HADOOP-13726) Enforce that FileSystem initializes only a single instance of the requested FileSystem.
[ https://issues.apache.org/jira/browse/HADOOP-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963519#comment-15963519 ] Chris Nauroth commented on HADOOP-13726: Thank you, [~manju_hadoop]! Your last comment looks to me like a good way to go. Please feel free to attach a patch file as described in the [HowToContribute|https://wiki.apache.org/hadoop/HowToContribute] wiki page. bq. ...if the thread which succeeded in getting the lock throws an exception during FileSystem initialization, then all other threads waiting for the result will get ExecutionException and would not retry serially... It's good that you remapped the {{ExecutionException}} back to {{IOException}} in your example. Typical callers are equipped to handle an {{IOException}}. I think this is acceptable, as there has never been any stated contract around {{FileSystem#get}} retrying internally. Calling code that wants to be resilient against transient failure already must have retry logic of its own. > Enforce that FileSystem initializes only a single instance of the requested > FileSystem. > --- > > Key: HADOOP-13726 > URL: https://issues.apache.org/jira/browse/HADOOP-13726 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Reporter: Chris Nauroth >Assignee: Manjunath Anand > > The {{FileSystem}} cache is intended to guarantee reuse of instances by > multiple call sites or multiple threads. The current implementation does > provide this guarantee, but there is a brief race condition window during > which multiple threads could perform redundant initialization. If the file > system implementation has expensive initialization logic, then this is > wasteful. This issue proposes to eliminate that race condition and guarantee > initialization of only a single instance. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
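The approach discussed above — let one thread win the initialization race, have the waiting threads consume its result, and remap {{ExecutionException}} back to {{IOException}} — can be sketched with a per-key FutureTask. This is an illustrative sketch of the technique, not the actual FileSystem cache code:

```java
import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Illustrative sketch of single-instance initialization per cache key;
// not the actual FileSystem.get implementation. String stands in for
// the expensively-initialized FileSystem instance.
public class FsCacheSketch {
    private final ConcurrentMap<String, Future<String>> cache = new ConcurrentHashMap<>();

    String get(String key, Callable<String> init) throws IOException {
        Future<String> f = cache.get(key);
        if (f == null) {
            FutureTask<String> task = new FutureTask<>(init);
            f = cache.putIfAbsent(key, task);
            if (f == null) {
                // This thread won the race: run the expensive init exactly once.
                f = task;
                task.run();
            }
        }
        try {
            return f.get(); // losers block here and share the winner's result
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException("interrupted initializing " + key, e);
        } catch (ExecutionException e) {
            // Evict the failed init so a later caller may retry, and remap to
            // IOException, which typical FileSystem callers already handle.
            cache.remove(key, f);
            throw new IOException(e.getCause());
        }
    }

    public static void main(String[] args) throws IOException {
        FsCacheSketch c = new FsCacheSketch();
        System.out.println(c.get("hdfs://nn1/", () -> "initialized-once"));
    }
}
```

As the comment thread notes, there is no built-in retry on failure: a failed initialization surfaces to the caller as an IOException, and resilient callers supply their own retry logic.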
[jira] [Commented] (HADOOP-14225) Remove xmlenc dependency
[ https://issues.apache.org/jira/browse/HADOOP-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963385#comment-15963385 ] Hudson commented on HADOOP-14225: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11562 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11562/]) HADOOP-14225. Remove xmlenc dependency (cdouglas: rev a5e57df3c56bf753f40809a2994d556095594de2) * (edit) hadoop-common-project/hadoop-common/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DfsServlet.java * (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml * (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MD5MD5CRC32FileChecksum.java * (edit) LICENSE.txt * (edit) hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml * (edit) hadoop-project/pom.xml > Remove xmlenc dependency > > > Key: HADOOP-14225 > URL: https://issues.apache.org/jira/browse/HADOOP-14225 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Assignee: Chris Douglas >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14225.001.patch, HADOOP-14225.002.patch, > HADOOP-14225.003.patch > > > The xmlenc library is used only in the following two classes: > {noformat} > o.a.h.fs.MD5MD5CRC32FileChecksum > o.a.h.hdfs.server.namenode.DfsServlet > {noformat} > Given that Hadoop already includes other fast XML encoders as dependencies, > we can lose this one. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14293) Initialize FakeTimer with a less trivial value
[ https://issues.apache.org/jira/browse/HADOOP-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963349#comment-15963349 ] Hudson commented on HADOOP-14293: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11561 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11561/]) HADOOP-14293. Initialize FakeTimer with a less trivial value. (wang: rev be144117a885cb39bc192279c96cbe3790dc77b1) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java > Initialize FakeTimer with a less trivial value > -- > > Key: HADOOP-14293 > URL: https://issues.apache.org/jira/browse/HADOOP-14293 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Andrew Wang > Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14293.001.patch > > > HADOOP-14276 broke TestFsDatasetImpl#testLoadingDfsUsedForVolumes which uses > a FakeTimer. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14225) Remove xmlenc dependency
[ https://issues.apache.org/jira/browse/HADOOP-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-14225: --- Resolution: Fixed Hadoop Flags: Incompatible change,Reviewed (was: Incompatible change) Fix Version/s: 3.0.0-alpha3 Status: Resolved (was: Patch Available) I committed this > Remove xmlenc dependency > > > Key: HADOOP-14225 > URL: https://issues.apache.org/jira/browse/HADOOP-14225 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Assignee: Chris Douglas >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14225.001.patch, HADOOP-14225.002.patch, > HADOOP-14225.003.patch > > > The xmlenc library is used only in the following two classes: > {noformat} > o.a.h.fs.MD5MD5CRC32FileChecksum > o.a.h.hdfs.server.namenode.DfsServlet > {noformat} > Given that Hadoop already includes other fast XML encoders as dependencies, > we can lose this one. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14066) VersionInfo should be marked as public API
[ https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963328#comment-15963328 ] Andrew Wang commented on HADOOP-14066: -- Should this also go into 2.9/2.8/2.7, since the point of this API is doing cross-version shims? > VersionInfo should be marked as public API > -- > > Key: HADOOP-14066 > URL: https://issues.apache.org/jira/browse/HADOOP-14066 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Thejas M Nair >Assignee: Akira Ajisaka >Priority: Critical > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14066.01.patch > > > org.apache.hadoop.util.VersionInfo is commonly used by applications that work > with multiple versions of Hadoop. > In case of Hive, this is used in a shims layer to identify the version of > hadoop and use different shim code based on version (and the corresponding > api it supports). > I checked Pig and Hbase as well and they also use this class to get version > information. > However, this method is annotated as "@private" and "@unstable". > This code has actually been stable for long time and is widely used like a > public api. I think we should mark it as such. > Note that there are apis to find the version of server components in hadoop, > however, this class necessary for finding the version of client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14293) Initialize FakeTimer with a less trivial value
[ https://issues.apache.org/jira/browse/HADOOP-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14293: - Resolution: Fixed Fix Version/s: 3.0.0-alpha3 2.8.1 2.7.4 2.9.0 Status: Resolved (was: Patch Available) Committed down through 2.7, thanks for reviewing Steve, Erik! > Initialize FakeTimer with a less trivial value > -- > > Key: HADOOP-14293 > URL: https://issues.apache.org/jira/browse/HADOOP-14293 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Andrew Wang > Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14293.001.patch > > > HADOOP-14276 broke TestFsDatasetImpl#testLoadingDfsUsedForVolumes which uses > a FakeTimer. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14297) Update the documentation about the new ec codecs config keys
[ https://issues.apache.org/jira/browse/HADOOP-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963311#comment-15963311 ] Wei-Chiu Chuang commented on HADOOP-14297: -- The patch is mostly good. I'd like to commit after HADOOP-13665, which I am still reviewing for one more pass before I +1. Thanks. > Update the documentation about the new ec codecs config keys > > > Key: HADOOP-14297 > URL: https://issues.apache.org/jira/browse/HADOOP-14297 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-14297.01.patch, HADOOP-14297.02.patch > > > In HADOOP-13665, > io.erasurecode.codec.{rs-legacy.rawcoder,rs.rawcoder,xor.rawcoder} are no > longer used. > It is necessary to update {{HDFSErasureCoding.md}} to show new config keys > io.erasurecode.codec.{rs-legacy.rawcoders,rs.rawcoders,xor.rawcoders} instead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
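The fallback behavior behind the new plural {{rawcoders}} keys — a comma-separated list of coder names tried in order, falling back when one is unavailable — can be sketched as follows (hypothetical helper and coder names, not the actual Hadoop implementation):

```java
import java.util.function.Predicate;

// Hypothetical sketch of in-order fallback across configured rawcoders,
// mirroring the CryptoCodec-style convention described in HADOOP-13665.
// Names and helper are illustrative; not the actual Hadoop code.
public class FallbackCoderSketch {
    static String pickCoder(String confValue, Predicate<String> isAvailable) {
        for (String name : confValue.split(",")) {
            String coder = name.trim();
            if (isAvailable.test(coder)) {
                return coder; // first loadable coder wins
            }
        }
        throw new IllegalArgumentException("No usable rawcoder in: " + confValue);
    }

    public static void main(String[] args) {
        // Suppose the native coder fails to load on this host:
        // the pure-Java coder is selected as the fallback.
        System.out.println(pickCoder("rs_native, rs_java", n -> n.equals("rs_java")));
    }
}
```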
[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963272#comment-15963272 ] Hadoop QA commented on HADOOP-13786: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 37 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 53s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 21s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 16s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 2s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 15s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} HADOOP-13345 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 8s{color} | {color:red} root generated 13 new + 762 unchanged - 1 fixed = 775 total (was 763) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 5s{color} | {color:orange} root: The patch generated 78 new + 98 unchanged - 14 fixed = 176 total (was 112) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 18 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s{color} | {color:red} hadoop-aws in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 3s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 45s{color} | {color:red} hadoop-aws in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.fs.s3a.commit.staging.TestStagingMRJob | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13786 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862721/HADOOP-13786-HADOOP-13345-023.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux d719806ef3e7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/
[jira] [Commented] (HADOOP-14292) Transient TestAdlContractRootDirLive failure
[ https://issues.apache.org/jira/browse/HADOOP-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963122#comment-15963122 ] John Zhuge commented on HADOOP-14292: - Could be a consistency issue. Digging into why this line of code did not kick in to display the path: https://github.com/Azure/azure-data-lake-store-java/blob/2.1.4/src/main/java/com/microsoft/azure/datalake/store/ADLStoreClient.java#L527 [~ASikaria], could you please take a look? Might need to look thru ADLS backend logs. > Transient TestAdlContractRootDirLive failure > > > Key: HADOOP-14292 > URL: https://issues.apache.org/jira/browse/HADOOP-14292 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: Vishwajeet Dusane > > Got the test failure once, but could not reproduce it the second time. Maybe > a transient ADLS error? > {noformat} > Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec > <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive > testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive) > Time elapsed: 3.841 sec <<< ERROR! > org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with > error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource > does not exist or the user is not authorized to perform the requested > operation.). 
> [db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00] > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144) > at > com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368) > at > org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866) > at org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:2028) > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027) > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010) > at > org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168) > at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145) > at > org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.(ContractTestUtils.java:1252) > at > org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13786: Attachment: HADOOP-13786-HADOOP-13345-023.patch Patch 23. Work done while mostly offline/travelling, so more housekeeping than feature. People on conference wifi get upset if you do s3 scale tests. * Unified constants across committer and with S3 core (e.g. same partition size used for staging committer part uploads as block output) * Made all the import structures consistent * WiP on documenting config options. * Made the Magic committer's abort outcomes more explicit by moving to an enum of outcomes rather than a simple success/fail, adding ABORTED and ABORT_FAILED to achieve this. Test changes primarily related to intermittent test failures; one problem was that if >1 committer test ran in parallel, they could interfere. The Fork ID is now used for the Job ID. That is trickier than you'd think, hence changes to the hadoop-aws POM. Next todo items * Rebase onto latest HADOOP-13345 (after that catches up with trunk again) * Move staging onto s3a FS methods * Instrument committer operations (maybe add counters for the committers? Or just have them access some new FS stats, "pending commits", "completed"... 
> Add S3Guard committer for zero-rename commits to consistent S3 endpoints > > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13786-HADOOP-13345-001.patch, > HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, > HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, > HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, > HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, > HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, > HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, > HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, > HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, > HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, > HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, > HADOOP-13786-HADOOP-13345-023.patch, s3committer-master.zip > > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. (that is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly). > I consider ourselves free to expose the blobstore-ness of the s3 output > streams (ie. not visible until the close()), if we need to use that to allow > us to abort commit operations. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963065#comment-15963065 ] Hadoop QA commented on HADOOP-13665: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 5s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 0s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}181m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl | | Timed out junit tests | org.apache.hadoop.hdfs.server.namenode.TestStartupOptionUpgrade | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-13665 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862689/HADOOP-13665.12.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 979da39d1436 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 443aa51 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Buil
[jira] [Commented] (HADOOP-14293) Initialize FakeTimer with a less trivial value
[ https://issues.apache.org/jira/browse/HADOOP-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963052#comment-15963052 ] Erik Krogen commented on HADOOP-14293: -- Interesting... Can't quite tell why the change broke this test but thank you for catching it, [~andrew.wang]! > Initialize FakeTimer with a less trivial value > -- > > Key: HADOOP-14293 > URL: https://issues.apache.org/jira/browse/HADOOP-14293 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HADOOP-14293.001.patch > > > HADOOP-14276 broke TestFsDatasetImpl#testLoadingDfsUsedForVolumes which uses > a FakeTimer. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962982#comment-15962982 ] Tsuyoshi Ozawa edited comment on HADOOP-14284 at 4/10/17 3:17 PM: -- I prefer the way HBase chose, because it enables us to debug code easily without uploading a jar. Let me know if you have any ideas. was (Author: ozawa): I prefer to the way which HBase choose, because it enable us to debug code easily without uploading jar. Let me know if you have any idea. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962982#comment-15962982 ] Tsuyoshi Ozawa commented on HADOOP-14284: - I prefer the way HBase chose, because it enables us to debug code easily without uploading a jar. Let me know if you have any ideas. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Status: Open (was: Patch Available) > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962904#comment-15962904 ] Steve Loughran commented on HADOOP-14296: - Didn't know what Rumen was up to there. Given there's an explicit log for deprecation, can't rumen just leave everything alone and rely on people to edit their log4j settings? It's what I do to shut things up > Move logging APIs over to slf4j in hadoop-tools > --- > > Key: HADOOP-14296 > URL: https://issues.apache.org/jira/browse/HADOOP-14296 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14296.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
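As a footnote to the log4j suggestion above: suppressing the deprecation chatter per-logger is a one-line override. This fragment is illustrative; the logger name shown is the one Hadoop's Configuration class uses for deprecation warnings.

```properties
# Raise the threshold of Hadoop's configuration-deprecation logger so the
# deprecation warnings are silenced without touching any other logging.
log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=ERROR
```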
[jira] [Commented] (HADOOP-14292) Transient TestAdlContractRootDirLive failure
[ https://issues.apache.org/jira/browse/HADOOP-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962901#comment-15962901 ] Steve Loughran commented on HADOOP-14292: - " Either the resource does not exist ". Do you think it could be a consistency failure? If the error message included the path being listed, that would help both the test and users > Transient TestAdlContractRootDirLive failure > > > Key: HADOOP-14292 > URL: https://issues.apache.org/jira/browse/HADOOP-14292 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: Vishwajeet Dusane > > Got the test failure once, but could not reproduce it the second time. Maybe > a transient ADLS error? > {noformat} > Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 13.641 sec > <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive > testRecursiveRootListing(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive) > Time elapsed: 3.841 sec <<< ERROR! > org.apache.hadoop.security.AccessControlException: LISTSTATUS failed with > error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource > does not exist or the user is not authorized to perform the requested > operation.). 
> [db432517-4060-4d96-9aad-7309f8469489][2017-04-07T10:24:54.1708810-07:00] > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > com.microsoft.azure.datalake.store.ADLStoreClient.getRemoteException(ADLStoreClient.java:1144) > at > com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1106) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectoryInternal(ADLStoreClient.java:527) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:504) > at > com.microsoft.azure.datalake.store.ADLStoreClient.enumerateDirectory(ADLStoreClient.java:368) > at > org.apache.hadoop.fs.adl.AdlFileSystem.listStatus(AdlFileSystem.java:473) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1824) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1866) > at org.apache.hadoop.fs.FileSystem$4.(FileSystem.java:2028) > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2027) > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2010) > at > org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2168) > at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2145) > at > org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.(ContractTestUtils.java:1252) > at > org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRecursiveRootListing(AbstractContractRootDirectoryTest.java:219) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
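Steve's suggestion above — surfacing the path in the error — could be sketched roughly as follows. This is a hypothetical helper for illustration, not the actual AdlFileSystem code:

```java
import java.io.IOException;

public class PathAwareErrors {
    // Hypothetical helper: re-wrap a remote-store failure so the message
    // names the path that was being listed, which the raw ADLS error omits.
    static IOException withPath(IOException cause, String path) {
        return new IOException(
                "LISTSTATUS failed on path " + path + ": " + cause.getMessage(),
                cause);
    }

    public static void main(String[] args) {
        IOException remote =
                new IOException("Forbidden. ACL verification failed.");
        // Prints: LISTSTATUS failed on path /: Forbidden. ACL verification failed.
        System.out.println(withPath(remote, "/").getMessage());
    }
}
```

Keeping the original exception as the cause preserves the full remote stack trace while the message becomes actionable for users.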
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962889#comment-15962889 ] Hadoop QA commented on HADOOP-14284: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 40s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile 
{color} | {color:green} 14m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 37s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 50s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 32s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem | | | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14284 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862683/HADOOP-14284.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux b877768c7b40 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 443aa51 | | Default Java | 1.8.0_121 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12072/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12072/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/12072/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-project hadoop-shaded-thirdparty hadoop-common-project/hadoop-common hadoop-common-project/hadoop-nfs hadoop-common-project/hadoop-kms hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-hdfs-project/hadoop-hdfs-nfs hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962870#comment-15962870 ] Tsuyoshi Ozawa commented on HADOOP-14284: - Thanks Sean! I had suspected the possibility of not using a third-party jar, so the information about Avro and HBase is helpful to me. Let me take some time to check the documentation. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962810#comment-15962810 ] Sean Busbey commented on HADOOP-14284: -- Sorry, late on these {code} diff --git hadoop-client-modules/hadoop-client-api/pom.xml hadoop-client-modules/hadoop-client-api/pom.xml 164 165 com/google/common/* {code} {code} diff --git hadoop-client-modules/hadoop-client-minicluster/pom.xml hadoop-client-modules/hadoop-client-minicluster/pom.xml 691 692 com/google/common/* {code} {code} diff --git hadoop-client-modules/hadoop-client-runtime/pom.xml hadoop-client-modules/hadoop-client-runtime/pom.xml 237 238 com/google/common/* {code} Won't all of these have been rewritten by the third party module? Otherwise mustn't whatever is in hadoop-common be referring to a non-relocated Guava? Related to the above, I don't see where the modules that now depend on shaded-third-party are changed to reference the relocated version of Guava. We can do it either in the source code or in the build, but I don't see us doing either ATM. For example, [apache avro does this by making a temp jar with non-relocated classes they reference, then doing the relocation for both the classes and their references in the user facing module|https://github.com/apache/avro/commit/f7ec67996b66444f9de2284d1ddfaa66297ba51e]. Apache HBase does the update-the-source approach for their relocated Google Protobuf. (There's not a simple thing to point at for that example, unfortunately. [the README for the module that does the relocation|https://github.com/apache/hbase/blob/48439e57201ee3be5eb12e6187002501af305a35/hbase-protocol-shaded/README.txt] is the closest, since it at least has pointers to what's going on and the commits involved.) 
> Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
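For reference, the relocation Sean is describing is expressed in maven-shade-plugin configuration roughly like this. The snippet is illustrative — the shaded package name is an assumption, not taken from the patch:

```xml
<!-- Illustrative maven-shade-plugin fragment: both the Guava classes and
     every bytecode reference to them in the shaded artifact are rewritten
     to the relocated package, so nothing points at unshaded
     com.google.common. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

As Sean notes, the relocation only helps if the modules consuming the shaded third-party jar actually reference the relocated package, either by rewriting the source or by running the relocation over their own classes too.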
[jira] [Commented] (HADOOP-14297) Update the documentation about the new ec codecs config keys
[ https://issues.apache.org/jira/browse/HADOOP-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962806#comment-15962806 ] Hadoop QA commented on HADOOP-14297: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14297 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862688/HADOOP-14297.02.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 4ee983a4268e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 443aa51 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12073/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Update the documentation about the new ec codecs config keys > > > Key: HADOOP-14297 > URL: https://issues.apache.org/jira/browse/HADOOP-14297 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-14297.01.patch, HADOOP-14297.02.patch > > > In HADOOP-13665, > io.erasurecode.codec.{rs-legacy.rawcoder,rs.rawcoder,xor.rawcoder} are no > more used. > It is necessary to update {{HDFSErasureCoding.md}} to show new config keys > io.erasurecode.codec.{rs-legacy.rawcoders,rs.rawcoders,xor.rawcoders} instead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
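The renamed keys from HADOOP-13665 take a comma-separated, ordered list of coders; an illustrative configuration fragment (the coder names here are examples, not taken from the patch):

```xml
<!-- Illustrative fragment: coders are tried left to right, so a pure-Java
     coder listed last acts as the fallback when the native one is
     unavailable. -->
<property>
  <name>io.erasurecode.codec.rs.rawcoders</name>
  <value>rs_native,rs_java</value>
</property>
```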
[jira] [Comment Edited] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962737#comment-15962737 ] Tsuyoshi Ozawa edited comment on HADOOP-14284 at 4/10/17 12:45 PM: --- Thanks Sean for your review. {quote} I presume we end up using our project wide LICENSE/NOTICE in this jar? {quote} {quote} What are we trying to exclude here? I don't see much of consequence in the guava jar. {quote} I think we shouldn't include our L&N files here, since we only use the original Guava. Hence, my intention in excluding META-INF/** was to remove our L&N files and keep the L file that ships in the original Guava (precisely, the COPYING file in Guava's repository). Excluding META-INF/**, however, was not the correct solution, as you pointed out - it doesn't work. Thanks for catching that. Instead, I added a dummy maven-remote-resources-plugin process-resources section to remove our original L&N files. {quote} Don't we need to have a dependency reduced pom in order to avoid having Guava show up as a transitive dependency? {quote} I addressed this by setting the flag to true in the v4 patch. was (Author: ozawa): Thanks Sean for your review. {quote} I presume we end up using our project wide LICENSE/NOTICE in this jar? {quote} {quote} What are we trying to exclude here? I don't see much of consequence in the guava jar. {quote} I think we shouldn't our L&N files here since we only use original Guava. Hence, my intension is to remove our L&N files and to include L file which is included in original Guava. Precisely, the L file is COPYING file in Guava's repository. The change of excluding META-INF/**, however was my mistake - it doesn't work. Thanks for pointing out. Instead of that, I added dummy maven-remote-resources-plugin's process-resources section to remove our original L&N files. {quote} Don't we need to have a dependency reduced pom in order to avoid having Guava show up as a transitive dependency? 
{quote} I addressed to make the flag true in v4 patch. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13665: Attachment: HADOOP-13665.12.patch > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, > HADOOP-13665.12.patch > > > The current EC codec supports a single coder only (by default pure Java > implementation). If the native coder is specified but is unavailable, it > should fallback to pure Java implementation. > One possible solution is to follow the convention of existing Hadoop native > codec, such as transport encryption (see {{CryptoCodec.java}}). It supports > fallback by specifying two or multiple coders as the value of property, and > loads coders in order. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
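The loading-in-order fallback described in the issue above can be sketched as follows. This is a simplified stand-in, not the patch's actual factory code — the real interfaces live under org.apache.hadoop.io.erasurecode:

```java
import java.util.Arrays;
import java.util.List;

public class CoderFallback {
    // Simplified stand-in for a raw coder factory entry: a name plus a
    // flag saying whether the implementation can actually be used
    // (e.g. whether the native library loaded).
    static class Coder {
        final String name;
        final boolean available;
        Coder(String name, boolean available) {
            this.name = name;
            this.available = available;
        }
    }

    // Walk the configured coders in order and return the first usable one,
    // mirroring how CryptoCodec falls back across configured codec classes.
    static String pickCoder(List<Coder> configured) {
        for (Coder c : configured) {
            if (c.available) {
                return c.name;
            }
        }
        throw new IllegalStateException("no usable coder configured");
    }

    public static void main(String[] args) {
        // Native coder listed first but unavailable -> falls back to Java.
        List<Coder> conf = Arrays.asList(
                new Coder("rs_native", false),
                new Coder("rs_java", true));
        System.out.println(pickCoder(conf)); // prints rs_java
    }
}
```

Listing the preferred coder first and a pure-Java coder last gives the same "first loadable wins" behavior the CryptoCodec convention provides.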
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14284: - Component/s: build > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reassigned HADOOP-14284: Assignee: Tsuyoshi Ozawa (was: Sean Busbey) > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reassigned HADOOP-14284: Assignee: Sean Busbey > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14297) Update the documentation about the new ec codecs config keys
[ https://issues.apache.org/jira/browse/HADOOP-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-14297: Attachment: HADOOP-14297.02.patch > Update the documentation about the new ec codecs config keys > > > Key: HADOOP-14297 > URL: https://issues.apache.org/jira/browse/HADOOP-14297 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-14297.01.patch, HADOOP-14297.02.patch > > > In HADOOP-13665, > io.erasurecode.codec.{rs-legacy.rawcoder,rs.rawcoder,xor.rawcoder} are no > more used. > It is necessary to update {{HDFSErasureCoding.md}} to show new config keys > io.erasurecode.codec.{rs-legacy.rawcoders,rs.rawcoders,xor.rawcoders} instead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
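[Editorial note] For reference, the renamed keys take a comma-separated list of raw coder implementations, tried in order. A minimal sketch of the kind of entry the updated documentation describes — the value shown is an assumed example, not quoted from the HADOOP-14297 patch:

```xml
<!-- Sketch only: fallback raw coders for the RS codec, listed in
     preference order; the first coder that can be created is used.
     The value is an assumed example, not copied from the patch. -->
<property>
  <name>io.erasurecode.codec.rs.rawcoders</name>
  <value>rs_native,rs_java</value>
</property>
```

The rs-legacy and xor codecs get analogous `io.erasurecode.codec.*.rawcoders` keys.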
[jira] [Comment Edited] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962737#comment-15962737 ] Tsuyoshi Ozawa edited comment on HADOOP-14284 at 4/10/17 11:42 AM: --- Thanks Sean for your review. {quote} I presume we end up using our project wide LICENSE/NOTICE in this jar? {quote} {quote} What are we trying to exclude here? I don't see much of consequence in the guava jar. {quote} I think we shouldn't include our L&N files here since we only use the original Guava. Hence, my intention is to remove our L&N files and to include the L file that is bundled with the original Guava. Precisely, the L file is the COPYING file in Guava's repository. The change excluding META-INF/**, however, was my mistake - it doesn't work. Thanks for pointing that out. Instead, I added a dummy maven-remote-resources-plugin process-resources section to remove our original L&N files. {quote} Don't we need to have a dependency reduced pom in order to avoid having Guava show up as a transitive dependency? {quote} I addressed this by setting the flag to true in the v4 patch. was (Author: ozawa): Thanks Seam for your review. {quote} I presume we end up using our project wide LICENSE/NOTICE in this jar? {quote} {quote} What are we trying to exclude here? I don't see much of consequence in the guava jar. {quote} I think we shouldn't our L&N files here since we only use original Guava. Hence, my intension is to remove our L&N files and to include L file which is included in original Guava. Precisely, the L file is COPYING file in Guava's repository. The change of excluding META-INF/**, however was my mistake - it doesn't work. Thanks for pointing out. Instead of that, I added dummy maven-remote-resources-plugin's process-resources section to remove our original L&N files. {quote} Don't we need to have a dependency reduced pom in order to avoid having Guava show up as a transitive dependency? {quote} I addressed to make the flag true in v4 patch. 
> Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962737#comment-15962737 ] Tsuyoshi Ozawa commented on HADOOP-14284: - Thanks Seam for your review. {quote} I presume we end up using our project wide LICENSE/NOTICE in this jar? {quote} {quote} What are we trying to exclude here? I don't see much of consequence in the guava jar. {quote} I think we shouldn't our L&N files here since we only use original Guava. Hence, my intension is to remove our L&N files and to include L file which is included in original Guava. Precisely, the L file is COPYING file in Guava's repository. The change of excluding META-INF/**, however was my mistake - it doesn't work. Thanks for pointing out. Instead of that, I added dummy maven-remote-resources-plugin's process-resources section to remove our original L&N files. {quote} Don't we need to have a dependency reduced pom in order to avoid having Guava show up as a transitive dependency? {quote} I addressed to make the flag true in v4 patch. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
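[Editorial note] The "dependency reduced pom" and relocation discussed in the comment above are standard maven-shade-plugin features. A minimal sketch of that style of configuration — plugin settings and the shaded package name are illustrative assumptions, not copied from the HADOOP-14284 patch:

```xml
<!-- Sketch only: shade Guava under a private package and emit a
     dependency-reduced POM so Guava does not appear as a transitive
     dependency of the published artifact. Names are illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- true strips shaded dependencies from the installed pom -->
    <createDependencyReducedPom>true</createDependencyReducedPom>
    <relocations>
      <relocation>
        <!-- rewrite Guava classes into a Hadoop-private namespace -->
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```

Downstream consumers of the shaded jar then see neither the Guava artifact nor its unrelocated class names.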
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Attachment: HADOOP-14284.004.patch > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Attachment: (was: HADOOP-14284.003.patch) > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Status: Patch Available (was: Open) > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.003.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Attachment: HADOOP-14284.003.patch > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.003.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962706#comment-15962706 ] Hadoop QA commented on HADOOP-14277: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 11s{color} 
| {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 67 unchanged - 1 fixed = 68 total (was 68) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 55s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14277 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862664/HADOOP-14277.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a63ecd466e48 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 443aa51 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12071/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12071/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12071/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12071/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: ht
[jira] [Updated] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14277: - Status: Patch Available (was: Open) > TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: https://issues.apache.org/jira/browse/HADOOP-14277 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Badger >Assignee: Weiwei Yang > Attachments: HADOOP-14277.001.patch > > > {noformat} > junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but > actual is 3 expected:<2> but was:<3> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.TestCase.assertEquals(TestCase.java:401) > at > org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892) > at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962660#comment-15962660 ] Weiwei Yang commented on HADOOP-14277: -- Improved {{TestTrash#testTrashRestart}} in the following ways: # Renamed {{testTrashRestart}} to {{testTrashInterfaces}}, because this test generally verifies the {{Trash}} and {{TrashPolicy}} APIs and the expected behavior of how they can be used. # Removed the inner classes {{AuditableCheckpoints}} and {{AuditableTrashPolicy}}; these helper classes were used to count checkpoints. The patch re-implements this with a mockito-based approach, verifying that the method gets called the expected number of times. Most of the short sleeps are now removed; a {{CountDownLatch}} is used instead to simulate the time interval, which should be more reliable. # Improved the javadocs > TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: https://issues.apache.org/jira/browse/HADOOP-14277 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Badger >Assignee: Weiwei Yang > Attachments: HADOOP-14277.001.patch > > > {noformat} > junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but > actual is 3 expected:<2> but was:<3> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.TestCase.assertEquals(TestCase.java:401) > at > org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892) > at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
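[Editorial note] The latch-based idea described in the comment above can be sketched in plain Java. This is a minimal, self-contained illustration, not code from the patch; the class and method names are invented. Instead of sleeping for a fixed interval and counting checkpoints afterwards (where timing jitter can produce an extra cycle), the test waits on a CountDownLatch that the emptier decrements on every cycle, so the observed count cannot drift:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a latch-driven stand-in for TestTrash's auditable emptier.
// The test thread blocks until exactly the expected number of cycles
// has run, rather than sleeping and hoping the timing works out.
public class EmptierLatchSketch {
    static class AuditableEmptier implements Runnable {
        final CountDownLatch latch;
        final AtomicInteger runs = new AtomicInteger();
        AuditableEmptier(CountDownLatch latch) { this.latch = latch; }
        @Override
        public void run() {
            // one "empty trash" cycle per invocation
            runs.incrementAndGet();
            latch.countDown();
        }
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(2);
        AuditableEmptier emptier = new AuditableEmptier(latch);
        Thread t = new Thread(() -> {
            // simulate the emptier interval firing twice
            for (int i = 0; i < 2; i++) {
                emptier.run();
            }
        });
        t.start();
        // deterministic wait: return as soon as two cycles completed
        boolean done = latch.await(5, TimeUnit.SECONDS);
        t.join();
        System.out.println(done && emptier.runs.get() == 2
                ? "2 cycles observed" : "unexpected");
    }
}
```

A mockito-based variant would replace the hand-rolled counter with verify(policy, times(2)) on a mocked TrashPolicy, but the synchronization point is the same latch.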
[jira] [Updated] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14277: - Attachment: HADOOP-14277.001.patch > TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: https://issues.apache.org/jira/browse/HADOOP-14277 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Badger >Assignee: Weiwei Yang > Attachments: HADOOP-14277.001.patch > > > {noformat} > junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but > actual is 3 expected:<2> but was:<3> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.TestCase.assertEquals(TestCase.java:401) > at > org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892) > at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12633) Extend Erasure Code to support POWER Chip acceleration
[ https://issues.apache.org/jira/browse/HADOOP-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962628#comment-15962628 ] Ayappan commented on HADOOP-12633: -- What's the status of this? > Extend Erasure Code to support POWER Chip acceleration > -- > > Key: HADOOP-12633 > URL: https://issues.apache.org/jira/browse/HADOOP-12633 > Project: Hadoop Common > Issue Type: New Feature >Reporter: wqijun >Assignee: wqijun > Attachments: hadoopec-ACC.patch > > > Erasure Code is a very important feature in the new HDFS version. This JIRA > will focus on how to extend EC to support multiple types of EC acceleration > via a C library and other hardware methods, like GPU or FPGA. Compared with > HADOOP-11887, this JIRA will focus more on how to leverage the POWER chip's > capability to accelerate the EC calculation. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14225) Remove xmlenc dependency
[ https://issues.apache.org/jira/browse/HADOOP-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962546#comment-15962546 ] Akira Ajisaka commented on HADOOP-14225: LGTM, +1 > Remove xmlenc dependency > > > Key: HADOOP-14225 > URL: https://issues.apache.org/jira/browse/HADOOP-14225 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Assignee: Chris Douglas >Priority: Minor > Attachments: HADOOP-14225.001.patch, HADOOP-14225.002.patch, > HADOOP-14225.003.patch > > > The xmlenc library is used only in the following two classes: > {noformat} > o.a.h.fs.MD5MD5CRC32FileChecksum > o.a.h.hdfs.server.namenode.DfsServlet > {noformat} > Given that Hadoop already includes other fast XML encoders as dependencies, > we can lose this one. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org