[jira] [Commented] (HADOOP-11794) Enable distcp to copy blocks in parallel
[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965371#comment-15965371 ] Omkar Aradhya K S commented on HADOOP-11794: [~steve_l] I was able to test the bits with HDI 3.3, which is *2.7.1*. However, I was wondering if we can go as far back as *2.5.x*/*2.2.x*? > Enable distcp to copy blocks in parallel > > > Key: HADOOP-11794 > URL: https://issues.apache.org/jira/browse/HADOOP-11794 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 0.21.0 >Reporter: dhruba borthakur >Assignee: Yongjun Zhang > Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, > HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, > HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, > HADOOP-11794.009.patch, HADOOP-11794.010.branch2.patch, > HADOOP-11794.010.patch, MAPREDUCE-2257.patch > > > The minimum unit of work for a distcp task is a file. We have files that are > greater than 1 TB with a block size of 1 GB. If we use distcp to copy these > files, the tasks either take a very long time or eventually fail. A better > approach for distcp would be to copy all the source blocks in parallel, and then > stitch the blocks back into files at the destination via the HDFS Concat API > (HDFS-222) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
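The block-parallel copy described above boils down to splitting a large file into block-aligned (offset, length) ranges that independent copy tasks can work on, with the pieces stitched back together at the destination (e.g. via the HDFS concat API). A minimal sketch of the range computation in plain Java, with hypothetical names rather than the actual distcp code:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockRanges {
    // One copy task: copy `length` bytes starting at `offset` of the source file.
    public static final class Range {
        public final long offset;
        public final long length;
        Range(long offset, long length) { this.offset = offset; this.length = length; }
    }

    // Split a file of `fileLen` bytes into blockSize-aligned ranges.
    public static List<Range> split(long fileLen, long blockSize) {
        List<Range> ranges = new ArrayList<>();
        for (long off = 0; off < fileLen; off += blockSize) {
            ranges.add(new Range(off, Math.min(blockSize, fileLen - off)));
        }
        return ranges;
    }

    public static void main(String[] args) {
        // A 2.5 GB file with 1 GB blocks yields three ranges: 1 GB, 1 GB, 0.5 GB.
        long gb = 1L << 30;
        List<Range> r = split(5 * gb / 2, gb);
        System.out.println(r.size());        // 3
        System.out.println(r.get(2).length); // 536870912
    }
}
```

Each range becomes a separate mapper's unit of work instead of the whole file, which is the point of the patch series above.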
[jira] [Commented] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965343#comment-15965343 ] Yuanbo Liu commented on HADOOP-14295: - [~jeffreyr97] Thanks for filing this JIRA and for the good summary. [~jojochuang] Thanks for looking into this JIRA. Wei-Chiu, if you look into {{DatanodeHttpServer.java}}, you can find that it uses Netty to set up an internal proxy server. I also took a look at the HTTP server in the NameNode; there is no such proxy server there. So getRemoteAddr doesn't work as expected when users access some links on the DataNode. Hope this info helps you get the background of this JIRA. The patch from Jeff looks nice and we've tested it in our personal cluster. After Wei-Chiu's comments are addressed, I'm +1 (non-binding) for your patch. > Authentication proxy filter on firewall cluster may fail authorization > because of getRemoteAddr > --- > > Key: HADOOP-14295 > URL: https://issues.apache.org/jira/browse/HADOOP-14295 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1 >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez >Priority: Critical > Fix For: 3.0.0-alpha2 > > Attachments: hadoop-14295.001.patch > > > Many production environments use firewalls to protect network traffic. In the > specific case of the DataNode UI and other Hadoop servers whose ports > may fall on the list of firewalled ports, > org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses getRemoteAddr > (HttpServletRequest), which may return the firewall host such as 127.0.0.1. > This is unfortunate because, if you are using a proxy in addition for > perimeter protection, and you have added your proxy as a super user, then > checking the proxy IP to authorize the user would fail, since > getRemoteAddr would return the IP of the firewall (127.0.0.1). 
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter > (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify > proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1" > I propose to add a check for the x-forwarded-for header, since proxies usually > inject that header, before we do a getRemoteAddr
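The proposed check can be sketched as a tiny helper: prefer the first (client) entry of the X-Forwarded-For header when a proxy injected it, otherwise fall back to the socket-level remote address. The names here are hypothetical, not the actual filter code:

```java
public class RemoteAddrResolver {
    /**
     * Return the effective client address: the first hop of the
     * X-Forwarded-For header if present, else the socket-level address.
     */
    public static String resolve(String xForwardedFor, String remoteAddr) {
        if (xForwardedFor != null && !xForwardedFor.trim().isEmpty()) {
            // The header is a comma-separated hop list; the original client is first.
            return xForwardedFor.split(",")[0].trim();
        }
        return remoteAddr;
    }

    public static void main(String[] args) {
        System.out.println(resolve("10.0.0.5, 192.168.1.1", "127.0.0.1")); // 10.0.0.5
        System.out.println(resolve(null, "127.0.0.1"));                    // 127.0.0.1
    }
}
```

Note that X-Forwarded-For is trivially spoofable by direct clients, so a real filter would only trust it for connections arriving from a known proxy address.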
[jira] [Commented] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965316#comment-15965316 ] Jeffrey E Rodriguez commented on HADOOP-14295: --- Thanks for your comments, Wei-Chiu Chuang. I will add some test cases, and I've corrected the "Affects version". Let me explain what brought up this issue and what my colleagues found. When we turn on Hadoop UI Kerberos and try to access DataNode /logs, the proxy (Knox) would get an authorization failure, and its host would show as 127.0.0.1 even though Knox wasn't local to the DataNode. We were able to figure out that the DataNode has Jetty listening on localhost and that Netty is used to serve requests to the DataNode; this was a measure to improve performance because of Netty's async NIO design. The drawback is that the way the authentication proxy filter figures out the remote server, HttpServletRequest getRemoteAddr, does not work, since Netty is a proxy in front of the Knox proxy. Some of my colleagues suggested using ChannelHandlerContext.getChannel().getRemoteAddress() to figure out the Knox server host. I think that is still code on the Netty side, and eventually we would need to set a header for Jetty to consume. Thus I think it is better not to add the header on Netty and to rely on Knox's X-Forwarded headers. With any other proxy the solution would be the same: add the X-Forwarded headers. The impact of this defect on users is that on a Kerberized Hadoop UI, access to DataNode logs does not work (you will get a 403). There are workarounds, such as being more permissive about the hostnames from which the proxy super user runs. 
[jira] [Commented] (HADOOP-14217) Object Storage: support colon in object path
[ https://issues.apache.org/jira/browse/HADOOP-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965236#comment-15965236 ] Tsz Wo Nicholas Sze commented on HADOOP-14217: -- Yes, we should fix it. We should probably first define a grammar so that our implementation stands on more rigorous ground. > Object Storage: support colon in object path > > > Key: HADOOP-14217 > URL: https://issues.apache.org/jira/browse/HADOOP-14217 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Reporter: Genmao Yu >
[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965200#comment-15965200 ] Kai Sasaki commented on HADOOP-13665: - [~drankye] [~jojochuang] Thanks for the review! > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, > HADOOP-13665.12.patch > > > The current EC codec supports a single coder only (by default the pure Java > implementation). If the native coder is specified but is unavailable, it > should fall back to the pure Java implementation. > One possible solution is to follow the convention of existing Hadoop native > codecs, such as transport encryption (see {{CryptoCodec.java}}): it supports > fallback by specifying two or more coders as the value of the property, and > loads coders in order.
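The CryptoCodec-style convention mentioned in the description — a comma-separated list of coder names tried in order — can be sketched like this (hypothetical names, not the actual Hadoop codec loader):

```java
import java.util.Map;
import java.util.function.Supplier;

public class CoderLoader {
    /**
     * Try each configured coder name in order; return the first one that
     * can actually be instantiated (e.g. the native coder may be
     * unavailable because its shared library is missing).
     */
    public static Object load(String configured, Map<String, Supplier<Object>> factories) {
        for (String name : configured.split(",")) {
            Supplier<Object> factory = factories.get(name.trim());
            if (factory == null) continue;
            try {
                Object coder = factory.get();
                if (coder != null) return coder;
            } catch (RuntimeException e) {
                // native library missing etc.; fall through to the next coder
            }
        }
        throw new IllegalStateException("no usable coder in: " + configured);
    }
}
```

With a property value like "native,java", an unavailable native coder silently falls back to the pure Java one, which is the behavior the JIRA asks for.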
[jira] [Updated] (HADOOP-14107) ITestS3GuardListConsistency fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14107: --- Priority: Minor (was: Major) > ITestS3GuardListConsistency fails intermittently > > > Key: HADOOP-14107 > URL: https://issues.apache.org/jira/browse/HADOOP-14107 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Minor > Attachments: HADOOP-14107-HADOOP-13345.000.patch > > > {code} > mvn -Dit.test='ITestS3GuardListConsistency' -Dtest=none -Dscale -Ds3guard > -Ddynamo -q clean verify > --- > T E S T S > --- > Running org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency > Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.544 sec <<< > FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency > testListStatusWriteBack(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency) > Time elapsed: 3.147 sec <<< FAILURE! > java.lang.AssertionError: Unexpected number of results from metastore. 
> Metastore should only know about /XYZ: > DirListingMetadata{path=s3a://mliu-s3guard/test/ListStatusWriteBack, > listMap={s3a://mliu-s3guard/test/ListStatusWriteBack/XYZ=PathMetadata{fileStatus=S3AFileStatus{path=s3a://mliu-s3guard/test/ListStatusWriteBack/XYZ; > isDirectory=true; modification_time=0; access_time=0; owner=mliu; > group=mliu; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=true}, > s3a://mliu-s3guard/test/ListStatusWriteBack/123=PathMetadata{fileStatus=S3AFileStatus{path=s3a://mliu-s3guard/test/ListStatusWriteBack/123; > isDirectory=true; modification_time=0; access_time=0; owner=mliu; > group=mliu; permission=rwxrwxrwx; isSymlink=false} isEmptyDirectory=true}}, > isAuthoritative=false} > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testListStatusWriteBack(ITestS3GuardListConsistency.java:127) > {code} > See discussion on the parent JIRA [HADOOP-13345].
[jira] [Commented] (HADOOP-14217) Object Storage: support colon in object path
[ https://issues.apache.org/jira/browse/HADOOP-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965155#comment-15965155 ] Pankaj Sharma commented on HADOOP-14217: I voted for this issue; it will enable Hive tables over private home directories in S3. Home directories over S3 are required to have a colon in their object path, as in: aws:username, aws:userid, and aws:principaltype http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html 3257 has been open for 8+ years now - hoping enough Hadoop users have interest for it to get attention.
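The underlying problem is that Hadoop path resolution is URI-based, and a leading segment like aws:username parses as a URI scheme rather than as a file name. Plain java.net.URI shows the ambiguity a grammar would have to resolve:

```java
import java.net.URI;

public class ColonPath {
    public static void main(String[] args) throws Exception {
        // "aws:username" looks like an opaque URI with scheme "aws" ...
        URI u = new URI("aws:username");
        System.out.println(u.getScheme());   // aws
        System.out.println(u.isOpaque());    // true

        // ... while an explicitly relative reference keeps the colon literal.
        URI r = new URI("./aws:username");
        System.out.println(r.getScheme());   // null
    }
}
```

This is why a bare relative path with a colon cannot be round-tripped through URI parsing without extra rules, as Nicholas suggests above.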
[jira] [Updated] (HADOOP-14066) VersionInfo should be marked as public API
[ https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14066: - Fix Version/s: 2.9.0 > VersionInfo should be marked as public API > -- > > Key: HADOOP-14066 > URL: https://issues.apache.org/jira/browse/HADOOP-14066 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Thejas M Nair >Assignee: Akira Ajisaka >Priority: Critical > Fix For: 2.9.0, 2.7.4, 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14066.01.patch > > > org.apache.hadoop.util.VersionInfo is commonly used by applications that work > with multiple versions of Hadoop. > In the case of Hive, this is used in a shims layer to identify the version of > Hadoop and use different shim code based on the version (and the corresponding > API it supports). > I checked Pig and HBase as well, and they also use this class to get version > information. > However, this class is annotated as "@private" and "@unstable". > This code has actually been stable for a long time and is widely used like a > public API. I think we should mark it as such. > Note that there are APIs to find the version of server components in Hadoop; > however, this class is necessary for finding the version of the client.
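The shim pattern the description refers to — picking a code path from VersionInfo.getVersion() — amounts to comparing (major, minor) components of the version string. A hedged sketch with a hypothetical parser and shim names, not Hive's actual shim layer:

```java
public class VersionShim {
    // Parse "2.7.1" or "3.0.0-alpha2" style versions into {major, minor}.
    public static int[] parse(String version) {
        String[] parts = version.split("[.-]");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    // Select a shim implementation name for a given Hadoop version string
    // (in a real shim layer, this would come from VersionInfo.getVersion()).
    public static String pickShim(String hadoopVersion) {
        int[] v = parse(hadoopVersion);
        if (v[0] >= 3) return "shim-3.x";
        if (v[0] == 2 && v[1] >= 7) return "shim-2.7+";
        return "shim-legacy";
    }
}
```

Because downstream projects branch on this string, the argument above is that the class is already a de facto public API.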
[jira] [Updated] (HADOOP-14066) VersionInfo should be marked as public API
[ https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14066: - Fix Version/s: 2.8.1 2.7.4 Pushed to some additional branches, thanks Steve for the confirmation.
[jira] [Commented] (HADOOP-14248) Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964957#comment-15964957 ] Mingliang Liu commented on HADOOP-14248: Thanks [~cnauroth] for taking care of this. I'm totally +1 on the proposal. > Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in > branch-2 > --- > > Key: HADOOP-14248 > URL: https://issues.apache.org/jira/browse/HADOOP-14248 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-alpha3 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14248.000.patch, HADOOP-14248.001.patch, > HADOOP-14248-branch-2.001.patch, HADOOP-14248-branch-2.002.patch > > > This is from the discussion in [HADOOP-13050]. > [HADOOP-13727] added the SharedInstanceProfileCredentialsProvider, which > effectively reduces the high number of connections to the EC2 Instance Metadata > Service caused by InstanceProfileCredentialsProvider. That patch, in order to > prevent the throttling problem, defined a new class > {{SharedInstanceProfileCredentialsProvider}} as a subclass of > {{InstanceProfileCredentialsProvider}}, which enforces creation of only a > single instance. > Per [HADOOP-13050], we upgraded the AWS Java SDK. Since then, the > {{InstanceProfileCredentialsProvider}} in the SDK code internally enforces a > singleton. That confirms that our effort in [HADOOP-13727] makes 100% sense. > Meanwhile, {{SharedInstanceProfileCredentialsProvider}} can retire gracefully > in the trunk branch.
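The single-instance trick at the heart of SharedInstanceProfileCredentialsProvider — one shared provider so that many S3A filesystem instances don't each hammer the EC2 metadata endpoint — is the classic lazy-holder singleton. A schematic sketch, not the real class:

```java
public final class SharedProvider {
    private int refreshCalls = 0;

    private SharedProvider() {}

    // Lazy, thread-safe single instance (initialization-on-demand holder idiom).
    private static final class Holder {
        static final SharedProvider INSTANCE = new SharedProvider();
    }

    public static SharedProvider getInstance() {
        return Holder.INSTANCE;
    }

    // Stand-in for a call that would hit the instance metadata service;
    // counting calls illustrates that all callers share one instance.
    public synchronized int refresh() {
        return ++refreshCalls;
    }
}
```

Once the AWS SDK's own InstanceProfileCredentialsProvider enforces a singleton internally, this wrapper becomes redundant, which is the rationale for retiring it in trunk.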
[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964953#comment-15964953 ] Aaron Fabbri commented on HADOOP-14266: --- Hi [~liuml07]. I can review this on Friday when I get back from vacation. > S3Guard: S3AFileSystem::listFiles() to employ MetadataStore > --- > > Key: HADOOP-14266 > URL: https://issues.apache.org/jira/browse/HADOOP-14266 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14266-HADOOP-13345.000.patch, > HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch, > HADOOP-14266-HADOOP-13345.003.patch, HADOOP-14266-HADOOP-13345.003.patch, > HADOOP-14266-HADOOP-13345.004.patch > > > Similar to [HADOOP-13926], this is to track the effort of employing > MetadataStore in {{S3AFileSystem::listFiles()}}.
[jira] [Commented] (HADOOP-14248) Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964937#comment-15964937 ] Chris Nauroth commented on HADOOP-14248: [~liuml07], this looks good now, and I confirmed a full test run against US-west-2. However, I'm now thinking that we need two separate JIRA issues, just for the sake of accurate tracking against target versions and release notes. HADOOP-14248 would track the removal from trunk (already covered by the current release note), and the new issue would be targeted to 2.9.0 with a different release note describing deprecation instead of removal. (No need to repeat pre-commit. I'd just comment on the new JIRA that pre-commit for branch-2 was already covered here.) Do you think that makes sense? If so, I'd be happy to be the JIRA janitor and finish off committing this. :-)
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964934#comment-15964934 ] Hadoop QA commented on HADOOP-14284: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 322 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client-modules/hadoop-client hadoop-client-modules/hadoop-client-minicluster . 
hadoop-mapreduce-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 25m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 32s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m 59s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 29s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 29s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 8m 6s{color} | {color:orange} root: The patch generated 150 new + 25248 unchanged - 15 fixed = 25398 total (was 25263) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 1m 35s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 38s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-shaded-thirdparty . 
hadoop-client-modules/hadoop-client hadoop-client-modules/hadoop-client-minicluster hadoop-mapreduce-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 30m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 51s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 12s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 38s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}212m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14284 | |
[jira] [Commented] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
[ https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964916#comment-15964916 ] Rushabh S Shah commented on HADOOP-14146: - KeyTab.getUnboundInstance is not supported in Java 7; it's a newly added API in Java 8. I noticed that the patch applies without any conflicts to branch-2.8 as well, but it would be nice if you could create branch-2/branch-2.8 patches so that Jenkins can build with Java 7 and Java 8. > KerberosAuthenticationHandler should authenticate with SPN in AP-REQ > > > Key: HADOOP-14146 > URL: https://issues.apache.org/jira/browse/HADOOP-14146 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.5.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HADOOP-14146.1.patch, HADOOP-14146.patch > > > Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add > multiple SPN host and/or realm support to spnego authentication. The basic > problem is the server tries to guess and/or brute force what SPN the client > used. The server should just decode the SPN from the AP-REQ.
[jira] [Commented] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964911#comment-15964911 ] Andrew Wang commented on HADOOP-14298: -- This goes back to HDFS-11596, where hadoop-hdfs no longer pulls in hadoop-hdfs-client as a compile-scoped dependency. This affects apps that use hadoop-hdfs directly (and there are a lot of them). We'd like them to use hadoop-hdfs-client instead, but maybe this is too big a shift. The bug being fixed in HDFS-11596 is minor overall. [~ste...@apache.org], additional thoughts? I'm fine reverting HDFS-11596 and related. > TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) > at > org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) > {noformat}
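For projects hit by this after HDFS-11596, the application-side workaround is to declare the HDFS artifact they actually compile against explicitly, rather than relying on it arriving transitively through hadoop-hdfs. A sketch of the extra Maven dependency; the version property name is an assumption, not taken from this thread:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
  <version>${hadoop.version}</version>
</dependency>
```

Tests that spin up MiniDFSCluster additionally need hadoop-hdfs itself (typically test-scoped), since classes like HdfsConfiguration live there.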
[jira] [Commented] (HADOOP-14248) Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964891#comment-15964891 ] Hadoop QA commented on HADOOP-14248:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 1m 47s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 48s | branch-2 passed |
| +1 | compile | 6m 5s | branch-2 passed with JDK v1.8.0_121 |
| +1 | compile | 6m 50s | branch-2 passed with JDK v1.7.0_121 |
| +1 | checkstyle | 1m 29s | branch-2 passed |
| +1 | mvnsite | 1m 28s | branch-2 passed |
| +1 | mvneclipse | 1m 25s | branch-2 passed |
| +1 | findbugs | 2m 21s | branch-2 passed |
| +1 | javadoc | 1m 2s | branch-2 passed with JDK v1.8.0_121 |
| +1 | javadoc | 1m 10s | branch-2 passed with JDK v1.7.0_121 |
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 4s | the patch passed |
| +1 | compile | 5m 39s | the patch passed with JDK v1.8.0_121 |
| -1 | javac | 5m 39s | root-jdk1.8.0_121 with JDK v1.8.0_121 generated 2 new + 899 unchanged - 1 fixed = 901 total (was 900) |
| +1 | compile | 6m 39s | the patch passed with JDK v1.7.0_121 |
| -1 | javac | 6m 39s | root-jdk1.7.0_121 with JDK v1.7.0_121 generated 2 new + 992 unchanged - 1 fixed = 994 total (was 993) |
| +1 | checkstyle | 1m 33s | the patch passed |
| +1 | mvnsite | 1m 38s | the patch passed |
| +1 | mvneclipse | 0m 36s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 2m 49s | the patch passed |
| +1 | javadoc | 1m 7s | the patch passed with JDK v1.8.0_121 |
| +1 | javadoc | 1m 19s | the patch passed with JDK v1.7.0_121 |
| +1 | unit | 8m 5s | hadoop-common in the patch passed with JDK v1.7.0_121. |
| +1 | unit | 0m 30s | hadoop-aws in the patch passed with JDK v1.7.0_121. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 96m 36s | |

|| Subsystem || Report/Notes ||
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964870#comment-15964870 ] Hadoop QA commented on HADOOP-14284:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 31s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 322 new or modified test files. |
| 0 | mvndep | 0m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 52s | trunk passed |
| +1 | compile | 15m 21s | trunk passed |
| +1 | checkstyle | 8m 30s | trunk passed |
| +1 | mvnsite | 9m 29s | trunk passed |
| +1 | mvneclipse | 1m 38s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client-modules/hadoop-client hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-minicluster . hadoop-mapreduce-project |
| +1 | findbugs | 26m 6s | trunk passed |
| +1 | javadoc | 4m 25s | trunk passed |
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| -1 | mvninstall | 14m 59s | root in the patch failed. |
| -1 | compile | 0m 32s | root in the patch failed. |
| -1 | javac | 0m 32s | root in the patch failed. |
| -0 | checkstyle | 8m 3s | root: The patch generated 150 new + 25248 unchanged - 15 fixed = 25398 total (was 25263) |
| -1 | mvnsite | 0m 25s | root in the patch failed. |
| -1 | mvneclipse | 1m 35s | root in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 44s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project hadoop-shaded-thirdparty hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client-modules/hadoop-client hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-minicluster . hadoop-mapreduce-project |
| +1 | findbugs | 33m 7s | the patch passed |
| -1 | javadoc | 3m 0s | root in the patch failed. |
| -1 | unit | 7m 37s | root in the patch failed. |
| -1 | asflicense | 0m 40s | The patch generated 2 ASF License warnings. |
| | | 214m 12s | |

|| Subsystem ||
[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964817#comment-15964817 ] Hadoop QA commented on HADOOP-14266:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 14m 29s | HADOOP-13345 passed |
| +1 | compile | 0m 23s | HADOOP-13345 passed |
| +1 | checkstyle | 0m 18s | HADOOP-13345 passed |
| +1 | mvnsite | 0m 27s | HADOOP-13345 passed |
| +1 | mvneclipse | 0m 14s | HADOOP-13345 passed |
| +1 | findbugs | 0m 34s | HADOOP-13345 passed |
| +1 | javadoc | 0m 16s | HADOOP-13345 passed |
| +1 | mvninstall | 0m 22s | the patch passed |
| +1 | compile | 0m 20s | the patch passed |
| +1 | javac | 0m 20s | the patch passed |
| +1 | checkstyle | 0m 12s | the patch passed |
| +1 | mvnsite | 0m 24s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 35s | the patch passed |
| +1 | javadoc | 0m 12s | the patch passed |
| +1 | unit | 0m 38s | hadoop-aws in the patch passed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 21m 32s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14266 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862905/HADOOP-14266-HADOOP-13345.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux dc92cf8f7170 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-13345 / 13fafee |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12085/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12085/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
> ---
>
> Key: HADOOP-14266
> URL: https://issues.apache.org/jira/browse/HADOOP-14266
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14266-HADOOP-13345.000.patch,
> HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch,
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964803#comment-15964803 ] Akira Ajisaka commented on HADOOP-14284:

The failure can be fixed by the following setting:
{code:title=hadoop-client-modules/hadoop-client-runtime/pom.xml}
<phase>package</phase>
<goals>
  <goal>shade</goal>
</goals>
...
<artifactSet>
  <excludes>
    <exclude>org.apache.hadoop:hadoop-client-api</exclude>
+   <exclude>com.google.guava:guava</exclude>
  </excludes>
</artifactSet>
{code}

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.0.0-alpha3
> Reporter: Andrew Wang
> Assignee: Tsuyoshi Ozawa
> Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch,
> HADOOP-14284.004.patch, HADOOP-14284.007.patch
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts.
> Unfortunately, these projects also consume our private artifacts like
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams.
> This isn't a requirement for all dependency upgrades, but it's necessary for
> known-bad dependencies like Guava.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14274) Azure: Simplify Ranger-WASB policy model
[ https://issues.apache.org/jira/browse/HADOOP-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964792#comment-15964792 ] Mingliang Liu commented on HADOOP-14274:

The checkstyle warning is fine to skip in this patch. How about the testing?

> Azure: Simplify Ranger-WASB policy model
>
> Key: HADOOP-14274
> URL: https://issues.apache.org/jira/browse/HADOOP-14274
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/azure
> Reporter: Sivaguru Sankaridurg
> Assignee: Sivaguru Sankaridurg
> Attachments: HADOOP-14274-001.patch, HADOOP-14274.002.patch
>
> This improvement seeks to simplify the WASB-Ranger policy model -- both the
> policy specification and policy enforcement.
> More specifically, WASB-Ranger checks do not follow the same policy model and
> enforcement as Ranger-HDFS.
> Ranger-HDFS hands off to HDFS-ACLs when a policy match is not found. The
> handoff requires that the Ranger policies follow the same model as HDFS-ACLs. This
> is not true with Ranger+WASB.
> We seek to simplify the policy specification and enforcement by dropping the
> 'x' bit altogether.
> This JIRA tracks this improvement, along with a few more minor bugfixes that
> were found during Ranger-WASB testing.
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964788#comment-15964788 ] Akira Ajisaka commented on HADOOP-14284:

Thanks [~ozawa] for the very tough work! I tried the v7 patch and {{mvn install -DskipTests}} failed in the hadoop-client-check-invariants module:
{noformat}
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
Duplicate classes found:
Found in:
org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-alpha3-SNAPSHOT:compile
org.apache.hadoop:hadoop-client-api:jar:3.0.0-alpha3-SNAPSHOT:compile
{noformat}
The failure occurs because both modules contain shaded Guava. Would you remove the shaded Guava from the hadoop-client-runtime module?
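The enforcer failure above can be reproduced in miniature: BanDuplicateClasses trips whenever the same {{.class}} entry appears in more than one artifact on the classpath. A minimal sketch of that overlap check (illustrative only -- {{findDuplicates}}, the sample entry names, and the relocated package prefix are assumptions, not the enforcer's actual code):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class DuplicateClassCheck {
    // Return the .class entries present in both artifacts; a non-empty
    // result is what makes a BanDuplicateClasses-style rule fail the build.
    static Set<String> findDuplicates(Collection<String> first,
                                      Collection<String> second) {
        Set<String> dup = new TreeSet<>(first);
        dup.retainAll(new HashSet<>(second));
        dup.removeIf(entry -> !entry.endsWith(".class"));
        return dup;
    }

    public static void main(String[] args) {
        // Hypothetical jar listings: both shaded jars bundle relocated Guava,
        // which is the situation the comment above describes.
        List<String> clientApi = Arrays.asList(
            "org/apache/hadoop/com/google/common/base/Preconditions.class",
            "META-INF/MANIFEST.MF");
        List<String> clientRuntime = Arrays.asList(
            "org/apache/hadoop/com/google/common/base/Preconditions.class",
            "org/apache/commons/io/IOUtils.class");
        System.out.println(findDuplicates(clientApi, clientRuntime));
    }
}
```

Removing the relocated Guava classes from one of the two jars empties this intersection, which is exactly what the suggested fix does.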
[jira] [Updated] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14266:
---
Attachment: HADOOP-14266-HADOOP-13345.004.patch

Rebase from feature branch.

> S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
> ---
>
> Key: HADOOP-14266
> URL: https://issues.apache.org/jira/browse/HADOOP-14266
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14266-HADOOP-13345.000.patch,
> HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch,
> HADOOP-14266-HADOOP-13345.003.patch, HADOOP-14266-HADOOP-13345.003.patch,
> HADOOP-14266-HADOOP-13345.004.patch
>
> Similar to [HADOOP-13926], this is to track the effort of employing
> MetadataStore in {{S3AFileSystem::listFiles()}}.
[jira] [Updated] (HADOOP-14248) Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14248:
---
Attachment: HADOOP-14248-branch-2.002.patch

Thanks [~cnauroth] very much for your review. The v2 patch for {{branch-2}} addresses your comment. Are you +1 on the current patch (trunk and branch-2 respectively)?

> Retire SharedInstanceProfileCredentialsProvider in trunk; deprecate in
> branch-2
> ---
>
> Key: HADOOP-14248
> URL: https://issues.apache.org/jira/browse/HADOOP-14248
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0-alpha3
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14248.000.patch, HADOOP-14248.001.patch,
> HADOOP-14248-branch-2.001.patch, HADOOP-14248-branch-2.002.patch
>
> This is from the discussion in [HADOOP-13050].
> [HADOOP-13727] added SharedInstanceProfileCredentialsProvider, which
> effectively reduces the high number of connections to the EC2 Instance Metadata
> Service caused by InstanceProfileCredentialsProvider. That patch, in order to
> prevent the throttling problem, defined the new class
> {{SharedInstanceProfileCredentialsProvider}} as a subclass of
> {{InstanceProfileCredentialsProvider}}, which enforces creation of only a
> single instance.
> Per [HADOOP-13050], we upgraded the AWS Java SDK. Since then, the
> {{InstanceProfileCredentialsProvider}} in the SDK internally enforces a
> singleton. That confirms that our effort in [HADOOP-13727] makes 100% sense.
> Meanwhile, {{SharedInstanceProfileCredentialsProvider}} can retire gracefully
> in the trunk branch.
[jira] [Commented] (HADOOP-13545) Upgrade HSQLDB to 2.3.4
[ https://issues.apache.org/jira/browse/HADOOP-13545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964702#comment-15964702 ] Giovanni Matteo Fumarola commented on HADOOP-13545:
---
Thanks [~curino] and [~ajisakaa].

> Upgrade HSQLDB to 2.3.4
> ---
>
> Key: HADOOP-13545
> URL: https://issues.apache.org/jira/browse/HADOOP-13545
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 2.9.0
> Reporter: Giovanni Matteo Fumarola
> Assignee: Giovanni Matteo Fumarola
> Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13545.v1.patch, HADOOP-13545.v2.patch,
> HADOOP-13545.v3.patch
>
> Upgrade HSQLDB from 2.0.0 to 2.3.4.
> Version 2.3.4 is fully multithreaded and supports high performance 2PL and
> MVCC (multiversion concurrency control) transaction control models.
[jira] [Comment Edited] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964651#comment-15964651 ] Tsuyoshi Ozawa edited comment on HADOOP-14284 at 4/11/17 5:10 PM:
--
The v7 patch includes the following changes:
* Created hadoop-shaded-thirdparty, which includes shaded Guava. It does not, however, include com.google.common.base.Function, Predicate, and TypeToken, because those classes are required to use Apache Curator: https://cwiki.apache.org/confluence/display/CURATOR/TN13
* Changed the Guava namespaces to use shaded Guava under org.apache.hadoop.com.google.common. This change subsumes what HADOOP-14238 is addressing.

was (Author: ozawa):
v7 patch includes following changes:
* hadoop-shaded-thirdparty includes shaded guava, but it doesn't include com.google.common.base.Function, Predicate, and TypeToken. They are required to use Apache Curator: https://cwiki.apache.org/confluence/display/CURATOR/TN13
* Changed namespaces of Guava to use shaded Guava under org.apache.hadoop.com.google.common. This change subsume what HADOOP-14238 is addressing.
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964651#comment-15964651 ] Tsuyoshi Ozawa commented on HADOOP-14284:
-
The v7 patch includes the following changes:
* hadoop-shaded-thirdparty includes shaded Guava, but it doesn't include com.google.common.base.Function, Predicate, and TypeToken. They are required to use Apache Curator: https://cwiki.apache.org/confluence/display/CURATOR/TN13
* Changed the Guava namespaces to use shaded Guava under org.apache.hadoop.com.google.common. This change subsumes what HADOOP-14238 is addressing.
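The namespace change described above is, mechanically, a prefix substitution: shading rewrites every reference under the original Guava package prefix to the relocated prefix. A minimal sketch of that mapping (illustrative only -- the real rewrite is performed on bytecode by the shade plugin, and {{relocate}} is a hypothetical helper, not Hadoop or plugin code):

```java
public class RelocationSketch {
    static final String PATTERN = "com.google.common";
    static final String SHADED_PATTERN = "org.apache.hadoop.com.google.common";

    // Map a class name under the original prefix to the shaded prefix;
    // names outside the pattern are left untouched.
    static String relocate(String className) {
        if (className.startsWith(PATTERN)) {
            return SHADED_PATTERN + className.substring(PATTERN.length());
        }
        return className;
    }

    public static void main(String[] args) {
        System.out.println(relocate("com.google.common.base.Preconditions"));
        System.out.println(relocate("org.apache.hadoop.fs.FileSystem"));
    }
}
```

Because downstream code that imports the plain com.google.common prefix is untouched, downstream projects can keep their own Guava version while Hadoop's internals use the relocated copy.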
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964645#comment-15964645 ] ASF GitHub Bot commented on HADOOP-14284:
-
GitHub user oza opened a pull request:

https://github.com/apache/hadoop/pull/210

HADOOP-14284. Shade Guava everywhere.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/oza/hadoop HADOOP-14284

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/210.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

This closes #210

commit 8a38c006ecc25bec26d40da4abda03cdccff85f5
Author: Tsuyoshi Ozawa
Date: 2017-04-06T10:49:42Z

HADOOP-14284. Shade Guava everywhere.
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284:

Attachment: HADOOP-14284.007.patch

Some changes in v6 were not intended, so they are fixed in v7.
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284:

Attachment: (was: HADOOP-14284.005.patch)
[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964640#comment-15964640 ] Daryn Sharp commented on HADOOP-9747:
-
[~owen.omalley], do you have any cycles?

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch,
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the
> UGI.
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284:

Attachment: (was: HADOOP-14284.006.patch)
[jira] [Updated] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey E Rodriguez updated HADOOP-14295: -- Affects Version/s: 2.8.1 2.7.4 > Authentication proxy filter on firewall cluster may fail authorization > because of getRemoteAddr > --- > > Key: HADOOP-14295 > URL: https://issues.apache.org/jira/browse/HADOOP-14295 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1 >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez >Priority: Critical > Fix For: 3.0.0-alpha2 > > Attachments: hadoop-14295.001.patch > > > Many production environments use firewalls to protect network traffic. In the > specific case of the DataNode UI and other Hadoop servers whose ports > may fall on the list of firewalled ports, > org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses getRemoteAddr > (HttpServletRequest), which may return the firewall host such as 127.0.0.1. > This is unfortunate since, if you are using a proxy in addition to do > perimeter protection, and you have added your proxy as a super user, the check of > the proxy IP to authorize the user would fail, since > getRemoteAddr would return the IP of the firewall (127.0.0.1). > "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter > (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify > proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1" > I propose to add a check for the x-forwarded-for header, since proxies usually > inject that header, before we do a getRemoteAddr -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
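The proposed fix — prefer the client address carried in the X-Forwarded-For header over the socket-level remote address — can be sketched as a small helper. The class and method names here are hypothetical illustrations, not the attached patch:

```java
// Hypothetical sketch of the check proposed above: prefer the client address
// from the X-Forwarded-For header (injected by proxies such as Knox) over the
// socket-level remote address, which behind a firewall/proxy may be 127.0.0.1.
// Names and exact behavior are illustrative, not the actual hadoop-14295 patch.
public class ForwardedAddrSketch {
    static String effectiveClientAddr(String xForwardedFor, String remoteAddr) {
        if (xForwardedFor != null && !xForwardedFor.trim().isEmpty()) {
            // X-Forwarded-For may carry a chain "client, proxy1, proxy2";
            // the first entry is the original client.
            return xForwardedFor.split(",")[0].trim();
        }
        return remoteAddr; // no proxy header: fall back to getRemoteAddr()
    }
}
```

The real filter would read the header from the HttpServletRequest before consulting getRemoteAddr.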
[jira] [Comment Edited] (HADOOP-14297) Update the documentation about the new ec codecs config keys
[ https://issues.apache.org/jira/browse/HADOOP-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964511#comment-15964511 ] Wei-Chiu Chuang edited comment on HADOOP-14297 at 4/11/17 4:30 PM: --- Hi [~lewuathe] I think we can do a bigger change than just updating the configuration key: In section "Architecture" bq. The earlier factory is prior to followings in case of failure of creating raw coders. The default implementation classes which has the highest priority of RS and XOR codec are native codecs using Intel ISA-L to improve the performance. If the native library is not available, the codec should fallback to pure Java implementation. You can change the priority by changing these configuration keys. How about if we say "These codec factories are loaded in the order specified by the configuration values, until a codec is loaded successfully. The default RS and XOR codec configuration prefers native implementation over the pure Java one. There is no RS-LEGACY native codec implementation so the default is pure Java implementation only." In section "Enable Intel ISA-L" bq. HDFS native implementation of default RS codec leverages Intel ISA-L library to improve the encoding and decoding calculation. To enable and use Intel ISA-L, there are three steps. 1. Build ISA-L library. Please refer to the official site “https://github.com/01org/isa-l/” for detail information. 2. Build Hadoop with ISA-L support. Please refer to “Intel ISA-L build options” section in “Build instructions for Hadoop” in (BUILDING.txt) in the source code. Use -Dbundle.isal to copy the contents of the isal.lib directory into the final tar file. Deploy Hadoop with the tar file. Make sure ISA-L is available on HDFS clients and DataNodes. 3. Configure the io.erasurecode.codec.rs.rawcoder key with value org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory on HDFS clients and DataNodes. The 3rd step is not needed anymore. 
The 2nd step looks a little long and can be split into two steps. It would also be nice if you can make the steps as bulletins. So like {noformat} 1. 2. ... {noformat} bq. To enable the native implementation of the XOR codec, perform the same first two steps as above to build and deploy Hadoop with ISA-L support. Afterwards, configure the io.erasurecode.codec.xor.rawcoder key with org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawErasureCoderFactory on both HDFS client and DataNodes. This paragraph can now be removed since the native codec is preferred by default. was (Author: jojochuang): Hi [~lewuathe] I think we can do a bigger change than just updating the configuration key: In section "Architecture" bq. The earlier factory is prior to followings in case of failure of creating raw coders. The default implementation classes which has the highest priority of RS and XOR codec are native codecs using Intel ISA-L to improve the performance. If the native library is not available, the codec should fallback to pure Java implementation. You can change the priority by changing these configuration keys. How about if we say "These codec factories are loaded in the order specified by the configuration values, until a codec is loaded successfully. The default RS and XOR codec configuration prefers native implementation versus the pure Java one." In section "Enable Intel ISA-L" bq. HDFS native implementation of default RS codec leverages Intel ISA-L library to improve the encoding and decoding calculation. To enable and use Intel ISA-L, there are three steps. 1. Build ISA-L library. Please refer to the official site “https://github.com/01org/isa-l/” for detail information. 2. Build Hadoop with ISA-L support. Please refer to “Intel ISA-L build options” section in “Build instructions for Hadoop” in (BUILDING.txt) in the source code. Use -Dbundle.isal to copy the contents of the isal.lib directory into the final tar file. Deploy Hadoop with the tar file. 
Make sure ISA-L is available on HDFS clients and DataNodes. 3. Configure the io.erasurecode.codec.rs.rawcoder key with value org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory on HDFS clients and DataNodes. The 3rd step is not needed anymore. The 2nd step looks a little long and can be split into two steps. It would also be nice if you can make the steps as bulletins. So like {noformat} 1. 2. ... {noformat} bq. To enable the native implementation of the XOR codec, perform the same first two steps as above to build and deploy Hadoop with ISA-L support. Afterwards, configure the io.erasurecode.codec.xor.rawcoder key with org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawErasureCoderFactory on both HDFS client and DataNodes. This paragraph can now be removed since the native codec is preferred by default. > Update the documentation about the new ec codecs config keys >
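The fallback behavior the documentation change describes is driven by the new plural rawcoders keys from HADOOP-13665. A sketch of what the configuration might look like — the coder names in the values are illustrative and may differ between releases:

```xml
<!-- Sketch of the fallback-capable keys (HADOOP-13665); exact coder
     names/values may differ between releases. -->
<property>
  <name>io.erasurecode.codec.rs.rawcoders</name>
  <!-- Tried in order: native (ISA-L) first, pure-Java fallback. -->
  <value>rs_native,rs_java</value>
</property>
<property>
  <name>io.erasurecode.codec.xor.rawcoders</name>
  <value>xor_native,xor_java</value>
</property>
```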
[jira] [Commented] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
[ https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964604#comment-15964604 ] Daryn Sharp commented on HADOOP-14146: -- [~drankye], this is an internal blocker and we would really like not to maintain it internally. Any objections to the actual implementation? > KerberosAuthenticationHandler should authenticate with SPN in AP-REQ > > > Key: HADOOP-14146 > URL: https://issues.apache.org/jira/browse/HADOOP-14146 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.5.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HADOOP-14146.1.patch, HADOOP-14146.patch > > > Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add > multiple SPN host and/or realm support to spnego authentication. The basic > problem is the server tries to guess and/or brute force what SPN the client > used. The server should just decode the SPN from the AP-REQ. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Attachment: HADOOP-14284.006.patch Rebasing on trunk. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch, HADOOP-14284.005.patch, HADOOP-14284.006.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Status: Patch Available (was: Open) > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch, HADOOP-14284.005.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi Ozawa updated HADOOP-14284: Attachment: HADOOP-14284.005.patch > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch, HADOOP-14284.005.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14277) TestTrash.testTrashRestarts is flaky
[ https://issues.apache.org/jira/browse/HADOOP-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964551#comment-15964551 ] Eric Badger commented on HADOOP-14277: -- [~cheersyang], thanks for the patch! I have just a few nits. {noformat} + while(aliveCounts-- > 0) { +Thread.sleep(100); +countdownEmptier.countDown(); {noformat} We shouldn't be sleeping for this long. Especially when aliveCounts is set to 5, that's an extra 1 second that we're adding to the test that shouldn't be necessary, since we verify twice. I don't think that we should be depending on sleeps at all. Anytime we are sleeping, we could instead be using a GenericTestUtils.waitFor() or something similar with a very low interval. Basically using a polling-based approach with configurable timeouts instead of trying to guess how long we need to wait. That way we aren't sitting around doing nothing for long periods of time. {noformat} +@Override public void run() { + while (true) { +// Once counts down to 0, new another latch for next interval +this.intervalSignal = new CountDownLatch(interval); {noformat} It would be nice if we didn't have to instantiate a new CountDownLatch every time we loop. Doesn't look like you can reset the CountDownLatch, but maybe use something like a CyclicBarrier with the threads set to 1? Though I'm not sure how expensive the overhead is for a CyclicBarrier. 
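The polling approach suggested above — a short check interval with an overall timeout instead of fixed sleeps — can be sketched with a minimal standalone helper. Hadoop's own GenericTestUtils.waitFor works along these lines; this self-contained version is illustrative, not that API:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

// Minimal standalone sketch of the polling pattern suggested above: instead
// of a fixed Thread.sleep(), poll a condition at a short interval until it
// holds or an overall timeout expires. Hadoop's GenericTestUtils.waitFor
// follows this shape; this helper is illustrative, not the Hadoop API.
public class WaitForSketch {
    static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.get()) {                        // poll the condition
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);                 // short interval, not a guessed wait
        }
    }
}
```

A test using this pattern returns as soon as the condition holds, so the common case adds milliseconds rather than a full guessed sleep.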
> TestTrash.testTrashRestarts is flaky > > > Key: HADOOP-14277 > URL: https://issues.apache.org/jira/browse/HADOOP-14277 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Badger >Assignee: Weiwei Yang > Attachments: HADOOP-14277.001.patch, HADOOP-14277.002.patch > > > {noformat} > junit.framework.AssertionFailedError: Expected num of checkpoints is 2, but > actual is 3 expected:<2> but was:<3> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.TestCase.assertEquals(TestCase.java:401) > at > org.apache.hadoop.fs.TestTrash.verifyAuditableTrashEmptier(TestTrash.java:892) > at org.apache.hadoop.fs.TestTrash.testTrashRestarts(TestTrash.java:593) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964534#comment-15964534 ] Steve Loughran commented on HADOOP-13786: - Looks like the latest code is failing in {{TestStagingMRJob}} when there's no network as it fails to bond to the test object store. Even if there's binding to the mock FS, looks like something is also trying to talk to the real one {code} "Finalizer" #3 daemon prio=8 os_prio=31 tid=0x7fca50035000 nid=0x3803 in Object.wait() [0x75281000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x0007800339c8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143) - locked <0x0007800339c8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) "Reference Handler" #2 daemon prio=10 os_prio=31 tid=0x7fca4e80f000 nid=0x3603 in Object.wait() [0x7517e000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000780033b80> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:502) at java.lang.ref.Reference.tryHandlePending(Reference.java:191) - locked <0x000780033b80> (a java.lang.ref.Reference$Lock) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) "main" #1 prio=5 os_prio=31 tid=0x7fca4d002000 nid=0x1e03 waiting on condition [0x7475e000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doPauseBeforeRetry(AmazonHttpClient.java:1654) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.pauseBeforeRetry(AmazonHttpClient.java:1628) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1139) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185) at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:4903) at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:4878) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4170) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4132) at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1302) at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1259) at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:317) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:255) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3257) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3306) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3274) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at org.apache.hadoop.fs.s3a.commit.staging.MockedStagingCommitter.getDestination(MockedStagingCommitter.java:69) at 
org.apache.hadoop.fs.s3a.commit.AbstractS3GuardCommitter.initOutput(AbstractS3GuardCommitter.java:122) at org.apache.hadoop.fs.s3a.commit.AbstractS3GuardCommitter.<init>(AbstractS3GuardCommitter.java:85) at org.apache.hadoop.fs.s3a.commit.AbstractS3GuardCommitter.<init>(AbstractS3GuardCommitter.java:114) at org.apache.hadoop.fs.s3a.commit.staging.StagingS3GuardCommitter.<init>(StagingS3GuardCommitter.java:153) at org.apache.hadoop.fs.s3a.commit.staging.MockedStagingCommitter.<init>(MockedStagingCommitter.java:49) at org.apache.hadoop.fs.s3a.commit.staging.TestStagingMRJob$S3TextOutputFormat.getOutputCommitter(TestStagingMRJob.java:94) - locked
[jira] [Commented] (HADOOP-14297) Update the documentation about the new ec codecs config keys
[ https://issues.apache.org/jira/browse/HADOOP-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964511#comment-15964511 ] Wei-Chiu Chuang commented on HADOOP-14297: -- Hi [~lewuathe] I think we can do a bigger change than just updating the configuration key: In section "Architecture" bq. The earlier factory is prior to followings in case of failure of creating raw coders. The default implementation classes which has the highest priority of RS and XOR codec are native codecs using Intel ISA-L to improve the performance. If the native library is not available, the codec should fallback to pure Java implementation. You can change the priority by changing these configuration keys. How about if we say "These codec factories are loaded in the order specified by the configuration values, until a codec is loaded successfully. The default RS and XOR codec configuration prefers native implementation versus the pure Java one." In section "Enable Intel ISA-L" bq. HDFS native implementation of default RS codec leverages Intel ISA-L library to improve the encoding and decoding calculation. To enable and use Intel ISA-L, there are three steps. 1. Build ISA-L library. Please refer to the official site “https://github.com/01org/isa-l/” for detail information. 2. Build Hadoop with ISA-L support. Please refer to “Intel ISA-L build options” section in “Build instructions for Hadoop” in (BUILDING.txt) in the source code. Use -Dbundle.isal to copy the contents of the isal.lib directory into the final tar file. Deploy Hadoop with the tar file. Make sure ISA-L is available on HDFS clients and DataNodes. 3. Configure the io.erasurecode.codec.rs.rawcoder key with value org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory on HDFS clients and DataNodes. The 3rd step is not needed anymore. The 2nd step looks a little long and can be split into two steps. It would also be nice if you can make the steps as bulletins. So like {noformat} 1. 2. ... 
{noformat} bq. To enable the native implementation of the XOR codec, perform the same first two steps as above to build and deploy Hadoop with ISA-L support. Afterwards, configure the io.erasurecode.codec.xor.rawcoder key with org.apache.hadoop.io.erasurecode.rawcoder.NativeXORRawErasureCoderFactory on both HDFS client and DataNodes. This paragraph can now be removed since the native codec is preferred by default. > Update the documentation about the new ec codecs config keys > > > Key: HADOOP-14297 > URL: https://issues.apache.org/jira/browse/HADOOP-14297 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-14297.01.patch, HADOOP-14297.02.patch > > > In HADOOP-13665, > io.erasurecode.codec.{rs-legacy.rawcoder,rs.rawcoder,xor.rawcoder} are no > more used. > It is necessary to update {{HDFSErasureCoding.md}} to show new config keys > io.erasurecode.codec.{rs-legacy.rawcoders,rs.rawcoders,xor.rawcoders} instead. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964481#comment-15964481 ] Hudson commented on HADOOP-13665: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11569 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11569/]) HADOOP-13665. Erasure Coding codec should support fallback coder. (weichiu: rev f050afb5785dc38875cf644fd4f80a219d4345e7) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestHHXORErasureCoder.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestRSErasureCoder.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFile.java * (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCodecRawCoderMapping.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestUnsetAndChangeDirectoryEcPolicy.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, 
HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, > HADOOP-13665.12.patch > > > The current EC codec supports a single coder only (by default pure Java > implementation). If the native coder is specified but is unavailable, it > should fallback to pure Java implementation. > One possible solution is to follow the convention of existing Hadoop native > codec, such as transport encryption (see {{CryptoCodec.java}}). It supports > fallback by specifying two or multiple coders as the value of property, and > loads coders in order. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13665: - Resolution: Fixed Hadoop Flags: Incompatible change,Reviewed (was: Incompatible change) Fix Version/s: 3.0.0-alpha2 Release Note: Use configuration properties io.erasurecode.codec.{rs-legacy,rs,xor}.rawcoders to control erasure coding codec. These properties support codec fallback in case the previous codec is not loaded. Status: Resolved (was: Patch Available) Committed to trunk. Thanks [~drankye] for driving the review and [~lewuathe] for contributing this patch! > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, > HADOOP-13665.12.patch > > > The current EC codec supports a single coder only (by default pure Java > implementation). If the native coder is specified but is unavailable, it > should fallback to pure Java implementation. > One possible solution is to follow the convention of existing Hadoop native > codec, such as transport encryption (see {{CryptoCodec.java}}). It supports > fallback by specifying two or multiple coders as the value of property, and > loads coders in order. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964383#comment-15964383 ] Wei-Chiu Chuang commented on HADOOP-13665: -- +1 test failure is unrelated. Will commit soon. > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, > HADOOP-13665.12.patch > > > The current EC codec supports a single coder only (by default pure Java > implementation). If the native coder is specified but is unavailable, it > should fallback to pure Java implementation. > One possible solution is to follow the convention of existing Hadoop native > codec, such as transport encryption (see {{CryptoCodec.java}}). It supports > fallback by specifying two or multiple coders as the value of property, and > loads coders in order. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11794) Enable distcp to copy blocks in parallel
[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964324#comment-15964324 ] Steve Loughran commented on HADOOP-11794: - Omkar: what version are you looking at? We could talk about a backport to 2.8.1, but given it's a feature I don't see it being pulled back any earlier > Enable distcp to copy blocks in parallel > > > Key: HADOOP-11794 > URL: https://issues.apache.org/jira/browse/HADOOP-11794 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 0.21.0 >Reporter: dhruba borthakur >Assignee: Yongjun Zhang > Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, > HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, > HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, > HADOOP-11794.009.patch, HADOOP-11794.010.branch2.patch, > HADOOP-11794.010.patch, MAPREDUCE-2257.patch > > > The minimum unit of work for a distcp task is a file. We have files that are > greater than 1 TB with a block size of 1 GB. If we use distcp to copy these > files, the tasks either take a long long long time or finally fail. A better > way for distcp would be to copy all the source blocks in parallel, and then > stitch the blocks back to files at the destination via the HDFS Concat API > (HDFS-222) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
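The block-parallel idea in the issue description amounts to splitting a file into block-aligned (offset, length) ranges that independent copy tasks handle, then re-joining them at the destination (e.g. via the HDFS concat API). A sketch of the range computation — the helper name and shape are illustrative, not the attached patches:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the chunking behind block-parallel distcp: split a file into
// block-aligned {offset, length} ranges that independent tasks can copy,
// to be stitched back together at the destination (e.g. via HDFS concat).
// Helper name and shape are illustrative, not the actual HADOOP-11794 patch.
public class BlockRangeSketch {
    static List<long[]> blockRanges(long fileLen, long blockSize) {
        List<long[]> ranges = new ArrayList<>();
        for (long off = 0; off < fileLen; off += blockSize) {
            long len = Math.min(blockSize, fileLen - off);
            ranges.add(new long[] {off, len});   // one copy task per range
        }
        return ranges;
    }
}
```

For the 1 TB file with a 1 GB block size mentioned in the description, this yields 1024 ranges that can be copied concurrently instead of one multi-hour single-task copy.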
[jira] [Commented] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964320#comment-15964320 ] Steve Loughran commented on HADOOP-14298: - Before rushing to apply the patch: what's happened here to cause it? Because if it's a POM dependency change, it's going to surface downstream. We should make sure that the root cause is fixed, rather than fixing here what's going to surface everywhere > TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) > at > org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14289) Move logging APIs over to slf4j in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964311#comment-15964311 ] Steve Loughran commented on HADOOP-14289: - You just sed the P-word > Move logging APIs over to slf4j in hadoop-common > > > Key: HADOOP-14289 > URL: https://issues.apache.org/jira/browse/HADOOP-14289 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka > Attachments: HADOOP-14289.sample.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14066) VersionInfo should be marked as public API
[ https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964309#comment-15964309 ] Steve Loughran commented on HADOOP-14066: - There's no harm, though since all we are doing is tagging the interface as public going forward, we're implicitly marking it as public everywhere. > VersionInfo should be marked as public API > -- > > Key: HADOOP-14066 > URL: https://issues.apache.org/jira/browse/HADOOP-14066 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Thejas M Nair >Assignee: Akira Ajisaka >Priority: Critical > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14066.01.patch > > > org.apache.hadoop.util.VersionInfo is commonly used by applications that work > with multiple versions of Hadoop. > In the case of Hive, this is used in a shims layer to identify the version of > hadoop and use different shim code based on version (and the corresponding > api it supports). > I checked Pig and HBase as well and they also use this class to get version > information. > However, this method is annotated as "@private" and "@unstable". > This code has actually been stable for a long time and is widely used like a > public api. I think we should mark it as such. > Note that there are apis to find the version of server components in hadoop, > however, this class is necessary for finding the version of the client. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
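As a concrete illustration of the client-side use case described in the issue, a shims layer typically parses the string returned by VersionInfo.getVersion() and branches on the major/minor components. A hedged sketch of that pattern, assuming only that getVersion() is the real API; the parsing helper and shim names below are hypothetical:

```java
// Hypothetical sketch of how a shims layer (as in Hive) might branch on the
// Hadoop version string returned by org.apache.hadoop.util.VersionInfo.getVersion().
// The parsing and shim names are illustrative; only getVersion() is the real API.
public class ShimSelector {
    // Extract the leading "major.minor" pair from a version like "2.7.1" or "3.0.0-alpha2".
    public static int[] majorMinor(String version) {
        String[] parts = version.split("[.-]");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    public static String selectShim(String version) {
        int[] v = majorMinor(version);
        if (v[0] >= 3) return "Hadoop3Shims";
        if (v[0] == 2 && v[1] >= 7) return "Hadoop27Shims";
        return "Hadoop2Shims";
    }

    public static void main(String[] args) {
        // In real code: String version = VersionInfo.getVersion();
        System.out.println(selectShim("2.7.1"));        // prints Hadoop27Shims
        System.out.println(selectShim("3.0.0-alpha2")); // prints Hadoop3Shims
    }
}
```

This is exactly why downstream projects treat the class as de facto public: the version string is the only client-side signal they have to choose compatible code paths.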
[jira] [Commented] (HADOOP-14274) Azure: Simplify Ranger-WASB policy model
[ https://issues.apache.org/jira/browse/HADOOP-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964282#comment-15964282 ] Sivaguru Sankaridurg commented on HADOOP-14274: --- [~liuml07]. I fixed all your code review comments in the latest patch. Please review the changes. I have not broken up NativeAzureFileSystem.java into smaller files. That is the only checkstyle item still pending. I'll fix it in future changes. > Azure: Simplify Ranger-WASB policy model > > > Key: HADOOP-14274 > URL: https://issues.apache.org/jira/browse/HADOOP-14274 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Attachments: HADOOP-14274-001.patch, HADOOP-14274.002.patch > > > This improvement seeks to simplify the WASB-Ranger policy model -- both the > policy specification and policy enforcement. > More specifically, WASB-Ranger checks do not follow the same policy model and > enforcement as Ranger-HDFS. > Ranger-HDFS hands off to HDFS-ACLs when a policy-match is not found. The > handoff requires that the Ranger policies follow the same model as HDFS-ACLs. This > is not true with Ranger+WASB. > We seek to simplify the policy specification and enforcement by dropping the > 'x' bit altogether. > This JIRA tracks this improvement, along with a few more minor bugfixes that > were found during Ranger-WASB testing. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14274) Azure: Simplify Ranger-WASB policy model
[ https://issues.apache.org/jira/browse/HADOOP-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964260#comment-15964260 ] Hadoop QA commented on HADOOP-14274: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 new + 42 unchanged - 7 fixed = 43 total (was 49) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14274 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862834/HADOOP-14274.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 51542d6074b9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / aabf08d | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12081/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12081/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12081/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Azure: Simplify Ranger-WASB policy model > > > Key: HADOOP-14274 > URL: https://issues.apache.org/jira/browse/HADOOP-14274 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Attachments:
[jira] [Updated] (HADOOP-14274) Azure: Simplify Ranger-WASB policy model
[ https://issues.apache.org/jira/browse/HADOOP-14274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaguru Sankaridurg updated HADOOP-14274: -- Attachment: HADOOP-14274.002.patch Addressed Mingliang Liu's code review comments of 06-Apr-2017 > Azure: Simplify Ranger-WASB policy model > > > Key: HADOOP-14274 > URL: https://issues.apache.org/jira/browse/HADOOP-14274 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Attachments: HADOOP-14274-001.patch, HADOOP-14274.002.patch > > > This improvement seeks to simplify the WASB-Ranger policy model -- both the > policy specification and policy enforcement. > More specifically, WASB-Ranger checks do not follow the same policy model and > enforcement as Ranger-HDFS. > Ranger-HDFS hands off to HDFS-ACLs when a policy-match is not found. The > handoff requires that the Ranger policies follow the same model as HDFS-ACLs. This > is not true with Ranger+WASB. > We seek to simplify the policy specification and enforcement by dropping the > 'x' bit altogether. > This JIRA tracks this improvement, along with a few more minor bugfixes that > were found during Ranger-WASB testing. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964158#comment-15964158 ] Wei-Chiu Chuang edited comment on HADOOP-14295 at 4/11/17 11:20 AM: Hello [~jeffreyr97] thanks for filing this. IIUC, AuthenticationWithProxyUserFilter was added in HADOOP-13119, which was fixed in 2.7.4, 3.0.0-alpha2, 2.8.1, and therefore please update affects versions and target versions accordingly. Also, it would be awesome if you could also attach a test case. Thanks! was (Author: jojochuang): Hello [~jeffreyr97] thanks for filing this. IIUC, AuthenticationWithProxyUserFilter was added in HADOOP-13119, which was fixed in 2.7.4, 3.0.0-alpha2, 2.8.1, and therefore please update affects versions and target versions accordingly. > Authentication proxy filter on firewall cluster may fail authorization > because of getRemoteAddr > --- > > Key: HADOOP-14295 > URL: https://issues.apache.org/jira/browse/HADOOP-14295 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha2 >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez >Priority: Critical > Fix For: 3.0.0-alpha2 > > Attachments: hadoop-14295.001.patch > > > Many production environments use firewalls to protect network traffic. In the > specific case of the DataNode UI and other Hadoop servers whose ports > may fall on the list of firewalled ports, > org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses getRemoteAddr > (HttpServletRequest), which may return the firewall host such as 127.0.0.1. > This is unfortunate because, if you are using a proxy in addition to the firewall > for perimeter protection and have added your proxy as a super user, the check of > the proxy IP to authorize the user would fail, since > getRemoteAddr would return the IP of the firewall (127.0.0.1). 
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter > (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify > proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1" > I propose to add a check for the x-forwarded-for header, since proxies usually > inject that header before we do a getRemoteAddr. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14295) Authentication proxy filter on firewall cluster may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964158#comment-15964158 ] Wei-Chiu Chuang commented on HADOOP-14295: -- Hello [~jeffreyr97] thanks for filing this. IIUC, AuthenticationWithProxyUserFilter was added in HADOOP-13119, which was fixed in 2.7.4, 3.0.0-alpha2, 2.8.1, and therefore please update affects versions and target versions accordingly. > Authentication proxy filter on firewall cluster may fail authorization > because of getRemoteAddr > --- > > Key: HADOOP-14295 > URL: https://issues.apache.org/jira/browse/HADOOP-14295 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha2 >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez >Priority: Critical > Fix For: 3.0.0-alpha2 > > Attachments: hadoop-14295.001.patch > > > Many production environments use firewalls to protect network traffic. In the > specific case of the DataNode UI and other Hadoop servers whose ports > may fall on the list of firewalled ports, > org.apache.hadoop.security.AuthenticationWithProxyUserFilter uses getRemoteAddr > (HttpServletRequest), which may return the firewall host such as 127.0.0.1. > This is unfortunate because, if you are using a proxy in addition to the firewall > for perimeter protection and have added your proxy as a super user, the check of > the proxy IP to authorize the user would fail, since > getRemoteAddr would return the IP of the firewall (127.0.0.1). 
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter > (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify > proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1" > I propose to add a check for the x-forwarded-for header, since proxies usually > inject that header before we do a getRemoteAddr. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
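The fix proposed in the issue boils down to preferring the client address that the proxy records in x-forwarded-for over the socket-level remote address. A minimal sketch of that resolution logic, with plain strings standing in for HttpServletRequest; this is an assumption about the shape of the eventual patch, not its actual code:

```java
// Sketch of the proposed address resolution: prefer the first (client) entry
// of the x-forwarded-for header over the socket-level remote address, which
// behind a firewall/proxy may be 127.0.0.1. Plain strings stand in for
// HttpServletRequest#getRemoteAddr() and #getHeader("X-Forwarded-For").
public class RemoteAddrResolver {
    public static String effectiveClientAddr(String remoteAddr, String xForwardedFor) {
        if (xForwardedFor == null || xForwardedFor.trim().isEmpty()) {
            return remoteAddr;
        }
        // x-forwarded-for may carry a chain: "client, proxy1, proxy2";
        // the original client is the first entry.
        return xForwardedFor.split(",")[0].trim();
    }

    public static void main(String[] args) {
        // No header: fall back to the socket address (the firewall's view).
        System.out.println(effectiveClientAddr("127.0.0.1", null));                 // prints 127.0.0.1
        // Header present: trust the proxy-injected client address.
        System.out.println(effectiveClientAddr("127.0.0.1", "10.0.0.5, 10.0.0.1")); // prints 10.0.0.5
    }
}
```

Note that trusting x-forwarded-for is only safe when the header can't be spoofed past the perimeter, which is presumably why the check is scoped to the proxy-user authorization path.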
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964139#comment-15964139 ] Hadoop QA commented on HADOOP-13200: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-13200 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-13200 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862795/HADOOP-13200.01.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12080/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Tim Yao >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13200.01.patch > > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current having raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11794) Enable distcp to copy blocks in parallel
[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964044#comment-15964044 ] Omkar Aradhya K S commented on HADOOP-11794: {quote} Yongjun Zhang Sure, I will finish testing this by early next week. {quote} [~yzhangal] I was able to do some basic tests and it works! Thanks for the patch. branch-2 is at *2.9.0*. However, will this patch work on older versions like *2.2.x*? > Enable distcp to copy blocks in parallel > > > Key: HADOOP-11794 > URL: https://issues.apache.org/jira/browse/HADOOP-11794 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 0.21.0 >Reporter: dhruba borthakur >Assignee: Yongjun Zhang > Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, > HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, > HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, > HADOOP-11794.009.patch, HADOOP-11794.010.branch2.patch, > HADOOP-11794.010.patch, MAPREDUCE-2257.patch > > > The minimum unit of work for a distcp task is a file. We have files that are > greater than 1 TB with a block size of 1 GB. If we use distcp to copy these > files, the tasks either take a very long time or eventually fail. A better > way for distcp would be to copy all the source blocks in parallel, and then > stitch the blocks back into files at the destination via the HDFS Concat API > (HDFS-222) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Yao updated HADOOP-13200: - Status: Patch Available (was: Open) > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Tim Yao >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13200.01.patch > > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current having raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Yao updated HADOOP-13200: - Attachment: HADOOP-13200.01.patch Adds a CoderRegistry that dynamically loads RawErasureCoderFactory implementations using ServiceLoader. > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Tim Yao >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13200.01.patch > > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current approach of having a raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
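The ServiceLoader-based registry mentioned in the patch summary follows the standard Java pattern: scan the providers registered on the classpath for one matching the requested codec, otherwise fall back to a built-in default. A hedged sketch of that lookup, using a stand-in interface rather than Hadoop's real RawErasureCoderFactory (real providers would be listed in META-INF/services files):

```java
import java.util.ServiceLoader;

// Sketch of ServiceLoader-based discovery as described for the CoderRegistry:
// scan registered factory providers and fall back to a built-in default when
// none matches. CoderFactory is a stand-in for Hadoop's RawErasureCoderFactory;
// real providers are registered via META-INF/services on the classpath.
public class CoderLookup {
    public interface CoderFactory {
        String codecName();
    }

    public static String resolve(String codecName) {
        for (CoderFactory f : ServiceLoader.load(CoderFactory.class)) {
            if (f.codecName().equals(codecName)) {
                return f.getClass().getName();
            }
        }
        // No registered provider matched: fall back to a built-in default.
        return "default-" + codecName;
    }

    public static void main(String[] args) {
        // This standalone sketch registers no providers, so resolution falls back.
        System.out.println(resolve("rs"));  // prints default-rs
    }
}
```

The advantage over a hard-coded factory is that third-party coder implementations can plug in by shipping a jar with a provider-configuration file, with no changes to Hadoop's own code.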
[jira] [Commented] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963982#comment-15963982 ] Akira Ajisaka commented on HADOOP-14298: Hi [~andrew.wang], would you review this patch? > TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) > at > org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963965#comment-15963965 ] Hadoop QA commented on HADOOP-14298: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s{color} | {color:green} hadoop-archive-logs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14298 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862792/HADOOP-14298.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux cfb51971099d 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / aabf08d | | Default Java | 1.8.0_121 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12079/testReport/ | | modules | C: hadoop-tools/hadoop-archive-logs U: hadoop-tools/hadoop-archive-logs | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12079/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at
[jira] [Updated] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14298: --- Target Version/s: 3.0.0-alpha3 (was: 2.8.1, 3.0.0-alpha3) > TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) > at > org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14298: --- Assignee: Akira Ajisaka Target Version/s: 2.8.1, 3.0.0-alpha3 Status: Patch Available (was: Open) > TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) > at > org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
[ https://issues.apache.org/jira/browse/HADOOP-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14298: --- Attachment: HADOOP-14298.01.patch Add hadoop-hdfs-client dependency to hadoop-archive-logs module. > TestHadoopArchiveLogsRunner fails > - > > Key: HADOOP-14298 > URL: https://issues.apache.org/jira/browse/HADOOP-14298 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka > Attachments: HADOOP-14298.01.patch > > > {noformat} > Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< > FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner > testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) > Time elapsed: 4.631 sec <<< ERROR! > java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) > at > org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
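The fix described here is a build change: declaring the HDFS client artifact so that org/apache/hadoop/hdfs/HdfsConfiguration is on the test classpath when MiniDFSCluster starts. A sketch of what such a POM addition typically looks like; the exact artifact, version handling, and scope are whatever the attached patch specifies, which is not reproduced here:

```xml
<!-- Illustrative sketch, not the attached patch: a test-scoped dependency on
     hadoop-hdfs-client in the hadoop-archive-logs POM, so HdfsConfiguration
     resolves when MiniDFSCluster is built in tests. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs-client</artifactId>
  <scope>test</scope>
</dependency>
```

This is the kind of dependency change Steve Loughran flags above: if a module silently relied on a transitive dependency that a POM refactoring removed, the same NoClassDefFoundError may surface in other downstream modules too.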
[jira] [Created] (HADOOP-14298) TestHadoopArchiveLogsRunner fails
Akira Ajisaka created HADOOP-14298: -- Summary: TestHadoopArchiveLogsRunner fails Key: HADOOP-14298 URL: https://issues.apache.org/jira/browse/HADOOP-14298 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Akira Ajisaka {noformat} Running org.apache.hadoop.tools.TestHadoopArchiveLogsRunner Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.687 sec <<< FAILURE! - in org.apache.hadoop.tools.TestHadoopArchiveLogsRunner testHadoopArchiveLogs(org.apache.hadoop.tools.TestHadoopArchiveLogsRunner) Time elapsed: 4.631 sec <<< ERROR! java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/HdfsConfiguration at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) at org.apache.hadoop.tools.TestHadoopArchiveLogsRunner.testHadoopArchiveLogs(TestHadoopArchiveLogsRunner.java:66) {noformat}
[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963918#comment-15963918 ] Kai Zheng commented on HADOOP-13665: Thanks [~lewuathe] for the update! The latest patch LGTM. I guess [~jojochuang] will take another look and commit it. Thanks. > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, > HADOOP-13665.12.patch > > > The current EC codec supports a single coder only (by default the pure Java > implementation). If the native coder is specified but is unavailable, it > should fall back to the pure Java implementation. > One possible solution is to follow the convention of the existing Hadoop native > codecs, such as transport encryption (see {{CryptoCodec.java}}). It supports > fallback by specifying two or more coders as the value of the property, and > loads the coders in order.
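The ordered-fallback convention described in the issue (list several coder classes in one property value, try each in order, and fall back to the next when one cannot be loaded) can be sketched as below. The class and property names are illustrative, not the actual Hadoop codec API; java.lang.String stands in for the pure-Java fallback implementation.

```java
public class FallbackCoderDemo {
    // Return the first class on the comma-separated list that can be loaded,
    // mirroring the CryptoCodec-style ordered fallback.
    static Class<?> loadFirstAvailable(String confValue) {
        for (String name : confValue.split(",")) {
            try {
                return Class.forName(name.trim());
            } catch (ClassNotFoundException e) {
                // This coder is unavailable (e.g. native libs missing); try the next one.
            }
        }
        throw new IllegalArgumentException("no coder on the list could be loaded: " + confValue);
    }

    public static void main(String[] args) {
        // "com.example.NativeRSCoder" is deliberately missing, so the loader
        // falls back to the second entry.
        Class<?> coder = loadFirstAvailable("com.example.NativeRSCoder, java.lang.String");
        System.out.println(coder.getName()); // prints java.lang.String
    }
}
```

The key design point is that failure to load one entry is not an error as long as a later entry succeeds; only an empty result after the whole list is exhausted is fatal.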
[jira] [Commented] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963903#comment-15963903 ] Hadoop QA commented on HADOOP-14296: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-tools: The patch generated 6 new + 92 unchanged - 7 fixed = 98 total (was 99) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 23s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 0s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14296 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12862785/HADOOP-14296.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5265c31bc678 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / aabf08d | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12078/artifact/patchprocess/diff-checkstyle-hadoop-tools.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12078/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12078/testReport/ | | modules | C: hadoop-tools/hadoop-azure hadoop-tools/hadoop-sls
[jira] [Updated] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14296: --- Attachment: HADOOP-14296.02.patch 02 patch * Undo the change in hadoop-rumen > Move logging APIs over to slf4j in hadoop-tools > --- > > Key: HADOOP-14296 > URL: https://issues.apache.org/jira/browse/HADOOP-14296 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14296.01.patch, HADOOP-14296.02.patch > >
[jira] [Commented] (HADOOP-14296) Move logging APIs over to slf4j in hadoop-tools
[ https://issues.apache.org/jira/browse/HADOOP-14296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963876#comment-15963876 ] Akira Ajisaka commented on HADOOP-14296: bq. can't rumen just leave everything alone and rely on people to edit their log4j settings? I like the idea of having people edit their log4j settings instead of setting the log level programmatically. Can I remove the following code, or keep it as-is? {code} // turn off the warning w.r.t deprecated mapreduce keys static { Logger.getLogger(Configuration.class).setLevel(Level.OFF); } {code} > Move logging APIs over to slf4j in hadoop-tools > --- > > Key: HADOOP-14296 > URL: https://issues.apache.org/jira/browse/HADOOP-14296 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-14296.01.patch > >
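If the programmatic override quoted above were removed, users who want the old quiet behavior could recover it from their log4j configuration instead. Assuming the logger being silenced is the one named after the Configuration class (Logger.getLogger(Configuration.class) resolves to the fully qualified class name), the equivalent log4j.properties entry would be:

```properties
# Sketch of the equivalent of:
#   Logger.getLogger(Configuration.class).setLevel(Level.OFF)
# Logger name assumed to be the fully qualified Configuration class.
log4j.logger.org.apache.hadoop.conf.Configuration=OFF
```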