[jira] [Created] (HADOOP-11344) KMS kms-config.sh sets a default value for the keystore password even in non-ssl setup
Arun Suresh created HADOOP-11344:
------------------------------------

             Summary: KMS kms-config.sh sets a default value for the keystore password even in non-ssl setup
                 Key: HADOOP-11344
                 URL: https://issues.apache.org/jira/browse/HADOOP-11344
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Arun Suresh
            Assignee: Arun Suresh

This results in KMS always starting up in SSL mode.
[jira] [Created] (HADOOP-11343) Overflow is not properly handled in calculating final iv for AES CTR
Jerry Chen created HADOOP-11343:
------------------------------------

             Summary: Overflow is not properly handled in calculating final iv for AES CTR
                 Key: HADOOP-11343
                 URL: https://issues.apache.org/jira/browse/HADOOP-11343
             Project: Hadoop Common
          Issue Type: Bug
          Components: security
    Affects Versions: trunk-win
            Reporter: Jerry Chen

In AesCtrCryptoCodec.calculateIV, the initial IV is 16 randomly generated bytes:

    final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
    cc.generateSecureRandom(iv);

The subsequent addition of the counter, shown below, is done in an 8-byte (64-bit) space, so it can easily overflow, and the carry out of those 64 bits is silently lost. As a result, the 128-bit data block is encrypted with the wrong counter value and cannot be decrypted by a standard AES-CTR implementation.

    /**
     * The IV is produced by adding the initial IV to the counter. IV length
     * should be the same as {@link #AES_BLOCK_SIZE}
     */
    @Override
    public void calculateIV(byte[] initIV, long counter, byte[] IV) {
      Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
      Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);

      System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);

      long l = 0;
      for (int i = 0; i < 8; i++) {
        l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
      }
      l += counter;
      IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
      IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
      IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
      IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
      IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
      IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
      IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
      IV[CTR_OFFSET + 7] = (byte) (l);
    }
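For readers following along, a minimal sketch of a carry-aware variant is shown below; it illustrates the idea rather than the patch that was ultimately committed, and it reuses AES_BLOCK_SIZE and the Preconditions checks from the snippet above. The addition is performed byte by byte over the whole 16-byte IV, so a carry out of the low 64 bits propagates into the upper bytes instead of being dropped.

    // Sketch only: treat the 16-byte IV as a big-endian 128-bit integer and
    // add the 64-bit counter to it, letting the carry run all the way up.
    @Override
    public void calculateIV(byte[] initIV, long counter, byte[] IV) {
      Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
      Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);

      int i = IV.length;  // walk the IV from the least significant byte
      int j = 0;          // number of counter bytes consumed so far
      int sum = 0;        // current byte sum, including the incoming carry
      while (i-- > 0) {
        // (sum >>> Byte.SIZE) is the carry out of the previous (lower) byte
        sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE);
        if (j++ < 8) {    // only the low 8 bytes receive counter bytes
          sum += (byte) counter & 0xff;
          counter >>>= 8;
        }
        IV[i] = (byte) sum;
      }
    }

With the carry propagated this way, a counter that overflows the low 8 bytes simply increments the upper half of the IV, which matches what a standard AES-CTR implementation computes.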
Test case failures with Hadoop trunk
Hi all,

My name is Vijay Bhat and I am looking to contribute to the Hadoop YARN project. I have been using and benefiting from Hadoop ecosystem technologies for a few years now, and I want to give back to the community that makes this happen.

I forked the apache/hadoop branch on github and synced to the last commit (https://github.com/apache/hadoop/commit/1556f86a31a54733d6550363aa0e027acca7823b) that successfully built on the Apache build server (https://builds.apache.org/view/All/job/Hadoop-Yarn-trunk/758/).

However, I get test case failures when I build the Hadoop source code on a VM running Ubuntu 12.04 LTS. The maven command I am running from the hadoop base directory is:

    mvn clean install -U

Console output:

Tests run: 9, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 2.392 sec <<< FAILURE! - in org.apache.hadoop.ipc.TestDecayRpcScheduler
testAccumulate(org.apache.hadoop.ipc.TestDecayRpcScheduler)  Time elapsed: 0.084 sec  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<2>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.junit.Assert.assertEquals(Assert.java:542)
        at org.apache.hadoop.ipc.TestDecayRpcScheduler.testAccumulate(TestDecayRpcScheduler.java:136)

testPriority(org.apache.hadoop.ipc.TestDecayRpcScheduler)  Time elapsed: 0.052 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.junit.Assert.assertEquals(Assert.java:542)
        at org.apache.hadoop.ipc.TestDecayRpcScheduler.testPriority(TestDecayRpcScheduler.java:197)

Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 111.519 sec <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverControllerStress
testExpireBackAndForth(org.apache.hadoop.ha.TestZKFailoverControllerStress)  Time elapsed: 45.46 sec  <<< ERROR!
java.lang.Exception: test timed out after 4 milliseconds
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.ha.MiniZKFCCluster.waitForHAState(MiniZKFCCluster.java:164)
        at org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:236)
        at org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:79)

Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 62.514 sec <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
testGracefulFailoverFailBecomingStandby(org.apache.hadoop.ha.TestZKFailoverController)  Time elapsed: 15.062 sec  <<< ERROR!
java.lang.Exception: test timed out after 15000 milliseconds
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
        at org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
        at org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
        at org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
        at org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
        at org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
        at org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
        at org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailoverFailBecomingStandby(TestZKFailoverController.java:532)

When I skip the tests, the source code compiles successfully:

    mvn clean install -U -DskipTests

Is there something I'm doing incorrectly that's causing the test cases to fail? I'd really appreciate any insight from folks who have gone through this process before. I've looked at the JIRAs labeled newbie (http://wiki.apache.org/hadoop/HowToContribute) but didn't find promising leads.

Thanks for the help!
-Vijay
[jira] [Resolved] (HADOOP-11331) rename shell daemon functions to specify java
     [ https://issues.apache.org/jira/browse/HADOOP-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HADOOP-11331.
---------------------------------------
    Resolution: Won't Fix

Closing this as won't fix. It's unnecessary at this time.

> rename shell daemon functions to specify java
> ---------------------------------------------
>
>                 Key: HADOOP-11331
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11331
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Allen Wittenauer
>         Attachments: HADOOP-11331.txt
>
> In order to support KMS and other outliers, we should rename the daemon
> handlers to specify java vs. non-java runtimes.
Re: Thinking ahead to hadoop-2.7
Thanks for starting this thread, Arun. Your proposal seems reasonable to me.

I suppose we would like new features and improvements to go into 2.8 then? If yes, what time frame are we looking at for 2.8? Looking at YARN, it would be nice to get a release with shared-cache and a stable version of reservation work. I believe they are well under way and should be ready in a few weeks.

Regarding 2.7 release specifics, do you plan to create a branch off of current branch-2.6 and update all issues marked fixed for 2.7 to be fixed for 2.8?

Thanks
Karthik

On Mon, Dec 1, 2014 at 2:42 PM, Arun Murthy wrote:

> Folks,
>
> With hadoop-2.6 out it's time to think ahead.
>
> As we've discussed in the past, 2.6 was the last release which supports
> JDK6.
>
> I'm thinking it's best to try get 2.7 out in a few weeks (maybe by the
> holidays) with just the switch to JDK7 (HADOOP-10530) and possibly
> support for JDK-1.8 (as a runtime) via HADOOP-11090.
>
> This way we can start with the stable base of 2.6 and switch over to
> JDK7 to allow our downstream projects to use either for a short time
> (hadoop-2.6 or hadoop-2.7).
>
> I'll update the Roadmap wiki accordingly.
>
> Thoughts?
>
> thanks,
> Arun
Thinking ahead to hadoop-2.7
Folks,

With hadoop-2.6 out it's time to think ahead.

As we've discussed in the past, 2.6 was the last release which supports JDK6.

I'm thinking it's best to try get 2.7 out in a few weeks (maybe by the holidays) with just the switch to JDK7 (HADOOP-10530) and possibly support for JDK-1.8 (as a runtime) via HADOOP-11090.

This way we can start with the stable base of 2.6 and switch over to JDK7 to allow our downstream projects to use either for a short time (hadoop-2.6 or hadoop-2.7).

I'll update the Roadmap wiki accordingly.

Thoughts?

thanks,
Arun
Re: a friendly suggestion for developers when uploading patches
On Wed, Nov 26, 2014 at 2:58 PM, Karthik Kambatla wrote:

> Yongjun, thanks for starting this thread. I personally like Steve's
> suggestions, but think two digits should be enough.
>
> I propose we limit the restrictions to versioning the patches with version
> numbers and .patch extension. People have their own preferences for the
> rest of the name (e.g. MAPREDUCE, MapReduce, MR, mr, mapred) and I don't
> see a gain in forcing everyone to use one.
>
> Putting the suggestions (tight and loose) on the wiki would help new
> contributors as well.

+1

best,
Colin

> On Wed, Nov 26, 2014 at 2:43 PM, Eric Payne wrote:
> >
> > +1. The "different color for newest patch" doesn't work very well if you
> > are color blind, so I do appreciate a revision number in the name.
> >
> > From: Yongjun Zhang
> > To: common-dev@hadoop.apache.org
> > Sent: Tuesday, November 25, 2014 11:37 PM
> > Subject: Re: a friendly suggestion for developers when uploading patches
> >
> > Thanks Harsh for the info and Andrew for sharing the script. It looks that
> > the script is intelligent enough to pick the latest attachment even if all
> > attachments have the same name.
> >
> > Yet, I hope we use the following as the guideline for patch names:
> >
> > <projectName>-<jiraNum>-<revNum>.patch
> >
> > So we can easily identify individual patch revs.
> >
> > Thanks.
> >
> > --Yongjun
> >
> > On Tue, Nov 25, 2014 at 5:54 PM, Andrew Wang wrote:
> >
> > > This might be a good time to mention my fetch-patch script, I use it to
> > > easily download the latest attachment on a jira:
> > >
> > > https://github.com/umbrant/dotfiles/blob/master/bin/fetch-patch
> > >
> > > On Tue, Nov 25, 2014 at 5:44 PM, Harsh J wrote:
> > >
> > > > For the same filename, you can observe also that the JIRA colors the
> > > > latest one to be different than the older ones automatically - this is
> > > > what I rely on.
> > > >
> > > > On Sat, Nov 22, 2014 at 12:36 AM, Yongjun Zhang wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > When I look at patches uploaded to jiras, from time to time I notice that
> > > > > different revisions of the patch is uploaded with the same patch file name,
> > > > > some time for quite a few times. It's confusing which is which.
> > > > >
> > > > > I'd suggest that as a guideline, we do the following when uploading a patch:
> > > > >
> > > > >    - include a revision number in the patch file name.
> > > > >    - include a comment, stating that a new patch is uploaded, including the
> > > > >      revision number of the patch in the comment.
> > > > >
> > > > > This way, it's easier to refer to a specific version of a patch, and to
> > > > > know which patch a comment is made about.
> > > > >
> > > > > Hope that makes sense to you.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > --Yongjun
> > > >
> > > > --
> > > > Harsh J
Re: Switching to Java 7
Hi Steve,

I think the pre-commit Jenkins jobs are running Java 6; they need to be switched to Java 7 as well.

Haohui

> On Dec 1, 2014, at 5:41 AM, Steve Loughran wrote:
>
> I'm planning to flip the Javac language & JVM settings to java 7 this week
>
> https://issues.apache.org/jira/browse/HADOOP-10530
>
> the latest patch also has a profile that sets the language to java8, for
> the curious; one bit of code will need patching to compile there.
>
> The plan for the change ASF-side is:
>
> 1 -switch jenkins patch/regular commits to java7
> 2 -apply the HADOOP-10530 patch
>
> locally, anyone who runs Jenkins with Java 6 will have to upgrade/switch
> JVM after (2), and anyone with JAVA_HOME set to a jdk 6 JDK is going to
> have to edit their environment variables.
>
> Is there anything else we need to do before the big Java 7 switch?
>
> -Steve
Switching to Java 7
I'm planning to flip the Javac language & JVM settings to java 7 this week:

https://issues.apache.org/jira/browse/HADOOP-10530

The latest patch also has a profile that sets the language to java8, for the curious; one bit of code will need patching to compile there.

The plan for the change ASF-side is:

1 - switch jenkins patch/regular commits to java7
2 - apply the HADOOP-10530 patch

Locally, anyone who runs Jenkins with Java 6 will have to upgrade/switch JVM after (2), and anyone with JAVA_HOME set to a jdk 6 JDK is going to have to edit their environment variables.

Is there anything else we need to do before the big Java 7 switch?

-Steve
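For anyone curious what flipping the Javac language settings amounts to in Maven terms, below is a hedged sketch of a typical maven-compiler-plugin stanza raised to Java 7. It is illustrative only, not the actual HADOOP-10530 patch; the element names are standard Maven, but where exactly Hadoop's poms carry these values may differ.

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <!-- accept Java 7 language features and emit Java 7 bytecode -->
          <source>1.7</source>
          <target>1.7</target>
        </configuration>
      </plugin>

Builds would then require a JDK 7 (or later) JAVA_HOME, which is why the Jenkins hosts need to switch before the patch lands.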