Jenkins build is back to normal : Hadoop-Common-trunk #1855

2015-10-15 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #1854

2015-10-15 Thread Apache Jenkins Server
See 

Changes:

[szetszwo] HDFS-9205. Do not schedule corrupt blocks for replication.  
(szetszwo)

[szetszwo] Move HDFS-9205 to trunk in CHANGES.txt.

--
[...truncated 5396 lines...]
Running org.apache.hadoop.crypto.TestOpensslCipher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.234 sec - in 
org.apache.hadoop.crypto.TestOpensslCipher
Running org.apache.hadoop.crypto.TestCryptoCodec
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.223 sec - in 
org.apache.hadoop.crypto.TestCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.577 sec - 
in org.apache.hadoop.crypto.TestCryptoStreams
Running org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Tests run: 14, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 12.828 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Running org.apache.hadoop.crypto.key.TestKeyProviderCryptoExtension
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.1 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderCryptoExtension
Running org.apache.hadoop.crypto.key.TestValueQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.184 sec - in 
org.apache.hadoop.crypto.key.TestValueQueue
Running org.apache.hadoop.crypto.key.TestKeyProviderFactory
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.627 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderFactory
Running org.apache.hadoop.crypto.key.TestKeyShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.663 sec - in 
org.apache.hadoop.crypto.key.TestKeyShell
Running org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.808 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Running org.apache.hadoop.crypto.key.TestKeyProvider
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec - in 
org.apache.hadoop.crypto.key.TestKeyProvider
Running org.apache.hadoop.crypto.key.kms.TestLoadBalancingKMSClientProvider
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.01 sec - in 
org.apache.hadoop.crypto.key.kms.TestLoadBalancingKMSClientProvider
Running org.apache.hadoop.crypto.key.TestCachingKeyProvider
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.138 sec - in 
org.apache.hadoop.crypto.key.TestCachingKeyProvider
Running org.apache.hadoop.crypto.TestCryptoStreamsNormal
Tests run: 14, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 7.748 sec - in 
org.apache.hadoop.crypto.TestCryptoStreamsNormal
Running org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.882 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.653 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Running org.apache.hadoop.crypto.random.TestOsSecureRandom
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.659 sec - in 
org.apache.hadoop.crypto.random.TestOsSecureRandom
Running org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.006 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.tracing.TestTraceUtils
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec - in 
org.apache.hadoop.tracing.TestTraceUtils
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in 
org.apache.hadoop.io.TestMD5Hash
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.606 sec - in 
org.apache.hadoop.io.serializer.TestSerializationFactory
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec - in 
org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec - in 
org.apache.hadoop.io.serializer.TestWritableSerialization
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.867 sec - in 
org.apache.hadoop.io.TestSecureIOUtils
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.425 sec - 

Build failed in Jenkins: Hadoop-Common-trunk #1853

2015-10-15 Thread Apache Jenkins Server
See 

Changes:

[vvasudev] YARN-4258. Add support for controlling capabilities for docker

--
[...truncated 5392 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec - in 
org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.284 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.889 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsConfig
Running org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.395 sec - in 
org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.392 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.551 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.sink.TestFileSink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.491 sec - in 
org.apache.hadoop.metrics2.sink.TestFileSink
Running org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.265 sec - in 
org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.log.TestLogLevel
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.859 sec - in 
org.apache.hadoop.log.TestLogLevel
Running org.apache.hadoop.log.TestLog4Json
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.406 sec - in 
org.apache.hadoop.log.TestLog4Json
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.078 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.402 sec - in 
org.apache.hadoop.ipc.TestIPCServerResponder
Running org.apache.hadoop.ipc.TestRPCWaitForProxy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.299 sec - in 
org.apache.hadoop.ipc.TestRPCWaitForProxy
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.351 sec - in 
org.apache.hadoop.ipc.TestSocketFactory
Running org.apache.hadoop.ipc.TestCallQueueManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.385 sec - in 
org.apache.hadoop.ipc.TestCallQueueManager
Running org.apache.hadoop.ipc.TestIdentityProviders
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.601 sec - in 
org.apache.hadoop.ipc.TestIdentityProviders
Running org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.617 sec - in 
org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.111 sec - in 
org.apache.hadoop.ipc.TestRPCCompatibility
Running org.apache.hadoop.ipc.TestProtoBufRpc
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.12 sec - in 
org.apache.hadoop.ipc.TestProtoBufRpc
Running org.apache.hadoop.ipc.TestMultipleProtocolServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.081 sec - in 
org.apache.hadoop.ipc.TestMultipleProtocolServer
Running org.apache.hadoop.ipc.TestRPCCallBenchmark
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.014 sec - in 
org.apache.hadoop.ipc.TestRPCCallBenchmark
Running org.apache.hadoop.ipc.TestRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.441 sec - in 
org.apache.hadoop.ipc.TestRetryCacheMetrics
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.948 sec - in 
org.apache.hadoop.ipc.TestMiniRPCBenchmark
Running org.apache.hadoop.ipc.TestIPC
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.019 sec - 
in org.apache.hadoop.ipc.TestIPC
Running org.apache.hadoop.ipc.TestDecayRpcScheduler
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.109 sec - in 

Re: [DISCUSS] About the details of JDK-8 support

2015-10-15 Thread Steve Loughran

> On 15 Oct 2015, at 00:28, Allen Wittenauer  wrote:
> 
> 
> If people want, I could setup a cut off of yetus master to run the jenkins 
> test-patch.  (multiple maven repos, docker support, multijdk support, … ) 
> Yetus would get some real world testing out of it and hadoop common-dev could 
> stop spinning in circles over some of the same issues month after month. ;)
> 
> 

+1.



[jira] [Created] (HADOOP-12480) Run precommit javadoc only for changed modules

2015-10-15 Thread Vinayakumar B (JIRA)
Vinayakumar B created HADOOP-12480:
--

 Summary: Run precommit javadoc only for changed modules
 Key: HADOOP-12480
 URL: https://issues.apache.org/jira/browse/HADOOP-12480
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Currently, the precommit javadoc check runs on the root of Hadoop.

IMO it is sufficient to run it only for the changed modules.
This way precommit will take even less time, as javadoc takes significant
time compared to the other checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] About the details of JDK-8 support

2015-10-15 Thread Karthik Kambatla
On Wed, Oct 14, 2015 at 4:28 PM, Allen Wittenauer  wrote:

>
> If people want, I could setup a cut off of yetus master to run the jenkins
> test-patch.  (multiple maven repos, docker support, multijdk support, … )
> Yetus would get some real world testing out of it and hadoop common-dev
> could stop spinning in circles over some of the same issues month after
> month. ;)
>

Seems like a step in the right direction.

Should we expect a downtime and is there a good/bad time to do this?


>
>
> On Oct 14, 2015, at 3:05 PM, Robert Kanter  wrote:
>
> > The only problem with trying to get the JDK 8 trunk builds green (or
> blue I
> > guess) is that it's like trying to hit a moving target because of how
> many
> > new commits keep coming in.  I was looking at fixing these a while ago,
> and
> > managed to at least make them compile and fixed (or worked with others to
> > fix) some of the unit tests.  I've been really busy on other tasks and
> > haven't had time to continue working on this in quite a while though.
> >
> > Currently, it looks like Common is still green mostly, Yarn is having a
> > build failure with checkstyle, MR has between 1 and 10 test failures, and
> > HDFS had between 3 and 10 test failures.
> >
> > I think it's going to be difficult to get these green, and to keep them
> > green, unless we get more buy in from everyone on new commits being
> tested
> > against JDK 8.  Otherwise, it's too hard to keep up with the number of
> > commits coming in, even if we do get it green.  Perhaps we could have
> > test-patch also run the patch against JDK 8?
> >
> >
> > - Robert
> >
> > On Wed, Oct 14, 2015 at 8:27 AM, Steve Loughran 
> > wrote:
> >
> >>
> >>> On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
> >>>
> >>> Just to echo Steve's idea -- if we're seriously considering supporting
> >>> JDK 8, maybe the first thing to do is to set up the jenkins to run
> >>> with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
> >>> need to play around with all the Jenkins knob?
> >>
> >> Jenkins is building with Java 7 and 8. All that's needed is to turn off
> >> the Java 7 build, which I will  happily do. The POM can be changed to
> set
> >> the minimum JVM version -though that's most likely to be visible to
> people
> >> building locally, as you'll need to make sure that you have access to
> java
> >> 7 and java 8 JVMs if you want to build and test for both.
> >>
> >> Jenkins-wise, the big issue is one I've mentioned before: the builds are
> >> failing and not enough people are caring.
> >>
> >>
> >>
> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/
> >>
> >> Please, let's fix this
> >>
> >>
>
>


[jira] [Reopened] (HADOOP-12436) GlobPattern regex library has performance issues with wildcard characters

2015-10-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-12436:
---

I've reverted this patch.

It appears to have broken the HDFS unit test TestGlobPaths.pTestCurlyBracket:
com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at 
pos 10: `myuser}{bc`
at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
at org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
at org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)

> GlobPattern regex library has performance issues with wildcard characters
> -
>
> Key: HADOOP-12436
> URL: https://issues.apache.org/jira/browse/HADOOP-12436
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.2.0, 2.7.1
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12436.01.patch, HADOOP-12436.02.patch, 
> HADOOP-12436.03.patch, HADOOP-12436.04.patch
>
>
> java.util.regex classes have performance problems with certain wildcard 
> patterns.  Namely, consecutive * characters in a file name (not properly 
> escaped as literals) will cause commands such as "hadoop fs -ls 
> file**name" to consume 100% CPU and probably never return in a reasonable 
> time (time scales with number of *'s). 
> Here is an example:
> {noformat}
> hadoop fs -touchz 
> /user/mattp/job_1429571161900_4222-1430338332599-tda%2D%2D\\\+\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\*\\\+\\\+\\\+...%270%27%28Stage-1430338580443-39-2000-SUCCEEDED-production%2Dhigh-1430338340360.jhist
> hadoop fs -ls 
> /user/mattp/job_1429571161900_4222-1430338332599-tda%2D%2D+**+++...%270%27%28Stage-1430338580443-39-2000-SUCCEEDED-production%2Dhigh-1430338340360.jhist
> {noformat}
> causes:
> {noformat}
> PID    COMMAND  %CPU   TIME
> 14526  java     100.0  01:18.85
> {noformat}
> Not every string of *'s causes this, but the above filename reproduces this 
> reliably.
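
For illustration, a minimal sketch of the failure mode (hypothetical demo
code, assuming the naive glob-to-regex translation in which each unescaped *
becomes .*):

{code:java}
import java.util.regex.Pattern;

public class GlobBacktrackDemo {
  public static void main(String[] args) {
    // A glob like "file**...**name" naively becomes "file.*.* ... .*name".
    // On input that almost matches, java.util.regex must try every way of
    // splitting the middle characters across the ".*" groups, so the search
    // space grows combinatorially with the number of stars. Ten stars is
    // already noticeably slow; the 30+ stars in the report effectively
    // never return.
    StringBuilder regex = new StringBuilder("file");
    for (int i = 0; i < 10; i++) {
      regex.append(".*");
    }
    regex.append("name");
    StringBuilder input = new StringBuilder("file");
    for (int i = 0; i < 20; i++) {
      input.append('x');
    }
    input.append("nam_"); // near-miss suffix forces exhaustive backtracking
    long start = System.nanoTime();
    boolean matched = Pattern.matches(regex.toString(), input.toString());
    System.out.printf("matched=%b in %d ms%n",
        matched, (System.nanoTime() - start) / 1_000_000);
  }
}
{code}

A linear-time engine such as re2j avoids the blow-up, but as the reopened
test failure above shows, it must also reproduce java.util.regex's tolerance
of constructs like the unmatched curly brackets in
TestGlobPaths.pTestCurlyBracket.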



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] About the details of JDK-8 support

2015-10-15 Thread Sangjin Lee
+1

On Thu, Oct 15, 2015 at 7:57 AM, Steve Loughran 
wrote:

>
> > On 15 Oct 2015, at 14:42, Karthik Kambatla  wrote:
> >
> > On Wed, Oct 14, 2015 at 4:28 PM, Allen Wittenauer 
> wrote:
> >
> >>
> >> If people want, I could setup a cut off of yetus master to run the
> jenkins
> >> test-patch.  (multiple maven repos, docker support, multijdk support, …
> )
> >> Yetus would get some real world testing out of it and hadoop common-dev
> >> could stop spinning in circles over some of the same issues month after
> >> month. ;)
> >>
> >
> > Seems like a step in the right direction.
> >
> > Should we expect a downtime and is there a good/bad time to do this?
>
>
>
> Weekends are the times to work on Jenkins.
>
> Do it on a Saturday morning (PST) and there's 24-48 h to stabilise before
> Monday morning PST.
>
> Two days' time, then?


Unsubscribe Me

2015-10-15 Thread vijay.verma



Re: [DISCUSS] About the details of JDK-8 support

2015-10-15 Thread Steve Loughran

> On 15 Oct 2015, at 14:42, Karthik Kambatla  wrote:
> 
> On Wed, Oct 14, 2015 at 4:28 PM, Allen Wittenauer  wrote:
> 
>> 
>> If people want, I could setup a cut off of yetus master to run the jenkins
>> test-patch.  (multiple maven repos, docker support, multijdk support, … )
>> Yetus would get some real world testing out of it and hadoop common-dev
>> could stop spinning in circles over some of the same issues month after
>> month. ;)
>> 
> 
> Seems like a step in the right direction.
> 
> Should we expect a downtime and is there a good/bad time to do this?



Weekends are the times to work on Jenkins.

Do it on a Saturday morning (PST) and there's 24-48 h to stabilise before
Monday morning PST.

Two days' time, then?

RE: Unsubscribe Me

2015-10-15 Thread Brahma Reddy Battula

Please go through the following link to unsubscribe.

https://hadoop.apache.org/mailing_lists.html



Thanks & Regards
 Brahma Reddy Battula

From: vijay.ve...@ubs.com [vijay.ve...@ubs.com]
Sent: Thursday, October 15, 2015 9:07 PM
To: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org
Cc: yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Unsubscribe Me


Build failed in Jenkins: Hadoop-Common-trunk #1860

2015-10-15 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-12475. Move attribution to 2.8.0 section of CHANGES.txt.

[cnauroth] HADOOP-12479. ProtocMojo does not log the reason for a protoc

[cnauroth] HADOOP-12481. JWTRedirectAuthenticationHandler doesn't Retain 
Original

--
[...truncated 5489 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.372 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec - in 
org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.705 sec - in 
org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.961 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.694 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.67 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.665 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.772 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.107 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.651 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.979 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.462 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.346 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.879 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.322 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.023 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.125 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.099 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.743 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.767 sec - in 
org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.137 sec - 
in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.47 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.133 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 12, Failures: 0, 

[jira] [Created] (HADOOP-12485) Don't prefer IPV4 stack on tests.

2015-10-15 Thread Elliott Clark (JIRA)
Elliott Clark created HADOOP-12485:
--

 Summary: Don't prefer IPV4 stack on tests.
 Key: HADOOP-12485
 URL: https://issues.apache.org/jira/browse/HADOOP-12485
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: HADOOP-11890
Reporter: Elliott Clark
Assignee: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #559

2015-10-15 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-12475. Move attribution to 2.8.0 section of CHANGES.txt.

[cnauroth] HADOOP-12479. ProtocMojo does not log the reason for a protoc

[cnauroth] HADOOP-12481. JWTRedirectAuthenticationHandler doesn't Retain 
Original

[jianhe] YARN-4000. RM crashes with NPE if leaf queue becomes parent queue 
during

--
[...truncated 5793 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.075 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.fs.contract.rawlocal.TestRawLocalContractUnderlyingFileBehavior
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawLocalContractUnderlyingFileBehavior
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.034 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.615 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.772 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.629 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.679 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.791 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.964 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.858 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.107 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.96 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.04 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.025 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 

Jenkins build is back to normal : Hadoop-Common-trunk #1861

2015-10-15 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-12486) Mockito missing in pom.xml of hadoop-kafka

2015-10-15 Thread Chengbing Liu (JIRA)
Chengbing Liu created HADOOP-12486:
--

 Summary: Mockito missing in pom.xml of hadoop-kafka
 Key: HADOOP-12486
 URL: https://issues.apache.org/jira/browse/HADOOP-12486
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.8.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu


Eclipse will generate build errors without the following:
{code}
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>test</scope>
</dependency>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #558

2015-10-15 Thread Apache Jenkins Server
See 

Changes:

[sjlee] HADOOP-12475. Replace guava Cache with ConcurrentHashMap for caching

--
[...truncated 5879 lines...]
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.203 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.855 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.202 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritable
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec - in 
org.apache.hadoop.io.TestWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.653 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileAppend
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.076 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.514 sec - in 
org.apache.hadoop.io.TestEnumSetWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestMapWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec - in 
org.apache.hadoop.io.TestMapWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.274 sec - in 
org.apache.hadoop.io.TestSequenceFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.311 sec - in 
org.apache.hadoop.io.TestObjectWritableProtos
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.476 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.225 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.371 sec - in 
org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, 

Re: [DISCUSS] About the details of JDK-8 support

2015-10-15 Thread Tsuyoshi Ozawa
> We should target source-level support of JDK 8 too, around which you outlined 
> a bunch of issues around dependencies. I also found a bunch of issues around 
> generating documentation, site etc. I propose that we track them under the 
> umbrella JIRA and make progress there first.

OK. I will summarise it on HADOOP-11090.

> If people want, I could setup a cut off of yetus master to run the jenkins
> test-patch.  (multiple maven repos, docker support, multijdk support, … )
> Yetus would get some real world testing out of it and hadoop common-dev
> could stop spinning in circles over some of the same issues month after
> month. ;)

Thanks very much, Allen.

> Seems like a step in the right direction.
> Should we expect a downtime and is there a good/bad time to do this?
>
> Weekends are the times to work on Jenkins.
> Do it on a Saturday morning (PST) and there's 24-48 h to stabilise before
> Monday morning PST.
> Two days' time, then?

+1

> sun.security.krb5.KrbApReq was creating a static MD5 digest object and not 
> synchronizing access.
> This has been fixed in jdk8u60.
>
> http://hg.openjdk.java.net/jdk8u/jdk8u60/jdk/rev/02d6b1096e89

Thanks for sharing, Kihwal. We should describe it on the wiki and in the
documentation. I'll do that based on the information in this thread.

Thanks,
- Tsuyoshi

On Thu, Oct 15, 2015 at 11:57 PM, Steve Loughran  wrote:
>
>> On 15 Oct 2015, at 14:42, Karthik Kambatla  wrote:
>>
>> On Wed, Oct 14, 2015 at 4:28 PM, Allen Wittenauer  wrote:
>>
>>>
>>> If people want, I could setup a cut off of yetus master to run the jenkins
>>> test-patch.  (multiple maven repos, docker support, multijdk support, … )
>>> Yetus would get some real world testing out of it and hadoop common-dev
>>> could stop spinning in circles over some of the same issues month after
>>> month. ;)
>>>
>>
>> Seems like a step in the right direction.
>>
>> Should we expect a downtime and is there a good/bad time to do this?
>
>
>
> Weekends are the times to work on Jenkins.
>
> Do it on a Saturday morning (PST) and there's 24-48 h to stabilise before
> Monday morning PST.
>
> Two days' time, then?


Build failed in Jenkins: Hadoop-Common-trunk #1859

2015-10-15 Thread Apache Jenkins Server
See 

Changes:

[sjlee] HADOOP-12475. Replace guava Cache with ConcurrentHashMap for caching

--
[...truncated 3879 lines...]
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 10 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.016 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.624 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.7.0_55
Building tree for all the packages and classes...
Generating 

Generating 

Generating 

Generating 

[jira] [Created] (HADOOP-12484) Single File Rename Throws Incorrectly In Potential Race Condition Scenarios

2015-10-15 Thread Gaurav Kanade (JIRA)
Gaurav Kanade created HADOOP-12484:
--

 Summary: Single File Rename Throws Incorrectly In Potential Race 
Condition Scenarios
 Key: HADOOP-12484
 URL: https://issues.apache.org/jira/browse/HADOOP-12484
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Gaurav Kanade






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12481) JWTRedirectAuthenticationHandler doesn't Retain Original Query String

2015-10-15 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-12481:


 Summary: JWTRedirectAuthenticationHandler doesn't Retain Original 
Query String
 Key: HADOOP-12481
 URL: https://issues.apache.org/jira/browse/HADOOP-12481
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Fix For: 2.8.0


Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.

The actual authentication is done by some external service that the handler 
will redirect to when there is no hadoop.auth cookie and no JWT token found in 
the incoming request.

Using JWT provides a number of benefits:

* It is not tied to any specific authentication mechanism - so it buys us many 
SSO integrations
* It is cryptographically verifiable for determining whether it can be trusted
* Checking for expiration allows for a limited lifetime and window for 
compromised use

This will introduce the use of nimbus-jose-jwt library for processing, 
validating and parsing JWT tokens.
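
As a hedged sketch of the kind of validation nimbus-jose-jwt enables
(illustrative only; the handler's real cookie handling, key management, and
class names are not shown here):

{code:java}
import java.security.interfaces.RSAPublicKey;
import java.util.Date;

import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.SignedJWT;

public final class JwtCheck {
  // Accept a token only if its signature verifies against the trusted key
  // and it has not expired.
  public static boolean isValid(String serializedJWT, RSAPublicKey publicKey) {
    try {
      SignedJWT jwt = SignedJWT.parse(serializedJWT);
      // Cryptographic verification: determines whether the token can be trusted.
      if (!jwt.verify(new RSASSAVerifier(publicKey))) {
        return false;
      }
      // Expiration check: a limited lifetime bounds the window for compromised use.
      Date expires = jwt.getJWTClaimsSet().getExpirationTime();
      return expires != null && expires.after(new Date());
    } catch (Exception e) {
      return false; // malformed or unparseable token
    }
  }
}
{code}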





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-15 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu reopened HADOOP-11685:
-

The same exceptions were found in one customer's cluster.

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> com.microsoft.windowsazure.storage.blob.CloudBlockBlob.commitBlockList(CloudBlockBlob.java:248)
>   at 
> 

[jira] [Created] (HADOOP-12482) Race condition in JMX cache update

2015-10-15 Thread Tony Wu (JIRA)
Tony Wu created HADOOP-12482:


 Summary: Race condition in JMX cache update
 Key: HADOOP-12482
 URL: https://issues.apache.org/jira/browse/HADOOP-12482
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Tony Wu
Assignee: Tony Wu


updateJmxCache() was changed in HADOOP-11301. However, the patch introduced a 
race condition. In the updateJmxCache() function in MetricsSourceAdapter.java:

{code:java}
  private void updateJmxCache() {
boolean getAllMetrics = false;
synchronized (this) {
  if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
// temporarilly advance the expiry while updating the cache
jmxCacheTS = Time.now() + jmxCacheTTL;
if (lastRecs == null) {
  getAllMetrics = true;
}
  } else {
return;
  }

  if (getAllMetrics) {
MetricsCollectorImpl builder = new MetricsCollectorImpl();
getMetrics(builder, true);
  }

  updateAttrCache();
  if (getAllMetrics) {
updateInfoCache();
  }
  jmxCacheTS = Time.now();
  lastRecs = null; // in case regular interval update is not running
}
  }
{code}

Notice that getAllMetrics is set to true when:
# jmxCacheTTL has passed
# lastRecs == null

lastRecs is set to null in the same function, but gets reassigned by 
getMetrics().

However getMetrics() can be called from a different thread:
# MetricsSystemImpl.onTimerEvent()
# MetricsSystemImpl.publishMetricsNow()

Consider the following sequence:
# updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
info. 
** lastRecs is set to null.
# metrics sources is updated with new value/field.
# getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
different thread getting the latest metrics. 
** lastRecs is updated (!= null).
# jmxCacheTTL passed.
# updateJmxCache() is called again via getMBeanInfo().
** However because lastRecs is already updated (!= null), getAllMetrics will 
not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
returns the old cached info.

We ran into this issue on a cluster where a new metric did not get published 
until much later.

The case can be made worse by a periodic call to getMetrics() (driven by an 
external program or script). In such case getMBeanInfo() may never be able to 
retrieve the new record.

The desired behavior should be that updateJmxCache() will guarantee to call 
updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
updateJmxCache() itself.
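
A minimal sketch of one way to provide that guarantee (an illustration, not
the committed patch): once the TTL has expired, rebuild both caches
unconditionally instead of keying the full refresh off lastRecs, which a
concurrent getMetrics() call may reassign:

{code:java}
  private void updateJmxCache() {
    synchronized (this) {
      if (Time.now() - jmxCacheTS < jmxCacheTTL) {
        return; // cache is still fresh
      }
      // Refresh the snapshot and both caches once per TTL window. A concurrent
      // getMetrics() that reassigns lastRecs can no longer suppress the
      // updateInfoCache() call, so new metrics become visible via getMBeanInfo().
      MetricsCollectorImpl builder = new MetricsCollectorImpl();
      getMetrics(builder, true);
      updateAttrCache();
      updateInfoCache();
      jmxCacheTS = Time.now();
      lastRecs = null; // in case the regular interval update is not running
    }
  }
{code}

This trades a slightly more expensive refresh (getMetrics() always runs with
getAll=true) for the ordering guarantee described above.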



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses

2015-10-15 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-12483:


 Summary: Maintain wrapped SASL ordering for postponed IPC responses
 Key: HADOOP-12483
 URL: https://issues.apache.org/jira/browse/HADOOP-12483
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.8.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


A SASL encryption algorithm (wrapping) may require a specific ordering for 
encrypted responses.  The IPC layer encrypts when the response is set, based on 
the assumption that it is sent immediately.  Postponed responses violate that 
assumption.
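
As a hedged illustration of the constraint (not Hadoop's actual IPC code; the
transport field below is a hypothetical stand-in for the connection), wrapping
has to happen in wire order, e.g. under the lock that serializes sends rather
than at the moment a postponed response is completed:

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

class OrderedWrappedSender {
  private final SaslServer saslServer;
  private final OutputStream transport; // hypothetical wire abstraction

  OrderedWrappedSender(SaslServer saslServer, OutputStream transport) {
    this.saslServer = saslServer;
    this.transport = transport;
  }

  // Encrypt at send time, inside the lock that serializes writes, so wrap()
  // is invoked in exactly the order responses reach the wire, even for
  // responses that were postponed and completed out of order.
  synchronized void send(byte[] plainResponse)
      throws SaslException, IOException {
    byte[] wrapped = saslServer.wrap(plainResponse, 0, plainResponse.length);
    transport.write(wrapped);
    transport.flush();
  }
}
{code}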



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)