Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
I just filed YETUS-4 for supporting additional maven args.

https://issues.apache.org/jira/browse/YETUS-4

Theoretically, we should be able to run unit tests without a full `mvn
install`, right? The "test" phase comes before "package" or "install", so I
figured it only needed class files. Maybe the multi-module-ness screws this
up.
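
For instance, something like this might do it (untested, and using the module
path as it's written elsewhere in this thread; note the thread also says
hadoop-maven-plugins may still need a one-time install first):

    # build the hdfs module plus its in-reactor dependencies, then run its tests
    mvn test -pl hadoop-hdfs-project/hdfs -am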

On Fri, Sep 18, 2015 at 9:23 PM, Roman Shaposhnik 
wrote:

> On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer 
> wrote:
> > As far as Yetus goes, we've got a JIRA open to provide for per-instance
> > caches when using the docker container code. I've got it in my head how I
> > think we can do it, but just haven't had a chance to code it.  So once that
> > gets written up and containers are turned on, the problem should go away
> > without any significant impact on test time.
> > Of course, that won't help the scheduled builds, but those happen at an
> > even smaller rate.
>
> I'm about to start doing quite a few dockerized builds on ASF Jenkins, and
> any best practices around caching packages and Maven repos would be greatly
> appreciated.
>
> If nothing else, that'll reduce the I/O load on ASF infra.
>
> Thanks,
> Roman.
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Roman Shaposhnik
On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer  wrote:
> As far as Yetus goes, we've got a JIRA open to provide for per-instance 
> caches when
> using the docker container code. I've got it in my head how I think we can do 
> it, but just
> haven't had a chance to code it.  So once that gets written up + turning on 
> containers
> should make the problem go away without any significant impact on test time.
> Of course, that won't help the scheduled builds but those happen at an even 
> smaller rate.

I'm about to start doing quite a few dockerized builds on ASF Jenkins, and any
best practices around caching packages and Maven repos would be greatly
appreciated.

If nothing else, that'll reduce the I/O load on ASF infra.

Thanks,
Roman.


Re: Local repo sharing for maven builds

2015-09-18 Thread Allen Wittenauer
a) Multi-module patches are always troublesome because they make the test system 
do significantly more work.  For Yetus, we've pared it down as far as we can go 
to get *some* speed increases, but if a patch does something like hit every 
pom.xml file, there's nothing that can be done to make it better other than 
splitting up the patch.

b) It's worth noting that it happens more often to HDFS patches because HDFS 
unit tests take too damn long.  Some individual tests take 10 minutes! They 
invariably collide with the various full builds (NOT precommit! The other 
builds that Steve pointed out we're ignoring).  While Yetus has support for 
running unit tests in parallel, Hadoop does not.

c) mvn install is pretty much required for a not insignificant number of 
multi-module patches, especially if they hit hadoop-common.  For a large chunk 
of the "oh, just make it one patch" cases, it's effectively a death sentence on 
the Jenkins side.

d) I'm a big fan of d. 

e) File a bug against Yetus and we'll add the ability to set ant/gradle/maven 
args from the command line; a hypothetical example follows this list.  I 
thought I had it in there when I rewrote the support for multiple build tools 
(gradle, etc.), but I clearly dropped it on the floor.

f) Any time you "give the option to the patch submitter", you generate a not 
insignificant amount of work on the test infrastructure to determine intent, 
because it effectively means implementing some parsing of a comment.  It's not 
particularly easy because humans rarely follow the rules.  Just see how well we 
follow the Hadoop Compatibility Guidelines. Har har.  No really: people still 
struggle with filling in JIRA headers correctly and naming patches to trigger 
the appropriate branch for the test.

g) It's worth noting that Hadoop trunk is *not* using the latest test-patch 
code.  So there are some significant improvements on the way as soon as we get 
a release out the door.
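
For (e), a purely hypothetical invocation (the option name and patch file below
are invented for illustration; whatever actually lands will define the real
interface):

    # hypothetical flag: pass extra maven args straight through to the build
    ./test-patch.sh --mvn-args='-Dmaven.repo.local=/tmp/job-repo' HADOOP-12345.patch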



On Sep 18, 2015, at 7:56 PM, Ming Ma  wrote:

> The increase of frequency might have been due to the refactor of
> hadoop-hdfs-client-*.jar out of the main hadoop-hdfs-*.jar. I don't have the
> overall metrics of how often this happens when anyone changes protobuf. But
> based on HDFS-9004, 4 of 5 runs have this issue, which is a lot for any patch
> that changes APIs. This isn't limited to HDFS. There are cases where YARN API
> changes cause MR unit tests to fail.
> 
> So far, the workaround I use is to keep resubmitting the build until it
> succeeds. Another approach we can consider is to provide an option for the
> patch submitter to use their own local repo when submitting the patch. That
> way, the majority of patches can still use the shared local repo.
> 
> On Fri, Sep 18, 2015 at 3:14 PM, Andrew Wang 
> wrote:
> 
>> Okay, some browsing of Jenkins docs [1] says that we could key the
>> maven.repo.local off of $EXECUTOR_NUMBER to do per-executor repos like
>> Bernd recommended, but that still requires some hook into test-patch.sh.
>> 
>> Regarding install, I thought all we needed to install was
>> hadoop-maven-plugins, but we do more than that now in test-patch.sh. Not
>> sure if we can reduce that.
>> 
>> [1]
>> 
>> https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables
>> 
>> On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer 
>> wrote:
>> 
>>> 
>>> The collisions have been happening for about a year now.   The frequency
>>> is increasing, but not enough to be particularly worrisome. (So I'm
>>> slightly amused that one blowing up is suddenly a major freakout.)
>>> 
>>> Making changes to the configuration without knowing what one is doing is
>>> probably a bad idea. For example, if people are removing the shared
>> cache,
>>> I hope they're also prepared for the bitching that is going to go with
>> the
>>> extremely significant slow down caused by downloading the java prereqs
>> for
>>> building for every test...
>>> 
>>> As far as Yetus goes, we've got a JIRA open to provide for per-instance
>>> caches when using the docker container code. I've got it in my head how I
>>> think we can do it, but just haven't had a chance to code it.  So once
>> that
>>> gets written up + turning on containers should make the problem go away
>>> without any significant impact on test time.  Of course, that won't help
>>> the scheduled builds but those happen at an even smaller rate.
>>> 
>>> 
>>> On Sep 18, 2015, at 12:19 PM, Andrew Wang 
>>> wrote:
>>> 
 Sangjin, you should have access to the precommit jobs if you log in
>> with
 your Apache credentials, even as a branch committer.
 
 https://builds.apache.org/job/PreCommit-HDFS-Build/configure
 
 The actual maven invocation is managed by test-patch.sh though.
 test-patch.sh has a MAVEN_ARGS which looks like what we want, but I
>> don't
 think we can just set it before calling test-patch, since it'd get
>>> squashed
 by setup_defaults.
 
 Allen/Chris/Yetus folks, any guidance here?

Re: Local repo sharing for maven builds

2015-09-18 Thread Ming Ma
The increase of frequency might have been due to the refactor of
hadoop-hdfs-client-*.jar out of the main hadoop-hdfs-*.jar. I don't have the
overall metrics of how often this happens when anyone changes protobuf. But
based on HDFS-9004, 4 of 5 runs have this issue, which is a lot for any patch
that changes APIs. This isn't limited to HDFS. There are cases where YARN API
changes cause MR unit tests to fail.

So far, the workaround I use is to keep resubmitting the build until it
succeeds. Another approach we can consider is to provide an option for the
patch submitter to use their own local repo when submitting the patch. That
way, the majority of patches can still use the shared local repo.

On Fri, Sep 18, 2015 at 3:14 PM, Andrew Wang 
wrote:

> Okay, some browsing of Jenkins docs [1] says that we could key the
> maven.repo.local off of $EXECUTOR_NUMBER to do per-executor repos like
> Bernd recommended, but that still requires some hook into test-patch.sh.
>
> Regarding install, I thought all we needed to install was
> hadoop-maven-plugins, but we do more than that now in test-patch.sh. Not
> sure if we can reduce that.
>
> [1]
>
> https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables
>
> On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer 
> wrote:
>
> >
> > The collisions have been happening for about a year now.   The frequency
> > is increasing, but not enough to be particularly worrisome. (So I'm
> > slightly amused that one blowing up is suddenly a major freakout.)
> >
> > Making changes to the configuration without knowing what one is doing is
> > probably a bad idea. For example, if people are removing the shared
> cache,
> > I hope they're also prepared for the bitching that is going to go with
> the
> > extremely significant slow down caused by downloading the java prereqs
> for
> > building for every test...
> >
> > As far as Yetus goes, we've got a JIRA open to provide for per-instance
> > caches when using the docker container code. I've got it in my head how I
> > think we can do it, but just haven't had a chance to code it.  So once
> that
> > gets written up + turning on containers should make the problem go away
> > without any significant impact on test time.  Of course, that won't help
> > the scheduled builds but those happen at an even smaller rate.
> >
> >
> > On Sep 18, 2015, at 12:19 PM, Andrew Wang 
> > wrote:
> >
> > > Sangjin, you should have access to the precommit jobs if you log in
> with
> > > your Apache credentials, even as a branch committer.
> > >
> > > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> > >
> > > The actual maven invocation is managed by test-patch.sh though.
> > > test-patch.sh has a MAVEN_ARGS which looks like what we want, but I
> don't
> > > think we can just set it before calling test-patch, since it'd get
> > squashed
> > > by setup_defaults.
> > >
> > > Allen/Chris/Yetus folks, any guidance here?
> > >
> > > Thanks,
> > > Andrew
> > >
> > > On Fri, Sep 18, 2015 at 11:55 AM,  wrote:
> > >
> > >> You can use one per build processor, that reduces concurrent updates
> but
> > >> still keeps the cache function. And then try to avoid using install.
> > >>
> > >> --
> > >> http://bernd.eckenfels.net
> > >>
> > >> -Original Message-
> > >> From: Andrew Wang 
> > >> To: "common-dev@hadoop.apache.org" 
> > >> Cc: Andrew Bayer , Sangjin Lee <
> > sj...@twitter.com>,
> > >> Lei Xu , infrastruct...@apache.org
> > >> Sent: Fr., 18 Sep. 2015 20:42
> > >> Subject: Re: Local repo sharing for maven builds
> > >>
> > >> I think each job should use a maven.repo.local within its workspace
> like
> > >> abayer said. This means lots of downloading, but it's isolated.
> > >>
> > >> If we care about download time, we could also bootstrap with a tarred
> > >> .m2/repository after we've run a `mvn compile`, so before it installs
> > the
> > >> hadoop artifacts.
> > >>
> > >> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma  >
> > >> wrote:
> > >>
> > >>> +hadoop common dev. Any suggestions?
> > >>>
> > >>>
> > >>> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer <
> andrew.ba...@gmail.com
> > >
> > >>> wrote:
> > >>>
> >  You can change your maven call to use a different repository - I believe
> >  you do that with -Dmaven.repo.local=path/to/repo
> >  On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> > 
> > > Hi,
> > >
> > > We are seeing some strange behaviors in HDFS precommit build. It
> > seems
> > > like it is caused by the local repo on the same machine being used
> by
> > > different concurrent jobs which can cause issues.
> > >
> > > In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> > > "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> > > SNAPSHOT.jar. HDFS-9004 adds some new method to
> > >>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> > > In the precommit build for HDFS-9004, unit tests for
> > >

Jenkins build is back to normal : Hadoop-common-trunk-Java8 #428

2015-09-18 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-12424) Add a function to build unique cache key for Token

2015-09-18 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-12424:
--

 Summary: Add a function to build unique cache key for Token
 Key: HADOOP-12424
 URL: https://issues.apache.org/jira/browse/HADOOP-12424
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #427

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[Arun Suresh] YARN-3920. FairScheduler container reservation on a node should 
be configurable to limit it to large containers (adhoot via asuresh)

--
[...truncated 5604 lines...]
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.132 sec - in 
org.apache.hadoop.io.TestMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestUTF8
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.526 sec - in 
org.apache.hadoop.io.TestUTF8
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.045 sec - in 
org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.755 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritable
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.326 sec - in 
org.apache.hadoop.io.TestWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.324 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileAppend
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.412 sec - in 
org.apache.hadoop.io.TestEnumSetWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestMapWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in 
org.apache.hadoop.io.TestMapWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.676 sec - in 
org.apache.hadoop.io.TestSequenceFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.274 sec - in 
org.apache.hadoop.io.TestObjectWritableProtos
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.424 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.209 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Tests run: 3, Failures: 0, Err

Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
Okay, some browsing of Jenkins docs [1] says that we could key the
maven.repo.local off of $EXECUTOR_NUMBER to do per-executor repos like
Bernd recommended, but that still requires some hook into test-patch.sh.

Regarding install, I thought all we needed to install was
hadoop-maven-plugins, but we do more than that now in test-patch.sh. Not
sure if we can reduce that.

[1]
https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables
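
As a sketch of what that hook might end up doing (untested; the variable and
path names here are assumptions, not actual test-patch.sh code):

    # one local repo per Jenkins executor, so concurrent jobs never collide
    REPO="${HOME}/m2repo-executor-${EXECUTOR_NUMBER}"
    mvn -Dmaven.repo.local="${REPO}" clean test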

On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer  wrote:

>
> The collisions have been happening for about a year now.   The frequency
> is increasing, but not enough to be particularly worrisome. (So I'm
> slightly amused that one blowing up is suddenly a major freakout.)
>
> Making changes to the configuration without knowing what one is doing is
> probably a bad idea. For example, if people are removing the shared cache,
> I hope they're also prepared for the bitching that is going to go with the
> extremely significant slow down caused by downloading the java prereqs for
> building for every test...
>
> As far as Yetus goes, we've got a JIRA open to provide for per-instance
> caches when using the docker container code. I've got it in my head how I
> think we can do it, but just haven't had a chance to code it.  So once that
> gets written up + turning on containers should make the problem go away
> without any significant impact on test time.  Of course, that won't help
> the scheduled builds but those happen at an even smaller rate.
>
>
> On Sep 18, 2015, at 12:19 PM, Andrew Wang 
> wrote:
>
> > Sangjin, you should have access to the precommit jobs if you log in with
> > your Apache credentials, even as a branch committer.
> >
> > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> >
> > The actual maven invocation is managed by test-patch.sh though.
> > test-patch.sh has a MAVEN_ARGS which looks like what we want, but I don't
> > think we can just set it before calling test-patch, since it'd get
> squashed
> > by setup_defaults.
> >
> > Allen/Chris/Yetus folks, any guidance here?
> >
> > Thanks,
> > Andrew
> >
> > On Fri, Sep 18, 2015 at 11:55 AM,  wrote:
> >
> >> You can use one per build processor, that reduces concurrent updates but
> >> still keeps the cache function. And then try to avoid using install.
> >>
> >> --
> >> http://bernd.eckenfels.net
> >>
> >> -Original Message-
> >> From: Andrew Wang 
> >> To: "common-dev@hadoop.apache.org" 
> >> Cc: Andrew Bayer , Sangjin Lee <
> sj...@twitter.com>,
> >> Lei Xu , infrastruct...@apache.org
> >> Sent: Fr., 18 Sep. 2015 20:42
> >> Subject: Re: Local repo sharing for maven builds
> >>
> >> I think each job should use a maven.repo.local within its workspace like
> >> abayer said. This means lots of downloading, but it's isolated.
> >>
> >> If we care about download time, we could also bootstrap with a tarred
> >> .m2/repository after we've run a `mvn compile`, so before it installs
> the
> >> hadoop artifacts.
> >>
> >> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
> >> wrote:
> >>
> >>> +hadoop common dev. Any suggestions?
> >>>
> >>>
> >>> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer  >
> >>> wrote:
> >>>
>  You can change your maven call to use a different repository - I believe
>  you do that with -Dmaven.repo.local=path/to/repo
>  On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> 
> > Hi,
> >
> > We are seeing some strange behaviors in HDFS precommit build. It
> seems
> > like it is caused by the local repo on the same machine being used by
> > different concurrent jobs which can cause issues.
> >
> > In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> > "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> > SNAPSHOT.jar. HDFS-9004 adds some new method to
> >>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> > In the precommit build for HDFS-9004, unit tests for
> >>> "hadoop-hdfs-project/hdfs"
> > complain the method isn't defined
> > https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> > Interestingly sometimes it just works fine
> > https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> >
> > So we are suspecting that there is another job running at the same time
> > that published a different version of hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> > (one without the new methods) to the local repo shared by all jobs on that
> > machine.
> >
> > If the above analysis is correct, what is the best way to fix the
> >> issue
> > so that different jobs can use their own maven local repo for build
> >> and
> > test?
> >
> > Thanks.
> >
> > Ming
> >
> 
> >>>
> >>
>
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Allen Wittenauer

The collisions have been happening for about a year now.   The frequency is 
increasing, but not enough to be particularly worrisome. (So I'm slightly 
amused that one blowing up is suddenly a major freakout.) 

Making changes to the configuration without knowing what one is doing is 
probably a bad idea. For example, if people are removing the shared cache, I 
hope they're also prepared for the bitching that is going to go with the 
extremely significant slowdown caused by downloading the Java prereqs for 
building on every test...

As far as Yetus goes, we've got a JIRA open to provide for per-instance caches 
when using the docker container code. I've got it in my head how I think we can 
do it, but just haven't had a chance to code it.  So once that gets written up 
and containers are turned on, the problem should go away without any 
significant impact on test time.  Of course, that won't help the scheduled 
builds, but those happen at an even smaller rate.
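
For illustration only (this isn't the planned Yetus implementation, and the
image name is made up), a per-instance cache could look roughly like:

    # give each container its own maven repo, keyed off the Jenkins executor
    docker run --rm \
      -v "${WORKSPACE}/m2repo-${EXECUTOR_NUMBER}:/home/jenkins/.m2/repository" \
      hadoop-build-image mvn test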


On Sep 18, 2015, at 12:19 PM, Andrew Wang  wrote:

> Sangjin, you should have access to the precommit jobs if you log in with
> your Apache credentials, even as a branch committer.
> 
> https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> 
> The actual maven invocation is managed by test-patch.sh though.
> test-patch.sh has a MAVEN_ARGS which looks like what we want, but I don't
> think we can just set it before calling test-patch, since it'd get squashed
> by setup_defaults.
> 
> Allen/Chris/Yetus folks, any guidance here?
> 
> Thanks,
> Andrew
> 
> On Fri, Sep 18, 2015 at 11:55 AM,  wrote:
> 
>> You can use one per build processor, that reduces concurrent updates but
>> still keeps the cache function. And then try to avoid using install.
>> 
>> --
>> http://bernd.eckenfels.net
>> 
>> -Original Message-
>> From: Andrew Wang 
>> To: "common-dev@hadoop.apache.org" 
>> Cc: Andrew Bayer , Sangjin Lee ,
>> Lei Xu , infrastruct...@apache.org
>> Sent: Fr., 18 Sep. 2015 20:42
>> Subject: Re: Local repo sharing for maven builds
>> 
>> I think each job should use a maven.repo.local within its workspace like
>> abayer said. This means lots of downloading, but it's isolated.
>> 
>> If we care about download time, we could also bootstrap with a tarred
>> .m2/repository after we've run a `mvn compile`, so before it installs the
>> hadoop artifacts.
>> 
>> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
>> wrote:
>> 
>>> +hadoop common dev. Any suggestions?
>>> 
>>> 
>>> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
>>> wrote:
>>> 
 You can change your maven call to use a different repository - I believe
 you do that with -Dmaven.repo.local=path/to/repo
 On Sep 18, 2015 19:39, "Ming Ma"  wrote:
 
> Hi,
> 
> We are seeing some strange behaviors in HDFS precommit build. It seems
> like it is caused by the local repo on the same machine being used by
> different concurrent jobs which can cause issues.
> 
> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> SNAPSHOT.jar. HDFS-9004 adds some new method to
>>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> In the precommit build for HDFS-9004, unit tests for
>>> "hadoop-hdfs-project/hdfs"
> complain the method isn't defined
> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> Interestingly sometimes it just works fine
> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> 
> So we are suspecting that there is another job running at the same
>> time
> that published different version of
>>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> which doesn't have the new methods defined to the local repo which is
> shared by all jobs on that machine.
> 
> If the above analysis is correct, what is the best way to fix the
>> issue
> so that different jobs can use their own maven local repo for build
>> and
> test?
> 
> Thanks.
> 
> Ming
> 
 
>>> 
>> 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #426

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[wheat9] MAPREDUCE-6483. Replace deprecated method NameNode.getUri() with 
DFSUtilClient.getNNUri() in TestMRCredentials. Contributed by Mingliang Liu.

--
[...truncated 3758 lines...]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.143 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.994 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 


Build failed in Jenkins: Hadoop-common-trunk-Java8 #425

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[jlowe] Update CHANGES.txt to reflect commit of MR-5982 to 2.7.2

--
[...truncated 5939 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.708 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.fs.contract.rawlocal.TestRawLocalContractUnderlyingFileBehavior
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawLocalContractUnderlyingFileBehavior
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.792 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.487 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.52 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.471 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.442 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.567 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.677 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.642 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.847 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.643 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.757 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.715 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.853 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename

Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
Sangjin, you should have access to the precommit jobs if you log in with
your Apache credentials, even as a branch committer.

https://builds.apache.org/job/PreCommit-HDFS-Build/configure

The actual maven invocation is managed by test-patch.sh though.
test-patch.sh has a MAVEN_ARGS which looks like what we want, but I don't
think we can just set it before calling test-patch, since it'd get squashed
by setup_defaults.
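
A toy reproduction of the squashing, with setup_defaults here as a stand-in for
the real function in test-patch.sh (assumed behavior, not the actual code):

    export MAVEN_ARGS="-Dmaven.repo.local=/tmp/per-job-repo"
    setup_defaults() { MAVEN_ARGS=""; }  # stand-in: unconditional assignment
    setup_defaults
    echo "MAVEN_ARGS='${MAVEN_ARGS}'"    # prints an empty value: ours was lost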

Allen/Chris/Yetus folks, any guidance here?

Thanks,
Andrew

On Fri, Sep 18, 2015 at 11:55 AM,  wrote:

> You can use one per build processor, that reduces concurrent updates but
> still keeps the cache function. And then try to avoid using install.
>
> --
> http://bernd.eckenfels.net
>
> -Original Message-
> From: Andrew Wang 
> To: "common-dev@hadoop.apache.org" 
> Cc: Andrew Bayer , Sangjin Lee ,
> Lei Xu , infrastruct...@apache.org
> Sent: Fr., 18 Sep. 2015 20:42
> Subject: Re: Local repo sharing for maven builds
>
> I think each job should use a maven.repo.local within its workspace like
> abayer said. This means lots of downloading, but it's isolated.
>
> If we care about download time, we could also bootstrap with a tarred
> .m2/repository after we've run a `mvn compile`, so before it installs the
> hadoop artifacts.
>
> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
> wrote:
>
> > +hadoop common dev. Any suggestions?
> >
> >
> > On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
> > wrote:
> >
> > > You can change your maven call to use a different repository - I believe
> > > you do that with -Dmaven.repo.local=path/to/repo
> > > On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> > >
> > >> Hi,
> > >>
> > >> We are seeing some strange behaviors in HDFS precommit build. It seems
> > >> like it is caused by the local repo on the same machine being used by
> > >> different concurrent jobs which can cause issues.
> > >>
> > >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> > >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> > >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> > hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> > >> In the precommit build for HDFS-9004, unit tests for
> > "hadoop-hdfs-project/hdfs"
> > >> complain the method isn't defined
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> > >> Interestingly sometimes it just works fine
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> > >>
> > >> So we are suspecting that there is another job running at the same time
> > >> that published a different version of
> > >> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar (one without the new methods) to
> > >> the local repo shared by all jobs on that machine.
> > >>
> > >> If the above analysis is correct, what is the best way to fix the
> issue
> > >> so that different jobs can use their own maven local repo for build
> and
> > >> test?
> > >>
> > >> Thanks.
> > >>
> > >> Ming
> > >>
> > >
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread ecki
You can use one per build processor; that reduces concurrent updates but still 
keeps the cache function. And then try to avoid using install.
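
A rough sketch of both suggestions together (untested; the repo path is an
assumption):

    # per-processor repo, and 'verify' instead of 'install' so nothing is published
    mvn -Dmaven.repo.local="/home/jenkins/m2repo-${EXECUTOR_NUMBER}" verify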

-- 
http://bernd.eckenfels.net

-Original Message-
From: Andrew Wang 
To: "common-dev@hadoop.apache.org" 
Cc: Andrew Bayer , Sangjin Lee , Lei 
Xu , infrastruct...@apache.org
Sent: Fr., 18 Sep. 2015 20:42
Subject: Re: Local repo sharing for maven builds

I think each job should use a maven.repo.local within its workspace like
abayer said. This means lots of downloading, but it's isolated.

If we care about download time, we could also bootstrap with a tarred
.m2/repository after we've run a `mvn compile`, so before it installs the
hadoop artifacts.

On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
wrote:

> +hadoop common dev. Any suggestions?
>
>
> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
> wrote:
>
> > You can change your maven call to use a different repository - I believe
> > you do that with -Dmaven.repo.local=path/to/repo
> > On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> >
> >> Hi,
> >>
> >> We are seeing some strange behaviors in HDFS precommit build. It seems
> >> like it is caused by the local repo on the same machine being used by
> >> different concurrent jobs which can cause issues.
> >>
> >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> >> In the precommit build for HDFS-9004, unit tests for
> "hadoop-hdfs-project/hdfs"
> >> complain the method isn't defined
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> >> Interestingly sometimes it just works fine
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> >>
> >> So we are suspecting that there is another job running at the same time
> >> that published a different version of hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> >> (one without the new methods) to the local repo shared by all jobs on that
> >> machine.
> >>
> >> If the above analysis is correct, what is the best way to fix the issue
> >> so that different jobs can use their own maven local repo for build and
> >> test?
> >>
> >> Thanks.
> >>
> >> Ming
> >>
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Sangjin Lee
Are we using maven.repo.local in our pre-commit or commit jobs? We cannot
see the configuration of these Jenkins jobs.

On Fri, Sep 18, 2015 at 11:41 AM, Andrew Wang 
wrote:

> I think each job should use a maven.repo.local within its workspace like
> abayer said. This means lots of downloading, but it's isolated.
>
> If we care about download time, we could also bootstrap with a tarred
> .m2/repository after we've run a `mvn compile`, so before it installs the
> hadoop artifacts.
>
> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
> wrote:
>
> > +hadoop common dev. Any suggestions?
> >
> >
> > On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
> > wrote:
> >
> > > You can change your maven call to use a different repository - I believe
> > > you do that with -Dmaven.repo.local=path/to/repo
> > > On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> > >
> > >> Hi,
> > >>
> > >> We are seeing some strange behaviors in HDFS precommit build. It seems
> > >> like it is caused by the local repo on the same machine being used by
> > >> different concurrent jobs which can cause issues.
> > >>
> > >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> > >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> > >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> > hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> > >> In the precommit build for HDFS-9004, unit tests for
> > "hadoop-hdfs-project/hdfs"
> > >> complain the method isn't defined
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> > >> Interestingly sometimes it just works fine
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> > >>
> > >> So we are suspecting that there is another job running at the same time
> > >> that published a different version of
> > >> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar (one without the new methods) to
> > >> the local repo shared by all jobs on that machine.
> > >>
> > >> If the above analysis is correct, what is the best way to fix the
> issue
> > >> so that different jobs can use their own maven local repo for build
> and
> > >> test?
> > >>
> > >> Thanks.
> > >>
> > >> Ming
> > >>
> > >
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
I think each job should use a maven.repo.local within its workspace like
abayer said. This means lots of downloading, but it's isolated.

If we care about download time, we could also bootstrap with a tarred
.m2/repository after we've run a `mvn compile`, so before it installs the
hadoop artifacts.
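
A rough sketch of the bootstrap idea (untested; file names and paths are made
up):

    # prime a fresh repo with third-party deps only, then snapshot it
    mvn -Dmaven.repo.local="$WORKSPACE/m2repo" compile
    tar -czf /tmp/m2-bootstrap.tar.gz -C "$WORKSPACE" m2repo
    # each later job unpacks the snapshot instead of re-downloading
    tar -xzf /tmp/m2-bootstrap.tar.gz -C "$WORKSPACE"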

On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
wrote:

> +hadoop common dev. Any suggestions?
>
>
> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
> wrote:
>
> > You can change your maven call to use a different repository - I believe
> > you do that with -Dmaven.repo.local=path/to/repo
> > On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> >
> >> Hi,
> >>
> >> We are seeing some strange behaviors in HDFS precommit build. It seems
> >> like it is caused by the local repo on the same machine being used by
> >> different concurrent jobs which can cause issues.
> >>
> >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> >> In the precommit build for HDFS-9004, unit tests for
> "hadoop-hdfs-project/hdfs"
> >> complain the method isn't defined
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> >> Interestingly sometimes it just works fine
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> >>
> >> So we are suspecting that there is another job running at the same time
> >> that published a different version of hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> >> (one without the new methods) to the local repo shared by all jobs on that
> >> machine.
> >>
> >> If the above analysis is correct, what is the best way to fix the issue
> >> so that different jobs can use their own maven local repo for build and
> >> test?
> >>
> >> Thanks.
> >>
> >> Ming
> >>
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Ming Ma
+hadoop common dev. Any suggestions?


On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
wrote:

> You can change your maven call to use a different repository - I believe
> you do that with -Dmaven.repo.local=path/to/repo
> On Sep 18, 2015 19:39, "Ming Ma"  wrote:
>
>> Hi,
>>
>> We are seeing some strange behaviors in HDFS precommit build. It seems
>> like it is caused by the local repo on the same machine being used by
>> different concurrent jobs which can cause issues.
>>
>> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
>> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
>> SNAPSHOT.jar. HDFS-9004 adds some new method to 
>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
>> In the precommit build for HDFS-9004, unit tests for 
>> "hadoop-hdfs-project/hdfs"
>> complain the method isn't defined
>> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
>> Interestingly sometimes it just works fine
>> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
>>
>> So we are suspecting that there is another job running at the same time
>> that published a different version of hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
>> (one without the new methods) to the local repo shared by all jobs on that
>> machine.
>>
>> If the above analysis is correct, what is the best way to fix the issue
>> so that different jobs can use their own maven local repo for build and
>> test?
>>
>> Thanks.
>>
>> Ming
>>
>


Re: [YETUS] Yetus TLP approved

2015-09-18 Thread Sean Busbey
The Apache Yetus dev list is now active:

http://mail-archives.apache.org/mod_mbox/yetus-dev/

The first post there has a pointer to the rest of the project
resources and the status of getting things set up.

On Thu, Sep 17, 2015 at 10:59 AM, Sean Busbey  wrote:
> Hi Folks!
>
> At yesterday's ASF board meeting the Apache Yetus TLP was approved. There's
> still some ASF Infra work to get done[1] before we can start transitioning
> our mailing list, jira, and code over.
>
> Thanks to all the folks in Hadoop who've helped us along this process. I
> look forward to our communities maintaining a healthy working relationship
> in the future.
>
> [1]: https://issues.apache.org/jira/browse/INFRA-10447
>
> --
> Sean



-- 
Sean


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #422

2015-09-18 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : Hadoop-Common-trunk #1719

2015-09-18 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #421

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[yzhang] HDFS-5802. NameNode does not check for inode type before traversing 
down a path. (Xiao Chen via Yongjun Zhang)

--
[...truncated 5634 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.587 sec - in 
org.apache.hadoop.util.TestChunkedArrayList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.235 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.009 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.126 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.114 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.272 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.525 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.726 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.383 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.21 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.63 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.175 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.857 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.198 sec - 
in org.apache.hadoop.util.TestWinUtils

Build failed in Jenkins: Hadoop-common-trunk-Java8 #420

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[vinayakumarb] HDFS-6955. DN should reserve disk space for a full block when 
creating tmp files (Contributed by Kanaka Kumar Avvaru)

--
[...truncated 5600 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.729 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.668 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.348 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileComparators
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.571 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparators
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSeek
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.507 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.611 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.073 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.957 sec - in 
org.apache.hadoop.io.file.tfile.TestTFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.992 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.766 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.059 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.626 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSplit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.916 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparator2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0

Build failed in Jenkins: Hadoop-Common-trunk #1718

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[vinayakumarb] HDFS-6955. DN should reserve disk space for a full block when 
creating tmp files (Contributed by Kanaka Kumar Avvaru)

--
[...truncated 8253 lines...]
  [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.annotation...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.filter...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.sink.ganglia...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.source...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
  [javadoc] Loading source files for package org.apache.hadoop.net...
  [javadoc] Loading source files for package org.apache.hadoop.net.unix...
  [javadoc] Loading source files for package org.apache.hadoop.security...
  [javadoc] Loading source files for package org.apache.hadoop.security.alias...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.authorize...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.protocolPB...
  [javadoc] Loading source files for package org.apache.hadoop.security.ssl...
  [javadoc] Loading source files for package org.apache.hadoop.security.token...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation.web...
  [javadoc] Loading source files for package org.apache.hadoop.service...
  [javadoc] Loading source files for package org.apache.hadoop.tools...
  [javadoc] Loading source files for package 
org.apache.hadoop.tools.protocolPB...
  [javadoc] Loading source files for package org.apache.hadoop.tracing...
  [javadoc] Loading source files for package org.apache.hadoop.util...
  [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
  [javadoc] Loading source files for package org.apache.hadoop.util.curator...
  [javadoc] Loading source files for package org.apache.hadoop.util.hash...
  [javadoc] Constructing Javadoc information...
  [javadoc] :25: warning: Unsafe is internal proprietary API and may be removed in a future release
  [javadoc] import sun.misc.Unsafe;
  [javadoc]^
  [javadoc] :46: warning: Unsafe is internal proprietary API and may be removed in a future release
  [javadoc] import sun.misc.Unsafe;
  [javadoc]^
  [javadoc] :50: warning: ResolverConfiguration is internal proprietary API and may be removed in a future release
  [javadoc] import sun.net.dns.ResolverConfiguration;
  [javadoc]   ^
  [javadoc] :51: warning: IPAddressUtil is internal proprietary API and may be removed in a future release
  [javadoc] import sun.net.util.IPAddressUtil;
  [javadoc]^
  [javadoc] :21: warning: Signal is internal proprietary API and may be removed in a future release
  [javadoc] import sun.misc.Signal;
  [javadoc]^
  [javadoc] :22: warning: SignalHandler is internal proprietary API and may be removed in a future release
  [javadoc] import sun.misc.SignalHandler;
  [javadoc]^
  [javadoc] :44: warning: SignalHandler is internal proprietary API and may be removed in a future release
  [javadoc]   private static class Handler implements SignalHandler {
  [javadoc]   ^
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doc

[jira] [Created] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Abhishek Agarwal (JIRA)
Abhishek Agarwal created HADOOP-12423:
--------------------------------------

 Summary: ShutdownHookManager throws exception if JVM is already 
being shut down
 Key: HADOOP-12423
 URL: https://issues.apache.org/jira/browse/HADOOP-12423
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Abhishek Agarwal
Assignee: Abhishek Agarwal
Priority: Minor


If the JVM is already shutting down, the static method in ShutdownHookManager 
will throw an IllegalStateException. This exception should be caught and 
ignored while registering the hooks. 
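
A minimal sketch of the suggested behavior (illustrative only, not the actual 
HADOOP-12423 patch; the helper class and method names here are made up): 
swallow the IllegalStateException that Runtime.addShutdownHook throws once 
JVM shutdown has begun, so that a late registration becomes a no-op:

public final class SafeShutdownHookRegistrar {

  private SafeShutdownHookRegistrar() {}

  /**
   * Tries to register a shutdown hook. Returns true on success, or false
   * if the JVM is already shutting down, in which case the exception is
   * swallowed as the description above suggests.
   */
  public static boolean tryRegister(Runnable hook) {
    try {
      Runtime.getRuntime().addShutdownHook(new Thread(hook, "shutdown-hook"));
      return true;
    } catch (IllegalStateException e) {
      // Thrown with "Shutdown in progress" when registration happens too
      // late; ignore it and report the failure via the return value.
      return false;
    }
  }
}

Callers that add hooks from code paths that may themselves run during 
shutdown can then check the return value instead of handling the exception.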




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #1717

2015-09-18 Thread Apache Jenkins Server
See 

Changes:

[stevel] YARN-2597 MiniYARNCluster should propagate reason for AHS not starting

--
[...truncated 5218 lines...]
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.488 sec - in 
org.apache.hadoop.metrics2.impl.TestSinkQueue
Running org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.44 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.384 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.47 sec - in 
org.apache.hadoop.metrics2.lib.TestMutableMetrics
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.394 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.457 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.metrics2.lib.TestUniqNames
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.223 sec - in 
org.apache.hadoop.metrics2.lib.TestInterns
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.543 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.422 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.415 sec - in 
org.apache.hadoop.conf.TestConfigurationSubclass
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.326 sec - in 
org.apache.hadoop.conf.TestGetInstances
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.276 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.586 sec - in 
org.apache.hadoop.conf.TestDeprecatedKeys
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 62, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.894 sec - 
in org.apache.hadoop.conf.TestConfiguration
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.406 sec - in 
org.apache.hadoop.conf.TestReconfiguration
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.65 sec - in 
org.apache.hadoop.conf.TestConfServlet
Running org.apache.hadoop.test.TestJUnitSetup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in 
org.apache.hadoop.test.TestJUnitSetup
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.185 sec - in 
org.apache.hadoop.test.TestMultithreadedTestUtil
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in 
org.apache.hadoop.metrics.TestMetricsServlet
Running org.apache.hadoop.metrics.spi.TestOutputRecord
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec - in 
org.apache.hadoop.metrics.spi.TestOutputRecord
Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in 
org.apache.hadoop.metrics.ganglia.TestGangliaContext
Running org.apache.hadoop.net.TestNetUtils
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.746 sec - in 
org.apache.hadoop.net.TestNetUtils
Running org.apache.hadoop.net.TestDNS
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec - in 
org.apache.hadoop.net.TestDNS
Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.369 sec - in 
org.apache.hadoop.net.TestSocketIOWithTimeout
Running org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Tests run: 8, Failures: