Build failed in Jenkins: Hadoop-common-trunk-Java8 #751

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[vinayakumarb] HDFS-9426. Rollingupgrade finalization is not backward compatible

--
[...truncated 5787 lines...]
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.377 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.258 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.919 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.464 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.104 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.499 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.642 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.296 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.242 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.296 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.653 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.37 sec - in 
org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.517 sec - in 
org.apac

Jenkins build is back to normal : Hadoop-Common-trunk #2050

2015-11-25 Thread Apache Jenkins Server
See 



Re: Disable some of the Hudson integration comments on JIRA

2015-11-25 Thread Vinayakumar B
Yes, that's a good idea.

-Vinay

On Thu, Nov 26, 2015 at 10:29 AM, Zhe Zhang  wrote:

> I think that's a good idea. Thanks for the proposal Andrew.
>
> ---
> Zhe Zhang
>
> On Wed, Nov 25, 2015 at 5:41 PM, Andrew Wang 
> wrote:
>
> > Hi all,
> >
> > Right now we get something like 7 comments from Hudson whenever a change
> is
> > committed. Would anyone object if I turned off 6 of them? We have
> > variations like:
> >
> > Hadoop-trunk-Commit
> > Hadoop-Hdfs-trunk-Java8
> > Hadoop-Yarn-trunk
> > ...etc
> >
> > I propose leaving notifications on for just Hadoop-trunk-Commit.
> >
> > Side note is that we could probably stand to delete the disabled jobs on
> > our view, I'll do that if I see a job has been disabled for a while with
> no
> > recent builds:
> >
> > https://builds.apache.org/view/H-L/view/Hadoop/
> >
> > Best,
> > Andrew
> >
>


Re: Disable some of the Hudson integration comments on JIRA

2015-11-25 Thread Zhe Zhang
I think that's a good idea. Thanks for the proposal Andrew.

---
Zhe Zhang

On Wed, Nov 25, 2015 at 5:41 PM, Andrew Wang 
wrote:

> Hi all,
>
> Right now we get something like 7 comments from Hudson whenever a change is
> committed. Would anyone object if I turned off 6 of them? We have
> variations like:
>
> Hadoop-trunk-Commit
> Hadoop-Hdfs-trunk-Java8
> Hadoop-Yarn-trunk
> ...etc
>
> I propose leaving notifications on for just Hadoop-trunk-Commit.
>
> Side note is that we could probably stand to delete the disabled jobs on
> our view, I'll do that if I see a job has been disabled for a while with no
> recent builds:
>
> https://builds.apache.org/view/H-L/view/Hadoop/
>
> Best,
> Andrew
>


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #750

2015-11-25 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #2049

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[rkanter] MAPREDUCE-6549. multibyte delimiters with LineRecordReader cause

[rkanter] MAPREDUCE-6550. archive-logs tool changes log ownership to the Yarn 
user

[vinodkv] Adding release 2.8.0 to CHANGES.txt

--
[...truncated 5466 lines...]
at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:54)
at org.mockito.internal.MockitoCore.mock(MockitoCore.java:45)
at org.mockito.Mockito.mock(Mockito.java:921)
at org.mockito.Mockito.mock(Mockito.java:816)
at 
org.apache.hadoop.fs.shell.find.TestFind.processArguments(TestFind.java:248)

Running org.apache.hadoop.fs.shell.find.TestPrint0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.149 sec - in 
org.apache.hadoop.fs.shell.find.TestPrint0
Running org.apache.hadoop.fs.shell.find.TestPrint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.138 sec - in 
org.apache.hadoop.fs.shell.find.TestPrint
Running org.apache.hadoop.fs.shell.find.TestAnd
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.647 sec - in 
org.apache.hadoop.fs.shell.find.TestAnd
Running org.apache.hadoop.fs.shell.find.TestResult
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.265 sec - in 
org.apache.hadoop.fs.shell.find.TestResult
Running org.apache.hadoop.fs.shell.find.TestIname
Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.647 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.find.TestIname
applyGlob(org.apache.hadoop.fs.shell.find.TestIname)  Time elapsed: 2.092 sec  
<<< ERROR!
java.lang.Exception: test timed out after 1000 milliseconds
at java.security.CodeSource.<init>(CodeSource.java:110)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:447)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
at java.lang.Class.getConstructor0(Class.java:2803)
at java.lang.Class.newInstance(Class.java:345)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2676)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2687)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2708)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:95)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:375)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:359)
at org.apache.hadoop.fs.shell.PathData.<init>(PathData.java:81)
at 
org.apache.hadoop.fs.shell.find.TestIname.applyGlob(TestIname.java:74)
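The stack trace above shows the 1000 ms test timeout being consumed by one-time FileSystem service loading (the ServiceLoader scan), not by the logic under test; JUnit charges that first-use cost to whichever timed test happens to run first. A minimal sketch of the usual mitigation, paying the one-time cost before the timed section (class and method names here are hypothetical, not the Hadoop code):

```java
// Sketch: do expensive one-time initialization before any per-test
// timeout applies (e.g. in a @BeforeClass method), so the timed test
// body sees only cheap work. Illustration only; names are hypothetical.
public class WarmupBeforeTimedTest {
    static int initCount = 0;

    // Stands in for FileSystem.loadFileSystems(): expensive, runs once.
    static synchronized void ensureInitialized() {
        if (initCount == 0) {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            initCount++;
        }
    }

    static long timedTestBodyNanos() {
        long t0 = System.nanoTime();
        ensureInitialized(); // cheap after the first call
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        ensureInitialized();                 // warm-up, outside any test timeout
        long inTest = timedTestBodyNanos();  // now well under a 1000 ms budget
        System.out.println(inTest < 1_000_000_000L ? "within budget" : "timed out");
    }
}
```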

Running org.apache.hadoop.fs.shell.find.TestName
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.422 sec - in 
org.apache.hadoop.fs.shell.find.TestName
Running org.apache.hadoop.fs.shell.find.TestFilterExpression
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.858 sec - in 
org.apache.hadoop.fs.shell.find.TestFilterExpression
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.fs.shell.TestPathExceptions
Running org.apache.hadoop.fs.shell.TestCommandFactory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.363 sec - in 
org.apache.hadoop.fs.shell.TestCommandFactory
Running org.apache.hadoop.fs.shell.TestLs
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.199 sec - in 
org.apache.hadoop.fs.shell.TestLs
Running org.apache.hadoop.fs.shell.TestMove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.973 sec - in 
org.apache.hadoop.fs.shell.TestMove
Running org.apache.hadoop.fs.shell.TestXAttrCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.202 sec - in 
org.apache.hadoop.fs.shell.TestXAttrCommands
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.157 sec - in 
org.apache

Re: [Release thread] 2.8.0 release activities

2015-11-25 Thread Vinod Kumar Vavilapalli
Branch-2.8 is created.

As mentioned before, the goal on branch-2.8 is to take improvements / fixes to 
existing features, with the aim of converging on an alpha release soon.

Thanks
+Vinod


> On Nov 25, 2015, at 5:30 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Forking threads now in order to track all things related to the release.
> 
> Creating the branch now.
> 
> Thanks
> +Vinod
> 
> 
>> On Nov 25, 2015, at 11:37 AM, Vinod Kumar Vavilapalli  
>> wrote:
>> 
>> I think we’ve converged at a high level w.r.t 2.8. And as I just sent out an 
>> email, I updated the Roadmap wiki reflecting the same: 
>> https://wiki.apache.org/hadoop/Roadmap 
>> 
>> 
>> I plan to create a 2.8 branch EOD today.
>> 
>> The goal for all of us should be to restrict improvements & fixes to only 
>> (a) the feature-set documented under 2.8 in the RoadMap wiki and (b) other 
>> minor features that are already in 2.8.
>> 
>> Thanks
>> +Vinod
>> 
>> 
>>> On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli 
>>> mailto:vino...@hortonworks.com>> wrote:
>>> 
>>> - Cut a branch about two weeks from now
>>> - Do an RC mid next month (leaving ~4weeks since branch-cut)
>>> - As with 2.7.x series, the first release will still be called as early / 
>>> alpha release in the interest of
>>>— gaining downstream adoption
>>>— wider testing,
>>>— yet reserving our right to fix any inadvertent incompatibilities 
>>> introduced.
>> 
> 



Disable some of the Hudson integration comments on JIRA

2015-11-25 Thread Andrew Wang
Hi all,

Right now we get something like 7 comments from Hudson whenever a change is
committed. Would anyone object if I turned off 6 of them? We have
variations like:

Hadoop-trunk-Commit
Hadoop-Hdfs-trunk-Java8
Hadoop-Yarn-trunk
...etc

I propose leaving notifications on for just Hadoop-trunk-Commit.

Side note is that we could probably stand to delete the disabled jobs on
our view, I'll do that if I see a job has been disabled for a while with no
recent builds:

https://builds.apache.org/view/H-L/view/Hadoop/

Best,
Andrew


[Release thread] 2.8.0 release activities

2015-11-25 Thread Vinod Kumar Vavilapalli
Forking threads now in order to track all things related to the release.

Creating the branch now.

Thanks
+Vinod


> On Nov 25, 2015, at 11:37 AM, Vinod Kumar Vavilapalli  
> wrote:
> 
> I think we’ve converged at a high level w.r.t 2.8. And as I just sent out an 
> email, I updated the Roadmap wiki reflecting the same: 
> https://wiki.apache.org/hadoop/Roadmap 
> 
> 
> I plan to create a 2.8 branch EOD today.
> 
> The goal for all of us should be to restrict improvements & fixes to only (a) 
> the feature-set documented under 2.8 in the RoadMap wiki and (b) other minor 
> features that are already in 2.8.
> 
> Thanks
> +Vinod
> 
> 
>> On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli 
>> mailto:vino...@hortonworks.com>> wrote:
>> 
>>  - Cut a branch about two weeks from now
>>  - Do an RC mid next month (leaving ~4weeks since branch-cut)
>>  - As with 2.7.x series, the first release will still be called as early / 
>> alpha release in the interest of
>> — gaining downstream adoption
>> — wider testing,
>> — yet reserving our right to fix any inadvertent incompatibilities 
>> introduced.
> 



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Allen Wittenauer

> On Nov 25, 2015, at 11:23 AM, Vinod Kumar Vavilapalli  
> wrote:
> 
> There are 40 odd incompatible changes in 3.x: 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28HADOOP%2C%20YARN%2C%20HDFS%2C%20MAPREDUCE%29%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%203.0.0%20AND%20fixVersion%20not%20in%20%282.6.2%2C%202.6.3%2C%202.7.1%2C%202.7.2%2C%202.7.3%2C%202.8.0%29%20and%20%22Hadoop%20Flags%22%20in%20%28%22Incompatible%20change%22%29%20ORDER%20BY%20key%20ASC%2C%20due%20ASC%2C%20priority%20DESC%2C%20created%20ASC
> 
> Need to dig deeper on their impact. Clearly all my local shell scripts 
> completely stopped working; it will be good to have some bridging there to 
> help users migrate.
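Decoded from its URL encoding, the JQL query in the link above reads:

```
project in (HADOOP, YARN, HDFS, MAPREDUCE)
  AND resolution = Fixed
  AND fixVersion = 3.0.0
  AND fixVersion not in (2.6.2, 2.6.3, 2.7.1, 2.7.2, 2.7.3, 2.8.0)
  AND "Hadoop Flags" in ("Incompatible change")
ORDER BY key ASC, due ASC, priority DESC, created ASC
```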

I think you should file a JIRA on what actually broke.  I’m genuinely 
curious.

> Like I said before, I will spend more time on trunk only changes in order to 
> kick-start a 3.x discussion.
> 
> What are the incompatible changes in the 2.x line that you are talking about?

Thanks for confirming what I’ve always suspected.

Build failed in Jenkins: Hadoop-common-trunk-Java8 #749

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wheat9] HDFS-9451. Clean up deprecated umasks and related unit tests. 
Contributed

[xyao] HDFS-8512. WebHDFS : GETFILESTATUS should return LocatedBlock with

[shv] HDFS-9407. TestFileTruncate should not use fixed NN port. Contributed by

[jing9] HDFS-9467. Fix data race accessing writeLockHeldTimeStamp in

--
[...truncated 5784 lines...]
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.185 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.298 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.234 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.95 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.14 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.162 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.299 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.713 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.954 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.471 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.928 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.612 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.218 sec - 
in org.apache.hadoop.util.TestWinUtils
Java HotSpot

[jira] [Created] (HADOOP-12603) TestSymlinkLocalFSFileContext#testSetTimesSymlinkToDir occasionally fail

2015-11-25 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12603:


 Summary: TestSymlinkLocalFSFileContext#testSetTimesSymlinkToDir 
occasionally fail
 Key: HADOOP-12603
 URL: https://issues.apache.org/jira/browse/HADOOP-12603
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


I have observed this test failure a few times in the past. When it fails, the 
expected access time (of the file link) is always 1000 ms less than the actual 
access time.

Error Message
{noformat}
expected:<1448478654000> but was:<1448478655000>
{noformat}
Stacktrace
{noformat}
java.lang.AssertionError: expected:<1448478654000> but was:<1448478655000>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.SymlinkBaseTest.testSetTimesSymlinkToDir(SymlinkBaseTest.java:1391)
at 
org.apache.hadoop.fs.TestSymlinkLocalFS.testSetTimesSymlinkToDir(TestSymlinkLocalFS.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

Standard Output
{noformat}
2015-11-25 19:10:55,231 WARN  fs.FileUtil (FileUtil.java:symLink(813)) - 
Command 'ln -s 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/vae1ng5t75/test1/file
 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/vae1ng5t75/test2/linkToFile'
 failed 1 with: ln: failed to create symbolic link 
'/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/vae1ng5t75/test2/linkToFile':
 No such file or directory

2015-11-25 19:10:56,212 WARN  fs.FileUtil (FileUtil.java:symLink(813)) - 
Command 'ln -s 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/vae1ng5t75/test1/file
 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/vae1ng5t75/test1/linkToFile'
 failed 1 with: ln: failed to create symbolic link 
'/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/vae1ng5t75/test1/linkToFile':
 File exists

{noformat}
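The exact one-second discrepancy above is characteristic of filesystems that keep access times at whole-second granularity, where a clock tick between setting and reading the time shifts the stored value by a full second. One common way to de-flake such a test is to assert with a granularity-sized tolerance; this is a sketch of that pattern, not the actual Hadoop test code:

```java
// Sketch of a granularity-tolerant timestamp assertion (illustration
// only; names are not from the Hadoop test). Comparing raw millis is
// fragile when the filesystem stores whole seconds, so we allow a
// one-second delta.
public class TolerantTimeAssert {
    static void assertTimesClose(long expected, long actual, long deltaMillis) {
        if (Math.abs(expected - actual) > deltaMillis) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        // The two values from the JIRA report differ by exactly one second,
        // so a 1000 ms tolerance accepts them.
        assertTimesClose(1448478654000L, 1448478655000L, 1000L);
        System.out.println("ok");
    }
}
```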



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12602) TestMetricsSystemImpl#testQSize occasionally fail

2015-11-25 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12602:


 Summary: TestMetricsSystemImpl#testQSize occasionally fail
 Key: HADOOP-12602
 URL: https://issues.apache.org/jira/browse/HADOOP-12602
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


I have seen this test fail a few times in the past.
Error Message
{noformat}
metricsSink.putMetrics();
Wanted 2 times:
-> at 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testQSize(TestMetricsSystemImpl.java:472)
But was 1 time:
-> at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:183)
{noformat}
Stacktrace
{noformat}
org.mockito.exceptions.verification.TooLittleActualInvocations: 
metricsSink.putMetrics();
Wanted 2 times:
-> at 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testQSize(TestMetricsSystemImpl.java:472)
But was 1 time:
-> at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:183)

at 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testQSize(TestMetricsSystemImpl.java:472)
{noformat}
Standard Output
{noformat}
2015-11-25 19:07:49,867 INFO  impl.MetricsConfig 
(MetricsConfig.java:loadFirst(115)) - loaded properties from 
hadoop-metrics2-test.properties
2015-11-25 19:07:49,932 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:startTimer(374)) - Scheduled snapshot period at 10 
second(s).
2015-11-25 19:07:49,932 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:start(192)) - Test metrics system started
2015-11-25 19:07:50,134 INFO  impl.MetricsSinkAdapter 
(MetricsSinkAdapter.java:start(203)) - Sink slowSink started
2015-11-25 19:07:50,135 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:registerSink(301)) - Registered sink slowSink
2015-11-25 19:07:50,135 INFO  impl.MetricsSinkAdapter 
(MetricsSinkAdapter.java:start(203)) - Sink dataSink started
2015-11-25 19:07:50,136 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:registerSink(301)) - Registered sink dataSink
2015-11-25 19:07:50,746 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stop(211)) - Stopping Test metrics system...
2015-11-25 19:07:50,747 INFO  impl.MetricsSinkAdapter 
(MetricsSinkAdapter.java:publishMetricsFromQueue(140)) - slowSink thread 
interrupted.
2015-11-25 19:07:50,748 INFO  impl.MetricsSinkAdapter 
(MetricsSinkAdapter.java:publishMetricsFromQueue(140)) - dataSink thread 
interrupted.
2015-11-25 19:07:50,748 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stop(217)) - Test metrics system stopped.
{noformat}
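The TooLittleActualInvocations failure above is a race: MetricsSinkAdapter delivers metrics to the sink on its own thread, so verifying a fixed call count immediately can run before the second delivery has happened. A self-contained sketch of the race and the wait-based fix (names below are illustrative, not the Hadoop code; Mockito's `verify(mock, timeout(...))` applies the same idea):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: a consumer thread delivers metrics asynchronously; the test
// must wait until the expected number of deliveries has occurred before
// verifying, instead of checking a count immediately.
public class AsyncSinkRace {
    public static void main(String[] args) throws Exception {
        CountDownLatch delivered = new CountDownLatch(2);
        Thread sink = new Thread(() -> {
            for (int i = 0; i < 2; i++) {
                // simulate slow putMetrics() calls on the sink thread
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
                delivered.countDown();
            }
        });
        sink.start();
        // Verifying here without waiting would see 0 or 1 deliveries and
        // fail intermittently. Waiting with a generous timeout (the same
        // idea as Mockito's verify-with-timeout) removes the flake:
        boolean ok = delivered.await(5, TimeUnit.SECONDS);
        System.out.println(ok ? "2 deliveries observed" : "timed out");
    }
}
```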





Jenkins build is back to normal : Hadoop-Common-trunk #2047

2015-11-25 Thread Apache Jenkins Server
See 



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Andrew Wang
SGTM, thanks Vinod! LMK if you need reviews on any of that.

Regarding the release checklist, another item I'd add is updating the
release notes in the project documentation; we've forgotten that in the past.

On Wed, Nov 25, 2015 at 2:01 PM, Vinod Kumar Vavilapalli  wrote:

> Tx for your comments, Andrew!
>
> I did talk about it in a few discussions in the past related to this but
> yes, we never codified the feature-level alpha/beta tags. Part of the
> reason why I never pushed for such a codification is that (a) it is a
> subjective decision that the feature contributors usually have the best say
> on and (b) voting on the alpha-ness / beta-ness may not be a productive
> exercise in a non-trivial number of cases (as I have seen with the
> release-level tags, some users think an alpha release is of production
> quality enough for _their_ use-cases).
>
> That said, I agree about noting down our general recommendations on what
> an alpha feature means, what a beta feature means etc. Let me file a JIRA
> for this.
>
> The second point you made is absolutely true. At least on the YARN / MR side, I
> usually end up traversing (some if not all of) alpha features and making
> sure the corresponding APIs are explicitly marked private or public
> unstable / evolving. I do think that there is a lot of value in us getting
> more systematic with this - how about we do this for the feature list of
> 2.8 and evolve the process?
>
> In general, maybe we could have a list of ‘check-list’ JIRAs that we
> always address before every release. Few things already come to my mind:
>  - Mark which features are alpha / beta and make sure the corresponding
> APIs, public interfaces reflect the state
>  - Revise all newly added configuration properties to make sure they
> follow our general naming patterns. New contributors sometimes create
> non-standard properties that we come to regret supporting.
>  - Generate a list of newly added public entry-points and validate that
> they are all indeed meant to be public
>  - [...]
>
> Thoughts?
>
> +Vinod
>
>
> > On Nov 25, 2015, at 11:47 AM, Andrew Wang 
> wrote:
> >
> > Hey Vinod,
> >
> > I'm fine with the idea of alpha/beta marking in the abstract, but had a
> > question: do we define these terms in our compatibility policy or
> > elsewhere? I think it's commonly understood among us developers (alpha
> > means not fully tested and API unstable, beta means it's not fully tested
> > but is API stable), but it'd be good to have it written down.
> >
> > Also I think we've only done alpha/beta tagging at the release-level
> > previously which is a simpler story to tell users. So it's important for
> > this release that alpha features set their interface stability
> annotations
> > to "evolving". There isn't a corresponding annotation for "interface
> > quality", but IMO that's overkill.
> >
> > Thanks,
> > Andrew
> >
> > On Wed, Nov 25, 2015 at 11:08 AM, Vinod Kumar Vavilapalli <
> > vino...@apache.org> wrote:
> >
> >> This is the current state from the feedback I gathered.
> >> - Support priorities across applications within the same queue YARN-1963
> >>— Can push as an alpha / beta feature per Sunil
> >> - YARN-1197 Support changing resources of an allocated container:
> >>— Can push as an alpha/beta feature per Wangda
> >> - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well
> >> most of it anyways.
> >>— Can push as an alpha feature.
> >> - YARN Timeline Service v1.5 - YARN-4233
> >>— Should include per Li Lu
> >> - YARN Timeline Service Next generation: YARN-2928
> >>— Per analysis from Sangjin, drop this from 2.8.
> >>
> >> One open feature status
> >> - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?
> >>
> >> Updated the Roadmap wiki with the same.
> >>
> >> Thanks
> >> +Vinod
> >>
> >>> On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
> >>>
> >>> I reviewed the current state of the YARN-2928 changes regarding its
> >> impact
> >>> if the timeline service v.2 is disabled. It does appear that there are
> a
> >>> lot of things that still do get created and enabled unconditionally
> >>> regardless of configuration. While this is understandable when we were
> >>> working to implement the feature, this clearly needs to be cleaned up
> so
> >>> that when disabled the timeline service v.2 doesn't impact other
> things.
> >>>
> >>> I filed a JIRA for that work:
> >>> https://issues.apache.org/jira/browse/YARN-4356
> >>>
> >>> We need to complete it before we can merge.
> >>>
> >>> Somewhat related is the status of the configuration and what it means
> in
> >>> various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.).
> I
> >>> know there is an ongoing discussion regarding YARN-4183. We'll need to
> >>> reflect the outcome of that discussion.
> >>>
> >>> My overall impression of whether this can be done for 2.8 is that it
> >> looks
> >>> rather challenging given the suggested timeframe. We also need to
> >> complete
> >>> several maj
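The interface stability marking discussed in this thread is done with audience/stability annotations on the API classes. Hadoop's real annotations live in `org.apache.hadoop.classification`; the minimal stand-in types below just show the pattern and are not the Hadoop source:

```java
import java.lang.annotation.Documented;

// Sketch of the audience/stability annotation pattern. The nested
// annotation types here are simplified stand-ins for Hadoop's
// InterfaceAudience.Public and InterfaceStability.Evolving.
public class StabilityAnnotations {
    @Documented @interface Public {}
    @Documented @interface Evolving {}

    // An alpha feature's API is marked public but evolving, signalling
    // that it may change incompatibly between minor releases.
    @Public @Evolving
    static class NewFeatureClient {
        String ping() { return "pong"; }
    }

    public static void main(String[] args) {
        System.out.println(new NewFeatureClient().ping());
    }
}
```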

Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Chris Nauroth
+1.  Thanks, Vinod.

--Chris Nauroth




On 11/25/15, 1:45 PM, "Vinod Kumar Vavilapalli"  wrote:

>Okay, tx for this clarification Chris! I dug more into this and now
>realized the actual scope of this. Given the limited nature of this
>feature (non-Namenode etc) and the WIP nature of the larger umbrella
>HADOOP-11744, we will ship the feature but I’ll stop calling this out as
>a notable feature.
>
>Thanks
>+Vinod
>
>
>> On Nov 25, 2015, at 12:04 PM, Chris Nauroth 
>>wrote:
>> 
>> Hi Vinod,
>> 
>> The HDFS-8155 work is complete in branch-2 already, so feel free to
>> include it in the roadmap.
>> 
>> For those watching the thread that aren't familiar with HDFS-8155, I
>>want
>> to call out that it was a client-side change only.  The WebHDFS client
>>is
>> capable of obtaining OAuth2 tokens and passing them along in its HTTP
>> requests.  The NameNode and DataNode server side currently do not have
>>any
>> support for OAuth2, so overall, this feature is only useful in some very
>> unique deployment architectures right now.  This is all discussed
>> explicitly in documentation committed with HDFS-8155, but I wanted to
>> prevent any mistaken assumptions for people only reading this thread.
>> 
>> --Chris Nauroth
>> 
>> 
>> 
>> 
>> On 11/25/15, 11:08 AM, "Vinod Kumar Vavilapalli" 
>> wrote:
>> 
>>> This is the current state from the feedback I gathered.
>>> - Support priorities across applications within the same queue
>>>YARN-1963
>>>   — Can push as an alpha / beta feature per Sunil
>>> - YARN-1197 Support changing resources of an allocated container:
>>>   — Can push as an alpha/beta feature per Wangda
>>> - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well
>>> most of it anyways.
>>>   — Can push as an alpha feature.
>>> - YARN Timeline Service v1.5 - YARN-4233
>>>   — Should include per Li Lu
>>> - YARN Timeline Service Next generation: YARN-2928
>>>   — Per analysis from Sangjin, drop this from 2.8.
>>> 
>>> One open feature status
>>> - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?
>>> 
>>> Updated the Roadmap wiki with the same.
>>> 
>>> Thanks
>>> +Vinod
>>> 
 On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
 
 I reviewed the current state of the YARN-2928 changes regarding its
 impact
 if the timeline service v.2 is disabled. It does appear that there
are a
 lot of things that still do get created and enabled unconditionally
 regardless of configuration. While this is understandable when we were
 working to implement the feature, this clearly needs to be cleaned up
so
 that when disabled the timeline service v.2 doesn't impact other
things.
 
 I filed a JIRA for that work:
 https://issues.apache.org/jira/browse/YARN-4356
 
 We need to complete it before we can merge.
 
 Somewhat related is the status of the configuration and what it means
in
 various contexts (client/app-side vs. server-side, v.1 vs. v.2,
etc.). I
 know there is an ongoing discussion regarding YARN-4183. We'll need to
 reflect the outcome of that discussion.
 
 My overall impression of whether this can be done for 2.8 is that it
 looks
 rather challenging given the suggested timeframe. We also need to
 complete
 several major tasks before it is ready.
 
 Sangjin
 
 
 On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:
 
> 
> On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
> vino...@hortonworks.com> wrote:
> 
>   — YARN Timeline Service Next generation: YARN-2928: Lots of
>> momentum,
> but clearly a work in progress. Two options here
>   — If it is safe to ship it into 2.8 in a disabled manner, we
>>can
> get the early code into trunk and all the way into 2.8.
>   — If it is not safe, it organically rolls over into 2.9
>> 
> 
> I'll review the changes on YARN-2928 to see what impact it has (if
> any) if
> the timeline service v.2 is disabled.
> 
> Another condition for it to make 2.8 is whether the branch will be
>in a
> shape in a couple of weeks such that it adds value for folks that
>want
> to
> test it. Hopefully it will become clearer soon.
> 
> Sangjin
> 
>>> 
>> 
>> 
>
>



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
Tx for your comments, Andrew!

I did talk about it in a few discussions in the past related to this but yes, 
we never codified the feature-level alpha/beta tags. Part of the reason why I 
never pushed for such a codification is that (a) it is a subjective decision 
that the feature contributors usually have the best say on, and (b) voting on 
the alpha-ness / beta-ness may not be a productive exercise in a non-trivial 
number of cases (as I have seen with the release-level tags, some users think 
an alpha release is of production quality enough for _their_ use-cases).

That said, I agree about noting down our general recommendations on what an 
alpha feature means, what a beta feature means etc. Let me file a JIRA for this.

The second point you made is absolutely true. At least on the YARN / MR side, I 
usually end up traversing (some if not all of) alpha features and making sure 
the corresponding APIs are explicitly marked private or public unstable / 
evolving. I do think that there is a lot of value in us getting more 
systematic with this - how about we do this for the feature list of 2.8 and 
evolve the process?

In general, maybe we could have a list of ‘checklist’ JIRAs that we always 
address before every release. Few things already come to my mind:
 - Mark which features are alpha / beta and make sure the corresponding APIs, 
public interfaces reflect the state
 - Revise all newly added configuration properties to make sure they follow our 
general naming patterns. New contributors sometimes create non-standard 
properties that we come to regret supporting.
 - Generate a list of newly added public entry-points and validate that they 
are all indeed meant to be public
 - [...]
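The naming-pattern item in the checklist above could even be mechanized as a simple lint step. A minimal self-contained sketch, assuming lower-case dotted keys under a known project prefix — the pattern and the prefix list here are illustrative assumptions, not Hadoop's actual codified rule:

```java
import java.util.List;
import java.util.regex.Pattern;

public class ConfigNameCheck {
    // Assumed convention: a known project prefix followed by lower-case
    // dotted components. This is a stand-in pattern for illustration only.
    static final Pattern STYLE =
        Pattern.compile("(hadoop|fs|dfs|yarn|mapreduce)(\\.[a-z0-9-]+)+");

    static boolean follows(String key) {
        return STYLE.matcher(key).matches();
    }

    public static void main(String[] args) {
        // A conventional key passes; a camel-cased, underscored one does not.
        for (String k : List.of("dfs.namenode.checkpoint.period", "myNewProp_Timeout")) {
            System.out.println(k + " -> " + follows(k));
        }
    }
}
```

Such a check could run over the diff of *-default.xml files at release time, flagging new keys for a human look rather than rejecting them outright.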

Thoughts?

+Vinod


> On Nov 25, 2015, at 11:47 AM, Andrew Wang  wrote:
> 
> Hey Vinod,
> 
> I'm fine with the idea of alpha/beta marking in the abstract, but had a
> question: do we define these terms in our compatibility policy or
> elsewhere? I think it's commonly understood among us developers (alpha
> means not fully tested and API unstable, beta means it's not fully tested
> but is API stable), but it'd be good to have it written down.
> 
> Also I think we've only done alpha/beta tagging at the release-level
> previously which is a simpler story to tell users. So it's important for
> this release that alpha features set their interface stability annotations
> to "evolving". There isn't a corresponding annotation for "interface
> quality", but IMO that's overkill.
> 
> Thanks,
> Andrew
> 
> On Wed, Nov 25, 2015 at 11:08 AM, Vinod Kumar Vavilapalli <
> vino...@apache.org> wrote:
> 
>> This is the current state from the feedback I gathered.
>> - Support priorities across applications within the same queue YARN-1963
>>— Can push as an alpha / beta feature per Sunil
>> - YARN-1197 Support changing resources of an allocated container:
>>— Can push as an alpha/beta feature per Wangda
>> - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well
>> most of it anyways.
>>— Can push as an alpha feature.
>> - YARN Timeline Service v1.5 - YARN-4233
>>— Should include per Li Lu
>> - YARN Timeline Service Next generation: YARN-2928
>>— Per analysis from Sangjin, drop this from 2.8.
>> 
>> One open feature status
>> - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?
>> 
>> Updated the Roadmap wiki with the same.
>> 
>> Thanks
>> +Vinod
>> 
>>> On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
>>> 
>>> I reviewed the current state of the YARN-2928 changes regarding its
>> impact
>>> if the timeline service v.2 is disabled. It does appear that there are a
>>> lot of things that still do get created and enabled unconditionally
>>> regardless of configuration. While this is understandable when we were
>>> working to implement the feature, this clearly needs to be cleaned up so
>>> that when disabled the timeline service v.2 doesn't impact other things.
>>> 
>>> I filed a JIRA for that work:
>>> https://issues.apache.org/jira/browse/YARN-4356
>>> 
>>> We need to complete it before we can merge.
>>> 
>>> Somewhat related is the status of the configuration and what it means in
>>> various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.). I
>>> know there is an ongoing discussion regarding YARN-4183. We'll need to
>>> reflect the outcome of that discussion.
>>> 
>>> My overall impression of whether this can be done for 2.8 is that it
>> looks
>>> rather challenging given the suggested timeframe. We also need to
>> complete
>>> several major tasks before it is ready.
>>> 
>>> Sangjin
>>> 
>>> 
>>> On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:
>>> 
 
 On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
 vino...@hortonworks.com> wrote:
 
>   — YARN Timeline Service Next generation: YARN-2928: Lots of
>> momentum,
> but clearly a work in progress. Two options here
>   — If it is safe to ship it into 2.8 in a disabled manner, we can
> get the early

Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
Okay, tx for this clarification Chris! I dug more into this and now realized 
the actual scope of this. Given the limited nature of this feature 
(non-Namenode etc) and the WIP nature of the larger umbrella HADOOP-11744, we 
will ship the feature but I’ll stop calling this out as a notable feature.

Thanks
+Vinod


> On Nov 25, 2015, at 12:04 PM, Chris Nauroth  wrote:
> 
> Hi Vinod,
> 
> The HDFS-8155 work is complete in branch-2 already, so feel free to
> include it in the roadmap.
> 
> For those watching the thread that aren't familiar with HDFS-8155, I want
> to call out that it was a client-side change only.  The WebHDFS client is
> capable of obtaining OAuth2 tokens and passing them along in its HTTP
> requests.  The NameNode and DataNode server side currently do not have any
> support for OAuth2, so overall, this feature is only useful in some very
> unique deployment architectures right now.  This is all discussed
> explicitly in documentation committed with HDFS-8155, but I wanted to
> prevent any mistaken assumptions for people only reading this thread.
> 
> --Chris Nauroth
> 
> 
> 
> 
> On 11/25/15, 11:08 AM, "Vinod Kumar Vavilapalli" 
> wrote:
> 
>> This is the current state from the feedback I gathered.
>> - Support priorities across applications within the same queue YARN-1963
>>   — Can push as an alpha / beta feature per Sunil
>> - YARN-1197 Support changing resources of an allocated container:
>>   — Can push as an alpha/beta feature per Wangda
>> - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well
>> most of it anyways.
>>   — Can push as an alpha feature.
>> - YARN Timeline Service v1.5 - YARN-4233
>>   — Should include per Li Lu
>> - YARN Timeline Service Next generation: YARN-2928
>>   — Per analysis from Sangjin, drop this from 2.8.
>> 
>> One open feature status
>> - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?
>> 
>> Updated the Roadmap wiki with the same.
>> 
>> Thanks
>> +Vinod
>> 
>>> On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
>>> 
>>> I reviewed the current state of the YARN-2928 changes regarding its
>>> impact
>>> if the timeline service v.2 is disabled. It does appear that there are a
>>> lot of things that still do get created and enabled unconditionally
>>> regardless of configuration. While this is understandable when we were
>>> working to implement the feature, this clearly needs to be cleaned up so
>>> that when disabled the timeline service v.2 doesn't impact other things.
>>> 
>>> I filed a JIRA for that work:
>>> https://issues.apache.org/jira/browse/YARN-4356
>>> 
>>> We need to complete it before we can merge.
>>> 
>>> Somewhat related is the status of the configuration and what it means in
>>> various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.). I
>>> know there is an ongoing discussion regarding YARN-4183. We'll need to
>>> reflect the outcome of that discussion.
>>> 
>>> My overall impression of whether this can be done for 2.8 is that it
>>> looks
>>> rather challenging given the suggested timeframe. We also need to
>>> complete
>>> several major tasks before it is ready.
>>> 
>>> Sangjin
>>> 
>>> 
>>> On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:
>>> 
 
 On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
 vino...@hortonworks.com> wrote:
 
>   — YARN Timeline Service Next generation: YARN-2928: Lots of
> momentum,
> but clearly a work in progress. Two options here
>   — If it is safe to ship it into 2.8 in a disabled manner, we can
> get the early code into trunk and all the way into 2.8.
>   — If it is not safe, it organically rolls over into 2.9
> 
 
 I'll review the changes on YARN-2928 to see what impact it has (if
 any) if
 the timeline service v.2 is disabled.
 
 Another condition for it to make 2.8 is whether the branch will be in a
 shape in a couple of weeks such that it adds value for folks that want
 to
 test it. Hopefully it will become clearer soon.
 
 Sangjin
 
>> 
> 
> 



[jira] [Resolved] (HADOOP-12601) findbugs highlights problem with FsPermission

2015-11-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HADOOP-12601.
-
Resolution: Duplicate

Fixed in HDFS-9451.

> findbugs highlights problem with FsPermission
> -
>
> Key: HADOOP-12601
> URL: https://issues.apache.org/jira/browse/HADOOP-12601
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, io
>Affects Versions: 3.0.0
> Environment: yetus
>Reporter: Steve Loughran
>
> Findbugs is warning of a problem in {{FsPermission}}
> {code}
> In class org.apache.hadoop.fs.permission.FsPermission
> In method org.apache.hadoop.fs.permission.FsPermission.getUMask(Configuration)
> Local variable named oldUmask
> At FsPermission.java:[line 249]
> {code}
> This may actually be a sign of a bug in the code, but as it's reading a key 
> tagged as deprecated since 2010 and to be culled in 0.23, maybe cutting the 
> line is the strategy. After all, if the code has been broken, and nobody 
> complained, that deprecation worked



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-trunk #2046

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wheat9] HDFS-9459. hadoop-hdfs-native-client fails test build on Windows after

--
[...truncated 5386 lines...]
Running org.apache.hadoop.util.TestLineReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.util.TestLineReader
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Running org.apache.hadoop.util.TestClasspath
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.362 sec - in 
org.apache.hadoop.util.TestClasspath
Running org.apache.hadoop.util.TestApplicationClassLoader
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 sec - in 
org.apache.hadoop.util.TestApplicationClassLoader
Running org.apache.hadoop.util.TestShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.349 sec - in 
org.apache.hadoop.util.TestShell
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.175 sec - in 
org.apache.hadoop.util.TestShutdownHookManager
Running org.apache.hadoop.util.TestConfTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.242 sec - in 
org.apache.hadoop.util.TestConfTest
Running org.apache.hadoop.util.TestHttpExceptionUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.689 sec - in 
org.apache.hadoop.util.TestHttpExceptionUtils
Running org.apache.hadoop.util.TestJarFinder
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.781 sec - in 
org.apache.hadoop.util.TestJarFinder
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.446 sec - in 
org.apache.hadoop.util.hash.TestHash
Running org.apache.hadoop.util.TestLightWeightCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.975 sec - in 
org.apache.hadoop.util.TestLightWeightCache
Running org.apache.hadoop.util.TestNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec - in 
org.apache.hadoop.util.TestNativeCodeLoader
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.63 sec - in 
org.apache.hadoop.util.TestReflectionUtils
Running org.apache.hadoop.crypto.TestCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.846 sec - 
in org.apache.hadoop.crypto.TestCryptoStreams
Running org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Tests run: 14, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 12.048 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Running org.apache.hadoop.crypto.TestOpensslCipher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
org.apache.hadoop.crypto.TestOpensslCipher
Running org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.854 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreamsNormal
Tests run: 14, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 7.096 sec - in 
org.apache.hadoop.crypto.TestCryptoStreamsNormal
Running org.apache.hadoop.crypto.TestCryptoCodec
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.487 sec - in 
org.apache.hadoop.crypto.TestCryptoCodec
Running org.apache.hadoop.crypto.random.TestOsSecureRandom
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.606 sec - in 
org.apache.hadoop.crypto.random.TestOsSecureRandom
Running org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Running org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.999 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Running org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.728 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Running org.apache.hadoop.crypto.key.TestCachingKeyProvider
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.096 sec - in 
org.apache.hadoop.crypto.key.TestCachingKeyProvider
Running org.apache.hadoop.crypto.key.TestKeyProviderFactory
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.46 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderFactory
Running org.apache.hadoop.crypto.key.TestValueQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.17 sec - in 
org.apache.hadoop.crypto.key.TestValueQueue
Running org.apa

[jira] [Created] (HADOOP-12601) findbugs highlights problem with FsPermission

2015-11-25 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12601:
---

 Summary: findbugs highlights problem with FsPermission
 Key: HADOOP-12601
 URL: https://issues.apache.org/jira/browse/HADOOP-12601
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, io
Affects Versions: 3.0.0
 Environment: yetus
Reporter: Steve Loughran


Findbugs is warning of a problem in {{FsPermission}}

{code}
In class org.apache.hadoop.fs.permission.FsPermission
In method org.apache.hadoop.fs.permission.FsPermission.getUMask(Configuration)
Local variable named oldUmask
At FsPermission.java:[line 249]
{code}

This may actually be a sign of a bug in the code, but as it's reading a key 
tagged as deprecated since 2010 and to be culled in 0.23, maybe cutting the 
line is the strategy. After all, if the code has been broken, and nobody 
complained, that deprecation worked
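For readers unfamiliar with this findbugs detector, the warning is a "dead store": a value assigned to a local variable that is never read afterwards. A minimal self-contained sketch of the pattern — the key names and logic are invented for illustration and are not the actual FsPermission code:

```java
import java.util.Properties;

public class DeadStoreDemo {
    // Reconstruction of the reported shape: the deprecated key is read into
    // oldUmask, but that value is never used afterwards. Either the
    // assignment is dead code, or a fallback check was forgotten — which is
    // exactly the ambiguity the JIRA description raises.
    static int getUMask(Properties conf) {
        int umask = Integer.parseInt(conf.getProperty("umask", "18")); // 022 octal
        int oldUmask = Integer.parseInt(conf.getProperty("deprecated.umask", "-1")); // dead store
        return umask; // oldUmask is ignored: this is what findbugs flags
    }

    public static void main(String[] args) {
        System.out.println(getUMask(new Properties())); // prints 18
    }
}
```

If the deprecated key really has been ignored for years with no complaints, deleting the dead line (as the report suggests) resolves both the warning and the ambiguity.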





Build failed in Jenkins: Hadoop-common-trunk-Java8 #748

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[wheat9] HDFS-9459. hadoop-hdfs-native-client fails test build on Windows after

--
[...truncated 5793 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.498 sec - in 
org.apache.hadoop.io.retry.TestFailoverProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.341 sec - in 
org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.214 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.366 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.333 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.086 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.151 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.TestECSchema
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.073 sec - in 
org.apache.hadoop.io.erasurecode.TestECSchema
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.2 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.021 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.146 sec - in 
org.apache.hadoop.io.TestWritableUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.369 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6

Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Chris Nauroth
Hi Vinod,

The HDFS-8155 work is complete in branch-2 already, so feel free to
include it in the roadmap.

For those watching the thread that aren't familiar with HDFS-8155, I want
to call out that it was a client-side change only.  The WebHDFS client is
capable of obtaining OAuth2 tokens and passing them along in its HTTP
requests.  The NameNode and DataNode server side currently do not have any
support for OAuth2, so overall, this feature is only useful in some very
unique deployment architectures right now.  This is all discussed
explicitly in documentation committed with HDFS-8155, but I wanted to
prevent any mistaken assumptions for people only reading this thread.

--Chris Nauroth




On 11/25/15, 11:08 AM, "Vinod Kumar Vavilapalli" 
wrote:

>This is the current state from the feedback I gathered.
> - Support priorities across applications within the same queue YARN-1963
>— Can push as an alpha / beta feature per Sunil
> - YARN-1197 Support changing resources of an allocated container:
>— Can push as an alpha/beta feature per Wangda
> - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well
>most of it anyways.
>— Can push as an alpha feature.
> - YARN Timeline Service v1.5 - YARN-4233
>— Should include per Li Lu
> - YARN Timeline Service Next generation: YARN-2928
>— Per analysis from Sangjin, drop this from 2.8.
>
>One open feature status
> - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?
>
>Updated the Roadmap wiki with the same.
>
>Thanks
>+Vinod
>
>> On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
>> 
>> I reviewed the current state of the YARN-2928 changes regarding its
>>impact
>> if the timeline service v.2 is disabled. It does appear that there are a
>> lot of things that still do get created and enabled unconditionally
>> regardless of configuration. While this is understandable when we were
>> working to implement the feature, this clearly needs to be cleaned up so
>> that when disabled the timeline service v.2 doesn't impact other things.
>> 
>> I filed a JIRA for that work:
>> https://issues.apache.org/jira/browse/YARN-4356
>> 
>> We need to complete it before we can merge.
>> 
>> Somewhat related is the status of the configuration and what it means in
>> various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.). I
>> know there is an ongoing discussion regarding YARN-4183. We'll need to
>> reflect the outcome of that discussion.
>> 
>> My overall impression of whether this can be done for 2.8 is that it
>>looks
>> rather challenging given the suggested timeframe. We also need to
>>complete
>> several major tasks before it is ready.
>> 
>> Sangjin
>> 
>> 
>> On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:
>> 
>>> 
>>> On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
>>> vino...@hortonworks.com> wrote:
>>> 
— YARN Timeline Service Next generation: YARN-2928: Lots of
momentum,
 but clearly a work in progress. Two options here
— If it is safe to ship it into 2.8 in a disabled manner, we can
 get the early code into trunk and all the way into 2.8.
— If it is not safe, it organically rolls over into 2.9
 
>>> 
>>> I'll review the changes on YARN-2928 to see what impact it has (if
>>>any) if
>>> the timeline service v.2 is disabled.
>>> 
>>> Another condition for it to make 2.8 is whether the branch will be in a
>>> shape in a couple of weeks such that it adds value for folks that want
>>>to
>>> test it. Hopefully it will become clearer soon.
>>> 
>>> Sangjin
>>> 
>



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Chris Nauroth
Regarding interface visibility/stability, I'm aware of 2 relevant JIRAs
right now.

HADOOP-10776 proposes to mark Public some of the security plumbing like
the FileSystem delegation token methods, UserGroupInformation, Token and
Credentials.  At this point, I think we are only fooling ourselves trying
to treat these as Private or LimitedPrivate.  I believe they are de facto
Public, because downstream applications just don't have any other reliable
way to do what these interfaces do.

HADOOP-12600 proposes to mark FileContext Stable.  I expect this one is
simply an oversight.

I've taken the bold step of marking both issues as 2.8.0 blockers.  We can
of course reconsider if this is controversial.

--Chris Nauroth




On 11/25/15, 11:30 AM, "Vinod Kumar Vavilapalli" 
wrote:

>Steve,
>
>
>> There's a lot of stuff in 2.8; I note that I'd like to see the s3a perf
>>improvements & openstack fixes in there: for which I need reviewers. I
>>don't have the spare time to do this myself.
>
>If you think they are useful, it helps to file tickets (or point out
>existing tickets), start discussion etc w.r.t these areas in order to
>attract contributors.
>
>
>> -likewise, DFSConfigKeys stayed in hdfs-server. I know it's tagged as
>>@Private, but it's long been where all the string constants for HDFS
>>options live. Forcing users to retype them in their own source is not
>>only dangerous (it only encourages typos), it actually stops you using
>>your IDE finding out where those constants get used.
>
>> We do now have a set of keys in the client, HdfsClientConfigKeys, but
>>these are still declared as @Private. Which is a mistake for the reasons
>>above, and because it encourages hadoop developers to assume that they
>>are free to make whatever changes they want to this code, and if it
>>breaks something, say "it was tagged as private"
>
>
>If these are worth going after, please file tickets under HDFS-6200 if
>they don't exist already.
>
>
>> 
>> 1. We have to recognise that a lot of things marked @Private are in
>>fact essential for clients to use. Not just constants, but actual
>>classes.
>> 
>> 2. We have to look hard at @LimitedPrivate and question the legitimacy
>>of tagging things as so, especially anything
>>"@InterfaceAudience.LimitedPrivate({"MapReduce"})" — because any YARN app
>>you write ends up needing those classes. For evidence, look at
>>DistributedShell's imports, and pick a few at random: NMClientAsyncImpl,
>>ConverterUtils being easy targets.
>
>There are existing tickets for some of these under YARN-1953 that need
>some developer love.
>
>
>> Returning to the pending 2.8.0 release, there's a way to find out
>>what's going to break: build and test things against the snapshots,
>>without waiting for the beta releases and expecting the downstream
>>projects to do it for you. If they don't build, that's a success: you've
>>found a compatibility problem to fix. If they don't test, well that's
>>trouble — you are in finger-pointing time.
>
>
>I've tried doing this in the past without much success. Some downstream 
>components did pick up RCs but a majority of them needed a release -
>hence my current approach.
>
>Thanks
>+Vinod
>
>



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Andrew Wang
Hey Vinod,

I'm fine with the idea of alpha/beta marking in the abstract, but had a
question: do we define these terms in our compatibility policy or
elsewhere? I think it's commonly understood among us developers (alpha
means not fully tested and API unstable, beta means it's not fully tested
but is API stable), but it'd be good to have it written down.

Also I think we've only done alpha/beta tagging at the release-level
previously which is a simpler story to tell users. So it's important for
this release that alpha features set their interface stability annotations
to "evolving". There isn't a corresponding annotation for "interface
quality", but IMO that's overkill.

Thanks,
Andrew

On Wed, Nov 25, 2015 at 11:08 AM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:

> This is the current state from the feedback I gathered.
>  - Support priorities across applications within the same queue YARN-1963
> — Can push as an alpha / beta feature per Sunil
>  - YARN-1197 Support changing resources of an allocated container:
> — Can push as an alpha/beta feature per Wangda
>  - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well
> most of it anyways.
> — Can push as an alpha feature.
>  - YARN Timeline Service v1.5 - YARN-4233
> — Should include per Li Lu
>  - YARN Timeline Service Next generation: YARN-2928
> — Per analysis from Sangjin, drop this from 2.8.
>
> One open feature status
>  - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?
>
> Updated the Roadmap wiki with the same.
>
> Thanks
> +Vinod
>
> > On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
> >
> > I reviewed the current state of the YARN-2928 changes regarding its
> impact
> > if the timeline service v.2 is disabled. It does appear that there are a
> > lot of things that still do get created and enabled unconditionally
> > regardless of configuration. While this is understandable when we were
> > working to implement the feature, this clearly needs to be cleaned up so
> > that when disabled the timeline service v.2 doesn't impact other things.
> >
> > I filed a JIRA for that work:
> > https://issues.apache.org/jira/browse/YARN-4356
> >
> > We need to complete it before we can merge.
> >
> > Somewhat related is the status of the configuration and what it means in
> > various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.). I
> > know there is an ongoing discussion regarding YARN-4183. We'll need to
> > reflect the outcome of that discussion.
> >
> > My overall impression of whether this can be done for 2.8 is that it
> looks
> > rather challenging given the suggested timeframe. We also need to
> complete
> > several major tasks before it is ready.
> >
> > Sangjin
> >
> >
> > On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:
> >
> >>
> >> On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
> >> vino...@hortonworks.com> wrote:
> >>
> >>>— YARN Timeline Service Next generation: YARN-2928: Lots of
> momentum,
> >>> but clearly a work in progress. Two options here
> >>>— If it is safe to ship it into 2.8 in a disabled manner, we can
> >>> get the early code into trunk and all the way into 2.8.
> >>>— If it is not safe, it organically rolls over into 2.9
> >>>
> >>
> >> I'll review the changes on YARN-2928 to see what impact it has (if any)
> if
> >> the timeline service v.2 is disabled.
> >>
> >> Another condition for it to make 2.8 is whether the branch will be in a
> >> shape in a couple of weeks such that it adds value for folks that want
> to
> >> test it. Hopefully it will become clearer soon.
> >>
> >> Sangjin
> >>
>
>


Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Subramaniam V K
Hi Vinod,

Thanks for driving this. Can you add YARN-2573, which covers the work done
to integrate the ReservationSystem with the RM failover mechanism, to your list?
It was reviewed and committed (to branch-2) about a month back.

Cheers,
Subru

On Wed, Nov 25, 2015 at 11:37 AM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:

> I think we’ve converged at a high level w.r.t 2.8. And as I just sent out
> an email, I updated the Roadmap wiki reflecting the same:
> https://wiki.apache.org/hadoop/Roadmap
>
> I plan to create a 2.8 branch EOD today.
>
> The goal for all of us should be to restrict improvements & fixes to only
> (a) the feature-set documented under 2.8 in the RoadMap wiki and (b) other
> minor features that are already in 2.8.
>
> Thanks
> +Vinod
>
>
> > On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli <
> vino...@hortonworks.com> wrote:
> >
> >  - Cut a branch about two weeks from now
> >  - Do an RC mid next month (leaving ~4weeks since branch-cut)
> >  - As with 2.7.x series, the first release will still be called as early
> / alpha release in the interest of
> > — gaining downstream adoption
> > — wider testing,
> > — yet reserving our right to fix any inadvertent incompatibilities
> introduced.
>
>


[jira] [Created] (HADOOP-12600) FileContext should be annotated as a Stable interface.

2015-11-25 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12600:
--

 Summary: FileContext should be annotated as a Stable interface.
 Key: HADOOP-12600
 URL: https://issues.apache.org/jira/browse/HADOOP-12600
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Chris Nauroth
Priority: Trivial


The {{FileContext}} class currently is annotated as {{Evolving}}.  However, at 
this point we really need to treat it as a {{Stable}} interface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: continuing releases on Apache Hadoop 2.6.x

2015-11-25 Thread Junping Du
Thanks Sangjin for the comments. This is also an option that I thought about before. 
However, a single-fix release sounds a little like overkill, especially after 
looking at the release scope of Apache commons-collections 3.2.2 
(https://commons.apache.org/proper/commons-collections/release_3_2_2.html), 
which is actually affected more directly.

Maybe we should quickly go through the patches landed in the 2.6.3 list and postpone 
any risky ones to 2.6.4?


Thanks,


Junping


From: sjl...@gmail.com  on behalf of Sangjin Lee 

Sent: Wednesday, November 25, 2015 6:55 PM
To: Junping Du
Cc: mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org; Hadoop Common; 
hdfs-...@hadoop.apache.org; Vinod Vavilapalli; Haohui Mai; Chris Trezzo
Subject: Re: continuing releases on Apache Hadoop 2.6.x

If the speed and clarity are important for the security release, then I would 
argue for a single-fix release (2.6.2 + HADOOP-12577 only). The verification of 
the RC and the associated release process would be so much faster.

We would need to do a little bit of special branch creation etc., but it would 
still be very straightforward. My 2 cents.

Sangjin

On Wed, Nov 25, 2015 at 10:23 AM, Junping Du <j...@hortonworks.com> wrote:
Given there is a critical security fix (HADOOP-12577) coming, I think we should 
move faster on releasing 2.6.3.
I would propose to freeze nominating new fixes to 2.6.3 unless they are 
critical enough to be blockers. We can nominate more fixes later in 2.6.4. Thoughts?

Thanks,

Junping

From: sjl...@gmail.com on behalf of Sangjin Lee <sj...@apache.org>
Sent: Friday, November 20, 2015 7:07 PM
To: mapreduce-...@hadoop.apache.org
Cc: Hadoop Common; 
yarn-...@hadoop.apache.org; 
hdfs-...@hadoop.apache.org
Subject: Re: continuing releases on Apache Hadoop 2.6.x

It would be great if we can get enough number of fixes by early December.
18 seems a bit on the low side, but if we lose this window it won't be until
next year.

As for the release management, thanks Chris, Junping, and Haohui for
volunteering! I'll reach out to you to discuss what we do with 2.6.3. I
assume we will have more maintenance releases in the 2.6.x line, so there
will be more opportunities. We do need one person with PMC privileges to be
able to go through all the release management steps without assistance,
which I learned last time.

Regards,
Sangjin

On Fri, Nov 20, 2015 at 10:03 AM, Sean Busbey <bus...@cloudera.com> wrote:

> Early December would be great, presuming the RC process doesn't take too
> long. By then it'll already have been over a month since the 2.6.2 release, and
> I'm sure the folks contributing the 18 patches we already have in would
> like to see their work out there.
>
> > On Fri, Nov 20, 2015 at 7:51 AM, Junping Du <j...@hortonworks.com> wrote:
>
> > +1. Early Dec sounds too early for a 2.6.3 release given we only have 18
> > patches since the recently released 2.6.2.
> > We should nominate more fixes and wait a while for the feedback on 2.6.2.
> >
> > Thanks,
> >
> > Junping
> > 
> > From: Vinod Vavilapalli <vino...@hortonworks.com>
> > Sent: Thursday, November 19, 2015 11:34 PM
> > To: yarn-...@hadoop.apache.org
> > Cc: common-dev@hadoop.apache.org; 
> > hdfs-...@hadoop.apache.org;
> > mapreduce-...@hadoop.apache.org
> > Subject: Re: continuing releases on Apache Hadoop 2.6.x
> >
> > I see 18 JIRAs across the sub-projects as of now in 2.6.3. Seems like we
> > will have a reasonable number of fixes if we start an RC early December.
> >
> > In the meanwhile, we should also review the 2.7.3 and 2.8.0 blocker /
> > critical list and see if it makes sense to backport any of those into
> > 2.6.3.
> >
> > +Vinod
> >
> >
> > On Nov 17, 2015, at 5:10 PM, Sangjin Lee <sj...@apache.org> wrote:
> >
> > I'd like to pick up this email discussion again. It is time that we started
> > thinking about the next release in the 2.6.x line. IMO we want to strike the
> > balance between maintaining a reasonable release cadence and getting a good
> > amount of high-quality fixes. The timeframe is a little tricky as the
> > holidays are approaching. If we have enough fixes accumulated in
> > branch-2.6, some time early December might be a good target for cutting the
> > first release candidate. Once we miss that window, I think we are looking
> > at next January. I'd like to hear your thoughts on this.
> >
> >
>
>
> --
> Sean
>



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
I think we’ve converged at a high level w.r.t 2.8. And as I just sent out an 
email, I updated the Roadmap wiki reflecting the same: 
https://wiki.apache.org/hadoop/Roadmap

I plan to create a 2.8 branch EOD today.

The goal for all of us should be to restrict improvements & fixes to only (a) 
the feature-set documented under 2.8 in the RoadMap wiki and (b) other minor 
features that are already in 2.8.

Thanks
+Vinod


> On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli 
>  wrote:
> 
>  - Cut a branch about two weeks from now
>  - Do an RC mid next month (leaving ~4weeks since branch-cut)
>  - As with 2.7.x series, the first release will still be called as early / 
> alpha release in the interest of
> — gaining downstream adoption
> — wider testing,
> — yet reserving our right to fix any inadvertent incompatibilities 
> introduced.



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
Steve,


> There's a lot of stuff in 2.8; I note that I'd like to see the s3a perf 
> improvements & openstack fixes in there: for which I need reviewers. I don't 
> have the spare time to do this myself.

If you think they are useful, it helps to file tickets (or point out existing 
tickets), start discussion etc w.r.t these areas in order to attract 
contributors.


> -likewise, DFSConfigKeys stayed in hdfs-server. I know it's tagged as 
> @Private, but it's long been where all the string constants for HDFS options 
> live. Forcing users to retype them in their own source is not only dangerous 
> (it only encourages typos), it actually stops you from using your IDE to find out 
> where those constants get used. 

> We do now have a set of keys in the client, HdfsClientConfigKeys, but these 
> are still declared as @Private. Which is a mistake for the reasons above, and 
> because it encourages hadoop developers to assume that they are free to make 
> whatever changes they want to this code, and if it breaks something, say "it 
> was tagged as private”
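The hazard Steve describes is easy to demonstrate with a toy sketch (stand-in classes only, not Hadoop's actual DFSConfigKeys/HdfsClientConfigKeys): a retyped string literal with a typo fails silently at runtime, while a shared constant is compiler-checked and find-usages friendly.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigKeysDemo {
    // Stand-in for a public config-keys class; the real constants live in
    // DFSConfigKeys / HdfsClientConfigKeys, which are tagged @Private.
    static final class Keys {
        static final String DFS_REPLICATION = "dfs.replication";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(Keys.DFS_REPLICATION, "3");

        // Retyped literal with a typo: compiles fine, silently returns null.
        String viaTypo = conf.get("dfs.replicaton");
        // Shared constant: a typo here is a compile error, and the IDE can
        // locate every usage of the key.
        String viaConstant = conf.get(Keys.DFS_REPLICATION);

        System.out.println("typo literal -> " + viaTypo);
        System.out.println("constant     -> " + viaConstant);
    }
}
```

The typo'd lookup returns null without any warning, which is exactly why forcing users to retype @Private key strings is dangerous.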


If these are worth going after, please file tickets under HDFS-6200 if they 
don’t exist already.


> 
> 1. We have to recognise that a lot of things marked @Private are in fact 
> essential for clients to use. Not just constants, but actual classes.
> 
> 2. We have to look hard at @LimitedPrivate and question the legitimacy of 
> tagging things as so, especially anything 
> "@InterfaceAudience.LimitedPrivate({"MapReduce"}) —because any YARN app you 
> write ends up needing those classes. For evidence, look at DistributedShell's 
> imports, and pick a few at random: NMClientAsyncImpl, ConverterUtils being 
> easy targets.
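Mechanically, these audience tags are ordinary runtime annotations that carry no enforcement; a self-contained sketch of the idea (a toy annotation, not Hadoop's actual org.apache.hadoop.classification.InterfaceAudience):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;

public class AudienceDemo {
    // Toy stand-in for InterfaceAudience.LimitedPrivate.
    @Retention(RetentionPolicy.RUNTIME)
    @interface LimitedPrivate {
        String[] value();
    }

    // Nominally reserved for MapReduce, yet any YARN app may end up using it.
    @LimitedPrivate({"MapReduce"})
    static class NmClientHelper {
    }

    public static void main(String[] args) {
        LimitedPrivate tag =
            NmClientHelper.class.getAnnotation(LimitedPrivate.class);
        // The tag is purely advisory: nothing stops out-of-audience callers
        // at compile time or at runtime, which is exactly Steve's point.
        System.out.println("audience = " + Arrays.toString(tag.value()));
    }
}
```

Because the tag is advisory, the only real question is whether the declared audience matches how the class is actually consumed.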

There are existing tickets for some of these under YARN-1953 that need some 
developer love.


> Returning to the pending 2.8.0 release, there's a way to find out what's 
> going to break: build and test things against the snapshots, without waiting 
> for the beta releases and expecting the downstream projects to do it for you. 
> If they don't build, that's a success: you've found a compatibility problem 
> to fix. If they don't test, well that's trouble —you are in finger pointing 
> time.
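One concrete way to do that is to point a downstream project's build at the nightly snapshots. The fragment below is illustrative only; the repository URL and artifact coordinates should be checked against what Apache actually publishes.

```xml
<!-- Illustrative pom.xml fragment: compile and test a downstream project
     against a 2.8.0 snapshot instead of waiting for a beta release. -->
<repositories>
  <repository>
    <id>apache-snapshots</id>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.8.0-SNAPSHOT</version>
  </dependency>
</dependencies>
```

A compile failure against the snapshot is, as the quoted text says, a success: a compatibility problem found before release.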


I’ve tried doing this in the past without much success. Some downstream 
components did pick up RCs but a majority of them needed a release - hence my 
current approach.

Thanks
+Vinod



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
There are 40 odd incompatible changes in 3.x: 
https://issues.apache.org/jira/issues/?jql=project%20in%20%28HADOOP%2C%20YARN%2C%20HDFS%2C%20MAPREDUCE%29%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%203.0.0%20AND%20fixVersion%20not%20in%20%282.6.2%2C%202.6.3%2C%202.7.1%2C%202.7.2%2C%202.7.3%2C%202.8.0%29%20and%20%22Hadoop%20Flags%22%20in%20%28%22Incompatible%20change%22%29%20ORDER%20BY%20key%20ASC%2C%20due%20ASC%2C%20priority%20DESC%2C%20created%20ASC

Need to dig deeper into their impact. Clearly all my local shell scripts 
completely stopped working; it will be good to have some bridging there to help 
users migrate.

Like I said before, I will spend more time on trunk only changes in order to 
kick-start a 3.x discussion.

What are the incompatible changes in the 2.x line that you are talking about?

+Vinod

> On Nov 11, 2015, at 2:15 PM, Allen Wittenauer  wrote:
> 
> 
>> On Nov 11, 2015, at 1:11 PM, Vinod Vavilapalli  
>> wrote:
>> 
>> I’ll let others comment on specific features.
>> 
>> Regarding the 3.x vs 2.x point, as I noted before on other threads, given 
>> all the incompatibilities in trunk it will be ways off before users can run 
>> their production workloads on a 3.x release.
> 
>   [citation needed]
> 
>   Seriously. Back that statement up especially in light of there having 
> been more incompatibilities in all the 2.x releases combined than in 3.x. 



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
Haohui,

It’ll help to document this whole line of discussion about hdfs jar change and 
its impact/non-impact for existing users so there is less confusion.

Thanks
+Vinod


> On Nov 11, 2015, at 3:26 PM, Haohui Mai  wrote:
> 
> bq. If and only if they take the Hadoop class path at face value.
> Many applications don’t because of conflicting dependencies and
> instead import specific jars.
> 
> We do make the assumptions that applications need to pick up all the
> dependency (either automatically or manually). The situation is
> similar with adding a new dependency into hdfs in a minor release.
> 
> Maven / gradle obviously help, but I'd love to hear more about it how
> you get it to work. In trunk hadoop-env.sh adds 118 jars into the
> class path. Are you manually importing 118 jars for every single
> application?
> 
> 
> 
> On Wed, Nov 11, 2015 at 3:09 PM, Haohui Mai  wrote:
>> bq. currently pulling in hadoop-client gives downstream apps
>> hadoop-hdfs-client, but not hadoop-hdfs server side, right?
>> 
>> Right now hadoop-client pulls in hadoop-hdfs directly to ensure a
>> smooth transition. Maybe we can revisit the decision in the 2.9 / 3.x?
>> 
>> On Wed, Nov 11, 2015 at 3:00 PM, Steve Loughran  
>> wrote:
>>> 
 On 11 Nov 2015, at 22:15, Haohui Mai  wrote:
 
 bq.  it basically makes the assumption that everyone recompiles for
 every minor release.
 
 I don't think that the statement holds. HDFS-6200 keeps classes in the
 same package. hdfs-client becomes a transitive dependency of the
 original hdfs jar.
 
 Applications continue to work without recompilation as the classes
 will be in the same name and will be available in the classpath. They
 have the option of switching to depending only on hdfs-client to
 minimize the dependency when they are comfortable.
 
 I'm not claiming that there are no bugs in HDFS-6200, but just like
 other features we discover bugs and fix them continuously.
 
 ~Haohui
 
>>> 
>>> currently pulling in hadoop-client gives downstream apps 
>>> hadoop-hdfs-client, but not hadoop-hdfs server side, right?
> 



Re: [DISCUSS] Looking to a 2.8.0 release

2015-11-25 Thread Vinod Kumar Vavilapalli
This is the current state from the feedback I gathered.
 - Support priorities across applications within the same queue YARN-1963
— Can push as an alpha / beta feature per Sunil
 - YARN-1197 Support changing resources of an allocated container:
— Can push as an alpha/beta feature per Wangda
 - YARN-3611 Support Docker Containers In LinuxContainerExecutor: Well most of 
it anyways.
— Can push as an alpha feature.
 - YARN Timeline Service v1.5 - YARN-4233
— Should include per Li Lu
 - YARN Timeline Service Next generation: YARN-2928
— Per analysis from Sangjin, drop this from 2.8.

One open feature status
 - HDFS-8155 Support OAuth2 in WebHDFS: Alpha / Early feature?

Updated the Roadmap wiki with the same.

Thanks
+Vinod

> On Nov 13, 2015, at 12:12 PM, Sangjin Lee  wrote:
> 
> I reviewed the current state of the YARN-2928 changes regarding its impact
> if the timeline service v.2 is disabled. It does appear that there are a
> lot of things that still do get created and enabled unconditionally
> regardless of configuration. While this is understandable when we were
> working to implement the feature, this clearly needs to be cleaned up so
> that when disabled the timeline service v.2 doesn't impact other things.
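The cleanup amounts to making every timeline v.2 object conditional on its enable flag; a minimal sketch of the pattern (the flag name and classes here are made up for illustration, not YARN's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ConditionalServiceDemo {
    // Hypothetical flag name; YARN's real configuration key differs.
    static final String TIMELINE_V2_ENABLED = "timeline.v2.enabled";

    static class TimelineCollector {
    }

    // Create the collector only when the feature is switched on, so a
    // disabled timeline service cannot affect anything else.
    static List<Object> createServices(Map<String, String> conf) {
        List<Object> services = new ArrayList<>();
        boolean enabled =
            Boolean.parseBoolean(conf.getOrDefault(TIMELINE_V2_ENABLED, "false"));
        if (enabled) {
            services.add(new TimelineCollector());
        }
        return services;
    }

    public static void main(String[] args) {
        // With no flag set, nothing is created: prints 0.
        System.out.println(createServices(new java.util.HashMap<>()).size());
    }
}
```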
> 
> I filed a JIRA for that work:
> https://issues.apache.org/jira/browse/YARN-4356
> 
> We need to complete it before we can merge.
> 
> Somewhat related is the status of the configuration and what it means in
> various contexts (client/app-side vs. server-side, v.1 vs. v.2, etc.). I
> know there is an ongoing discussion regarding YARN-4183. We'll need to
> reflect the outcome of that discussion.
> 
> My overall impression of whether this can be done for 2.8 is that it looks
> rather challenging given the suggested timeframe. We also need to complete
> several major tasks before it is ready.
> 
> Sangjin
> 
> 
> On Wed, Nov 11, 2015 at 5:49 PM, Sangjin Lee  wrote:
> 
>> 
>> On Wed, Nov 11, 2015 at 12:13 PM, Vinod Vavilapalli <
>> vino...@hortonworks.com> wrote:
>> 
>>>— YARN Timeline Service Next generation: YARN-2928: Lots of momentum,
>>> but clearly a work in progress. Two options here
>>>— If it is safe to ship it into 2.8 in a disabled manner, we can
>>> get the early code into trunk and all the way into 2.8.
>>>— If it is not safe, it organically rolls over into 2.9
>>> 
>> 
>> I'll review the changes on YARN-2928 to see what impact it has (if any) if
>> the timeline service v.2 is disabled.
>> 
>> Another condition for it to make 2.8 is whether the branch will be in a
>> shape in a couple of weeks such that it adds value for folks that want to
>> test it. Hopefully it will become clearer soon.
>> 
>> Sangjin
>> 



Re: continuing releases on Apache Hadoop 2.6.x

2015-11-25 Thread Sangjin Lee
If the speed and clarity are important for the security release, then I
would argue for a single-fix release (2.6.2 + HADOOP-12577 only). The
verification of the RC and the associated release process would be so much
faster.

We would need to do a little bit of special branch creation etc., but it
would still be very straightforward. My 2 cents.

Sangjin

On Wed, Nov 25, 2015 at 10:23 AM, Junping Du  wrote:

> Given there is a critical security fix (HADOOP-12577) coming, I think we
> should move faster on releasing 2.6.3.
> I would propose to freeze nominating new fixes to 2.6.3 unless they are
> critical enough to be blockers. We can nominate more fixes later in 2.6.4.
> Thoughts?
>
> Thanks,
>
> Junping
> 
> From: sjl...@gmail.com  on behalf of Sangjin Lee <
> sj...@apache.org>
> Sent: Friday, November 20, 2015 7:07 PM
> To: mapreduce-...@hadoop.apache.org
> Cc: Hadoop Common; yarn-...@hadoop.apache.org; hdfs-...@hadoop.apache.org
> Subject: Re: continuing releases on Apache Hadoop 2.6.x
>
> It would be great if we can get enough number of fixes by early December.
> 18 seems a bit on the low side, but if we lose this window it won't be until
> next year.
>
> As for the release management, thanks Chris, Junping, and Haohui for
> volunteering! I'll reach out to you to discuss what we do with 2.6.3. I
> assume we will have more maintenance releases in the 2.6.x line, so there
> will be more opportunities. We do need one person with PMC privileges to be
> able to go through all the release management steps without assistance,
> which I learned last time.
>
> Regards,
> Sangjin
>
> On Fri, Nov 20, 2015 at 10:03 AM, Sean Busbey  wrote:
>
> > Early December would be great, presuming the RC process doesn't take too
> > long. By then it'll already have been over a month since the 2.6.2 release, and
> > I'm sure the folks contributing the 18 patches we already have in would
> > like to see their work out there.
> >
> > On Fri, Nov 20, 2015 at 7:51 AM, Junping Du  wrote:
> >
> > > +1. Early Dec sounds too early for a 2.6.3 release given we only have 18
> > > patches since the recently released 2.6.2.
> > > We should nominate more fixes and wait a while for the feedback on
> 2.6.2.
> > >
> > > Thanks,
> > >
> > > Junping
> > > 
> > > From: Vinod Vavilapalli 
> > > Sent: Thursday, November 19, 2015 11:34 PM
> > > To: yarn-...@hadoop.apache.org
> > > Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > > mapreduce-...@hadoop.apache.org
> > > Subject: Re: continuing releases on Apache Hadoop 2.6.x
> > >
> > > I see 18 JIRAs across the sub-projects as of now in 2.6.3. Seems like we
> > > will have a reasonable number of fixes if we start an RC early December.
> > >
> > > In the meanwhile, we should also review the 2.7.3 and 2.8.0 blocker /
> > > critical list and see if it makes sense to backport any of those into
> > > 2.6.3.
> > >
> > > +Vinod
> > >
> > >
> > > On Nov 17, 2015, at 5:10 PM, Sangjin Lee <sj...@apache.org> wrote:
> > >
> > > I'd like to pick up this email discussion again. It is time that we started
> > > thinking about the next release in the 2.6.x line. IMO we want to strike the
> > > balance between maintaining a reasonable release cadence and getting a good
> > > amount of high-quality fixes. The timeframe is a little tricky as the
> > > holidays are approaching. If we have enough fixes accumulated in
> > > branch-2.6, some time early December might be a good target for cutting the
> > > first release candidate. Once we miss that window, I think we are looking
> > > at next January. I'd like to hear your thoughts on this.
> > >
> > >
> >
> >
> > --
> > Sean
> >
>


Build failed in Jenkins: Hadoop-common-trunk-Java8 #747

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[junping_du] Tests in mapreduce-client-app are writing outside of target. 
Contributed

--
[...truncated 5781 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.453 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.274 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.5 sec - in 
org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsConfig
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.348 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.692 sec - in 
org.apache.hadoop.cli.TestCLI
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.102 sec - in 
org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestCompositeGroupMapping
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.627 sec - in 
org.apache.hadoop.security.TestCompositeGroupMapping
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.092 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.443 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestSecurityUtil
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.751 sec - in 
org.apache.hadoop.security.TestSecurityUtil
Running org.apache.hadoop.security.TestHttpCrossOriginFilterInitializer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec - in 
org.apache.hadoop.security.TestHttpCrossOriginFilterInitializer
Running org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.582 sec - in 
org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.675 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.777 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.891 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 

[jira] [Created] (HADOOP-12599) Add RM Delegation Token DtFetcher Implementation for DtUtil

2015-11-25 Thread Matthew Paduano (JIRA)
Matthew Paduano created HADOOP-12599:


 Summary: Add RM Delegation Token DtFetcher Implementation for 
DtUtil
 Key: HADOOP-12599
 URL: https://issues.apache.org/jira/browse/HADOOP-12599
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Matthew Paduano
Assignee: Matthew Paduano


Add a class to the YARN project that implements the DtFetcher interface to return an 
RM delegation token object.

I attached a proposed class implementation that does this, but it cannot be 
added as a patch until the interface is merged in HADOOP-12563.
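Since the DtFetcher interface was not yet merged at this point (HADOOP-12563), its real shape is unknown here; the sketch below invents a minimal interface and RM fetcher purely to illustrate the pattern the issue describes.

```java
public class DtFetcherSketch {
    // Hypothetical interface shape; the real contract is defined in
    // HADOOP-12563 and may differ.
    interface DtFetcher {
        String getServiceName();
        String fetchToken(String renewer);
    }

    // Hypothetical RM-side implementation. A real one would talk to the
    // ResourceManager and return a Token<RMDelegationTokenIdentifier>,
    // not a String.
    static class RMDelegationTokenFetcher implements DtFetcher {
        @Override
        public String getServiceName() {
            return "yarn";
        }

        @Override
        public String fetchToken(String renewer) {
            return "RM_DELEGATION_TOKEN renewer=" + renewer;
        }
    }

    public static void main(String[] args) {
        DtFetcher fetcher = new RMDelegationTokenFetcher();
        System.out.println(fetcher.getServiceName() + " -> "
            + fetcher.fetchToken("alice"));
    }
}
```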



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: continuing releases on Apache Hadoop 2.6.x

2015-11-25 Thread Junping Du
Given there is a critical security fix (HADOOP-12577) coming, I think we should 
move faster on releasing 2.6.3. 
I would propose to freeze nominating new fixes to 2.6.3 unless they are 
critical enough to be blockers. We can nominate more fixes later in 2.6.4. Thoughts?

Thanks,

Junping

From: sjl...@gmail.com  on behalf of Sangjin Lee 

Sent: Friday, November 20, 2015 7:07 PM
To: mapreduce-...@hadoop.apache.org
Cc: Hadoop Common; yarn-...@hadoop.apache.org; hdfs-...@hadoop.apache.org
Subject: Re: continuing releases on Apache Hadoop 2.6.x

It would be great if we can get enough number of fixes by early December.
18 seems a bit on the low side, but if we lose this window it won't be until
next year.

As for the release management, thanks Chris, Junping, and Haohui for
volunteering! I'll reach out to you to discuss what we do with 2.6.3. I
assume we will have more maintenance releases in the 2.6.x line, so there
will be more opportunities. We do need one person with PMC privileges to be
able to go through all the release management steps without assistance,
which I learned last time.

Regards,
Sangjin

On Fri, Nov 20, 2015 at 10:03 AM, Sean Busbey  wrote:

> Early December would be great, presuming the RC process doesn't take too
> long. By then it'll already have been over a month since the 2.6.2 release, and
> I'm sure the folks contributing the 18 patches we already have in would
> like to see their work out there.
>
> On Fri, Nov 20, 2015 at 7:51 AM, Junping Du  wrote:
>
> > +1. Early Dec sounds too early for a 2.6.3 release given we only have 18
> > patches since the recently released 2.6.2.
> > We should nominate more fixes and wait a while for the feedback on 2.6.2.
> >
> > Thanks,
> >
> > Junping
> > 
> > From: Vinod Vavilapalli 
> > Sent: Thursday, November 19, 2015 11:34 PM
> > To: yarn-...@hadoop.apache.org
> > Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > mapreduce-...@hadoop.apache.org
> > Subject: Re: continuing releases on Apache Hadoop 2.6.x
> >
> > I see 18 JIRAs across the sub-projects as of now in 2.6.3. Seems like we
> > will have a reasonable number of fixes if we start an RC early December.
> >
> > In the meanwhile, we should also review the 2.7.3 and 2.8.0 blocker /
> > critical list and see if it makes sense to backport any of those into
> > 2.6.3.
> >
> > +Vinod
> >
> >
> > On Nov 17, 2015, at 5:10 PM, Sangjin Lee <sj...@apache.org> wrote:
> >
> > I'd like to pick up this email discussion again. It is time that we
> > started thinking about the next release in the 2.6.x line. IMO we want
> > to strike a balance between maintaining a reasonable release cadence
> > and getting a good amount of high-quality fixes. The timeframe is a
> > little tricky as the holidays are approaching. If we have enough fixes
> > accumulated in branch-2.6, some time in early December might be a good
> > target for cutting the first release candidate. Once we miss that
> > window, I think we are looking at next January. I'd like to hear your
> > thoughts on this.
> >
> >
>
>
> --
> Sean
>


Build failed in Jenkins: Hadoop-Common-trunk #2045

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HADOOP-12598. Add XML namespace declarations for some hadoop/tools

[yzhang] HDFS-6694. Addendum. Update CHANGES.txt for cherry-picking to 2.8.

[junping_du] Tests in mapreduce-client-app are writing outside of target. 
Contributed

--
[...truncated 5386 lines...]
Running org.apache.hadoop.util.TestLineReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - in 
org.apache.hadoop.util.TestLineReader
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.205 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Running org.apache.hadoop.util.TestClasspath
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.361 sec - in 
org.apache.hadoop.util.TestClasspath
Running org.apache.hadoop.util.TestApplicationClassLoader
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.274 sec - in 
org.apache.hadoop.util.TestApplicationClassLoader
Running org.apache.hadoop.util.TestShell
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.369 sec - in 
org.apache.hadoop.util.TestShell
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in 
org.apache.hadoop.util.TestShutdownHookManager
Running org.apache.hadoop.util.TestConfTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec - in 
org.apache.hadoop.util.TestConfTest
Running org.apache.hadoop.util.TestHttpExceptionUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.678 sec - in 
org.apache.hadoop.util.TestHttpExceptionUtils
Running org.apache.hadoop.util.TestJarFinder
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.785 sec - in 
org.apache.hadoop.util.TestJarFinder
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.45 sec - in 
org.apache.hadoop.util.hash.TestHash
Running org.apache.hadoop.util.TestLightWeightCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.849 sec - in 
org.apache.hadoop.util.TestLightWeightCache
Running org.apache.hadoop.util.TestNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.177 sec - in 
org.apache.hadoop.util.TestNativeCodeLoader
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec - in 
org.apache.hadoop.util.TestReflectionUtils
Running org.apache.hadoop.crypto.TestCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.437 sec - 
in org.apache.hadoop.crypto.TestCryptoStreams
Running org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Tests run: 14, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 12.36 sec - in 
org.apache.hadoop.crypto.TestCryptoStreamsForLocalFS
Running org.apache.hadoop.crypto.TestOpensslCipher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in 
org.apache.hadoop.crypto.TestOpensslCipher
Running org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.253 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
Running org.apache.hadoop.crypto.TestCryptoStreamsNormal
Tests run: 14, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 7.171 sec - in 
org.apache.hadoop.crypto.TestCryptoStreamsNormal
Running org.apache.hadoop.crypto.TestCryptoCodec
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.63 sec - in 
org.apache.hadoop.crypto.TestCryptoCodec
Running org.apache.hadoop.crypto.random.TestOsSecureRandom
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.608 sec - in 
org.apache.hadoop.crypto.random.TestOsSecureRandom
Running org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in 
org.apache.hadoop.crypto.random.TestOpensslSecureRandom
Running org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.789 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Running org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.746 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderDelegationTokenExtension
Running org.apache.hadoop.crypto.key.TestCachingKeyProvider
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.1 sec - in 
org.apache.hadoop.crypto.key.TestCachingKeyProvider
Running org.apache.hadoop.crypto.key.TestKeyProviderFactory
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.488 sec - in 
org.apache.hadoop.crypto.key.TestKeyProviderFactory
Running org.apache.hadoop.cry

[ANNOUNCE] CFP open for ApacheCon North America 2016

2015-11-25 Thread Rich Bowen
Community growth starts by talking with those interested in your
project. ApacheCon North America is coming; are you?

We are delighted to announce that the Call For Presentations (CFP) is
now open for ApacheCon North America. You can submit your proposed
sessions at
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
for big data talks and
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
for all other topics.

ApacheCon North America will be held in Vancouver, Canada, May 9-13th
2016. ApacheCon has been running every year since 2000, and is the place
to build your project communities.

While we will consider individual talks, we prefer to see related
sessions that are likely to draw users and community members. When
submitting your talk, work with your project community and with related
communities to come up with a full program that will walk attendees
through the basics and on into mastery of your project in example use
cases. Content that introduces what's new in your latest release is also
of particular interest, especially when it builds upon existing well-known
application models. The goal should be to showcase your project in
ways that will attract participants and encourage engagement in your
community. Please remember to involve your whole project community (user
and dev lists) when building content. This is your chance to create a
project-specific event within the broader ApacheCon conference.

Content at ApacheCon North America will be cross-promoted as
mini-conferences, such as ApacheCon Big Data and ApacheCon Mobile, so
be sure to indicate which larger category your proposed sessions fit into.

Finally, please plan to attend ApacheCon, even if you're not proposing a
talk. The biggest value of the event is community building, and we count
on you to make it a place where your project community is likely to
congregate, not just for the technical content in sessions, but for
hackathons, project summits, and good old-fashioned face-to-face networking.

-- 
rbo...@apache.org
http://apache.org/


Build failed in Jenkins: Hadoop-common-trunk-Java8 #746

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[yzhang] HDFS-9438. TestPipelinesFailover assumes Linux ifconfig. (John Zhuge 
via

[ozawa] MAPREDUCE-6555. TestMRAppMaster fails on trunk. (Junping Du via ozawa)

[ozawa] YARN-4380.

[aajisaka] HADOOP-12598. Add XML namespace declarations for some hadoop/tools

[yzhang] HDFS-6694. Addendum. Update CHANGES.txt for cherry-picking to 2.8.

--
[...truncated 5783 lines...]
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.979 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.706 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Running org.apache.hadoop.io.TestWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec - in 
org.apache.hadoop.io.TestWritable
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.277 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Running org.apache.hadoop.io.TestSequenceFileAppend
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.74 sec - in 
org.apache.hadoop.io.TestSequenceFileAppend
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.437 sec - in 
org.apache.hadoop.io.TestEnumSetWritable
Running org.apache.hadoop.io.TestMapWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec - in 
org.apache.hadoop.io.TestMapWritable
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.io.TestBytesWritable
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.539 sec - in 
org.apache.hadoop.io.TestSequenceFile
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec - in 
org.apache.hadoop.io.TestObjectWritableProtos
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.229 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Running org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.329

Build failed in Jenkins: Hadoop-Common-trunk #2044

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[yzhang] HDFS-9438. TestPipelinesFailover assumes Linux ifconfig. (John Zhuge 
via

[ozawa] MAPREDUCE-6555. TestMRAppMaster fails on trunk. (Junping Du via ozawa)

[ozawa] YARN-4380.

--
[...truncated 5450 lines...]
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.528 sec - in 
org.apache.hadoop.conf.TestConfigurationSubclass
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.302 sec - in 
org.apache.hadoop.conf.TestGetInstances
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.94 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.486 sec - in 
org.apache.hadoop.conf.TestDeprecatedKeys
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 62, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.445 sec - 
in org.apache.hadoop.conf.TestConfiguration
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.919 sec - in 
org.apache.hadoop.conf.TestReconfiguration
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.829 sec - in 
org.apache.hadoop.conf.TestConfServlet
Running org.apache.hadoop.test.TestJUnitSetup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.388 sec - in 
org.apache.hadoop.test.TestJUnitSetup
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.305 sec - in 
org.apache.hadoop.test.TestMultithreadedTestUtil
Running org.apache.hadoop.test.TestGenericTestUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.268 sec - in 
org.apache.hadoop.test.TestGenericTestUtils
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.337 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec - in 
org.apache.hadoop.metrics.TestMetricsServlet
Running org.apache.hadoop.metrics.spi.TestOutputRecord
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.094 sec - in 
org.apache.hadoop.metrics.spi.TestOutputRecord
Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec - in 
org.apache.hadoop.metrics.ganglia.TestGangliaContext
Running org.apache.hadoop.net.TestNetUtils
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.949 sec - in 
org.apache.hadoop.net.TestNetUtils
Running org.apache.hadoop.net.TestDNS
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.486 sec - in 
org.apache.hadoop.net.TestDNS
Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.545 sec - in 
org.apache.hadoop.net.TestSocketIOWithTimeout
Running org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec - in 
org.apache.hadoop.net.TestNetworkTopologyWithNodeGroup
Running org.apache.hadoop.net.TestClusterTopology
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec - in 
org.apache.hadoop.net.TestClusterTopology
Running org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec - in 
org.apache.hadoop.net.TestScriptBasedMappingWithDependency
Running org.apache.hadoop.net.TestTableMapping
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.849 sec - in 
org.apache.hadoop.net.TestTableMapping
Running org.apache.hadoop.net.TestScriptBasedMapping
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.754 sec - in 
org.apache.hadoop.net.TestScriptBasedMapping
Running org.apache.hadoop.net.unix.TestDomainSocketWatcher
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.738 sec - in 
org.apache.hadoop.net.unix.TestDomainSocketWatcher
Running org.apache.hadoop.net.unix.TestDomainSocket
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.035 sec - in 
org.apache.hadoop.net.unix.TestDomainSocket
Running org.apache.hadoop.net.TestSwitchMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.549 sec - in 
org.apache.hadoop.net.TestSwitchMapping
Running org.apache.hadoop.net.TestStaticMapping
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.947 sec - in 
org.apache.hadoop.net.TestStaticMapping
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Fail

Build failed in Jenkins: Hadoop-Common-trunk #2043

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job

[aajisaka] Fix indents in the 2.8.0 section of MapReduce CHANGES.txt.

--
[...truncated 5402 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.514 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.529 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsConfig
Running org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.538 sec - in 
org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.548 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.645 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.sink.TestFileSink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.696 sec - in 
org.apache.hadoop.metrics2.sink.TestFileSink
Running org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.365 sec - in 
org.apache.hadoop.metrics2.sink.ganglia.TestGangliaSink
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.785 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.log.TestLogLevel
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.246 sec - in 
org.apache.hadoop.log.TestLogLevel
Running org.apache.hadoop.log.TestLog4Json
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.647 sec - in 
org.apache.hadoop.log.TestLog4Json
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.496 sec - in 
org.apache.hadoop.jmx.TestJMXJsonServlet
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.105 sec - in 
org.apache.hadoop.ipc.TestIPCServerResponder
Running org.apache.hadoop.ipc.TestRPCWaitForProxy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.593 sec - in 
org.apache.hadoop.ipc.TestRPCWaitForProxy
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.502 sec - in 
org.apache.hadoop.ipc.TestSocketFactory
Running org.apache.hadoop.ipc.TestCallQueueManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.42 sec - in 
org.apache.hadoop.ipc.TestCallQueueManager
Running org.apache.hadoop.ipc.TestIdentityProviders
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.925 sec - in 
org.apache.hadoop.ipc.TestIdentityProviders
Running org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.898 sec - in 
org.apache.hadoop.ipc.TestWeightedRoundRobinMultiplexer
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.629 sec - in 
org.apache.hadoop.ipc.TestRPCCompatibility
Running org.apache.hadoop.ipc.TestProtoBufRpc
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.474 sec - in 
org.apache.hadoop.ipc.TestProtoBufRpc
Running org.apache.hadoop.ipc.TestMultipleProtocolServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.63 sec - in 
org.apache.hadoop.ipc.TestMultipleProtocolServer
Running org.apache.hadoop.ipc.TestRPCCallBenchmark
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.356 sec - in 
org.apache.hadoop.ipc.TestRPCCallBenchmark
Running org.apache.hadoop.ipc.TestRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.585 sec - in 
org.apache.hadoop.ipc.TestRetryCacheMetrics
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.365 sec - in 
org.apache.hadoop.ipc.TestMiniRPCBenchmark
Running org.apache.hadoop.ipc.TestIPC
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.803 sec - 
in org.apache.hadoop.ipc.TestIPC
Running org.apache.hadoop.ipc.TestDecayRpcScheduler
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.518 sec - in 
org.apache.hadoop.ipc.TestDecayRpcScheduler
Running org.apache.hadoop.ipc.TestFairCallQueue
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.0

Build failed in Jenkins: Hadoop-common-trunk-Java8 #745

2015-11-25 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] MAPREDUCE-6553. Replace '\u2b05' with '<-' in rendering job

[aajisaka] Fix indents in the 2.8.0 section of MapReduce CHANGES.txt.

--
[...truncated 3901 lines...]
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.618 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.144 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Building index for all the packages and classes...
Generating 

Generating 

Generating 

Building index for all classes...
Generating 

Generating