Re: FYI: Major, long-standing Issue with trunk's test-patch

2015-10-29 Thread Chris Nauroth
+1 to treating it as a bug in Hadoop code if a test writes a file outside
of the target directory.  This has the side effect that "mvn clean"
doesn't really clean up fully.

HADOOP-12519 is a recent patch I committed to fix this kind of problem in
hadoop-azure.  Let's follow up with similar fixes in hadoop-hdfs tests and
anywhere else needed.
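
As a rough illustration of the fix pattern (hypothetical test class; the
test.build.data property is assumed here as the usual Hadoop convention for
pointing tests at a Maven-managed directory), a test can anchor its scratch
files under target/ like this:

    import java.io.File;
    import org.junit.Test;

    public class TestWritesUnderTarget {
      // Surefire can set test.build.data to a directory under target/;
      // the fallback path below is an assumption for running outside Maven.
      private static final File TEST_DIR =
          new File(System.getProperty("test.build.data", "target/test/data"));

      @Test
      public void testScratchFilesStayUnderTarget() throws Exception {
        File scratch = new File(TEST_DIR, "scratch.txt");
        scratch.getParentFile().mkdirs();
        // ... write test output here instead of into the source tree ...
      }
    }

Anything resolved this way is removed by "mvn clean" and never trips the
license check.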

--Chris Nauroth




On 10/29/15, 4:44 AM, "Sean Busbey"  wrote:

>In Maven projects, all build generated files (including test working
>space)
>are supposed to be under directories named 'target'.
>
>I believe a few folks already have an issue open to correct the HDFS
>tests:
>
>https://issues.apache.org/jira/browse/HDFS-9263
>
>Please make sure these incorrect paths are covered there.
>
>-- 
>Sean
>On Oct 29, 2015 3:14 AM, "Vinayakumar B"  wrote:
>
>> Thanks,
>>
>> In HDFS precommit builds we can see the ASF licence check.
>> HDFS uses /build as a test directory for some tests, I think
>> this is an exception case.
>>
>>
>> https://builds.apache.org/job/PreCommit-HDFS-Build/13268/artifact/patchprocess/patch-asflicense-problems.txt
>>
>> Regards,
>> Vinay
>>
>> On Thu, Oct 29, 2015 at 9:48 AM, Sean Busbey 
>>wrote:
>>
>> > On Wed, Oct 28, 2015 at 9:23 PM, Vinayakumar B
>>
>> > wrote:
>> > >   So I'm going to turn on Yetus for *ALL* Hadoop precommit jobs
>> > >> later tonight. (Given how backed up Jenkins is at the moment, there is
>> > >> plenty of time. haha) Anyway, if you see "Powered by Yetus" in the Hadoop
>> > >> QA posts, you've got Yetus.  If you don't see it, it ran on trunk's
>> > >> test-patch.
>> > >
>> > > +1,
>> > >
>> > > Report looks very clean, and multiple JDK runs helps as well. Also
>> > parallel
>> > > run is enabled for HDFS precommit as well.
>> > >
>> > > One issue, Looks like ASF licence check is done on files in build
>> > directory
>> > > also, which generates too many errors. Just need to skip this
>> directory.
>> > >
>> > > Regards,
>> > > Vinay
>> > >
>> > >
>> >
>> > Yetus should not be running ASF license checks inside of build
>> > directories. If you can point to a specific job where this happens
>> > please either file a jira against Yetus or let Allen or me know so we
>> > can file it.
>> >
>> > At a quick glance, I see ASF license failures on HADOOP-9613, but
>> > looking at the flagged files:
>> >
>> >
>> >
>> > https://builds.apache.org/job/PreCommit-HADOOP-Build/7966/artifact/patchprocess/patch-asflicense-problems.txt
>> >
>> > It is dumping test files outside of the target/ directory, so those
>> > files are legitimately in the source tree.
>> >
>> >
>> > --
>> > Sean
>> >
>>



Re: FYI: Major, long-standing Issue with trunk's test-patch

2015-10-29 Thread Andrew Wang
Are these RAT errors related to the previous discussion about running RAT
after tests? I thought we resolved to run RAT prior, since that's what a
release tarball will look like.
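
A quick way to see the difference is to run the RAT check against a clean
tree, before any tests have run (plugin goal shown for illustration; the
precommit job wires this up differently):

    mvn clean apache-rat:check

Run it after the tests instead, and every file a test leaves outside target/
shows up as a spurious license failure.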

On Thu, Oct 29, 2015 at 8:40 AM, Chris Nauroth 
wrote:

> +1 to treating it as a bug in Hadoop code if a test writes a file outside
> of the target directory.  This has the side effect that "mvn clean"
> doesn't really clean up fully.
>
> HADOOP-12519 is a recent patch I committed to fix this kind of problem in
> hadoop-azure.  Let's follow up with similar fixes in hadoop-hdfs tests and
> anywhere else needed.
>
> --Chris Nauroth
>
>
>
>
> On 10/29/15, 4:44 AM, "Sean Busbey"  wrote:
>
> >In Maven projects, all build generated files (including test working
> >space)
> >are supposed to be under directories named 'target'.
> >
> >I believe a few folks already have an issue open to correct the HDFS
> >tests:
> >
> >https://issues.apache.org/jira/browse/HDFS-9263
> >
> >Please make sure these incorrect paths are covered there.
> >
> >--
> >Sean
> >On Oct 29, 2015 3:14 AM, "Vinayakumar B"  wrote:
> >
> >> Thanks,
> >>
> >> In HDFS precommit builds we can see the ASF licence check.
> >> HDFS uses /build as a test directory for some tests, I think
> >> this is an exception case.
> >>
> >>
> >>
> >>
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/13268/artifact/patchprocess/patch-asflicense-problems.txt
> >>
> >> Regards,
> >> Vinay
> >>
> >> On Thu, Oct 29, 2015 at 9:48 AM, Sean Busbey 
> >>wrote:
> >>
> >> > On Wed, Oct 28, 2015 at 9:23 PM, Vinayakumar B
> >>
> >> > wrote:
> >> > >   So I'm going to turn on Yetus for *ALL* Hadoop precommit jobs
> >> > >> later tonight. (Given how backed up Jenkins is at the moment, there is
> >> > >> plenty of time. haha) Anyway, if you see "Powered by Yetus" in the Hadoop
> >> > >> QA posts, you've got Yetus.  If you don't see it, it ran on trunk's
> >> > >> test-patch.
> >> > >
> >> > > +1,
> >> > >
> >> > > Report looks very clean, and multiple JDK runs helps as well. Also
> >> > parallel
> >> > > run is enabled for HDFS precommit as well.
> >> > >
> >> > > One issue, Looks like ASF licence check is done on files in build
> >> > directory
> >> > > also, which generates too many errors. Just need to skip this
> >> directory.
> >> > >
> >> > > Regards,
> >> > > Vinay
> >> > >
> >> > >
> >> >
> >> > Yetus should not be running ASF license checks inside of build
> >> > directories. If you can point to a specific job where this happens
> >> > please either file a jira against Yetus or let Allen or me know so we
> >> > can file it.
> >> >
> >> > At a quick glance, I see ASF license failures on HADOOP-9613, but
> >> > looking at the flagged files:
> >> >
> >> >
> >> >
> >>
> >>
> >> > https://builds.apache.org/job/PreCommit-HADOOP-Build/7966/artifact/patchprocess/patch-asflicense-problems.txt
> >> >
> >> > It is dumping test files outside of the target/ directory, so those
> >> > files are legitimately in the source tree.
> >> >
> >> >
> >> > --
> >> > Sean
> >> >
> >>
>
>


Build failed in Jenkins: Hadoop-common-trunk-Java8 #621

2015-10-29 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-2902. Killing a container that is localizing can orphan resources

--
[...truncated 6121 lines...]
"Finalizer" daemon prio=8 tid=3 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:142)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:158)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
"main"  prio=5 tid=1 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
at org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
"Thread-6"  prio=5 tid=25 runnable
java.lang.Thread.State: RUNNABLE
at java.lang.Thread.dumpThreads(Native Method)
at java.lang.Thread.getAllStackTraces(Thread.java:1602)
at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
at org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:188)
at org.apache.hadoop.fs.FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp(FCStatisticsBaseTest.java:143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:188)
at org.apache.hadoop.fs.FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp(FCStatisticsBaseTest.java:143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0

Github integration for Hadoop

2015-10-29 Thread Owen O'Malley
All,
   For code & patch review, many of the newer projects are using the Github
pull request integration. You can read about it here:

https://blogs.apache.org/infra/entry/improved_integration_between_apache_and

It basically lets you:
* have mirroring between comments on pull requests and jira
* lets you close pull requests
* have mirroring between pull request comments and the Apache mail lists

Thoughts?
.. Owen


Re: Github integration for Hadoop

2015-10-29 Thread Hitesh Shah
+1 on supporting patch contributions through github pull requests.

— Hitesh

On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:

> All,
>   For code & patch review, many of the newer projects are using the Github
> pull request integration. You can read about it here:
> 
> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
> 
> It basically lets you:
> * have mirroring between comments on pull requests and jira
> * lets you close pull requests
> * have mirroring between pull request comments and the Apache mail lists
> 
> Thoughts?
> .. Owen



Re: Github integration for Hadoop

2015-10-29 Thread Haohui Mai
+1

On Thu, Oct 29, 2015 at 10:55 AM, Hitesh Shah  wrote:
> +1 on supporting patch contributions through github pull requests.
>
> — Hitesh
>
> On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:
>
>> All,
>>   For code & patch review, many of the newer projects are using the Github
>> pull request integration. You can read about it here:
>>
>> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>>
>> It basically lets you:
>> * have mirroring between comments on pull requests and jira
>> * lets you close pull requests
>> * have mirroring between pull request comments and the Apache mail lists
>>
>> Thoughts?
>> .. Owen
>


Re: Github integration for Hadoop

2015-10-29 Thread Ashish
+1

On Thu, Oct 29, 2015 at 11:51 AM, Mingliang Liu  wrote:
> +1 (non-binding)
>
> Mingliang Liu
> Member of Technical Staff - HDFS,
> Hortonworks Inc.
> m...@hortonworks.com
>
>
>
>> On Oct 29, 2015, at 10:55 AM, Hitesh Shah  wrote:
>>
>> +1 on supporting patch contributions through github pull requests.
>>
>> — Hitesh
>>
>> On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:
>>
>>> All,
>>>  For code & patch review, many of the newer projects are using the Github
>>> pull request integration. You can read about it here:
>>>
>>> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>>>
>>> It basically lets you:
>>> * have mirroring between comments on pull requests and jira
>>> * lets you close pull requests
>>> * have mirroring between pull request comments and the Apache mail lists
>>>
>>> Thoughts?
>>> .. Owen
>>
>>
>



-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal


Re: Github integration for Hadoop

2015-10-29 Thread Sean Busbey
It looks like there's pretty good consensus. Why do we need a VOTE thread?

Perhaps it would be better for someone to submit a patch with proposed text
for the contribution guide[1]?

-Sean

[1]: http://wiki.apache.org/hadoop/HowToContribute

On Thu, Oct 29, 2015 at 2:01 PM, Xiaoyu Yao  wrote:
> +1, should we start a vote on this?
>
>
>
>
> On 10/29/15, 11:54 AM, "Ashish"  wrote:
>
>>+1
>>
>>On Thu, Oct 29, 2015 at 11:51 AM, Mingliang Liu  wrote:
>>> +1 (non-binding)
>>>
>>> Mingliang Liu
>>> Member of Technical Staff - HDFS,
>>> Hortonworks Inc.
>>> m...@hortonworks.com
>>>
>>>
>>>
 On Oct 29, 2015, at 10:55 AM, Hitesh Shah  wrote:

 +1 on supporting patch contributions through github pull requests.

 — Hitesh

 On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:

> All,
>  For code & patch review, many of the newer projects are using the Github
> pull request integration. You can read about it here:
>
> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>
> It basically lets you:
> * have mirroring between comments on pull requests and jira
> * lets you close pull requests
> * have mirroring between pull request comments and the Apache mail lists
>
> Thoughts?
> .. Owen


>>>
>>
>>
>>
>>--
>>thanks
>>ashish
>>
>>Blog: http://www.ashishpaliwal.com/blog
>>My Photo Galleries: http://www.pbase.com/ashishpaliwal
>>



-- 
Sean


Build failed in Jenkins: Hadoop-Common-trunk #1924

2015-10-29 Thread Apache Jenkins Server
See 

Changes:

[jlowe] MAPREDUCE-6515. Update Application priority in AM side from AM-RM

[zhz] HDFS-9229. Expose size of NameNode directory as a metric. Contributed by

[wang] HDFS-9332. Fix Precondition failures from NameNodeEditLogRoller while

[jlowe] Update CHANGES.txt to reflect commit of MR-6273 to branch-2.7

--
[...truncated 5413 lines...]
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec - in org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec - in org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.926 sec - in org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.241 sec - in org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.675 sec - in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.774 sec - in org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.794 sec - in org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.087 sec - in org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.491 sec - in org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.605 sec - in org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.114 sec - in org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.461 sec - in org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.297 sec - in org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.007 sec - in org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.345 sec - in org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.435 sec - in org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.099 sec - in org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.627 sec - in org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.597 sec - in org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.551 sec - in org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.079 sec - in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.566 sec - in org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.952 sec - in 

Re: Github integration for Hadoop

2015-10-29 Thread Chang Li
+1 (non-binding)

On Thu, Oct 29, 2015 at 1:54 PM, Ashish  wrote:

> +1
>
> On Thu, Oct 29, 2015 at 11:51 AM, Mingliang Liu 
> wrote:
> > +1 (non-binding)
> >
> > Mingliang Liu
> > Member of Technical Staff - HDFS,
> > Hortonworks Inc.
> > m...@hortonworks.com
> >
> >
> >
> >> On Oct 29, 2015, at 10:55 AM, Hitesh Shah  wrote:
> >>
> >> +1 on supporting patch contributions through github pull requests.
> >>
> >> — Hitesh
> >>
> >> On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:
> >>
> >>> All,
> >>>  For code & patch review, many of the newer projects are using the
> Github
> >>> pull request integration. You can read about it here:
> >>>
> >>>
> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
> >>>
> >>> It basically lets you:
> >>> * have mirroring between comments on pull requests and jira
> >>> * lets you close pull requests
> >>> * have mirroring between pull request comments and the Apache mail
> lists
> >>>
> >>> Thoughts?
> >>> .. Owen
> >>
> >>
> >
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>


Re: Github integration for Hadoop

2015-10-29 Thread Xiaoyu Yao
+1, should we start a vote on this?




On 10/29/15, 11:54 AM, "Ashish"  wrote:

>+1
>
>On Thu, Oct 29, 2015 at 11:51 AM, Mingliang Liu  wrote:
>> +1 (non-binding)
>>
>> Mingliang Liu
>> Member of Technical Staff - HDFS,
>> Hortonworks Inc.
>> m...@hortonworks.com
>>
>>
>>
>>> On Oct 29, 2015, at 10:55 AM, Hitesh Shah  wrote:
>>>
>>> +1 on supporting patch contributions through github pull requests.
>>>
>>> — Hitesh
>>>
>>> On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:
>>>
 All,
  For code & patch review, many of the newer projects are using the Github
 pull request integration. You can read about it here:

 https://blogs.apache.org/infra/entry/improved_integration_between_apache_and

 It basically lets you:
 * have mirroring between comments on pull requests and jira
 * lets you close pull requests
 * have mirroring between pull request comments and the Apache mail lists

 Thoughts?
 .. Owen
>>>
>>>
>>
>
>
>
>-- 
>thanks
>ashish
>
>Blog: http://www.ashishpaliwal.com/blog
>My Photo Galleries: http://www.pbase.com/ashishpaliwal
>


Re: Github integration for Hadoop

2015-10-29 Thread Vinod Vavilapalli
Don’t think we need a vote. If someone can demonstrate how this works 
end-to-end, and enough folks find it useful, we can start using it. There is no 
need for a mandate.

+Vinod

> On Oct 29, 2015, at 12:01 PM, Xiaoyu Yao  wrote:
> 
> +1, should we start a vote on this?
> 
> 
> 
> 
> On 10/29/15, 11:54 AM, "Ashish"  wrote:
> 
>> +1
>> 
>> On Thu, Oct 29, 2015 at 11:51 AM, Mingliang Liu  wrote:
>>> +1 (non-binding)
>>> 
>>> Mingliang Liu
>>> Member of Technical Staff - HDFS,
>>> Hortonworks Inc.
>>> m...@hortonworks.com
>>> 
>>> 
>>> 
 On Oct 29, 2015, at 10:55 AM, Hitesh Shah  wrote:
 
 +1 on supporting patch contributions through github pull requests.
 
 — Hitesh
 
 On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:
 
> All,
> For code & patch review, many of the newer projects are using the Github
> pull request integration. You can read about it here:
> 
> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
> 
> It basically lets you:
> * have mirroring between comments on pull requests and jira
> * lets you close pull requests
> * have mirroring between pull request comments and the Apache mail lists
> 
> Thoughts?
> .. Owen
 
 
>>> 
>> 
>> 
>> 
>> -- 
>> thanks
>> ashish
>> 
>> Blog: http://www.ashishpaliwal.com/blog
>> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>> 



Re: Github integration for Hadoop

2015-10-29 Thread Arpit Agarwal
+1, thanks for proposing it.





On 10/29/15, 10:47 AM, "Owen O'Malley"  wrote:

>All,
>   For code & patch review, many of the newer projects are using the Github
>pull request integration. You can read about it here:
>
>https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>
>It basically lets you:
>* have mirroring between comments on pull requests and jira
>* lets you close pull requests
>* have mirroring between pull request comments and the Apache mail lists
>
>Thoughts?
>.. Owen


Re: Github integration for Hadoop

2015-10-29 Thread Mingliang Liu
+1 (non-binding)

Mingliang Liu
Member of Technical Staff - HDFS,
Hortonworks Inc.
m...@hortonworks.com



> On Oct 29, 2015, at 10:55 AM, Hitesh Shah  wrote:
> 
> +1 on supporting patch contributions through github pull requests.
> 
> — Hitesh
> 
> On Oct 29, 2015, at 10:47 AM, Owen O'Malley  wrote:
> 
>> All,
>>  For code & patch review, many of the newer projects are using the Github
>> pull request integration. You can read about it here:
>> 
>> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>> 
>> It basically lets you:
>> * have mirroring between comments on pull requests and jira
>> * lets you close pull requests
>> * have mirroring between pull request comments and the Apache mail lists
>> 
>> Thoughts?
>> .. Owen
> 
> 



Re: Github integration for Hadoop

2015-10-29 Thread Andrew Wang
Has anything changed regarding the github integration since the last time
we discussed this? That blog post is from 2014, and we discussed
alternative review systems earlier in 2015.

Colin specifically was concerned about forking the discussion between JIRA
and other places:

http://search-hadoop.com/m/uOzYtkYxo4qazi=Re+Patch+review+process
http://search-hadoop.com/m/uOzYtSz7z624qazi=Re+Patch+review+process

There are also questions about PRs leading to messy commit history with the
extra merge commits. Spark IIRC has something to linearize it again, which
seems important if we actually want to do this.

Could someone outline the upsides of using github? I don't find the review
UI particularly great compared to Gerrit or even RB, and there's the merge
commit issue. For instance, do we think using Github would lead to more
contributions? Improved developer workflows? Have we re-examined
alternatives like Gerrit or RB as well?

On Thu, Oct 29, 2015 at 12:25 PM, Arpit Agarwal 
wrote:

> +1, thanks for proposing it.
>
>
>
>
>
> On 10/29/15, 10:47 AM, "Owen O'Malley"  wrote:
>
> >All,
> >   For code & patch review, many of the newer projects are using the
> Github
> >pull request integration. You can read about it here:
> >
> >
> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
> >
> >It basically lets you:
> >* have mirroring between comments on pull requests and jira
> >* lets you close pull requests
> >* have mirroring between pull request comments and the Apache mail lists
> >
> >Thoughts?
> >.. Owen
>


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #622

2015-10-29 Thread Apache Jenkins Server
See 



Re: Github integration for Hadoop

2015-10-29 Thread Steve Loughran

> On 29 Oct 2015, at 20:34, Andrew Wang  wrote:
> 
> Has anything changed regarding the github integration since the last time
> we discussed this? That blog post is from 2014, and we discussed
> alternative review systems earlier in 2015.
> 
> Colin specifically was concerned about forking the discussion between JIRA
> and other places:
> 
> http://search-hadoop.com/m/uOzYtkYxo4qazi=Re+Patch+review+process
> http://search-hadoop.com/m/uOzYtSz7z624qazi=Re+Patch+review+process
> 
> There are also questions about PRs leading to messy commit history with the
> extra merge commits. Spark IIRC has something to linearize it again, which
> seems important if we actually want to do this.
> 
> Could someone outline the upsides of using github? I don't find the review
> UI particularly great compared to Gerrit or even RB, and there's the merge
> commit issue. For instance, do we think using Github would lead to more
> contributions? Improved developer workflows? Have we re-examined
> alternatives like Gerrit or RB as well?

I've been using it for some ATS integration work

For simple patches, you can do some good review cycles

https://github.com/apache/spark/pull/9232

For something big, well, it gets big

https://github.com/apache/spark/pull/8744#discussion_r43388336

... you have to switch to the many-file view, which kind of loses some of the 
temporal ordering of comments —instead you get lots of little threads on 
individual issues. Which may scale better

https://github.com/apache/spark/pull/8744/files

I'd need more experience to come to a real conclusion. What I do like is the 
immediate push-to-trigger rebuild process, no need to create patches and submit 
them. That I like.


> On Thu, Oct 29, 2015 at 12:25 PM, Arpit Agarwal 
> wrote:
> 
>> +1, thanks for proposing it.
>> 
>> 
>> 
>> 
>> 
>> On 10/29/15, 10:47 AM, "Owen O'Malley"  wrote:
>> 
>>> All,
>>>  For code & patch review, many of the newer projects are using the
>> Github
>>> pull request integration. You can read about it here:
>>> 
>>> 
>> https://blogs.apache.org/infra/entry/improved_integration_between_apache_and
>>> 
>>> It basically lets you:
>>> * have mirroring between comments on pull requests and jira
>>> * lets you close pull requests
>>> * have mirroring between pull request comments and the Apache mail lists
>>> 
>>> Thoughts?
>>> .. Owen
>> 



[jira] [Created] (HADOOP-12528) Avoid spinning in CallQueueManager.take()

2015-10-29 Thread Staffan Friberg (JIRA)
Staffan Friberg created HADOOP-12528:


 Summary: Avoid spinning in CallQueueManager.take()
 Key: HADOOP-12528
 URL: https://issues.apache.org/jira/browse/HADOOP-12528
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Affects Versions: 2.7.1
Reporter: Staffan Friberg
Priority: Minor
 Attachments: HADOOP-12528.001.patch

When an IPC thread (Server$Handler) calls take() to get the next Call, the 
CallQueueManager does a timed poll() instead of take() on the internal queue.

This causes threads to wake up and unnecessarily waste some CPU and do extra 
allocation as part of the internal await/signal mechanism each time a thread 
redoes poll().

This patch uses take() on the queue instead of poll(), which keeps threads in 
the await state until work is available. Since threads will then be blocked on 
the queue indefinitely, swapping queues requires a bit of extra work to make 
sure threads wake up and do take() on the new queue.

Updated the test TestCallQueueManager.testSwapUnderContention() to ensure that 
no threads get stuck on the old queue as part of swapping.
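
A minimal sketch of the difference (illustrative names only, not the actual 
CallQueueManager code):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    public class TakeVsPoll {
      private final AtomicReference<LinkedBlockingQueue<Runnable>> queue =
          new AtomicReference<>(new LinkedBlockingQueue<Runnable>());

      // Before: an idle handler wakes up on every poll timeout, burning CPU
      // and allocating inside the queue's await/signal machinery.
      Runnable nextCallByPolling() throws InterruptedException {
        Runnable call = null;
        while (call == null) {
          call = queue.get().poll(100, TimeUnit.MILLISECONDS);
        }
        return call;
      }

      // After: the thread stays parked until work arrives. Swapping in a new
      // queue now needs extra care (e.g. waking threads parked on the old
      // queue), which is what the updated test exercises.
      Runnable nextCallByTaking() throws InterruptedException {
        return queue.get().take();
      }
    }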



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #623

2015-10-29 Thread Apache Jenkins Server
See 

Changes:

[Arun Suresh] YARN-4310. FairScheduler: Log skipping reservation messages at DEBUG

[jeagles] YARN-4183. Enabling generic application history forces every job to get

--
[...truncated 3908 lines...]
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.068 sec - in org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.474 sec - in org.apache.hadoop.minikdc.TestMiniKdc

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-minikdc ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-minikdc ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc ---
[INFO] 
Loading source files for package org.apache.hadoop.minikdc...
Constructing Javadoc information...
Standard Doclet version 1.8.0
Building tree for all the packages and classes...
Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Building index for all the packages and classes...
Generating 

Generating 

Generating 

Building index for all classes...
Generating 

Generating 

Generating 

Generating 

2 warnings
[WARNING] Javadoc Warnings
[WARNING] 

Re: Github integration for Hadoop

2015-10-29 Thread Owen O'Malley
On Thu, Oct 29, 2015 at 1:34 PM, Andrew Wang 
wrote:

> Has anything changed regarding the github integration since the last time
> we discussed this?


There is a lot more experience with it now and opinions change over time.
Clearly, there is overwhelming support for this idea now.


> There are also questions about PRs leading to messy commit history with the
> extra merge commits. Spark IIRC has something to linearize it again, which
> seems important if we actually want to do this.
>

In the vast majority of cases, the pull requests should be squashed to a
single commit. The Github integration doesn't change how patches are
committed or pushed into the repository. It is about using the code review
tools that Github provides.
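
For illustration, squash-style integration of a PR branch keeps trunk history
linear (branch name hypothetical):

    git checkout trunk
    git merge --squash contributor-pr-branch
    git commit   # one reviewed change lands on trunk, no merge commit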

Note that we already have Github pull requests for Hadoop. We just don't
have any way to manage them and they aren't mirrored to the Apache lists &
jiras. Many people find the Github code review tools useful and Apache has
the tools to integrate them into our mailing lists and jira.

.. Owen


Re: Github integration for Hadoop

2015-10-29 Thread Andrew Wang
On Thu, Oct 29, 2015 at 3:12 PM, Owen O'Malley  wrote:

> On Thu, Oct 29, 2015 at 1:34 PM, Andrew Wang 
> wrote:
>
> > Has anything changed regarding the github integration since the last time
> > we discussed this?
>
>
> There is a lot more experience with it now and opinions change over time.
> Clearly, there is overwhelming support for this idea now.
>
>
> > There are also questions about PRs leading to messy commit history with
> the
> > extra merge commits. Spark IIRC has something to linearize it again,
> which
> > seems important if we actually want to do this.
> >
>
> In the vast majority of cases, the pull requests should be squashed to a
> single commit. The Github integration doesn't change how patches are
> committed or pushed into the repository. It is about using the code review
> tools that Github provides.
>
Okay, it wasn't clear to me that this proposal was only to use Github as a
review tool, not for patch integration.

If that's the case, we should also look at review alternatives like RB and
Crucible. RB also has cmdline tools for easily updating a patch.


> Note that we already have Github pull requests for Hadoop. We just don't
> have any way to manage them and they aren't mirrored to the Apache lists &
> jiras. Many people find the Github code review tools useful and Apache has
> the tools to integrate them into our mailing lists and jira.
>

If the issue is PRs that no one looks at, one option is to disable PRs and
tell these contributors to file a JIRA instead.


Re: Github integration for Hadoop

2015-10-29 Thread Andrew Wang
On Thu, Oct 29, 2015 at 2:29 PM, Steve Loughran 
wrote:

>
> > On 29 Oct 2015, at 20:34, Andrew Wang  wrote:
> >
> > Has anything changed regarding the github integration since the last time
> > we discussed this? That blog post is from 2014, and we discussed
> > alternative review systems earlier in 2015.
> >
> > Colin specifically was concerned about forking the discussion between
> JIRA
> > and other places:
> >
> > http://search-hadoop.com/m/uOzYtkYxo4qazi=Re+Patch+review+process
> > http://search-hadoop.com/m/uOzYtSz7z624qazi=Re+Patch+review+process
> >
> > There are also questions about PRs leading to messy commit history with
> the
> > extra merge commits. Spark IIRC has something to linearize it again,
> which
> > seems important if we actually want to do this.
> >
> > Could someone outline the upsides of using github? I don't find the
> review
> > UI particularly great compared to Gerrit or even RB, and there's the
> merge
> > commit issue. For instance, do we think using Github would lead to more
> > contributions? Improved developer workflows? Have we re-examined
> > alternatives like Gerrit or RB as well?
>
> I've been using it for some ATS integration work
>
> For simple patches, you can do some good review cycles
>
> https://github.com/apache/spark/pull/9232
>
> For something big, well, it gets big
>
> https://github.com/apache/spark/pull/8744#discussion_r43388336
>
> ... you have to switch to the many-file view, which kind of loses some of
> the temporal ordering of comments —instead you get lots of little threads
> on individual issues. Which may scale better
>
> https://github.com/apache/spark/pull/8744/files
>
> I'd need more experience to come to a real conclusion. What I do like is
> the immediate push-to-trigger rebuild process, no need to create patches
> and submit them. That I like.
>
I like this streamlining a lot too, but it's also something Gerrit
provides. I also prefer Gerrit's review interface to Github. Gerrit also
has nice interdiff support, which AFAICT Github does not support at all.
You can push patch revs on top of a PR, but it doesn't show interdiffs when
you force push. PRs seem designed for a merge rather than rebase workflow,
which is at odds with our existing dev workflow.

If I had my druthers, we'd focus on Gerrit rather than Github as a
successor review system. Better review, better fits our dev workflow. The
only question is JIRA integration.

Best,
Andrew


Re: Github integration for Hadoop

2015-10-29 Thread Owen O'Malley
On Thu, Oct 29, 2015 at 2:42 PM, Andrew Wang 
wrote:

>
> If I had my druthers, we'd focus on Gerrit rather than Github as a
> successor review system. Better review, better fits our dev workflow. The
> only question is JIRA integration.
>

Last time I asked, Apache Infra wasn't willing to support Gerrit and they
obviously do successfully support Github. I don't know if Gerrit is better
or worse than Github, but it really isn't an option at this point.

.. Owen


Re: Github integration for Hadoop

2015-10-29 Thread Andrew Wang
On Thu, Oct 29, 2015 at 3:29 PM, Owen O'Malley  wrote:

> On Thu, Oct 29, 2015 at 2:42 PM, Andrew Wang 
> wrote:
>
> >
> > If I had my druthers, we'd focus on Gerrit rather than Github as a
> > successor review system. Better review, better fits our dev workflow. The
> > only question is JIRA integration.
> >
>
> Last time I asked, Apache Infra wasn't willing to support Gerrit and they
> obviously do successfully support Github. I don't know if Gerrit is better
> or worse than Github, but it really isn't an option at this point.
>
I asked INFRA about this before too, and the response was more "Later"
than "Won't Fix", to use JIRA terminology. If we pushed for it, I think it
could happen. So let's not rule Gerrit out from the get-go.

Andrew


Jenkins build is back to normal : Hadoop-Common-trunk #1925

2015-10-29 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #624

2015-10-29 Thread Apache Jenkins Server
See 

Changes:

[jianhe] YARN-4127. RM fail with noAuth error if switched from failover to

[xgong] YARN-4313. Race condition in MiniMRYarnCluster when getting history

--
[...truncated 5887 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.162 sec - in org.apache.hadoop.security.TestUGILoginFromKeytab
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.642 sec - in org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestShellBasedIdMapping
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.565 sec - in org.apache.hadoop.security.TestShellBasedIdMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestDoAsEffectiveUser
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.485 sec - in org.apache.hadoop.security.TestDoAsEffectiveUser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.09 sec - in org.apache.hadoop.security.TestUGIWithExternalKdc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.584 sec - in org.apache.hadoop.security.TestJNIGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.777 sec - in org.apache.hadoop.security.TestGroupsCaching
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.289 sec - in org.apache.hadoop.security.alias.TestCredentialProviderFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.156 sec - in org.apache.hadoop.security.alias.TestCredentialProvider
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.101 sec - in org.apache.hadoop.security.alias.TestCredShell
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.638 sec - in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.643 sec - in org.apache.hadoop.security.ssl.TestSSLFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.401 sec - in org.apache.hadoop.security.TestLdapGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.521 sec - in org.apache.hadoop.security.TestUserFromEnv
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.TestHttpCrossOriginFilterInitializer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.358 sec - in org.apache.hadoop.security.TestHttpCrossOriginFilterInitializer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.488 sec - in org.apache.hadoop.security.http.TestCrossOriginFilter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 

Re: Github integration for Hadoop

2015-10-29 Thread Arpit Agarwal
On 10/29/15, 3:48 PM, "Andrew Wang"  wrote:


> If we pushed for it, I think it could happen. 

Gerrit support is a complete unknown. The community response to date supports 
Github integration. Github will appeal to new contributors as their Github 
profiles will reflect their work. I'd be interested in hearing more contributor 
opinions.



> If that's the case, we should also look at review alternatives
> like RB and Crucible.

Okay by me if the community consensus supports one of them. The fact that they 
exist but no one uses them is not a ringing endorsement.



Re: Github integration for Hadoop

2015-10-29 Thread Andrew Wang
On Thu, Oct 29, 2015 at 4:58 PM, Arpit Agarwal 
wrote:

> On 10/29/15, 3:48 PM, "Andrew Wang"  wrote:
>
>
> > If we pushed for it, I think it could happen.
>
> Gerrit support is a complete unknown. The community response to date
> supports Github integration. Github will appeal to new contributors as
> their Github profiles will reflect their work. I'd be interested in hearing
> more contributor opinions.
>
Owen said above that he was proposing using github as a review tool, not
for code integration. So contributors wouldn't have anything showing up on
their github profiles, since we aren't directly taking PRs.

However, if we were to use GH for integration, it would be with the
auto-squash to avoid the merge commit. Would this preserve the correct
attribution?

>
> > If that's the case, we should also look at review alternatives
> > like RB and Crucible.
>
> Okay by me if the community consensus supports one of them. The fact that
> they exist but no one uses them is not a ringing endorsement.
>
HBase uses reviewboard, as I'm sure other Apache projects do.
reviews.apache.org existed before we had github integration. I've used RB a
fair bit, and don't mind it.


Re: Github integration for Hadoop

2015-10-29 Thread Allen Wittenauer

> On Oct 29, 2015, at 5:14 PM, Andrew Wang  wrote:
> 
> However, if we were to use GH for integration, it would be with the
> auto-squash to avoid the merge commit. Would this preserve the correct
> attribution?

FWIW, Yetus *really really really* wants a single commit when it comes 
to directly pulling github PRs.  There is a very high risk of Yetus not being 
able to apply a patch generated from github if there are multiple, 
intertwined commits to the same tree due to how it functions.

Jenkins build is back to normal : Hadoop-Common-trunk #1922

2015-10-29 Thread Apache Jenkins Server
See 



Re: FYI: Major, long-standing Issue with trunk's test-patch

2015-10-29 Thread Vinayakumar B
Thanks,

In HDFS precommit builds we can see the ASF licence check.
HDFS uses /build as a test directory for some tests, I think
this is an exception case.

https://builds.apache.org/job/PreCommit-HDFS-Build/13268/artifact/patchprocess/patch-asflicense-problems.txt

Regards,
Vinay

On Thu, Oct 29, 2015 at 9:48 AM, Sean Busbey  wrote:

> On Wed, Oct 28, 2015 at 9:23 PM, Vinayakumar B 
> wrote:
> >   So I’m going to turn on Yetus for *ALL* Hadoop precommit jobs
> >> later tonight. (Given how backed up Jenkins is at the moment, there is
> >> plenty of time. haha) Anyway, if you see “Powered by Yetus” in the
> Hadoop
> >> QA posts, you’ve got Yetus.  If you don’t see it, it ran on trunk’s
> >> test-patch.
> >
> > +1,
> >
> > Report looks very clean, and multiple JDK runs helps as well. Also
> parallel
> > run is enabled for HDFS precommit as well.
> >
> > One issue, Looks like ASF licence check is done on files in build
> directory
> > also, which generates too many errors. Just need to skip this directory.
> >
> > Regards,
> > Vinay
> >
> >
>
> Yetus should not be running ASF license checks inside of build
> directories. If you can point to a specific job where this happens
> please either file a jira against Yetus or let Allen or me know so we
> can file it.
>
> At a quick glance, I see ASF license failures on HADOOP-9613, but
> looking at the flagged files:
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7966/artifact/patchprocess/patch-asflicense-problems.txt
>
> It is dumping test files outside of the target/ directory, so those
> files are legitimately in the source tree.
>
>
> --
> Sean
>


Re: Java 8 + Jersey updates

2015-10-29 Thread Steve Loughran
as an update, this looks like a "protobuf-class" incompatibility

the problem here is that this updates to JAX-RS 2.0, which is incompatible at 
the client API

https://jersey.java.net/nonav/documentation/2.0/migration.html

which means everything downstream gets to rewrite their Jersey client code.

This is going to add yet another barrier to adoption of Hadoop 3.x; you will 
not be able to seamlessly update your client apps, and you will never be able 
to have code which compiles against 2.x and 3.x

If you exclude JAX-RS 2 and try to stay @ Jersey 1.9 for your client, all the 
HTTP clients (KMS, WebHDFS, ATS) aren't going to link.

I think to pull this off we'll need to somehow wrap that client-side use of 
jersey with enough introspection that at least the Hadoop REST clients can use 
whichever jersey lib is on the classpath
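
To make the scale of that rewrite concrete, here is roughly the same GET
against each client API (sketch only; the URL and class are illustrative):

    public class JerseyMigrationSketch {
      public static void main(String[] args) {
        String url = "http://nn:50070/webhdfs/v1/?op=LISTSTATUS";

        // Jersey 1.x API, what downstream client code compiles against today:
        com.sun.jersey.api.client.Client c1 =
            com.sun.jersey.api.client.Client.create();
        String oldStyle = c1.resource(url).get(String.class);

        // JAX-RS 2.0 API, what the Jersey 2 upgrade forces:
        javax.ws.rs.client.Client c2 =
            javax.ws.rs.client.ClientBuilder.newClient();
        String newStyle = c2.target(url).request().get(String.class);
      }
    }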

Any volunteers?


> On 27 Oct 2015, at 02:38, Tsuyoshi Ozawa  wrote:
> 
>> I assume you are targetting this only at trunk / 3.0 based on the "target 
>> version" and the incompatibility discussion?
> 
> Yes, you're right.
> 
> Best regards,
> - Tsuyoshi
> 
> On Tue, Oct 27, 2015 at 5:12 AM, Colin P. McCabe  wrote:
>> Looks like a good idea.  I assume you are targetting this only at trunk /
>> 3.0 based on the "target version" and the incompatibility discussion?
>> 
>> best,
>> Colin
>> 
>> On Mon, Oct 26, 2015 at 7:07 AM, Tsuyoshi Ozawa  wrote:
>> 
>>> Hi Steve,
>>> 
>>> Thanks for your help.
>>> 
 2. it's "significant"
>>> 
>>> This change includes upgrading not only Jersey, but also its
>>> dependencies like grizzly, asm, and so on.
>>> 
 I'll try to rebuild a YARN app (slider) with the patch to see how it
>>> fares
>>> 
>>> It helps us a lot. I'd like to suggest that the incompatibility
>>> clarified here be described on release note of jira(HADOOP-9613).
>>> 
>>> - Tsuyoshi
>>> 
>>> On Sun, Oct 25, 2015 at 10:34 PM, Steve Loughran 
>>> wrote:
 
 
 https://issues.apache.org/jira/browse/HADOOP-9613 covers the issue of
>>> updating Jersey to 3.0 to cope with the move to Java 8
 
 1. this is trunk
 2. it's "significant"
 
 we've been frozen on an old version of Jersey with issues, known ones
>>> needing separate threads to detect jersey startup failures, and putting off
>>> the Update-Jersey issue for 2- years.
 
 For Java 8 there's no choice but to move on.
 
 thoughts and comments on the JIRA please. I'll try to rebuild a YARN app
>>> (slider) with the patch to see how it fares
 
 
>>> 
> 



Build failed in Jenkins: Hadoop-Common-trunk #1926

2015-10-29 Thread Apache Jenkins Server
See 

Changes:

[jianhe] YARN-4127. RM fail with noAuth error if switched from failover to

[xgong] YARN-4313. Race condition in MiniMRYarnCluster when getting history

--
[...truncated 5413 lines...]
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.691 sec - in org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.954 sec - in org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.977 sec - in org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.766 sec - in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.177 sec - in org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.885 sec - in org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.083 sec - in org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.48 sec - in org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.616 sec - in org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.129 sec - in org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.489 sec - in org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.353 sec - in org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.047 sec - in org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.531 sec - in org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.528 sec - in org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.095 sec - in org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec - in org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.675 sec - in org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.562 sec - in org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.074 sec - in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.576 sec - in org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.981 sec - in org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 12, Failures: 0, Errors: 0, Skipped: 

Re: Github integration for Hadoop

2015-10-29 Thread Arpit Agarwal
On 10/29/15, 5:14 PM, "Andrew Wang"  wrote:



>On Thu, Oct 29, 2015 at 4:58 PM, Arpit Agarwal 
>wrote:
>
>> On 10/29/15, 3:48 PM, "Andrew Wang"  wrote:
>>
>>
>> > If we pushed for it, I think it could happen.
>>
>> Gerrit support is a complete unknown. The community response to date
>> supports Github integration. Github will appeal to new contributors as
>> their Github profiles will reflect their work. I'd be interested in hearing
>> more contributor opinions.
>>
>Owen said above that he was proposing using github as a review tool, not
>for code integration. So contributors wouldn't have anything showing up on
>their github profiles, since we aren't directly taking PRs.
>
>However, if we were to use GH for integration, it would be with the
>auto-squash to avoid the merge commit. Would this preserve the correct
>attribution?

The original mail said pull request integration. Unless infra is planning on 
Gerrit integration soon it's not a practical alternative.


>>
>> > If that's the case, we should also look at review alternatives
>> > like RB and Crucible.
>>
>> Okay by me if the community consensus supports one of them. The fact that
>> they exist but no one uses them is not a ringing endorsement.
>>
>HBase uses reviewboard, as I'm sure other Apache projects do.
>reviews.apache.org existed before we had github integration. I've used RB a
>fair bit, and don't mind it.

I could not get RB working with the Hadoop sub-projects. Would you be willing 
to try it out on a Hadoop/HDFS Jira since you have experience with it?


[jira] [Created] (HADOOP-12529) UserGroupInformation equals method depend on the subject object address

2015-10-29 Thread wangwenli (JIRA)
wangwenli created HADOOP-12529:
--

 Summary: UserGroupInformation equals method depend on the subject 
object address
 Key: HADOOP-12529
 URL: https://issues.apache.org/jira/browse/HADOOP-12529
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.1
Reporter: wangwenli



 My question is: why does the UserGroupInformation equals method depend on the 
Subject object?


   Try the code below, which is extracted from HiveMetaStore:
{code:title=TestUgi.java|borderStyle=solid}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class TestUgi {
  public static void main(String[] args) {
    UserGroupInformation clientUgi = null;
    UserGroupInformation clientUgi2 = null;
    try {
      clientUgi = UserGroupInformation.createProxyUser("user2",
          UserGroupInformation.getLoginUser());
      clientUgi2 = UserGroupInformation.createProxyUser("user2",
          UserGroupInformation.getLoginUser());
      if (clientUgi.equals(clientUgi2)) {
        System.out.println("==");
      } else {
        System.out.println("!=");   // strangely, this branch is hit
      }
    } catch (IOException e1) {
      e1.printStackTrace();
    }
  }
}
{code}
  I found that this is because the equals method of UserGroupInformation 
compares the Subject object reference: subject == ((UserGroupInformation) 
o).subject.

 As you know, ipc.Client connects to the NameNode, and 
connections.get(ConnectionId) tries to reuse the same socket to the NameNode. 
But because ConnectionId's equals depends on UGI equality, 
connections.get(ConnectionId) can't find the existing socket, so if many 
clients connect to HiveMetaStore, many connections to the NameNode will be 
established.

  So my doubt is why UserGroupInformation compares the Subject object 
reference (subject == ((UserGroupInformation) o).subject); shouldn't it 
compare the Subject's principals instead? Am I right?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)