3.2.1 branch is closed for commits //Re: [DISCUSS] Hadoop-3.2.1 release proposal

2019-09-06 Thread Rohith Sharma K S
I have created branch *branch-3.2.1* for the release. Hence, branch-3.2.1 is
closed for commits. I will be creating the RC on this branch.

Kindly use *branch-3.2* for any commit and set "*Fix Version/s*" to *3.2.2*

-Rohith Sharma K S


On Sat, 7 Sep 2019 at 08:39, Rohith Sharma K S 
wrote:

> Hi Folks
>
> Given all the blockers/critical issues[1] are resolved, I will be cutting
> the branch-3.2.1 sooner.
> Thanks all for your support in pushing the JIRAs to closure.
>
> [1] https://s.apache.org/7yjh5
>
>
> -Rohith Sharma K S
>
> On Thu, 29 Aug 2019 at 11:21, Rohith Sharma K S 
> wrote:
>
>> [Update]
>> Ramping down all the critical/blockers https://s.apache.org/7yjh5, left
>> with three issues!
>>
>> YARN-9785 - Solution discussion is going on, hopefully should be able to
>> rap up solution sooner.
>> HADOOP-15998 - To be committed.
>> YARN-9796 - Patch available, to be committed.
>>
>> I am closely monitoring for these issues, and will update once these are
>> fixed.
>>
>> -Rohith Sharma K S
>>
>>
>> On Wed, 21 Aug 2019 at 13:42, Bibinchundatt 
>> wrote:
>>
>>> Hi Rohith
>>>
>>> Thank you for initiating this
>>>
>>> Few critical/blocker jira's we could consider
>>>
>>> YARN-9714
>>> YARN-9642
>>> YARN-9640
>>>
>>> Regards
>>> Bibin
>>> -Original Message-
>>> From: Rohith Sharma K S [mailto:rohithsharm...@apache.org]
>>> Sent: 21 August 2019 11:42
>>> To: Wei-Chiu Chuang 
>>> Cc: Hdfs-dev ; yarn-dev <
>>> yarn-dev@hadoop.apache.org>; mapreduce-dev <
>>> mapreduce-...@hadoop.apache.org>; Hadoop Common <
>>> common-...@hadoop.apache.org>
>>> Subject: Re: [DISCUSS] Hadoop-3.2.1 release proposal
>>>
>>> On Tue, 20 Aug 2019 at 22:28, Wei-Chiu Chuang 
>>> wrote:
>>>
>>> > Hi Rohith,
>>> > Thanks for initiating this.
>>> > I want to bring up one blocker issue: HDFS-13596
>>> >  (NN restart fails
>>> > after RollingUpgrade from 2.x to 3.x)
>>> >
>>>
>>> > This should be a blocker for all active Hadoop 3.x releases: 3.3.0,
>>> > 3.2.1, 3.1.3. Hopefully we can get this fixed within this week.
>>> > Additionally, HDFS-14396
>>> >  (Failed to load
>>> > image from FSImageFile when downgrade from 3.x to 2.x).Probably not a
>>> > blocker but nice to have.
>>> >
>>>
>>>  Please set target version so that I don't miss in blockers/critical
>>> list for 3.2.1 https://s.apache.org/7yjh5.
>>>
>>>
>>> >
>>> > On Tue, Aug 20, 2019 at 3:22 AM Rohith Sharma K S <
>>> > rohithsharm...@apache.org> wrote:
>>> >
>>> >> Hello folks,
>>> >>
>>> >> It's been more than six month Hadoop-3.2.0 is released i.e 16th
>>> Jan,2019.
>>> >> We have several important fixes landed in branch-3.2 (around 48
>>> >> blockers/critical https://s.apache.org/ozd6o).
>>> >>
>>> >> I am planning to do a maintenance release of 3.2.1 in next few weeks
>>> >> i.e around 1st week of September.
>>> >>
>>> >> So far I don't see any blockers/critical in 3.2.1. I see few pending
>>> >> issues on 3.2.1 are https://s.apache.org/ni6v7.
>>> >>
>>> >> *Proposal*:
>>> >> Code Freezing Date:  30th August 2019 Release Date : 7th Sept 2019
>>> >>
>>> >> Please let me know if you have any thoughts or comments on this plan.
>>> >>
>>> >> Thanks & Regards
>>> >> Rohith Sharma K S
>>> >>
>>> >
>>>
>>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Wangda Tan
Thanks everyone for voting! And whoever is interested in joining Submarine,
you're always welcome!

And thanks to Owen for the kind help offered; I just added you to the PMC list
in the proposal. It will be a great help to the community if you could join!

For existing Hadoop committers who are interested in joining, I plan to add
you to the initial list after discussing with the other proposed initial
Submarine PMC members. The list I saw is:

* Naganarasimha Garla (naganarasimha_gr at apache dot org) (Hadoop PMC)
* Devaraj K (devaraj at apache dot org) (Hadoop PMC)
* Rakesh Radhakrishnan (rakeshr at apache dot org) (bookkeeper PMC, Hadoop
PMC, incubator, Mnemonic PMC, Zookeeper PMC)
* Vinayakumar B (vinayakumarb at apache dot org) (Hadoop PMC, incubator PMC)
* Ayush Saxena (ayushsaxena at apache dot org) (Hadoop Committer)
* Bibin Chundatt (bibinchundatt at apache dot org) (Hadoop PMC)
* Bharat Viswanadham (bharat at apache dot org) (Hadoop)
* Brahma Reddy Battula (brahma at apache dot org)) (Hadoop PMC)
* Abhishek Modi (abmodi at apache dot org) (Hadoop Committer)
* Wei-Chiu Chuang (weichiu at apache dot org) (Hadoop PMC)
* Junping Du (junping_du at apache dot org) (Hadoop PMC, member)

We'd like to see some reasonable contributions to the project from all the
committers who join now. Please join the weekly call or mailing lists (once
established) and share your input on the project. Members of Submarine will
reach out to all of you individually to understand the areas you wish to
contribute to and will help with the same. Please let me know if you DON'T
want to be added to the committer list.

Best,
Wangda Tan

On Fri, Sep 6, 2019 at 3:54 PM Wei-Chiu Chuang  wrote:

> +1
> I've involved myself in Submarine dev and I'd like to be included in the
> future.
>
> Thanks
>
> On Sat, Sep 7, 2019 at 5:27 AM Owen O'Malley 
> wrote:
>
>> Since you don't have any Apache Members, I'll join to provide Apache
>> oversight.
>>
>> .. Owen
>>
>> On Fri, Sep 6, 2019 at 1:38 PM Owen O'Malley 
>> wrote:
>>
>> > +1 for moving to a new project.
>> >
>> > On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan 
>> wrote:
>> >
>> >> Hi all,
>> >>
>> >> As we discussed in the previous thread [1],
>> >>
>> >> I just moved the spin-off proposal to CWIKI and completed all TODO
>> parts.
>> >>
>> >>
>> >>
>> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>> >>
>> >> If you have interests to learn more about this. Please review the
>> proposal
>> >> let me know if you have any questions/suggestions for the proposal.
>> This
>> >> will be sent to board post voting passed. (And please note that the
>> >> previous voting thread [2] to move Submarine to a separate Github repo
>> is
>> >> a
>> >> necessary effort to move Submarine to a separate Apache project but not
>> >> sufficient so I sent two separate voting thread.)
>> >>
>> >> Please let me know if I missed anyone in the proposal, and reply if
>> you'd
>> >> like to be included in the project.
>> >>
>> >> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM
>> PDT.
>> >>
>> >> Thanks,
>> >> Wangda Tan
>> >>
>> >> [1]
>> >>
>> >>
>> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
>> >> [2]
>> >>
>> >>
>> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>> >>
>> >
>>
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread 俊平堵 (Junping Du)
+1. Please include me also.

Thanks,

Junping

Wangda Tan  wrote on Sun, Sep 1, 2019 at 1:19 PM:

> Hi all,
>
> As we discussed in the previous thread [1],
>
> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>
> If you have interests to learn more about this. Please review the proposal
> let me know if you have any questions/suggestions for the proposal. This
> will be sent to board post voting passed. (And please note that the
> previous voting thread [2] to move Submarine to a separate Github repo is a
> necessary effort to move Submarine to a separate Apache project but not
> sufficient so I sent two separate voting thread.)
>
> Please let me know if I missed anyone in the proposal, and reply if you'd
> like to be included in the project.
>
> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>


Re: [DISCUSS] Hadoop-3.2.1 release proposal

2019-09-06 Thread Rohith Sharma K S
Hi Folks

Given that all the blocker/critical issues [1] are resolved, I will be cutting
branch-3.2.1 soon.
Thanks all for your support in pushing the JIRAs to closure.

[1] https://s.apache.org/7yjh5


-Rohith Sharma K S

On Thu, 29 Aug 2019 at 11:21, Rohith Sharma K S 
wrote:

> [Update]
> Ramping down all the critical/blockers https://s.apache.org/7yjh5, left
> with three issues!
>
> YARN-9785 - Solution discussion is going on, hopefully should be able to
> rap up solution sooner.
> HADOOP-15998 - To be committed.
> YARN-9796 - Patch available, to be committed.
>
> I am closely monitoring for these issues, and will update once these are
> fixed.
>
> -Rohith Sharma K S
>
>
> On Wed, 21 Aug 2019 at 13:42, Bibinchundatt 
> wrote:
>
>> Hi Rohith
>>
>> Thank you for initiating this
>>
>> Few critical/blocker jira's we could consider
>>
>> YARN-9714
>> YARN-9642
>> YARN-9640
>>
>> Regards
>> Bibin
>> -Original Message-
>> From: Rohith Sharma K S [mailto:rohithsharm...@apache.org]
>> Sent: 21 August 2019 11:42
>> To: Wei-Chiu Chuang 
>> Cc: Hdfs-dev ; yarn-dev <
>> yarn-dev@hadoop.apache.org>; mapreduce-dev <
>> mapreduce-...@hadoop.apache.org>; Hadoop Common <
>> common-...@hadoop.apache.org>
>> Subject: Re: [DISCUSS] Hadoop-3.2.1 release proposal
>>
>> On Tue, 20 Aug 2019 at 22:28, Wei-Chiu Chuang  wrote:
>>
>> > Hi Rohith,
>> > Thanks for initiating this.
>> > I want to bring up one blocker issue: HDFS-13596
>> >  (NN restart fails
>> > after RollingUpgrade from 2.x to 3.x)
>> >
>>
>> > This should be a blocker for all active Hadoop 3.x releases: 3.3.0,
>> > 3.2.1, 3.1.3. Hopefully we can get this fixed within this week.
>> > Additionally, HDFS-14396
>> >  (Failed to load
>> > image from FSImageFile when downgrade from 3.x to 2.x).Probably not a
>> > blocker but nice to have.
>> >
>>
>>  Please set target version so that I don't miss in blockers/critical
>> list for 3.2.1 https://s.apache.org/7yjh5.
>>
>>
>> >
>> > On Tue, Aug 20, 2019 at 3:22 AM Rohith Sharma K S <
>> > rohithsharm...@apache.org> wrote:
>> >
>> >> Hello folks,
>> >>
>> >> It's been more than six month Hadoop-3.2.0 is released i.e 16th
>> Jan,2019.
>> >> We have several important fixes landed in branch-3.2 (around 48
>> >> blockers/critical https://s.apache.org/ozd6o).
>> >>
>> >> I am planning to do a maintenance release of 3.2.1 in next few weeks
>> >> i.e around 1st week of September.
>> >>
>> >> So far I don't see any blockers/critical in 3.2.1. I see few pending
>> >> issues on 3.2.1 are https://s.apache.org/ni6v7.
>> >>
>> >> *Proposal*:
>> >> Code Freezing Date:  30th August 2019 Release Date : 7th Sept 2019
>> >>
>> >> Please let me know if you have any thoughts or comments on this plan.
>> >>
>> >> Thanks & Regards
>> >> Rohith Sharma K S
>> >>
>> >
>>
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Wei-Chiu Chuang
+1
I've involved myself in Submarine dev and I'd like to be included in the
future.

Thanks

On Sat, Sep 7, 2019 at 5:27 AM Owen O'Malley  wrote:

> Since you don't have any Apache Members, I'll join to provide Apache
> oversight.
>
> .. Owen
>
> On Fri, Sep 6, 2019 at 1:38 PM Owen O'Malley 
> wrote:
>
> > +1 for moving to a new project.
> >
> > On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan  wrote:
> >
> >> Hi all,
> >>
> >> As we discussed in the previous thread [1],
> >>
> >> I just moved the spin-off proposal to CWIKI and completed all TODO
> parts.
> >>
> >>
> >>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
> >>
> >> If you have interests to learn more about this. Please review the
> proposal
> >> let me know if you have any questions/suggestions for the proposal. This
> >> will be sent to board post voting passed. (And please note that the
> >> previous voting thread [2] to move Submarine to a separate Github repo
> is
> >> a
> >> necessary effort to move Submarine to a separate Apache project but not
> >> sufficient so I sent two separate voting thread.)
> >>
> >> Please let me know if I missed anyone in the proposal, and reply if
> you'd
> >> like to be included in the project.
> >>
> >> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
> >>
> >> Thanks,
> >> Wangda Tan
> >>
> >> [1]
> >>
> >>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> >> [2]
> >>
> >>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
> >>
> >
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Owen O'Malley
Since you don't have any Apache Members, I'll join to provide Apache
oversight.

.. Owen

On Fri, Sep 6, 2019 at 1:38 PM Owen O'Malley  wrote:

> +1 for moving to a new project.
>
> On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan  wrote:
>
>> Hi all,
>>
>> As we discussed in the previous thread [1],
>>
>> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>>
>>
>> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>>
>> If you have interests to learn more about this. Please review the proposal
>> let me know if you have any questions/suggestions for the proposal. This
>> will be sent to board post voting passed. (And please note that the
>> previous voting thread [2] to move Submarine to a separate Github repo is
>> a
>> necessary effort to move Submarine to a separate Apache project but not
>> sufficient so I sent two separate voting thread.)
>>
>> Please let me know if I missed anyone in the proposal, and reply if you'd
>> like to be included in the project.
>>
>> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
>>
>> Thanks,
>> Wangda Tan
>>
>> [1]
>>
>> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
>> [2]
>>
>> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>>
>


Re: [VOTE] Moving Submarine to a separate Apache project proposal

2019-09-06 Thread Owen O'Malley
+1 for moving to a new project.

On Sat, Aug 31, 2019 at 10:19 PM Wangda Tan  wrote:

> Hi all,
>
> As we discussed in the previous thread [1],
>
> I just moved the spin-off proposal to CWIKI and completed all TODO parts.
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/Submarine+Project+Spin-Off+to+TLP+Proposal
>
> If you have interests to learn more about this. Please review the proposal
> let me know if you have any questions/suggestions for the proposal. This
> will be sent to board post voting passed. (And please note that the
> previous voting thread [2] to move Submarine to a separate Github repo is a
> necessary effort to move Submarine to a separate Apache project but not
> sufficient so I sent two separate voting thread.)
>
> Please let me know if I missed anyone in the proposal, and reply if you'd
> like to be included in the project.
>
> This voting runs for 7 days and will be concluded at Sep 7th, 11 PM PDT.
>
> Thanks,
> Wangda Tan
>
> [1]
>
> https://lists.apache.org/thread.html/4a2210d567cbc05af92c12aa6283fd09b857ce209d537986ed800029@%3Cyarn-dev.hadoop.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/6e94469ca105d5a15dc63903a541bd21c7ef70b8bcff475a16b5ed73@%3Cyarn-dev.hadoop.apache.org%3E
>


[jira] [Created] (YARN-9818) test_docker_util.cc:test_add_mounts doesn't correctly test for parent dir of container-executor.cfg

2019-09-06 Thread Eric Badger (Jira)
Eric Badger created YARN-9818:
-

 Summary: test_docker_util.cc:test_add_mounts doesn't correctly 
test for parent dir of container-executor.cfg
 Key: YARN-9818
 URL: https://issues.apache.org/jira/browse/YARN-9818
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Badger


The code attempts to mount a directory that is a parent of 
container-executor.cfg. However, the docker.allowed.[ro,rw]-mounts settings in 
the container-executor.cfg don't allow that directory, so the test never 
reaches the code where we disallow the mount because it is a parent of 
container-executor.cfg. Instead, the test disallows the mount simply because 
it isn't in the allowed mounts list.






[jira] [Created] (YARN-9817) Mapreduce testcases failing as Asyncdispatcher throws ArithmeticException: / by zero

2019-09-06 Thread Prabhu Joseph (Jira)
Prabhu Joseph created YARN-9817:
---

 Summary: Mapreduce testcases failing as Asyncdispatcher throws 
ArithmeticException: / by zero
 Key: YARN-9817
 URL: https://issues.apache.org/jira/browse/YARN-9817
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


MapReduce test cases are failing because AsyncDispatcher throws 
ArithmeticException: / by zero

{code}
 hadoop.mapreduce.v2.app.TestRuntimeEstimators 
 hadoop.mapreduce.v2.app.job.impl.TestJobImpl 
 hadoop.mapreduce.v2.app.TestMRApp 
{code}

Error Message:

{code}
[ERROR] testUpdatedNodes(org.apache.hadoop.mapreduce.v2.app.TestMRApp)  Time 
elapsed: 0.847 s  <<< ERROR!
java.lang.ArithmeticException: / by zero
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$GenericEventHandler.handle(AsyncDispatcher.java:295)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:1015)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:141)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1544)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1263)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
at 
org.apache.hadoop.mapreduce.v2.app.TestMRApp.testUpdatedNodes(TestMRApp.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
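
A plausible failure mode, given that the trace points at
AsyncDispatcher$GenericEventHandler.handle and that queue-size logging was
recently added there (YARN-8995, listed among the Sep 5 commits in the trunk
qbt report later in this digest), is a modulo against an interval of zero. The
sketch below is illustrative only; the field and configuration names are
hypothetical, not the actual AsyncDispatcher source.

{code:java}
// Illustrative sketch only -- hypothetical names, not the real AsyncDispatcher code.
// A modulo by a zero "interval" reproduces "ArithmeticException: / by zero".
class GenericEventHandlerSketch {
  private final java.util.Queue<Object> eventQueue = new java.util.ArrayDeque<>();
  private int detailsInterval = 0;   // hypothetical: read from config, 0 in some test setups

  void handle(Object event) {
    eventQueue.add(event);
    int qSize = eventQueue.size();
    if (qSize % detailsInterval == 0) {   // throws ArithmeticException when the interval is 0
      System.out.println("Size of event-queue is " + qSize);
    }
  }

  void handleGuarded(Object event) {
    eventQueue.add(event);
    int qSize = eventQueue.size();
    // Guarding against a zero (or negative) interval avoids the division entirely.
    if (detailsInterval > 0 && qSize % detailsInterval == 0) {
      System.out.println("Size of event-queue is " + qSize);
    }
  }
}
{code}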






[jira] [Created] (YARN-9816) EntityGroupFSTimelineStore#scanActiveLogs fails with StackOverflowError

2019-09-06 Thread Prabhu Joseph (Jira)
Prabhu Joseph created YARN-9816:
---

 Summary: EntityGroupFSTimelineStore#scanActiveLogs fails with 
StackOverflowError
 Key: YARN-9816
 URL: https://issues.apache.org/jira/browse/YARN-9816
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


EntityGroupFSTimelineStore#scanActiveLogs fails with StackOverflowError. This 
happens when an invalid application dir is present in /ats/active.

{code}
[hdfs@node2 yarn]$ hadoop fs -ls /ats/active
Found 1 items
-rw-r--r--   3 hdfs hadoop  0 2019-09-06 16:34 
/ats/active/.distcp.tmp.attempt_155759136_39768_m_01_0
{code}

 
{code:java}
java.lang.StackOverflowError
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:632)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
at com.sun.proxy.$Proxy15.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2143)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1076)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1088)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1059)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1038)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1034)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusIterator(DistributedFileSystem.java:1046)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.list(EntityGroupFSTimelineStore.java:398)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:368)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
at 
org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.scanActiveLogs(EntityGroupFSTimelineStore.java:383)
 {code}

It looks like one of our users tried to distcp the hdfs://ats/active dir. The 
distcp job created the .distcp.tmp.attempt_155759136_39768_m_01_0 temp file 
and failed to delete it at the end, which has caused ATS to fail while reading 
active applications.
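
From the repeated scanActiveLogs frames at EntityGroupFSTimelineStore.java:383,
the scanner appears to keep recursing when it meets an entry under /ats/active
that is not a real application directory (here, the leftover .distcp.tmp file).
A rough sketch of the kind of guard that stops this is below; the class and
method names are illustrative, not the actual timeline store code.

{code:java}
// Illustrative sketch only -- hypothetical names, not the real EntityGroupFSTimelineStore code.
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ActiveLogScannerSketch {
  private final FileSystem fs;

  ActiveLogScannerSketch(FileSystem fs) {
    this.fs = fs;
  }

  /** Recursively scan the active dir, skipping entries that are not directories. */
  int scanActiveLogs(Path dir) throws java.io.IOException {
    int found = 0;
    for (FileStatus stat : fs.listStatus(dir)) {
      if (!stat.isDirectory()) {
        // A stray file such as .distcp.tmp.attempt_... must not be recursed into;
        // without a check like this the scan can recurse endlessly and blow the stack.
        continue;
      }
      if (looksLikeAppDir(stat.getPath())) {
        found++;            // hand the app dir off to the per-application scanner here
      } else {
        found += scanActiveLogs(stat.getPath());
      }
    }
    return found;
  }

  private boolean looksLikeAppDir(Path p) {
    return p.getName().startsWith("application_");
  }
}
{code}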






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/

[Sep 5, 2019 8:20:05 AM] (taoyang) YARN-8995. Log events info in 
AsyncDispatcher when event queue size
[Sep 5, 2019 12:42:36 PM] (github) HDDS-1898. GrpcReplicationService#download 
cannot replicate the
[Sep 5, 2019 1:25:15 PM] (stevel) HADOOP-16430. S3AFilesystem.delete to 
incrementally update s3guard with
[Sep 5, 2019 6:44:02 PM] (inigoiri) HDFS-12904. Add DataTransferThrottler to 
the Datanode transfers.
[Sep 5, 2019 7:49:58 PM] (billie) YARN-9718. Fixed yarn.service.am.java.opts 
shell injection. Contributed
[Sep 5, 2019 9:01:42 PM] (jhung) YARN-9810. Add queue capacity/maxcapacity 
percentage metrics.
[Sep 5, 2019 9:33:06 PM] (aengineer) HDDS-1708. Add container scrubber metrics. 
Contributed by Hrishikesh




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapreduce.v2.app.job.impl.TestJobImpl 
   hadoop.mapreduce.v2.app.TestMRApp 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/patch-mvnsite-root.txt
  [468K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/diff-patch-shelldocs.txt
  [88K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1251/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   

[jira] [Created] (YARN-9815) ReservationACLsTestBase fails with NPE

2019-09-06 Thread Ahmed Hussein (Jira)
Ahmed Hussein created YARN-9815:
---

 Summary: ReservationACLsTestBase fails with NPE
 Key: YARN-9815
 URL: https://issues.apache.org/jira/browse/YARN-9815
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: Ahmed Hussein


Running ReservationACLsTestBase throws an NPE when running the FairScheduler. 
Old revisions back in 2016 also throw the NPE.

In the test case, QueueC does not have reserveACLs, so ReservationsACLsManager 
throws an NPE when it tries to access the ACL on line 82.

I still could not find the first revision that caused this test case to fail. 
I stopped at bbfaf3c2712c9ba82b0f8423bdeb314bf505a692, which was working fine.

I am on OS X with Java 1.8.0_201.
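
A rough sketch of the missing guard is below, assuming a hypothetical per-queue
ACL map; the structure and names are illustrative, not the actual
ReservationsACLsManager code.

{code:java}
// Illustrative sketch only -- hypothetical structure and names, not the actual
// ReservationsACLsManager source. It shows the kind of null guard that is
// missing when a queue (QueueC in the test) defines no reservation ACLs.
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;

class ReservationAclCheckSketch {
  enum ReservationOp { SUBMIT, UPDATE, DELETE, LIST }   // stand-in for the real ACL enum

  // per-queue reservation ACLs; queues without configured ACLs have no entry
  private final Map<String, Map<ReservationOp, AccessControlList>> reservationAcls =
      new HashMap<>();

  boolean checkAccess(UserGroupInformation ugi, ReservationOp op, String queue) {
    Map<ReservationOp, AccessControlList> queueAcls = reservationAcls.get(queue);
    if (queueAcls == null || queueAcls.get(op) == null) {
      // Without this guard, queueAcls.get(op).isUserAllowed(ugi) throws an NPE
      // for queues such as QueueC that have no reservation ACLs at all.
      return true;   // assumed default; the real policy may differ
    }
    return queueAcls.get(op).isUserAllowed(ugi);
  }
}
{code}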

 
{code:java}
[ERROR] testApplicationACLs[1](org.apache.hadoop.yarn.server.resourcemanager.ReservationACLsTestBase)  Time elapsed: 1.897 s  <<< ERROR!
java.lang.NullPointerException
        at org.apache.hadoop.yarn.server.resourcemanager.security.ReservationsACLsManager.checkAccess(ReservationsACLsManager.java:83)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.checkReservationACLs(ClientRMService.java:1527)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitReservation(ClientRMService.java:1290)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitReservation(ApplicationClientProtocolPBServiceImpl.java:511)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:645)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
        at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
        at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitReservation(ApplicationClientProtocolPBClientImpl.java:511)
        at org.apache.hadoop.yarn.server.resourcemanager.ReservationACLsTestBase.submitReservation(ReservationACLsTestBase.java:447)
        at org.apache.hadoop.yarn.server.resourcemanager.ReservationACLsTestBase.verifySubmitReservationSuccess(ReservationACLsTestBase.java:247)
        at org.apache.hadoop.yarn.server.resourcemanager.ReservationACLsTestBase.testApplicationACLs(ReservationACLsTestBase.java:125)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
        at 

[jira] [Created] (YARN-9814) JobHistoryServer can't delete aggregated files, if remote app root directory is created by NodeManager

2019-09-06 Thread Adam Antal (Jira)
Adam Antal created YARN-9814:


 Summary: JobHistoryServer can't delete aggregated files, if remote 
app root directory is created by NodeManager
 Key: YARN-9814
 URL: https://issues.apache.org/jira/browse/YARN-9814
 Project: Hadoop YARN
  Issue Type: Bug
  Components: log-aggregation, yarn
Affects Versions: 3.1.2
Reporter: Adam Antal


If the remote-app-log-dir is not created before starting the YARN processes, 
the NodeManager creates it during the init of the AppLogAggregator service. In 
a custom system the primary group of the yarn user (which starts the NM/RM 
daemons) is not hadoop, but a more restricted group (say yarn). If the 
NodeManager creates the folder, it derives the folder's group from the primary 
group of the login user (yarn:yarn in this case), thus setting the root log 
folder and all its subfolders to the yarn group and ultimately making the tree 
inaccessible to other processes - e.g. the JobHistoryServer's 
AggregatedLogDeletionService.

I suggest making this group configurable. If the new configuration is not set, 
we can still stick to the existing behaviour.

Creating the root app-log-dir manually during the setup of such a system is a 
bit error prone, and an end user can easily forget it. I think the best place 
to put this step is the LogAggregationService, which is already responsible 
for creating the folder.
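
A rough sketch of the suggested change is below. The property name is
hypothetical (no such key exists today); it only illustrates letting the
creator of the root dir set an explicit group instead of inheriting the login
user's primary group.

{code:java}
// Illustrative sketch only -- the property name below is hypothetical, not an
// existing YARN configuration key.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class RemoteLogRootDirSetupSketch {
  // hypothetical key, for illustration only
  static final String REMOTE_LOG_DIR_GROUP = "yarn.nodemanager.remote-app-log-dir.group";

  void createRootDirIfMissing(Configuration conf, Path remoteRootLogDir) throws IOException {
    FileSystem fs = remoteRootLogDir.getFileSystem(conf);
    if (!fs.exists(remoteRootLogDir)) {
      fs.mkdirs(remoteRootLogDir, new FsPermission((short) 01777));
      String group = conf.get(REMOTE_LOG_DIR_GROUP);
      if (group != null) {
        // Keep the owner (null = unchanged) and override only the group, so that
        // e.g. the JobHistoryServer's AggregatedLogDeletionService can reach the tree.
        fs.setOwner(remoteRootLogDir, null, group);
      }
      // if unset, fall back to today's behaviour (group derived from the login user)
    }
  }
}
{code}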






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/

[Sep 5, 2019 4:35:47 PM] (ayushsaxena) HDFS-14276. [SBN read] Reduce tailing 
overhead. Contributed by Wei-Chiu
[Sep 5, 2019 9:09:08 PM] (jhung) YARN-9810. Add queue capacity/maxcapacity 
percentage metrics.
[Sep 5, 2019 11:22:15 PM] (xyao) Revert "HDFS-14633. The StorageType quota and 
consume in QuotaFeature is
[Sep 5, 2019 11:24:17 PM] (xyao) HDFS-14633. The StorageType quota and consume 
in QuotaFeature is not




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.TestAbandonBlock 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.registry.secure.TestSecureLogins 
   
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher 
   hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/436/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   

[jira] [Created] (YARN-9813) RM does not start on JDK11 when UIv2 is enabled

2019-09-06 Thread Adam Antal (Jira)
Adam Antal created YARN-9813:


 Summary: RM does not start on JDK11 when UIv2 is enabled
 Key: YARN-9813
 URL: https://issues.apache.org/jira/browse/YARN-9813
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, yarn
Affects Versions: 3.1.2
Reporter: Adam Antal
Assignee: Adam Antal


When starting a ResourceManager on JDK11 with UIv2 enabled, RM startup fails 
with the following message:
{noformat}
Error starting ResourceManager
java.lang.ClassCastException: class 
jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
java.net.URLClassLoader are in module java.base of loader 'bootstrap')
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1190)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1333)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1531)

{noformat}

It is a known issue that the system class loader is no longer a URLClassLoader 
from JDK9 onwards (see the related UT failure: YARN-9512).
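
For context, the failure comes from the JDK 8-era assumption that the system
class loader can be cast to URLClassLoader. The sketch below is illustrative
only (not the ResourceManager code); it shows the failing cast and one common
workaround of building a child URLClassLoader instead of casting.

{code:java}
// Illustrative sketch only -- not the ResourceManager.startWepApp code itself.
import java.net.URL;
import java.net.URLClassLoader;

class Jdk11ClassLoaderSketch {
  static ClassLoader brokenOnJdk11() {
    // ClassCastException on JDK 9+: the AppClassLoader is no longer a URLClassLoader.
    return (URLClassLoader) ClassLoader.getSystemClassLoader();
  }

  static ClassLoader worksOnJdk11(URL extraJar) {
    // Create a child loader carrying the extra URL instead of casting the system loader.
    return new URLClassLoader(new URL[] { extraJar }, ClassLoader.getSystemClassLoader());
  }
}
{code}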


