[jira] [Updated] (YARN-8947) [UI2] Active User info missing from UI2

2019-03-06 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-8947:
---
Attachment: YARN-8947.004.patch

> [UI2] Active User info missing from UI2
> ---
>
> Key: YARN-8947
> URL: https://issues.apache.org/jira/browse/YARN-8947
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: Active_User_Info_RM_UI1.png, 
> Active_User_Info_RM_UI2_Fixed.png, Active_User_Info_RM_UI2_Fixed_2.png, 
> YARN-8947.001.patch, YARN-8947.002.patch, YARN-8947.003.patch, 
> YARN-8947.004.patch
>
>
> The UI1 Scheduler section has Active User info, showing the active users and 
> the applications scheduled per user.
> UI2 is missing that information; there is no way to get a per-user summary of 
> apps.
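For context, the Active Users data UI1 renders comes from the ResourceManager REST API. A quick probe of that endpoint (a sketch only; the host name and the default webapp port 8088 are assumptions about the cluster):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class SchedulerInfoProbe {
  public static void main(String[] args) throws Exception {
    // per-queue user information appears in this response for CapacityScheduler
    // leaf queues; this is the data a UI2 fix would need to surface
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/scheduler");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}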






[jira] [Updated] (YARN-8906) [UI2] NM hostnames not displayed correctly in Node Heatmap Chart

2019-03-06 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-8906:
---
Attachment: YARN-8906.002.patch

> [UI2] NM hostnames not displayed correctly in Node Heatmap Chart
> 
>
> Key: YARN-8906
> URL: https://issues.apache.org/jira/browse/YARN-8906
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Akhil PB
>Priority: Major
> Attachments: Node_Heatmap_Chart.png, Node_Heatmap_Chart_Fixed.png, 
> YARN-8906.001.patch, YARN-8906.002.patch
>
>
> Hostnames displayed on the Node Heatmap Chart look garbled and are not 
> clearly visible. A screenshot is attached.
> cc [~akhilpb]






[jira] [Commented] (YARN-8218) Add application launch time to ATSV1

2019-03-06 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786449#comment-16786449
 ] 

Vrushali C commented on YARN-8218:
--

Retriggered the build. It had failed at hadoop-common. 

> Add application launch time to ATSV1
> 
>
> Key: YARN-8218
> URL: https://issues.apache.org/jira/browse/YARN-8218
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8218.001.patch
>
>
> YARN-7088 publishes application launch time to RMStore and also adds it to 
> the YARN UI. It would be a nice enhancement to have the launchTime event 
> published into the Application history server as well.
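For illustration, a hedged sketch of what publishing the launch time through the ATSv1 client API could look like; the entity and event-type strings are assumptions, not the committed patch:

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
import org.apache.hadoop.yarn.client.api.TimelineClient;

class LaunchTimePublisherSketch {
  static void publishLaunchTime(TimelineClient timelineClient,
      ApplicationId appId, long launchTime) throws Exception {
    TimelineEntity entity = new TimelineEntity();
    entity.setEntityType("YARN_APPLICATION"); // assumed entity type
    entity.setEntityId(appId.toString());

    TimelineEvent launchEvent = new TimelineEvent();
    launchEvent.setEventType("YARN_APPLICATION_LAUNCHED"); // assumed event type
    launchEvent.setTimestamp(launchTime);
    entity.addEvent(launchEvent);

    timelineClient.putEntities(entity); // the ATSv1 write path
  }
}
{code}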






[jira] [Commented] (YARN-8218) Add application launch time to ATSV1

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786443#comment-16786443
 ] 

Hudson commented on YARN-8218:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16151 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16151/])
YARN-8218 Add application launch time to ATSV1. Contributed by Abhishek 
(vrushali: rev 491313ab84cc76683d0ef93a1ac17d8ecc8c430c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TimelineServiceV1Publisher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestSystemMetricsPublisher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java


> Add application launch time to ATSV1
> 
>
> Key: YARN-8218
> URL: https://issues.apache.org/jira/browse/YARN-8218
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8218.001.patch
>
>
> YARN-7088 publishes application launch time to RMStore and also adds it to 
> the YARN UI. It would be a nice enhancement to have the launchTime event 
> published into the Application history server as well.






[jira] [Commented] (YARN-7266) Timeline Server event handler threads locked

2019-03-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786378#comment-16786378
 ] 

Prabhu Joseph commented on YARN-7266:
-

Thanks [~eyang]!

> Timeline Server event handler threads locked
> 
>
> Key: YARN-7266
> URL: https://issues.apache.org/jira/browse/YARN-7266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2, timelineserver
>Affects Versions: 2.7.3
>Reporter: Venkata Puneet Ravuri
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 2.7.8, 3.3.0, 2.8.6, 2.9.3
>
> Attachments: YARN-7266-0005.patch, YARN-7266-001.patch, 
> YARN-7266-002.patch, YARN-7266-003.patch, YARN-7266-004.patch, 
> YARN-7266-006.patch, YARN-7266-007.patch, YARN-7266-008.patch, 
> YARN-7266-branch-2.7.001.patch, YARN-7266-branch-2.8.001.patch
>
>
> Event handlers for Timeline Server seem to take a lock while parsing HTTP 
> headers of the request. This is causing all other threads to wait and slowing 
> down the overall performance of Timeline server. We have resourcemanager 
> metrics enabled to send to timeline server. Because of the high load on 
> ResourceManager, the metrics to be sent are getting backlogged and in turn 
> increasing heap footprint of Resource Manager (due to pending metrics).
> This is the complete stack trace of a blocked thread on timeline server:-
> "2079644967@qtp-1658980982-4560" #4632 daemon prio=5 os_prio=0 
> tid=0x7f6ba490a000 nid=0x5eb waiting for monitor entry 
> [0x7f6b9142c000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector.prepare(AccessorInjector.java:82)
> - waiting to lock <0x0005c0621860> (a java.lang.Class for 
> com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector)
> at 
> com.sun.xml.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:168)
> at 
> com.sun.xml.bind.v2.runtime.reflect.Accessor$FieldReflection.optimize(Accessor.java:282)
> at 
> com.sun.xml.bind.v2.runtime.property.SingleElementNodeProperty.(SingleElementNodeProperty.java:94)
> at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
> Source)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source)
> at java.lang.reflect.Constructor.newInstance(Unknown Source)
> at 
> com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
> at 
> com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:183)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:551)
> at 
> com.sun.xml.bind.v2.runtime.property.ArrayElementProperty.(ArrayElementProperty.java:112)
> at 
> com.sun.xml.bind.v2.runtime.property.ArrayElementNodeProperty.(ArrayElementNodeProperty.java:62)
> at sun.reflect.GeneratedConstructorAccessor19.newInstance(Unknown 
> Source)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source)
> at java.lang.reflect.Constructor.newInstance(Unknown Source)
> at 
> com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
> at 
> com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:183)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:347)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
> at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
> at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
> at javax.xml.bind.ContextFinder.find(Unknown Source)
> at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
> at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:412)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.createExternalGrammar(WadlGeneratorJAXBGrammarGenerator.java:352)
> at 
> com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:115)
> at 
> 
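The blocked stack above sits in Jersey's WADL generator, which builds a JAXBContext behind a class-level lock in AccessorInjector. As a hedged sketch only, and not necessarily what the committed patches do, a common mitigation for this pattern is to disable Jersey 1.x WADL generation so that code path is never entered:

{code:java}
import java.util.HashMap;
import java.util.Map;

public final class JerseyWadlWorkaround {
  // Init params for the Jersey 1.x servlet; disabling WADL means the
  // JAXBContext build seen in the trace above is never triggered.
  public static Map<String, String> disableWadlParams() {
    Map<String, String> params = new HashMap<>();
    params.put("com.sun.jersey.config.feature.DisableWADL", "true");
    return params;
  }
}
{code}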

[jira] [Commented] (YARN-9343) Replace isDebugEnabled with SLF4J parameterized log messages

2019-03-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786371#comment-16786371
 ] 

Prabhu Joseph commented on YARN-9343:
-

[~wilfreds] Thanks for the review again. If you are fine with it, I will file a 
separate jira to handle the above two points.

> Replace isDebugEnabled with SLF4J parameterized log messages
> 
>
> Key: YARN-9343
> URL: https://issues.apache.org/jira/browse/YARN-9343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9343-001.patch, YARN-9343-002.patch, 
> YARN-9343-003.patch
>
>
> Replace isDebugEnabled with SLF4J parameterized log messages. 
> https://www.slf4j.org/faq.html
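For readers following the FAQ link, a minimal sketch of the conversion; the logger, message, and parameter names are illustrative only:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugLogExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugLogExample.class);

  void before(String containerId, String host) {
    // explicit guard plus string concatenation
    if (LOG.isDebugEnabled()) {
      LOG.debug("Assigned container " + containerId + " on host " + host);
    }
  }

  void after(String containerId, String host) {
    // parameterized message: the {} arguments are only formatted when DEBUG
    // is enabled, so the explicit guard becomes redundant for cheap arguments
    LOG.debug("Assigned container {} on host {}", containerId, host);
  }
}
{code}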






[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786365#comment-16786365
 ] 

Hadoop QA commented on YARN-8967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 339 unchanged - 67 fixed = 339 total (was 406) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961487/YARN-8967.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 521b6ad9fdd5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a55fc36 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Comment Edited] (YARN-9343) Replace isDebugEnabled with SLF4J parameterized log messages

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786336#comment-16786336
 ] 

Wilfred Spiegelenburg edited comment on YARN-9343 at 3/7/19 3:28 AM:
-

Thank you for the update [~Prabhu Joseph]

I can see that we still have 200+ {{LOG.isDebugEnabled()}} calls in the code. 
Two things:
# There are a lot of simple one-parameter calls which could easily be converted 
to unguarded calls; examples:
** NvidiaDockerV1CommandPlugin.java
** FSParentQueue.java
** Application.java
# Some of the {{LOG.debug}} calls inside the remaining guards have not been 
changed to parameterised calls yet. Do you want to file a followup jira for 
that, or should that also be part of these changes?


was (Author: wilfreds):
Thank you for the update [~Prabhu Joseph]

I can see that we still have 200+ {{LOG.isDebugEnabled()}} calls in the code. 
two things:
# There are a lot of simple one parameter calls which could easily be converted 
to unguarded calls, examples:
* NvidiaDockerV1CommandPlugin.java
* FSParentQueue.java
* Application.java
# Some of the calls to {{LOG.debug}} that are guarded inside those have not 
been changed to parameterised calls yet. Do you want to file a followup jira 
for that or should that also be part of these changes?

> Replace isDebugEnabled with SLF4J parameterized log messages
> 
>
> Key: YARN-9343
> URL: https://issues.apache.org/jira/browse/YARN-9343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9343-001.patch, YARN-9343-002.patch, 
> YARN-9343-003.patch
>
>
> Replace isDebugEnabled with SLF4J parameterized log messages. 
> https://www.slf4j.org/faq.html






[jira] [Commented] (YARN-9343) Replace isDebugEnabled with SLF4J parameterized log messages

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786336#comment-16786336
 ] 

Wilfred Spiegelenburg commented on YARN-9343:
-

Thank you for the update [~Prabhu Joseph]

I can see that we still have 200+ {{LOG.isDebugEnabled()}} calls in the code. 
Two things (both cases are sketched below):
# There are a lot of simple one-parameter calls which could easily be converted 
to unguarded calls; examples:
* NvidiaDockerV1CommandPlugin.java
* FSParentQueue.java
* Application.java
# Some of the {{LOG.debug}} calls inside the remaining guards have not been 
changed to parameterised calls yet. Do you want to file a followup jira for 
that, or should that also be part of these changes?
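Both cases, as a minimal sketch with illustrative names (dumpClusterState() is a hypothetical expensive call):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GuardedDebugExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(GuardedDebugExample.class);

  void simpleCall(String queueName) {
    // case 1: a simple one-parameter call can drop its guard entirely
    LOG.debug("Initializing queue {}", queueName);
  }

  void expensiveCall() {
    // case 2: where a guard stays because the argument is costly to build,
    // the call inside it should still be parameterised
    if (LOG.isDebugEnabled()) {
      LOG.debug("Cluster state: {}", dumpClusterState());
    }
  }

  private String dumpClusterState() {
    return "...expensive diagnostic string..."; // placeholder for costly work
  }
}
{code}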

> Replace isDebugEnabled with SLF4J parameterized log messages
> 
>
> Key: YARN-9343
> URL: https://issues.apache.org/jira/browse/YARN-9343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9343-001.patch, YARN-9343-002.patch, 
> YARN-9343-003.patch
>
>
> Replace isDebugEnabled with SLF4J parameterized log messages. 
> https://www.slf4j.org/faq.html






[jira] [Commented] (YARN-9344) FS should not reserve when container capability is bigger than node total resource

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786314#comment-16786314
 ] 

Wilfred Spiegelenburg commented on YARN-9344:
-

The test failures are also not related: TestApplicationMasterServiceFair failed 
because it ran with the CapacityScheduler... Not sure what happened there.

[~uranus] This change should be easily testable in a junit test. We should not 
have a -1 from test4tests.
 Can you please add tests to TestFSAppAttempt to make sure that this is working 
as expected?
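For reference, a minimal sketch of the guard under discussion, assuming the existing {{Resources.fitsIn}} helper; this is not the actual patch, and a TestFSAppAttempt case could assert that no reservation is created for such an ask:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public final class ReservationCheck {
  // A container that can never fit on the node should not trigger a
  // reservation, regardless of the node's currently available resources.
  static boolean mayReserve(Resource containerCapability, Resource nodeTotal) {
    return Resources.fitsIn(containerCapability, nodeTotal);
  }
}
{code}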

> FS should not reserve when container capability is bigger than node total 
> resource
> --
>
> Key: YARN-9344
> URL: https://issues.apache.org/jira/browse/YARN-9344
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhaohui Xin
>Assignee: Zhaohui Xin
>Priority: Major
> Attachments: YARN-9344.001.patch, YARN-9344.002.patch
>
>







[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786306#comment-16786306
 ] 

Wilfred Spiegelenburg commented on YARN-8967:
-

Fixed the newly introduced checkstyle issues; the build should no longer have 
any whitespace issues.
The test failures are not related to the patch. Uploading patch 007.

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.
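For readers new to YARN-3635, a rough sketch of the shape such a shared rule engine takes; the method names and types below are illustrative assumptions, not the actual PlacementRule API:

{code:java}
// Each rule decides where an application lands; rules are evaluated in order
// until one returns a queue. (Names here are assumptions for illustration.)
public abstract class PlacementRuleSketch {
  // called once so a rule can read scheduler configuration
  public abstract boolean initialize(Object schedulerContext) throws Exception;

  // maps a submission to a queue; null means "no match, try the next rule"
  public abstract String getQueueForApp(String requestedQueue, String user)
      throws Exception;
}
{code}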






[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: YARN-8967.007.patch

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.






[jira] [Commented] (YARN-9326) Fair Scheduler configuration defaults are not documented in case of min and maxResources

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786290#comment-16786290
 ] 

Wilfred Spiegelenburg commented on YARN-9326:
-

The whitespace issues are fixed via YARN-9348; a new build should not show them 
anymore.

The text looks good to me now. [~templedf], you did a lot of the work around 
resource types. Does this change look good to you from that perspective, or 
should we extend the new-format examples with a resource-type tag like this to 
make it really clear:
{code}
"vcores=X, memory-mb=Y, GPU=5"
{code}

> Fair Scheduler configuration defaults are not documented in case of min and 
> maxResources
> 
>
> Key: YARN-9326
> URL: https://issues.apache.org/jira/browse/YARN-9326
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: docs, documentation, fairscheduler, yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9326.001.patch, YARN-9326.002.patch, 
> YARN-9326.003.patch, YARN-9326.004.patch, YARN-9326.005.patch
>
>
> The FairScheduler's configuration has the following defaults (from the code 
> javadoc):
> {noformat}
> In new style resources, any resource that is not specified will be set to 
> missing or 0%, as appropriate. Also, in the new style resources, units are 
> not allowed. Units are assumed from the resource manager's settings for the 
> resources when the value isn't a percentage. The missing parameter is only 
> used in the case of new style resources without percentages. With new style 
> resources with percentages, any missing resources will be assumed to be 100% 
> because percentages are only used with maximum resource limits.
> {noformat}
> This is not documented on the hadoop yarn site in FairScheduler.html. It is 
> quite intuitive, but it still needs to be documented.
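As a hedged illustration of the two styles the javadoc contrasts (the queue name and values are made up):

{code:xml}
<allocations>
  <queue name="sample_queue">
    <!-- old style: value/unit pairs -->
    <minResources>10000 mb, 0 vcores</minResources>
    <!-- new style: named resources, no units; memory-mb is capped at 70%
         while any unspecified resource defaults to 100%, since percentages
         only apply to maximum resource limits -->
    <maxResources>vcores=80%, memory-mb=70%</maxResources>
  </queue>
</allocations>
{code}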






[jira] [Created] (YARN-9350) Change log level to WARN when ShellCommandExecutor.execute() throws an exception

2019-03-06 Thread Anuhan Torgonshar (JIRA)
Anuhan Torgonshar created YARN-9350:
---

 Summary: Change log level to WARN when 
ShellCommandExecutor.execute() throws an exception 
 Key: YARN-9350
 URL: https://issues.apache.org/jira/browse/YARN-9350
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 2.8.5, 3.1.0
Reporter: Anuhan Torgonshar
 Attachments: NodeHealthScriptRunner.java, 
PrivilegedOperationExecutor.java, WindowsBasedProcessTree.java

When ShellCommandExecutor.execute() throws an exception, the log level 
practices are inconsistent across call sites, as shown below:
{code:java}
//hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-common\src\main\java\org\apache\hadoop\yarn\util\WindowsBasedProcessTree.java
//log statement line number:69
//log level:error
try {
   shellExecutor.execute();
} catch (IOException e) {
   LOG.error(StringUtils.stringifyException(e));
} finally {
   String output = shellExecutor.getOutput();
   if (output != null &&
       output.contains("Prints to stdout a list of processes in the task")) {
      return true;
   }
}

//hadoop-2.8.5-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\util\NodeHealthScriptRunner.java
//log statement line number:116
//log level:warn
try {
  shexec.execute();
} catch (ExitCodeException e) {
  ..
} catch (Exception e) {
   LOG.warn("Caught exception : " + e.getMessage());
   if (!shexec.isTimedOut()) {
 status = HealthCheckerExitStatus.FAILED_WITH_EXCEPTION;
   } else {
 status = HealthCheckerExitStatus.TIMED_OUT;
   }
   exceptionStackTrace = StringUtils.stringifyException(e);
} finally {
   ..
}

//hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\linux\privileged\PrivilegedOperationExecutor.java
//log statement line number:179
//log level:warn
try {
   exec.execute();
   if (LOG.isDebugEnabled()) {
 LOG.debug("command array:");
 LOG.debug(Arrays.toString(fullCommandArray));
 LOG.debug("Privileged Execution Operation Output:");
 LOG.debug(exec.getOutput());
   }
} catch (ExitCodeException e) {
   ..
} catch (IOException e) {
LOG.warn("IOException executing command: ", e);
throw new PrivilegedOperationException(e);
}{code}
There are 2 similar code snippets that use the WARN level when execute() throws 
an exception, while only 1 code snippet chooses the ERROR level for the same 
situation. Therefore, I think that one log statement should also be assigned 
the WARN level.
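The proposed change is then a one-line switch at the WindowsBasedProcessTree site, sketched here against the snippet above:

{code:java}
try {
   shellExecutor.execute();
} catch (IOException e) {
   // proposed: WARN, matching the other two ShellCommandExecutor call sites
   LOG.warn(StringUtils.stringifyException(e));
}
{code}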






[jira] [Commented] (YARN-7266) Timeline Server event handler threads locked

2019-03-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786267#comment-16786267
 ] 

Eric Yang commented on YARN-7266:
-

[~Prabhu Joseph] Thanks for the patch.  I have committed the respective patches 
to branch-2.7 and branch-2.8.

> Timeline Server event handler threads locked
> 
>
> Key: YARN-7266
> URL: https://issues.apache.org/jira/browse/YARN-7266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2, timelineserver
>Affects Versions: 2.7.3
>Reporter: Venkata Puneet Ravuri
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0, 2.9.3
>
> Attachments: YARN-7266-0005.patch, YARN-7266-001.patch, 
> YARN-7266-002.patch, YARN-7266-003.patch, YARN-7266-004.patch, 
> YARN-7266-006.patch, YARN-7266-007.patch, YARN-7266-008.patch, 
> YARN-7266-branch-2.7.001.patch, YARN-7266-branch-2.8.001.patch
>
>
> Event handlers for Timeline Server seem to take a lock while parsing HTTP 
> headers of the request. This is causing all other threads to wait and slowing 
> down the overall performance of Timeline server. We have resourcemanager 
> metrics enabled to send to timeline server. Because of the high load on 
> ResourceManager, the metrics to be sent are getting backlogged and in turn 
> increasing heap footprint of Resource Manager (due to pending metrics).
> This is the complete stack trace of a blocked thread on timeline server:-
> "2079644967@qtp-1658980982-4560" #4632 daemon prio=5 os_prio=0 
> tid=0x7f6ba490a000 nid=0x5eb waiting for monitor entry 
> [0x7f6b9142c000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector.prepare(AccessorInjector.java:82)
> - waiting to lock <0x0005c0621860> (a java.lang.Class for 
> com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector)
> at 
> com.sun.xml.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:168)
> at 
> com.sun.xml.bind.v2.runtime.reflect.Accessor$FieldReflection.optimize(Accessor.java:282)
> at 
> com.sun.xml.bind.v2.runtime.property.SingleElementNodeProperty.(SingleElementNodeProperty.java:94)
> at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
> Source)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source)
> at java.lang.reflect.Constructor.newInstance(Unknown Source)
> at 
> com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
> at 
> com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:183)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:551)
> at 
> com.sun.xml.bind.v2.runtime.property.ArrayElementProperty.(ArrayElementProperty.java:112)
> at 
> com.sun.xml.bind.v2.runtime.property.ArrayElementNodeProperty.(ArrayElementNodeProperty.java:62)
> at sun.reflect.GeneratedConstructorAccessor19.newInstance(Unknown 
> Source)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source)
> at java.lang.reflect.Constructor.newInstance(Unknown Source)
> at 
> com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
> at 
> com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:183)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:347)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
> at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
> at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
> at javax.xml.bind.ContextFinder.find(Unknown Source)
> at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
> at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:412)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.createExternalGrammar(WadlGeneratorJAXBGrammarGenerator.java:352)
> at 
> 

[jira] [Updated] (YARN-7266) Timeline Server event handler threads locked

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7266:

Fix Version/s: 2.8.6
   2.7.8

> Timeline Server event handler threads locked
> 
>
> Key: YARN-7266
> URL: https://issues.apache.org/jira/browse/YARN-7266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2, timelineserver
>Affects Versions: 2.7.3
>Reporter: Venkata Puneet Ravuri
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 2.7.8, 3.3.0, 2.8.6, 2.9.3
>
> Attachments: YARN-7266-0005.patch, YARN-7266-001.patch, 
> YARN-7266-002.patch, YARN-7266-003.patch, YARN-7266-004.patch, 
> YARN-7266-006.patch, YARN-7266-007.patch, YARN-7266-008.patch, 
> YARN-7266-branch-2.7.001.patch, YARN-7266-branch-2.8.001.patch
>
>
> Event handlers for Timeline Server seem to take a lock while parsing HTTP 
> headers of the request. This is causing all other threads to wait and slowing 
> down the overall performance of Timeline server. We have resourcemanager 
> metrics enabled to send to timeline server. Because of the high load on 
> ResourceManager, the metrics to be sent are getting backlogged and in turn 
> increasing heap footprint of Resource Manager (due to pending metrics).
> This is the complete stack trace of a blocked thread on timeline server:-
> "2079644967@qtp-1658980982-4560" #4632 daemon prio=5 os_prio=0 
> tid=0x7f6ba490a000 nid=0x5eb waiting for monitor entry 
> [0x7f6b9142c000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector.prepare(AccessorInjector.java:82)
> - waiting to lock <0x0005c0621860> (a java.lang.Class for 
> com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector)
> at 
> com.sun.xml.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:168)
> at 
> com.sun.xml.bind.v2.runtime.reflect.Accessor$FieldReflection.optimize(Accessor.java:282)
> at 
> com.sun.xml.bind.v2.runtime.property.SingleElementNodeProperty.(SingleElementNodeProperty.java:94)
> at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
> Source)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source)
> at java.lang.reflect.Constructor.newInstance(Unknown Source)
> at 
> com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
> at 
> com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:183)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:551)
> at 
> com.sun.xml.bind.v2.runtime.property.ArrayElementProperty.(ArrayElementProperty.java:112)
> at 
> com.sun.xml.bind.v2.runtime.property.ArrayElementNodeProperty.(ArrayElementNodeProperty.java:62)
> at sun.reflect.GeneratedConstructorAccessor19.newInstance(Unknown 
> Source)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
> Source)
> at java.lang.reflect.Constructor.newInstance(Unknown Source)
> at 
> com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
> at 
> com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:183)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:347)
> at 
> com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
> at 
> com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
> at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
> at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
> at javax.xml.bind.ContextFinder.find(Unknown Source)
> at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
> at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:412)
> at 
> com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.createExternalGrammar(WadlGeneratorJAXBGrammarGenerator.java:352)
> at 
> com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:115)
> at 
> 

[jira] [Resolved] (YARN-8890) Port existing GPU module into pluggable device framework

2019-03-06 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang resolved YARN-8890.

Resolution: Duplicate

Since a sample Nvidia GPU plugin was merged in YARN-9060, there is no need to 
do this again.

> Port existing GPU module into pluggable device framework
> 
>
> Key: YARN-8890
> URL: https://issues.apache.org/jira/browse/YARN-8890
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> Once the pluggable device framework matures, we can port the existing 
> GPU-related code into this new framework.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786256#comment-16786256
 ] 

Eric Yang commented on YARN-9348:
-

Tested on YARN-9255, and the precommit build is generating a correct report 
now. Closing this as resolved.

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so the 
> whitespace checks do not stumble over them.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories at the 
> same time, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log 
> in to Dockerhub.






[jira] [Commented] (YARN-9255) Improve recommend applications order

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786254#comment-16786254
 ] 

Hadoop QA commented on YARN-9255:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp
 in trunk has 13 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-yarn-applications-catalog-webapp in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.appcatalog.application.TestAppCatalogSolrClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9255 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957559/YARN-9255.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 14f45cf877b2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 618e009 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (YARN-9349) When the doTransition() method throws an exception, the log level practices are inconsistent

2019-03-06 Thread Anuhan Torgonshar (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuhan Torgonshar updated YARN-9349:

Description: 
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* thrown by the _*doTransition()*_ method.
{code:java}
**WARN level**
/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\application\ApplicationImpl.java
  log statement line number: 482
  log level:warn
**/
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\localizer\LocalizedResource.java
  log statement line number: 200
  log level:warn
**/
try {
   newState = this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\container\ContainerImpl.java
  log statement line number: 1156
  log level:warn
**/
try {
   newState =
       stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state: Current: ["
       + oldState + "], eventType: [" + event.getType() + "]", e);
}

**ERROR level**
/**
file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\src\main\java\org\apache\hadoop\yarn\server\resourcemanager\rmapp\attempt\RMAppAttemptImpl.java
log statement line number:878
log level: error
**/
try {
   /* keep the master in sync with the state machine */
   this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.error("App attempt: " + appAttemptID
   + " can't handle this event at current state", e);
   onInvalidTranstion(event.getType(), oldState);
}

/**
file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\src\main\java\org\apache\hadoop\yarn\server\resourcemanager\rmnode\RMNodeImpl.java
log statement line number:623
log level: error
**/
try {
   stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.error("Can't handle this event at current state", e);
   LOG.error("Invalid event " + event.getType() + 
   " on Node " + this.nodeId);
}

 
//There are 8 similar code snippets with ERROR log level.

{code}
After a look over the whole project, I found that there are 8 similar code 
snippets that assign the ERROR level when doTransition() throws 
*InvalidStateTransitionException*, and just 3 places that choose the WARN level 
in the same situation. Therefore, I think these 3 log statements should be 
assigned the ERROR level to keep consistent with the other code snippets.
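A sketch of the proposed change at one of the three WARN sites, mirroring the snippets above:

{code:java}
try {
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   // proposed: ERROR, matching the 8 other doTransition() call sites
   LOG.error("Can't handle this event at current state", e);
}
{code}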

  was:
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.
{code:java}
**WARN level**
/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\application\ApplicationImpl.java
  log statement line number: 482
  log level:warn
**/
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\localizer\LocalizedResource.java
  log statement line number: 200
  log level:warn
**/
try {
   newState = this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\container\ContainerImpl.java
  log statement line number: 1156
  log level:warn
**/
try {
newState =
stateMachine.doTransition(event.getType(), event);
} catch 

[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786214#comment-16786214
 ] 

Eric Yang commented on YARN-9348:
-

The Jenkins precommit build doesn't produce useful results until this problem 
is addressed.  I committed patch 005 to trunk to fix the Jenkins errors, based 
on Billie's +1.

The root cause of the Jenkins precommit build problem is the use of a Docker 
Ubuntu 16 container: a Chinese filename does not appear correctly inside the 
docker container, which causes *mvn clean* to fail during the precommit build 
test.  This is an environment-specific problem that can be repaired by setting 
LANG=en_US.UTF-8 on some systems, but not all.  Hence, the workaround is to 
avoid sourcing the ecstatic nodejs package, which contains a folder with a 
Chinese name.

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so the 
> whitespace checks do not stumble over them.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories at the 
> same time, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log 
> in to Dockerhub.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786208#comment-16786208
 ] 

Hudson commented on YARN-9348:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16146 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16146/])
YARN-9348.  Application catalog build system bug fixes. (eyang: rev 
01ada40ea47da0ba32fee22d44f185da2a967456)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/.gitignore
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/package.json
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml


> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to integration 
> problems between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so the 
> whitespace checks do not stumble over them.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories at the 
> same time, causing race conditions.
> # Building on mac triggers access to the osx keychain in an attempt to log 
> in to Dockerhub.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786193#comment-16786193
 ] 

Hadoop QA commented on YARN-9348:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-applications-catalog in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in trunk 
failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in trunk 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-applications-catalog in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-yarn-applications-catalog in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
15s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  0s{color} 
| {color:red} hadoop-yarn-applications-catalog-webapp in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-applications-catalog-docker in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961446/YARN-9348.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux df9cec78376c 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786191#comment-16786191
 ] 

Billie Rinaldi commented on YARN-9348:
--

I think we'll have to commit patch 5 to be able to get a clean build. I am +1 
for patch 5.

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786163#comment-16786163
 ] 

Hadoop QA commented on YARN-9348:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-applications-catalog in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in trunk 
failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in trunk 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-applications-catalog in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-yarn-applications-catalog in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
14s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-yarn-applications-catalog-webapp in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-applications-catalog-docker in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961444/YARN-9348.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux b3b409718c1c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786148#comment-16786148
 ] 

Hadoop QA commented on YARN-5714:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:da67579 |
| JIRA Issue | YARN-5714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961448/YARN-5714-branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 765796d519f7 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / d71cfe1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| Multi-JDK versions |  /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 
/usr/lib/jvm/java-8-openjdk-amd64:1.8.0_191 |
| findbugs | v3.0.0 |
|  Test 

[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2019-03-06 Thread Jim Brennan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786121#comment-16786121
 ] 

Jim Brennan commented on YARN-5714:
---

Thanks [~eepayne]!  I re-uploaded the branch-2 patch.

 

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.1.0
>
> Attachments: YARN-5714-branch-2.001.patch, 
> YARN-5714-branch-2.8.001.patch, YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> in the order used internally by the map implementation (hash based). It does 
> not take into account that some env variables may refer to each other, so 
> some env variables must be declared before those that reference them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value, so native libraries weren't loaded; jobs were running, but not 
> at their best efficiency. This is just one use case falling into this bug, 
> and I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 5 
> days for packaging the patch in the right fashion for JIRA and trying my 
> best to add tests.
> Note: the patch is not OS-aware, with a default empty implementation. I 
> will only implement the Unix version in a first release. I'm not used to 
> Windows env variable syntax, so it will take me more time/research for it.
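
As a rough illustration of the idea behind the fix (a hypothetical helper, 
not the committed patch), the variables can be emitted in dependency order 
with a depth-first walk over $NAME / ${NAME} references:

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical sketch: order env vars so that any variable referenced
 *  via $NAME or ${NAME} is written before the variable that uses it. */
public final class EnvOrdering {
  private static final Pattern REF =
      Pattern.compile("\\$\\{?([A-Za-z_][A-Za-z0-9_]*)\\}?");

  public static LinkedHashMap<String, String> order(Map<String, String> env) {
    LinkedHashMap<String, String> ordered = new LinkedHashMap<>();
    for (String name : env.keySet()) {
      visit(name, env, ordered, new ArrayDeque<String>());
    }
    return ordered;
  }

  private static void visit(String name, Map<String, String> env,
      LinkedHashMap<String, String> ordered, Deque<String> inProgress) {
    if (ordered.containsKey(name) || inProgress.contains(name)) {
      return; // already written, or a reference cycle: keep current order
    }
    inProgress.push(name);
    Matcher m = REF.matcher(env.get(name));
    while (m.find()) {
      String dep = m.group(1);
      if (env.containsKey(dep)) {
        visit(dep, env, ordered, inProgress); // dependencies go first
      }
    }
    inProgress.pop();
    ordered.put(name, env.get(name)); // all of its references are written
  }
}
{code}

With this ordering, HADOOP_COMMON_HOME is always written to the launch script 
before an LD_LIBRARY_PATH that references it.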






[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2019-03-06 Thread Jim Brennan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: (was: YARN-5714-branch-2.001.patch)

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.1.0
>
> Attachments: YARN-5714-branch-2.001.patch, 
> YARN-5714-branch-2.8.001.patch, YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> in the order used internally by the map implementation (hash based). It does 
> not take into account that some env variables may refer to each other, so 
> some env variables must be declared before those that reference them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value, so native libraries weren't loaded; jobs were running, but not 
> at their best efficiency. This is just one use case falling into this bug, 
> and I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 5 
> days for packaging the patch in the right fashion for JIRA and trying my 
> best to add tests.
> Note: the patch is not OS-aware, with a default empty implementation. I 
> will only implement the Unix version in a first release. I'm not used to 
> Windows env variable syntax, so it will take me more time/research for it.






[jira] [Updated] (YARN-5714) ContainerExecutor does not order environment map

2019-03-06 Thread Jim Brennan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-5714:
--
Attachment: YARN-5714-branch-2.001.patch

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.1.0
>
> Attachments: YARN-5714-branch-2.001.patch, 
> YARN-5714-branch-2.001.patch, YARN-5714-branch-2.8.001.patch, 
> YARN-5714.001.patch, YARN-5714.002.patch, YARN-5714.003.patch, 
> YARN-5714.004.patch, YARN-5714.005.patch, YARN-5714.006.patch, 
> YARN-5714.007.patch, YARN-5714.008.patch, YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> in the order used internally by the map implementation (hash based). It does 
> not take into account that some env variables may refer to each other, so 
> some env variables must be declared before those that reference them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value, so native libraries weren't loaded; jobs were running, but not 
> at their best efficiency. This is just one use case falling into this bug, 
> and I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 5 
> days for packaging the patch in the right fashion for JIRA and trying my 
> best to add tests.
> Note: the patch is not OS-aware, with a default empty implementation. I 
> will only implement the Unix version in a first release. I'm not used to 
> Windows env variable syntax, so it will take me more time/research for it.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786114#comment-16786114
 ] 

Billie Rinaldi commented on YARN-9348:
--

Patch 5 is looking good to me. Awaiting precommit build.

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Updated] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9348:

Attachment: YARN-9348.005.patch

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch, YARN-9348.005.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Updated] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9348:

Attachment: YARN-9348.004.patch

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch, YARN-9348.004.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786073#comment-16786073
 ] 

Hadoop QA commented on YARN-9348:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-applications-catalog in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in trunk 
failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in trunk 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-applications-catalog in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-yarn-applications-catalog in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
18s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-applications-catalog-webapp in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-yarn-applications-catalog-webapp in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-yarn-applications-catalog-docker in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9348 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961424/YARN-9348.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 4483b3635b58 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 

[jira] [Commented] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786042#comment-16786042
 ] 

Steve Loughran commented on YARN-9348:
--

bq. Building on Mac triggers access to the OS X keychain in an attempt to log 
in to Dockerhub.

funny

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Commented] (YARN-9338) Timeline related testcases are failing

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785951#comment-16785951
 ] 

Hadoop QA commented on YARN-9338:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
15s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
77m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9338 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785921#comment-16785921
 ] 

Hadoop QA commented on YARN-8967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 341 unchanged - 65 fixed = 343 total (was 406) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
15s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
77m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961405/YARN-8967.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ced17073eb60 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785908#comment-16785908
 ] 

Eric Yang commented on YARN-7129:
-

[~ste...@apache.org] Sorry about the build problem, I have a fix in YARN-9348 
patch 3.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It 
> would be nice to have an application catalog system that provides an 
> editorial and search interface for YARN applications.  This improves the 
> usability of YARN for managing the life cycle of applications.






[jira] [Updated] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9348:

Attachment: YARN-9348.003.patch

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch, 
> YARN-9348.003.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Updated] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9348:

Attachment: YARN-9348.002.patch

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch, YARN-9348.002.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Updated] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9348:

Description: 
A couple of reports show Jenkins precommit builds failing due to an 
integration problem between the nodejs libraries and Yetus. The problems are:

# Nodejs third-party libraries are scanned by the whitespace check, which 
generates many errors. One possible solution is to move the nodejs libraries 
from the project top-level directory to the target directory so they no 
longer trip the whitespace checks.
# maven clean fails because the clean plugin tries to remove the target 
directory and files inside the target/generated-sources directories, causing 
race conditions.
# Building on Mac triggers access to the OS X keychain in an attempt to log 
in to Dockerhub.

  was:Nodejs third-party libraries are scanned by the whitespace check, which 
generates many errors. One possible solution is to move the nodejs libraries 
from the project top-level directory to the target directory so they no 
longer trip the whitespace checks.


> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch
>
>
> A couple of reports show Jenkins precommit builds failing due to an 
> integration problem between the nodejs libraries and Yetus. The problems are:
> # Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.
> # maven clean fails because the clean plugin tries to remove the target 
> directory and files inside the target/generated-sources directories, 
> causing race conditions.
> # Building on Mac triggers access to the OS X keychain in an attempt to log 
> in to Dockerhub.






[jira] [Updated] (YARN-9348) Build issues on hadoop-yarn-application-catalog-webapp

2019-03-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9348:

Summary: Build issues on hadoop-yarn-application-catalog-webapp  (was: 
whitespace check generates too many errors on 
hadoop-yarn-application-catalog-webapp)

> Build issues on hadoop-yarn-application-catalog-webapp
> --
>
> Key: YARN-9348
> URL: https://issues.apache.org/jira/browse/YARN-9348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9348.001.patch
>
>
> Nodejs third-party libraries are scanned by the whitespace check, which 
> generates many errors. One possible solution is to move the nodejs libraries 
> from the project top-level directory to the target directory so they no 
> longer trip the whitespace checks.






[jira] [Commented] (YARN-5714) ContainerExecutor does not order environment map

2019-03-06 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785851#comment-16785851
 ] 

Eric Payne commented on YARN-5714:
--

[~Jim_Brennan], the ASF warning is from dependency-reduced-pom.xml, which is a 
generated file. I do find it odd that findbugs didn't run, reporting 
{{Findbugs executables are not available.}}

In order to get the branch-2 pre-commit build to run, we need to re-upload the 
branch-2 patch so that it is newer than the branch-2.8 patch.

> ContainerExecutor does not order environment map
> 
>
> Key: YARN-5714
> URL: https://issues.apache.org/jira/browse/YARN-5714
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.3, 2.6.4, 3.0.0-alpha1
> Environment: all (linux and windows alike)
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.1.0
>
> Attachments: YARN-5714-branch-2.001.patch, 
> YARN-5714-branch-2.8.001.patch, YARN-5714.001.patch, YARN-5714.002.patch, 
> YARN-5714.003.patch, YARN-5714.004.patch, YARN-5714.005.patch, 
> YARN-5714.006.patch, YARN-5714.007.patch, YARN-5714.008.patch, 
> YARN-5714.009.patch
>
>   Original Estimate: 120h
>  Remaining Estimate: 120h
>
> When dumping the launch container script, environment variables are dumped 
> in the order used internally by the map implementation (hash based). It does 
> not take into account that some env variables may refer to each other, so 
> some env variables must be declared before those that reference them.
> In my case, I ended up with LD_LIBRARY_PATH, which depends on 
> HADOOP_COMMON_HOME, being dumped before HADOOP_COMMON_HOME. Thus it had a 
> wrong value, so native libraries weren't loaded; jobs were running, but not 
> at their best efficiency. This is just one use case falling into this bug, 
> and I'm sure others may happen as well.
> I already have a patch running in my production environment; I estimate 5 
> days for packaging the patch in the right fashion for JIRA and trying my 
> best to add tests.
> Note: the patch is not OS-aware, with a default empty implementation. I 
> will only implement the Unix version in a first release. I'm not used to 
> Windows env variable syntax, so it will take me more time/research for it.






[jira] [Commented] (YARN-9326) Fair Scheduler configuration defaults are not documented in case of min and maxResources

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785830#comment-16785830
 ] 

Hadoop QA commented on YARN-9326:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
18s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
88m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961406/YARN-9326.005.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux ee8b8bc2b447 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d87247 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/23643/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/23643/artifact/out/whitespace-tabs.txt
 |
| Max. process+thread count | 447 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23643/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fair Scheduler configuration defaults are not documented in case of min and 
> maxResources
> 
>
> Key: YARN-9326
> URL: https://issues.apache.org/jira/browse/YARN-9326
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: docs, documentation, fairscheduler, yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9326.001.patch, YARN-9326.002.patch, 
> YARN-9326.003.patch, YARN-9326.004.patch, YARN-9326.005.patch
>
>
> The FairScheduler's configuration has the following defaults (from the code: 
> javadoc):
> {noformat}
> In new style resources, any resource that is not specified will be set to 
> missing or 0%, as appropriate. Also, in the new style resources, units are 
> not allowed. Units are assumed from the resource manager's settings for the 
> resources when the value isn't a percentage. The missing parameter is only 
> used in the case of new style resources without percentages. With new style 
> resources with percentages, any missing resources will be assumed to be 100% 
> because percentages are only used with maximum resource limits.
> {noformat}
> This is not documented in the hadoop yarn site FairScheduler.html. It is 
> quite intuitive, 

[jira] [Commented] (YARN-8218) Add application launch time to ATSV1

2019-03-06 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785751#comment-16785751
 ] 

Abhishek Modi commented on YARN-8218:
-

Gentle reminder [~vrushalic]. Thanks.

> Add application launch time to ATSV1
> 
>
> Key: YARN-8218
> URL: https://issues.apache.org/jira/browse/YARN-8218
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8218.001.patch
>
>
> YARN-7088 publishes application launch time to RMStore and also adds it to 
> the YARN UI. It would be a nice enhancement to have the launchTime event 
> published into the Application history server as well.
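For illustration, publishing such an event through the ATSv1 client could look roughly like the sketch below; the event type name and the surrounding class are assumptions, not the actual patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
import org.apache.hadoop.yarn.client.api.TimelineClient;

public class LaunchTimePublisher {
  public static void publish(Configuration conf, String appId,
      long launchTime) throws Exception {
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(conf);
    client.start();
    try {
      TimelineEntity entity = new TimelineEntity();
      entity.setEntityId(appId);
      entity.setEntityType("YARN_APPLICATION");

      TimelineEvent event = new TimelineEvent();
      // Hypothetical event type; the real patch may choose another name.
      event.setEventType("YARN_APPLICATION_LAUNCHED");
      event.setTimestamp(launchTime);
      entity.addEvent(event);

      client.putEntities(entity); // synchronous put to ATSv1
    } finally {
      client.stop();
    }
  }
}
{code}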



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3488) AM get timeline service info from RM rather than Application specific configuration.

2019-03-06 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785746#comment-16785746
 ] 

Abhishek Modi commented on YARN-3488:
-

Gentle reminder [~rohithsharma] [~vrushalic]

> AM get timeline service info from RM rather than Application specific 
> configuration.
> 
>
> Key: YARN-3488
> URL: https://issues.apache.org/jira/browse/YARN-3488
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications
>Reporter: Junping Du
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3488.001.patch, YARN-3488.002.patch, 
> YARN-3488.003.patch
>
>
> Since the v1 timeline service, we have had an MR configuration to 
> enable/disable putting history events to the timeline service. For today's 
> ongoing v2 timeline service effort, we currently have different 
> methods/structures between v1 and v2 for consuming TimelineClient, so the 
> application has to be aware of which timeline service version is in use.
> There are basically two options here:
> The first option is, as currently done in DistributedShell or MR, to let the 
> application carry a specific configuration that says whether ATS is enabled 
> and which version it is, like MRJobConfig.MAPREDUCE_JOB_EMIT_TIMELINE_DATA, 
> etc.
> The other option is to let the application figure out timeline-related info 
> from YARN/RM; it can be done through registerApplicationMaster() in 
> ApplicationMasterProtocol with a return value for service "off", "v1_on", or 
> "v2_on".
> We prefer the latter option because the application owner doesn't have to be 
> aware of RM/YARN infrastructure details. Please note that we should stay 
> compatible (consistent behavior with the same settings) with released 
> configurations.
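A hypothetical sketch of the second option; getTimelineServiceState() does not exist today and only stands in for whatever field the register response would carry:

{code:java}
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.client.api.TimelineV2Client;

class TimelineAwareAM {
  Object createTimelineClient(AMRMClient<AMRMClient.ContainerRequest> rmClient,
      ApplicationId appId) throws Exception {
    RegisterApplicationMasterResponse resp =
        rmClient.registerApplicationMaster("am-host", 0, "");
    // Assumed new field carrying "off", "v1_on" or "v2_on".
    String state = resp.getTimelineServiceState();
    if ("v1_on".equals(state)) {
      return TimelineClient.createTimelineClient();
    } else if ("v2_on".equals(state)) {
      return TimelineV2Client.createTimelineClient(appId);
    }
    return null; // timeline service is off
  }
}
{code}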



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9338) Timeline related testcases are failing

2019-03-06 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-9338:

Attachment: YARN-9338.003.patch

> Timeline related testcases are failing
> --
>
> Key: YARN-9338
> URL: https://issues.apache.org/jira/browse/YARN-9338
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Prabhu Joseph
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9338.001.patch, YARN-9338.002.patch, 
> YARN-9338.003.patch
>
>
> Timeline related testcases are failing
> {code}
> [ERROR] Failures: 
> [ERROR]   
> TestCombinedSystemMetricsPublisher.testTimelineServiceEventPublishingV2Enabled:262->runTest:245->validateV2:382->verifyEntity:417
>  Expected 2 events to be published expected:<2> but was:<1>
> [ERROR]   
> TestSystemMetricsPublisherForV2.testPublishAppAttemptMetrics:259->verifyEntity:332
>  Expected 2 events to be published expected:<2> but was:<1>
> [ERROR]   
> TestSystemMetricsPublisherForV2.testPublishApplicationMetrics:224->verifyEntity:332
>  Expected 4 events to be published expected:<4> but was:<1>
> [ERROR]   
> TestSystemMetricsPublisherForV2.testPublishContainerMetrics:291->verifyEntity:332
>  Expected 2 events to be published expected:<2> but was:<1>
> [ERROR] Errors: 
> [ERROR]   
> TestCombinedSystemMetricsPublisher.testTimelineServiceEventPublishingV1V2Enabled:252->runTest:242->testSetup:123
>  » YarnRuntime
> [ERROR] Failures: 
> [ERROR]   
> TestTimelineAuthFilterForV2.testPutTimelineEntities:343->access$000:87->publishAndVerifyEntity:307
>  expected:<1> but was:<2>
> [ERROR]   
> TestTimelineAuthFilterForV2.testPutTimelineEntities:352->publishWithRetries:320->publishAndVerifyEntity:307
>  expected:<1> but was:<2>
> [ERROR]   
> TestTimelineAuthFilterForV2.testPutTimelineEntities:352->publishWithRetries:320->publishAndVerifyEntity:307
>  expected:<1> but was:<2>
> [ERROR]   
> TestTimelineAuthFilterForV2.testPutTimelineEntities:343->access$000:87->publishAndVerifyEntity:307
>  expected:<1> but was:<2>
> [INFO] 
> [ERROR] Failures: 
> [ERROR]   
> TestDistributedShell.testDSShellWithoutDomainV2:313->testDSShell:317->testDSShell:458->checkTimelineV2:557->verifyEntityForTimelineV2:710
>  Unexpected number of DS_APP_ATTEMPT_START event published. expected:<1> but 
> was:<0>
> [ERROR]   
> TestDistributedShell.testDSShellWithoutDomainV2CustomizedFlow:329->testDSShell:458->checkTimelineV2:557->verifyEntityForTimelineV2:710
>  Unexpected number of DS_APP_ATTEMPT_START event published. expected:<1> but 
> was:<0>
> [ERROR]   
> TestDistributedShell.testDSShellWithoutDomainV2DefaultFlow:323->testDSShell:458->checkTimelineV2:557->verifyEntityForTimelineV2:710
>  Unexpected number of DS_APP_ATTEMPT_START event published. expected:<1> but 
> was:<0>
> [ERROR] Failures: 
> [ERROR]   
> TestMRTimelineEventHandling.testMRNewTimelineServiceEventHandling:240->checkNewTimelineEvent:304->verifyEntity:462
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9265) FPGA plugin fails to recognize Intel Processing Accelerator Card

2019-03-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785702#comment-16785702
 ] 

Sunil Govindan commented on YARN-9265:
--

Thanks [~pbacsko]

A few comments:
 # I would prefer to take FPGADiscoveryStrategy and related impls out of the 
FPGA common class to get better modularity.
 # In {{SettingsBasedFPGADiscoveryStrategy}}, is it possible to receive 
multiple devices in availableDevices? It looks comma separated. I'm not sure 
whether it should be parsed and kept in each Strategy class ctor itself.
 # As discussed, let's try to do some checks for the script, such as exec 
permission etc.
 # If possible, please add a few comments giving sample values of deviceSpec, 
so it's easier for reference later.
 # In discover, is there any impact due to the removal of synchronized?

 

> FPGA plugin fails to recognize Intel Processing Accelerator Card
> 
>
> Key: YARN-9265
> URL: https://issues.apache.org/jira/browse/YARN-9265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Critical
> Attachments: YARN-9265-001.patch, YARN-9265-002.patch, 
> YARN-9265-003.patch, YARN-9265-004.patch, YARN-9265-005.patch, 
> YARN-9265-006.patch, YARN-9265-007.patch
>
>
> The plugin cannot autodetect Intel FPGA PAC (Processing Accelerator Card).
> There are two major issues.
> Problem #1
> The output of aocl diagnose:
> {noformat}
> 
> Device Name:
> acl0
>  
> Package Pat:
> /home/pbacsko/inteldevstack/intelFPGA_pro/hld/board/opencl_bsp
>  
> Vendor: Intel Corp
>  
> Physical Dev Name   StatusInformation
>  
> pac_a10_f20 PassedPAC Arria 10 Platform (pac_a10_f20)
>   PCIe 08:00.0
>   FPGA temperature = 79 degrees C.
>  
> DIAGNOSTIC_PASSED
> 
>  
> Call "aocl diagnose " to run diagnose for specified devices
> Call "aocl diagnose all" to run diagnose for all devices
> {noformat}
> The plugin fails to recognize this and fails with the following message:
> {noformat}
> 2019-01-25 06:46:02,834 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaResourcePlugin:
>  Using FPGA vendor plugin: 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin
> 2019-01-25 06:46:02,943 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaDiscoverer:
>  Trying to diagnose FPGA information ...
> 2019-01-25 06:46:03,085 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerModule:
>  Using traffic control bandwidth handler
> 2019-01-25 06:46:03,108 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl:
>  Initializing mounted controller cpu at /sys/fs/cgroup/cpu,cpuacct/yarn
> 2019-01-25 06:46:03,139 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceHandlerImpl:
>  FPGA Plugin bootstrap success.
> 2019-01-25 06:46:03,247 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin:
>  Couldn't find (?i)bus:slot.func\s=\s.*, pattern
> 2019-01-25 06:46:03,248 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin:
>  Couldn't find (?i)Total\sCard\sPower\sUsage\s=\s.* pattern
> 2019-01-25 06:46:03,251 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin:
>  Failed to get major-minor number from reading /dev/pac_a10_f30
> 2019-01-25 06:46:03,252 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Failed to 
> bootstrap configured resource subsystems!
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException:
>  No FPGA devices detected!
> {noformat}
> Problem #2
> The plugin assumes that the file name under {{/dev}} can be derived from the 
> "Physical Dev Name", but this is wrong. For example, it thinks that the 
> device file is {{/dev/pac_a10_f30}} which is not the case, the actual 
> file is {{/dev/intel-fpga-port.0}}.
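For reference, extracting the "Physical Dev Name" column from the diagnose output above could look like this minimal sketch (the regex and class are illustrative, not the plugin's actual parsing, and, per problem #2, the parsed name still cannot be mapped straight to a /dev entry):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AoclDiagnoseParser {
  // Matches device table rows such as:
  //   pac_a10_f20   Passed   PAC Arria 10 Platform (pac_a10_f20)
  private static final Pattern DEVICE_LINE =
      Pattern.compile("^(\\S+)\\s+Passed\\s+(.*)$", Pattern.MULTILINE);

  /** Returns the physical device name of every passing device. */
  static List<String> parsePhysicalDevNames(String diagnoseOutput) {
    List<String> names = new ArrayList<>();
    Matcher m = DEVICE_LINE.matcher(diagnoseOutput);
    while (m.find()) {
      names.add(m.group(1)); // e.g. "pac_a10_f20"
    }
    return names;
  }
}
{code}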



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-06 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785699#comment-16785699
 ] 

Billie Rinaldi commented on YARN-7129:
--

Thanks for the heads up, [~ste...@apache.org]. Looking into it now.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images. It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications. This improves the 
> usability of YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9326) Fair Scheduler configuration defaults are not documented in case of min and maxResources

2019-03-06 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-9326:
-
Attachment: YARN-9326.005.patch

> Fair Scheduler configuration defaults are not documented in case of min and 
> maxResources
> 
>
> Key: YARN-9326
> URL: https://issues.apache.org/jira/browse/YARN-9326
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: docs, documentation, fairscheduler, yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9326.001.patch, YARN-9326.002.patch, 
> YARN-9326.003.patch, YARN-9326.004.patch, YARN-9326.005.patch
>
>
> The FairScheduler's configuration has the following defaults (from the code: 
> javadoc):
> {noformat}
> In new style resources, any resource that is not specified will be set to 
> missing or 0%, as appropriate. Also, in the new style resources, units are 
> not allowed. Units are assumed from the resource manager's settings for the 
> resources when the value isn't a percentage. The missing parameter is only 
> used in the case of new style resources without percentages. With new style 
> resources with percentages, any missing resources will be assumed to be 100% 
> because percentages are only used with maximum resource limits.
> {noformat}
> This is not documented on the Hadoop YARN site in FairScheduler.html. It is 
> quite intuitive, but it still needs to be documented.
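For illustration, a queue using the new resource style might be configured as below (values made up); following the javadoc above, a resource omitted from maxResources would be assumed to be 100%, since percentages apply only to maximum resource limits:

{code:xml}
<allocations>
  <queue name="analytics">
    <!-- New style without percentages: units come from the RM settings. -->
    <minResources>memory-mb=10240, vcores=4</minResources>
    <!-- Percentages are only valid for maximum resource limits. -->
    <maxResources>memory-mb=60%, vcores=50%</maxResources>
  </queue>
</allocations>
{code}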



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9326) Fair Scheduler configuration defaults are not documented in case of min and maxResources

2019-03-06 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785682#comment-16785682
 ] 

Adam Antal commented on YARN-9326:
--

Thanks for the suggestions, [~wilfreds]. Uploaded patch v5, triggering Jenkins 
(though there are some problems with Jenkins, see YARN-9348).

Checked the format with the IntelliJ md plugin - looks OK to me, but feel free 
to check it yourself. By the way, I filed HADOOP-16168, which concerns the site 
compile failure.

> Fair Scheduler configuration defaults are not documented in case of min and 
> maxResources
> 
>
> Key: YARN-9326
> URL: https://issues.apache.org/jira/browse/YARN-9326
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: docs, documentation, fairscheduler, yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9326.001.patch, YARN-9326.002.patch, 
> YARN-9326.003.patch, YARN-9326.004.patch, YARN-9326.005.patch
>
>
> The FairScheduler's configuration has the following defaults (from the code: 
> javadoc):
> {noformat}
> In new style resources, any resource that is not specified will be set to 
> missing or 0%, as appropriate. Also, in the new style resources, units are 
> not allowed. Units are assumed from the resource manager's settings for the 
> resources when the value isn't a percentage. The missing parameter is only 
> used in the case of new style resources without percentages. With new style 
> resources with percentages, any missing resources will be assumed to be 100% 
> because percentages are only used with maximum resource limits.
> {noformat}
> This is not documented on the Hadoop YARN site in FairScheduler.html. It is 
> quite intuitive, but it still needs to be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: (was: YARN-8967.006.patch)

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.
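For context, a rule under that interface might look like the sketch below, assuming the interface shape the CapacityScheduler currently uses (initialize() plus getPlacementForApp()); this is not code from the patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.resourcemanager.placement.ApplicationPlacementContext;
import org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementRule;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;

/** Places every application into a queue named after the submitting user. */
public class UserNameRule extends PlacementRule {
  @Override
  public boolean initialize(ResourceScheduler scheduler) throws IOException {
    return true; // no scheduler-specific setup needed for this rule
  }

  @Override
  public ApplicationPlacementContext getPlacementForApp(
      ApplicationSubmissionContext asc, String user) throws YarnException {
    // Returning null would mean "no match, fall through to the next rule".
    return new ApplicationPlacementContext("root." + user);
  }
}
{code}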



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-06 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: YARN-8967.006.patch

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.006.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9349) When doTransition() method occurs exception, the log level practices are inconsistent

2019-03-06 Thread Anuhan Torgonshar (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuhan Torgonshar updated YARN-9349:

Description: 
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.
{code:java}
**WARN level**
/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\application\ApplicationImpl.java
  log statement line number: 482
  log level:warn
**/
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\localizer\LocalizedResource.java
  log statement line number: 200
  log level:warn
**/
try {
   newState = this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\container\ContainerImpl.java
  log statement line number: 1156
  log level:warn
**/
try {
  newState =
      stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
  LOG.warn("Can't handle this event at current state: Current: ["
      + oldState + "], eventType: [" + event.getType() + "]", e);
}

**ERROR level**
/**
file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\src\main\java\org\apache\hadoop\yarn\server\resourcemanager\rmapp\attempt\RMAppAttemptImpl.java
log statement line number:878
log level: error
**/
try {
   /* keep the master in sync with the state machine */
   this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.error("App attempt: " + appAttemptID
   + " can't handle this event at current state", e);
   onInvalidTranstion(event.getType(), oldState);
}

/**
file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\src\main\java\org\apache\hadoop\yarn\server\resourcemanager\rmnode\RMNodeImpl.java
log statement line number:623
log level: error
**/
try {
   stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.error("Can't handle this event at current state", e);
   LOG.error("Invalid event " + event.getType() + 
   " on Node " + this.nodeId);
}

 
//There are 8 similar code snippets with ERROR log level.

{code}
After having a look at the whole project, I found that there are 8 similar code 
snippets assigning the ERROR level to the log statement, and just 3 places that 
choose the WARN level in the same situation. Therefore, I think these 3 log 
statements should be assigned the ERROR level to keep consistent with the other 
code snippets.
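The proposed change at the three WARN sites is then a one-line level switch, for example:

{code:java}
try {
  newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
  // ERROR, to stay consistent with the 8 other doTransition() call sites
  LOG.error("Can't handle this event at current state", e);
}
{code}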

  was:
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.
{code:java}
**WARN level**
/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\application\ApplicationImpl.java
  log statement line number: 480
  log level:warn
**/
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\localizer\LocalizedResource.java
  log statement line number: 200
  log level:warn
**/
try {
   newState = this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\container\ContainerImpl.java
  log statement line number: 1156
  log level:warn
**/
try {
  newState =
      stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {

[jira] [Updated] (YARN-9349) When doTransition() method occurs exception, the log level practices are inconsistent

2019-03-06 Thread Anuhan Torgonshar (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuhan Torgonshar updated YARN-9349:

  Flags: Important
 Attachment: LocalizedResource.java
 ContainerImpl.java
 ApplicationImpl.java
Description: 
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.
{code:java}
**WARN level**
/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\application\ApplicationImpl.java
  log statement line number: 480
  log level:warn
**/
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\localizer\LocalizedResource.java
  log statement line number: 200
  log level:warn
**/
try {
   newState = this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}

/**
  file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-nodemanager\src\main\java\org\apache\hadoop\yarn\server\nodemanager\containermanager\container\ContainerImpl.java
  log statement line number: 1156
  log level:warn
**/
try {
  newState =
      stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
  LOG.warn("Can't handle this event at current state: Current: ["
      + oldState + "], eventType: [" + event.getType() + "]", e);
}

**ERROR level**
/**
file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\src\main\java\org\apache\hadoop\yarn\server\resourcemanager\rmapp\attempt\RMAppAttemptImpl.java
log statement line number:878
log level: error
**/
try {
   /* keep the master in sync with the state machine */
   this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.error("App attempt: " + appAttemptID
   + " can't handle this event at current state", e);
   onInvalidTranstion(event.getType(), oldState);
}

/**
file path: 
hadoop-2.8.5-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-server\hadoop-yarn-server-resourcemanager\src\main\java\org\apache\hadoop\yarn\server\resourcemanager\rmnode\RMNodeImpl.java
log statement line number:623
log level: error
**/
try {
   stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.error("Can't handle this event at current state", e);
   LOG.error("Invalid event " + event.getType() + 
   " on Node " + this.nodeId);
}

 
//There are 8 similar code snippets with ERROR log level.

{code}
After having a look at the whole project, I found that there are 8 similar code 
snippets assigning the ERROR level to the log statement, and just 3 places that 
choose the WARN level in the same situation. Therefore, I think these 3 log 
statements should be assigned the ERROR level to keep consistent with the other 
code snippets.

  was:
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.
{code:java}
//file path: 
hadoop-3.1.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
//log statement line number: 629
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}


{code}


> When doTransition() method occurs exception, the log level practices are 
> inconsistent
> -
>
> Key: YARN-9349
> URL: https://issues.apache.org/jira/browse/YARN-9349
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.8.5
>Reporter: Anuhan Torgonshar
>Priority: Major
> Attachments: ApplicationImpl.java, ContainerImpl.java, 
> LocalizedResource.java
>
>
> There are *inconsistent* log level practices when code catches 
> *_InvalidStateTransitionException_* for _*doTransition()*_ method.
> {code:java}
> **WARN level**

[jira] [Updated] (YARN-9349) When doTransition() method occurs exception, the log level practices are inconsistent

2019-03-06 Thread Anuhan Torgonshar (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuhan Torgonshar updated YARN-9349:

Description: 
There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.
{code:java}
//file path: 
hadoop-3.1.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
//log statement line number: 629
try {
   // queue event requesting init of the same app
   newState = stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitionException e) {
   LOG.warn("Can't handle this event at current state", e);
}


{code}

  was:There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.

Summary: When doTransition() method occurs exception, the log level 
practices are inconsistent  (was: When doTransition() method occurs exception, 
the log level practices are incon)

> When doTransition() method occurs exception, the log level practices are 
> inconsistent
> -
>
> Key: YARN-9349
> URL: https://issues.apache.org/jira/browse/YARN-9349
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.8.5
>Reporter: Anuhan Torgonshar
>Priority: Major
>
> There are *inconsistent* log level practices when code catches 
> *_InvalidStateTransitionException_* for _*doTransition()*_ method.
> {code:java}
> //file path: 
> hadoop-3.1.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
> //log statement line number: 629
> try {
>    // queue event requesting init of the same app
>    newState = stateMachine.doTransition(event.getType(), event);
> } catch (InvalidStateTransitionException e) {
>    LOG.warn("Can't handle this event at current state", e);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9349) When doTransition() method occurs exception, the log level practices are incon

2019-03-06 Thread Anuhan Torgonshar (JIRA)
Anuhan Torgonshar created YARN-9349:
---

 Summary: When doTransition() method occurs exception, the log 
level practices are incon
 Key: YARN-9349
 URL: https://issues.apache.org/jira/browse/YARN-9349
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.8.5, 3.1.0
Reporter: Anuhan Torgonshar


There are *inconsistent* log level practices when code catches 
*_InvalidStateTransitionException_* for _*doTransition()*_ method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9344) FS should not reserve when container capability is bigger than node total resource

2019-03-06 Thread Zhaohui Xin (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785603#comment-16785603
 ] 

Zhaohui Xin commented on YARN-9344:
---

The whitespace error is not related to this patch; 
[YARN-9348|https://issues.apache.org/jira/browse/YARN-9348] will fix it.

> FS should not reserve when container capability is bigger than node total 
> resource
> --
>
> Key: YARN-9344
> URL: https://issues.apache.org/jira/browse/YARN-9344
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhaohui Xin
>Assignee: Zhaohui Xin
>Priority: Major
> Attachments: YARN-9344.001.patch, YARN-9344.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9343) Replace isDebugEnabled with SLF4J parameterized log messages

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785613#comment-16785613
 ] 

Hadoop QA commented on YARN-9343:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} pathlen {color} | {color:red}  0m  
0s{color} | {color:red} The patch appears to contain 1 files with names longer 
than 240 {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  8m 
54s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 15m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  8m 
54s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 54s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
53s{color} | {color:green} root: The patch generated 0 new + 2326 unchanged - 
25 fixed = 2326 total (was 2351) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
15s{color} | {color:red} The patch 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
79m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 24m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
38s{color} 

[jira] [Comment Edited] (YARN-9344) FS should not reserve when container capability is bigger than node total resource

2019-03-06 Thread Zhaohui Xin (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785603#comment-16785603
 ] 

Zhaohui Xin edited comment on YARN-9344 at 3/6/19 1:00 PM:
---

The whitespace error is not related to this patch; YARN-9348 will fix it.


was (Author: uranus):
The whitespace error is not relatived to this patch,  
[YARN-9348|https://issues.apache.org/jira/browse/YARN-9348] will fix it.

> FS should not reserve when container capability is bigger than node total 
> resource
> --
>
> Key: YARN-9344
> URL: https://issues.apache.org/jira/browse/YARN-9344
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhaohui Xin
>Assignee: Zhaohui Xin
>Priority: Major
> Attachments: YARN-9344.001.patch, YARN-9344.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9265) FPGA plugin fails to recognize Intel Processing Accelerator Card

2019-03-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785572#comment-16785572
 ] 

Sunil Govindan commented on YARN-9265:
--

+ [~tangzhankun]

Could you please help to review this patch. Thanks.

> FPGA plugin fails to recognize Intel Processing Accelerator Card
> 
>
> Key: YARN-9265
> URL: https://issues.apache.org/jira/browse/YARN-9265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Critical
> Attachments: YARN-9265-001.patch, YARN-9265-002.patch, 
> YARN-9265-003.patch, YARN-9265-004.patch, YARN-9265-005.patch, 
> YARN-9265-006.patch, YARN-9265-007.patch
>
>
> The plugin cannot autodetect Intel FPGA PAC (Processing Accelerator Card).
> There are two major issues.
> Problem #1
> The output of aocl diagnose:
> {noformat}
> 
> Device Name:
> acl0
>  
> Package Pat:
> /home/pbacsko/inteldevstack/intelFPGA_pro/hld/board/opencl_bsp
>  
> Vendor: Intel Corp
>  
> Physical Dev Name   StatusInformation
>  
> pac_a10_f20 PassedPAC Arria 10 Platform (pac_a10_f20)
>   PCIe 08:00.0
>   FPGA temperature = 79 degrees C.
>  
> DIAGNOSTIC_PASSED
> 
>  
> Call "aocl diagnose " to run diagnose for specified devices
> Call "aocl diagnose all" to run diagnose for all devices
> {noformat}
> The plugin fails to recognize this and fails with the following message:
> {noformat}
> 2019-01-25 06:46:02,834 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaResourcePlugin:
>  Using FPGA vendor plugin: 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin
> 2019-01-25 06:46:02,943 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaDiscoverer:
>  Trying to diagnose FPGA information ...
> 2019-01-25 06:46:03,085 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerModule:
>  Using traffic control bandwidth handler
> 2019-01-25 06:46:03,108 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl:
>  Initializing mounted controller cpu at /sys/fs/cgroup/cpu,cpuacct/yarn
> 2019-01-25 06:46:03,139 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceHandlerImpl:
>  FPGA Plugin bootstrap success.
> 2019-01-25 06:46:03,247 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin:
>  Couldn't find (?i)bus:slot.func\s=\s.*, pattern
> 2019-01-25 06:46:03,248 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin:
>  Couldn't find (?i)Total\sCard\sPower\sUsage\s=\s.* pattern
> 2019-01-25 06:46:03,251 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin:
>  Failed to get major-minor number from reading /dev/pac_a10_f30
> 2019-01-25 06:46:03,252 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Failed to 
> bootstrap configured resource subsystems!
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException:
>  No FPGA devices detected!
> {noformat}
> Problem #2
> The plugin assumes that the file name under {{/dev}} can be derived from the 
> "Physical Dev Name", but this is wrong. For example, it thinks that the 
> device file is {{/dev/pac_a10_f30}} which is not the case, the actual 
> file is {{/dev/intel-fpga-port.0}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8803) [UI2] Show flow runs in the order of recently created time in graph widgets

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785539#comment-16785539
 ] 

Hudson commented on YARN-8803:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16140 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16140/])
YARN-8803. [UI2] Show flow runs in the order of recently created time in 
(sunilg: rev c79f139519e9b2486de31b307d7811b4c2a6b5b0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flow/runs.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-flowrun-brief.js


> [UI2] Show flow runs in the order of recently created time in graph widgets
> ---
>
> Key: YARN-8803
> URL: https://issues.apache.org/jira/browse/YARN-8803
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: Screen Shot 2018-09-20 at 11.42.17 AM.png, Screen Shot 
> 2018-09-20 at 11.42.35 AM.png, Screen Shot 2018-09-20 at 11.42.46 AM.png, 
> YARN-8803.001.patch
>
>
> Flow Run Vs Run Duration shows one job as Run 1, whereas Flow Run Vs Memory 
> Used shows another job as Run 1, and the table with the list of Run Ids has 
> all jobs in a different order, sorted on Creation Time. The widgets will be 
> more useful if they represent the runs in the same order as the table, so 
> that it is easy to correlate. It is better to order the widgets by creation 
> time instead of sorting by time taken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8132) Final Status of applications shown as UNDEFINED in ATS app queries

2019-03-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785510#comment-16785510
 ] 

Prabhu Joseph commented on YARN-8132:
-

[~bibinchundatt]  RMAppImpl can publish the final application status based upon 
the state it transitions to when the app is finished. It publishes finishTime 
similarly, and this won't have the above issue. Could you review the v2 patch?
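A sketch of the mapping that comment suggests, derived from the terminal RMAppState; the helper is illustrative, and a FINISHED app's true final status would still come from what the AM reported:

{code:java}
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;

final class FinalStatusFromState {
  static FinalApplicationStatus fromRMAppState(RMAppState state) {
    switch (state) {
      case FINISHED:
        return FinalApplicationStatus.SUCCEEDED;
      case FAILED:
        return FinalApplicationStatus.FAILED;
      case KILLED:
        return FinalApplicationStatus.KILLED;
      default:
        return FinalApplicationStatus.UNDEFINED; // app not finished yet
    }
  }
}
{code}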

> Final Status of applications shown as UNDEFINED in ATS app queries
> --
>
> Key: YARN-8132
> URL: https://issues.apache.org/jira/browse/YARN-8132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineservice
>Reporter: Charan Hebri
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-8132-001.patch, YARN-8132-002.patch, 
> YARN-8132-003.patch, YARN-8132-004.patch, YARN-8132-branch-3.1.001.patch, 
> YARN-8132-branch-3.2.001.patch, YARN-8132-branch-3.2.002.patch
>
>
> Final Status is shown as UNDEFINED for applications that are KILLED/FAILED. A 
> sample request/response with INFO field for an application,
> {noformat}
> 2018-04-09 13:10:02,126 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:getApp(1693)) - Received URL 
> /ws/v2/timeline/apps/application_1523259757659_0003?fields=INFO from user 
> hrt_qa
> 2018-04-09 13:10:02,156 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:getApp(1716)) - Processed URL 
> /ws/v2/timeline/apps/application_1523259757659_0003?fields=INFO (Took 30 
> ms.){noformat}
> {noformat}
> {
>   "metrics": [],
>   "events": [],
>   "createdtime": 1523263360719,
>   "idprefix": 0,
>   "id": "application_1523259757659_0003",
>   "type": "YARN_APPLICATION",
>   "info": {
> "YARN_APPLICATION_CALLER_CONTEXT": "CLI",
> "YARN_APPLICATION_DIAGNOSTICS_INFO": "Application 
> application_1523259757659_0003 was killed by user xxx_xx at XXX.XXX.XXX.XXX",
> "YARN_APPLICATION_FINAL_STATUS": "UNDEFINED",
> "YARN_APPLICATION_NAME": "Sleep job",
> "YARN_APPLICATION_USER": "hrt_qa",
> "YARN_APPLICATION_UNMANAGED_APPLICATION": false,
> "FROM_ID": 
> "yarn-cluster!hrt_qa!test_flow!1523263360719!application_1523259757659_0003",
> "UID": "yarn-cluster!application_1523259757659_0003",
> "YARN_APPLICATION_VIEW_ACLS": " ",
> "YARN_APPLICATION_SUBMITTED_TIME": 1523263360718,
> "YARN_AM_CONTAINER_LAUNCH_COMMAND": [
>   "$JAVA_HOME/bin/java -Djava.io.tmpdir=$PWD/tmp 
> -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir= -Dyarn.app.container.log.filesize=0 
> -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog 
> -Dhdp.version=3.0.0.0-1163 -Xmx819m -Dhdp.version=3.0.0.0-1163 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/stdout 
> 2>/stderr "
> ],
> "YARN_APPLICATION_QUEUE": "default",
> "YARN_APPLICATION_TYPE": "MAPREDUCE",
> "YARN_APPLICATION_PRIORITY": 0,
> "YARN_APPLICATION_LATEST_APP_ATTEMPT": 
> "appattempt_1523259757659_0003_01",
> "YARN_APPLICATION_TAGS": [
>   "timeline_flow_name_tag:test_flow"
> ],
> "YARN_APPLICATION_STATE": "KILLED"
>   },
>   "configs": {},
>   "isrelatedto": {},
>   "relatesto": {}
> }{noformat}
> This is different from what the Resource Manager reports. For KILLED 
> applications the final status is KILLED, and for FAILED applications it is 
> FAILED. This behavior is seen in ATSv2 as well as in older versions of ATS. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8947) [UI2] Active User info missing from UI2

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785508#comment-16785508
 ] 

Hadoop QA commented on YARN-8947:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-8947 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8947 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23641/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Active User info missing from UI2
> ---
>
> Key: YARN-8947
> URL: https://issues.apache.org/jira/browse/YARN-8947
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: Active_User_Info_RM_UI1.png, 
> Active_User_Info_RM_UI2_Fixed.png, Active_User_Info_RM_UI2_Fixed_2.png, 
> YARN-8947.001.patch, YARN-8947.002.patch, YARN-8947.003.patch
>
>
> The UI1 Scheduler section has Active User info, where it shows active users 
> and the applications scheduled.
> UI2 is missing that information; there is no way to get a per-user summary 
> of apps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9138) Improve test coverage for nvidia-smi binary execution of GpuDiscoverer

2019-03-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785498#comment-16785498
 ] 

Hudson commented on YARN-9138:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16139 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16139/])
YARN-9138. Improve test coverage for nvidia-smi binary execution of (sunilg: 
rev 46045c5cb3ab06a35df27879afbd1bc3c2a384dd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuDiscoverer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/gpu/GpuDeviceInformationParser.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/TestGpuDiscoverer.java


> Improve test coverage for nvidia-smi binary execution of GpuDiscoverer
> --
>
> Key: YARN-9138
> URL: https://issues.apache.org/jira/browse/YARN-9138
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9138.001.patch, YARN-9138.002.patch, 
> YARN-9138.003.patch, YARN-9138.004.patch, YARN-9138.005.patch, 
> YARN-9138.006.patch, YARN-9138.007.patch
>
>
> The code that executes nvidia-smi (doing GPU device auto-discovery) doesn't 
> have much test coverage.
> This patch adds tests to this part of the code.
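One common way to cover that path without real hardware is to point the discoverer at a fake nvidia-smi script, roughly as sketched below; the configuration key name is an assumption here, not a confirmed constant:

{code:java}
import java.io.File;
import java.nio.file.Files;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.junit.Test;

public class TestFakeNvidiaSmi {
  @Test
  public void discovererUsesConfiguredBinary() throws Exception {
    // A fake nvidia-smi that prints a canned (empty) XML report.
    File fake = File.createTempFile("nvidia-smi", "");
    Files.write(fake.toPath(),
        "#!/bin/bash\necho '<nvidia_smi_log></nvidia_smi_log>'\n".getBytes());
    Files.setPosixFilePermissions(fake.toPath(),
        EnumSet.allOf(PosixFilePermission.class));

    YarnConfiguration conf = new YarnConfiguration();
    // Assumed key; it points GPU discovery at the fake binary so the
    // parsing logic can be exercised on machines without GPUs.
    conf.set("yarn.nodemanager.resource-plugins.gpu.path-to-executable",
        fake.getAbsolutePath());
  }
}
{code}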



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9080) Bucket Directories as part of ATS done accumulates

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785489#comment-16785489
 ] 

Hadoop QA commented on YARN-9080:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage:
 The patch generated 2 new + 11 unchanged - 0 fixed = 13 total (was 11) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
20s{color} | {color:red} The patch has 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
92m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9080 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961362/YARN-9080-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1bfdbdbf8ee9 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 62e89dc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23640/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
 |

[jira] [Updated] (YARN-9138) Improve test coverage for nvidia-smi binary execution of GpuDiscoverer

2019-03-06 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-9138:
-
Summary: Improve test coverage for nvidia-smi binary execution of 
GpuDiscoverer  (was: Test error handling of nvidia-smi binary execution of 
GpuDiscoverer)

> Improve test coverage for nvidia-smi binary execution of GpuDiscoverer
> --
>
> Key: YARN-9138
> URL: https://issues.apache.org/jira/browse/YARN-9138
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9138.001.patch, YARN-9138.002.patch, 
> YARN-9138.003.patch, YARN-9138.004.patch, YARN-9138.005.patch, 
> YARN-9138.006.patch, YARN-9138.007.patch
>
>
> The code that executes nvidia-smi (doing GPU device auto-discovery) doesn't 
> have much test coverage.
> This patch adds tests to this part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9138) Test error handling of nvidia-smi binary execution of GpuDiscoverer

2019-03-06 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785485#comment-16785485
 ] 

Sunil Govindan commented on YARN-9138:
--

+1 on latest patch.

> Test error handling of nvidia-smi binary execution of GpuDiscoverer
> ---
>
> Key: YARN-9138
> URL: https://issues.apache.org/jira/browse/YARN-9138
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9138.001.patch, YARN-9138.002.patch, 
> YARN-9138.003.patch, YARN-9138.004.patch, YARN-9138.005.patch, 
> YARN-9138.006.patch, YARN-9138.007.patch
>
>
> The code that executes nvidia-smi (doing GPU device auto-discovery) doesn't 
> have much test coverage.
> This patch adds tests to this part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-06 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785470#comment-16785470
 ] 

Steve Loughran commented on YARN-7129:
--

This is doing odd things to builds, e.g. the latest HADOOP-15625 yetus run 
couldn't clean it up:
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on 
project hadoop-yarn-applications-catalog-webapp: Failed to clean project: 
Failed to delete 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/generated-sources/vendor/ecstatic/test/public
{code}

I seemed to find a copy of the tree hanging around locally when I switched 
branches. I thought that was a one-off, but if yetus is getting confused too...


> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch
>
>
> YARN native services provide a web services API to improve usability of 
> application deployment on Hadoop using a collection of docker images.  It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications.  This improves the 
> usability of YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8158) Document that create tag doesn't work for rule secondaryGroupExistingQueue

2019-03-06 Thread Siddharth Ahuja (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja reassigned YARN-8158:
-

Assignee: Siddharth Ahuja

> Document that create tag doesn't work for rule secondaryGroupExistingQueue
> --
>
> Key: YARN-8158
> URL: https://issues.apache.org/jira/browse/YARN-8158
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Siddharth Ahuja
>Priority: Minor
>
> No matter what "create" tag you give to the secondaryGroupExistingQueue rule, 
> this rule won't create a queue if it doesn't exist.
> <rule name="secondaryGroupExistingQueue" create="false" /> and
> <rule name="secondaryGroupExistingQueue" create="true" /> are the same.
> We need to document this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6412) aux-services classpath not documented

2019-03-06 Thread Siddharth Ahuja (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Ahuja reassigned YARN-6412:
-

Assignee: Siddharth Ahuja

> aux-services classpath not documented
> -
>
> Key: YARN-6412
> URL: https://issues.apache.org/jira/browse/YARN-6412
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Siddharth Ahuja
>Priority: Minor
>  Labels: documentation, newbie
>
> YARN-4577 introduced two new configuration entries 
> yarn.nodemanager.aux-services.%s.classpath and 
> yarn.nodemanager.aux-services.%s.system-classes. These are not documented in 
> hadoop-yarn-common/.../yarn-default.xml
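
For illustration, both properties follow the yarn.nodemanager.aux-services.%s.* pattern, with %s replaced by the aux service name. Below is a minimal sketch using the standard Hadoop Configuration API; the "spark_shuffle" service name and the classpath value are made-up examples, not values from this issue.

{code}
import org.apache.hadoop.conf.Configuration;

public class AuxServiceClasspathSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // %s in the property names stands for the aux service name;
    // "spark_shuffle" and the jar path below are hypothetical examples.
    String service = "spark_shuffle";
    conf.set(String.format(
        "yarn.nodemanager.aux-services.%s.classpath", service),
        "/opt/aux-jars/*");
    conf.set(String.format(
        "yarn.nodemanager.aux-services.%s.system-classes", service),
        "java.,org.apache.hadoop.");
    System.out.println(conf.get(
        "yarn.nodemanager.aux-services.spark_shuffle.classpath"));
  }
}
{code}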



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6442) Inaccurate javadoc in NodeManagerHardwareUtils.getContainerMemoryMB

2019-03-06 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal reassigned YARN-6442:


Assignee: Siddharth Ahuja

> Inaccurate javadoc in NodeManagerHardwareUtils.getContainerMemoryMB
> ---
>
> Key: YARN-6442
> URL: https://issues.apache.org/jira/browse/YARN-6442
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Siddharth Ahuja
>Priority: Minor
>  Labels: newbie
>
> NodeManagerHardwareUtils.getContainerMemoryMB has the following javadoc:
> {code}
> "If the OS has a ResourceCalculatorPlugin implemented, the calculation is
> 0.8 * (RAM - 2 * JVM-memory) i.e. use 80% of the memory after accounting
> for memory used by the DataNode and the NodeManager. If the number is less
> than 1GB, log a warning message."
> {code}
> I think the accurate expression is 0.8 * (RAM - 2 * JVM) - systemreserved. I 
> also do not see the 1GB cap in the code.
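
To make the discrepancy concrete, here is a worked example under the reporter's reading; all input figures below are made up for illustration and are not values from the YARN code.

{code}
public class ContainerMemorySketch {
  public static void main(String[] args) {
    // All figures are made-up inputs, not values from the YARN code.
    double ramMB = 8192;            // physical RAM
    double jvmMB = 1024;            // JVM memory per daemon
    double systemReservedMB = 1024; // system-reserved memory

    // As the javadoc states it:
    double documented = 0.8 * (ramMB - 2 * jvmMB);                  // 4915.2
    // As the reporter reads the code:
    double observed = 0.8 * (ramMB - 2 * jvmMB) - systemReservedMB; // 3891.2
    System.out.println(documented + " MB vs " + observed + " MB");
  }
}
{code}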



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6442) Inaccurate javadoc in NodeManagerHardwareUtils.getContainerMemoryMB

2019-03-06 Thread Siddharth Ahuja (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785437#comment-16785437
 ] 

Siddharth Ahuja commented on YARN-6442:
---

Hi there, can someone kindly assign this Jira to me please? Thanks in advance!

> Inaccurate javadoc in NodeManagerHardwareUtils.getContainerMemoryMB
> ---
>
> Key: YARN-6442
> URL: https://issues.apache.org/jira/browse/YARN-6442
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Minor
>  Labels: newbie
>
> NodeManagerHardwareUtils.getContainerMemoryMB has the following javadoc:
> {code}
> "If the OS has a ResourceCalculatorPlugin implemented, the calculation is
> 0.8 * (RAM - 2 * JVM-memory) i.e. use 80% of the memory after accounting
> for memory used by the DataNode and the NodeManager. If the number is less
> than 1GB, log a warning message."
> {code}
> I think the accurate expression is 0.8 * (RAM - 2 * JVM) - systemreserved. I 
> also do not see the 1GB cap in the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9080) Bucket Directories as part of ATS done accumulates

2019-03-06 Thread Rakesh Shah (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785428#comment-16785428
 ] 

Rakesh Shah commented on YARN-9080:
---

Thanks [~Prabhu Joseph]

> Bucket Directories as part of ATS done accumulates
> --
>
> Key: YARN-9080
> URL: https://issues.apache.org/jira/browse/YARN-9080
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-YARN-9080.patch, 0002-YARN-9080.patch, 
> 0003-YARN-9080.patch, YARN-9080-004.patch
>
>
> Older bucket directories (cluster_timestamp, bucket1 and bucket2) have been 
> observed accumulating under the ATS done directory. The cleanLogs part of 
> EntityLogCleaner removes only the app directories and not the bucket 
> directories.
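
For context, the shape of the missing cleanup can be sketched with the Hadoop FileSystem API. This is a simplified illustration of the idea, not the actual EntityLogCleaner change; the done-root parameter and the retention check are assumptions.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BucketDirCleanerSketch {
  /**
   * Deletes directories directly under doneRoot that are empty and older
   * than retainMillis. Simplified: the real cleaner must walk the
   * cluster_timestamp/bucket1/bucket2 hierarchy and guard against races.
   */
  public static void cleanEmptyBuckets(FileSystem fs, Path doneRoot,
      long retainMillis) throws IOException {
    long cutoff = System.currentTimeMillis() - retainMillis;
    for (FileStatus dir : fs.listStatus(doneRoot)) {
      if (dir.isDirectory()
          && dir.getModificationTime() < cutoff
          && fs.listStatus(dir.getPath()).length == 0) {
        fs.delete(dir.getPath(), false); // non-recursive: only empty dirs go
      }
    }
  }
}
{code}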



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9344) FS should not reserve when container capability is bigger than node total resource

2019-03-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785416#comment-16785416
 ] 

Hadoop QA commented on YARN-9344:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 75 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
14s{color} | {color:red} The patch has 19849 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
80m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterServiceFair |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961286/YARN-9344.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c6caf6cba9c7 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 62e89dc |
| maven | version: Apache Maven 3.3.9 |

[jira] [Commented] (YARN-9080) Bucket Directories as part of ATS done accumulates

2019-03-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785415#comment-16785415
 ] 

Prabhu Joseph commented on YARN-9080:
-

[~Rakesh_Shah] Yes.

> Bucket Directories as part of ATS done accumulates
> --
>
> Key: YARN-9080
> URL: https://issues.apache.org/jira/browse/YARN-9080
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-YARN-9080.patch, 0002-YARN-9080.patch, 
> 0003-YARN-9080.patch, YARN-9080-004.patch
>
>
> Older bucket directories (cluster_timestamp, bucket1 and bucket2) have been 
> observed accumulating under the ATS done directory. The cleanLogs part of 
> EntityLogCleaner removes only the app directories and not the bucket 
> directories.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9080) Bucket Directories as part of ATS done accumulates

2019-03-06 Thread Rakesh Shah (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785414#comment-16785414
 ] 

Rakesh Shah commented on YARN-9080:
---

Hi [~Prabhu Joseph]

So apart from the current-date directories, will it clean everything else?

 

> Bucket Directories as part of ATS done accumulates
> --
>
> Key: YARN-9080
> URL: https://issues.apache.org/jira/browse/YARN-9080
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-YARN-9080.patch, 0002-YARN-9080.patch, 
> 0003-YARN-9080.patch, YARN-9080-004.patch
>
>
> Older bucket directories (cluster_timestamp, bucket1 and bucket2) have been 
> observed accumulating under the ATS done directory. The cleanLogs part of 
> EntityLogCleaner removes only the app directories and not the bucket 
> directories.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9299) TestTimelineReaderWhitelistAuthorizationFilter ignores Http Errors

2019-03-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785401#comment-16785401
 ] 

Prabhu Joseph commented on YARN-9299:
-

[~rohithsharma] Can we commit this patch? Thanks.

> TestTimelineReaderWhitelistAuthorizationFilter ignores Http Errors
> --
>
> Key: YARN-9299
> URL: https://issues.apache.org/jira/browse/YARN-9299
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9299-001.patch
>
>
> The positive test cases in TestTimelineReaderWhitelistAuthorizationFilter do 
> not check whether there is any error in the HttpResponse.
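
The gap amounts to positive cases never asserting on the HTTP status. Below is a minimal sketch of the missing check; the URL is a placeholder, and this is not the test's actual mock setup.

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpStatusAssertSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; the real test hits the timeline reader in-process.
    URL url = new URL("http://localhost:8188/ws/v2/timeline/");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    int status = conn.getResponseCode();
    // The gist of the fix: a positive case should assert success explicitly
    // rather than ignore the response code.
    if (status != HttpURLConnection.HTTP_OK) {
      throw new AssertionError("expected 200 OK but got " + status);
    }
  }
}
{code}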



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9080) Bucket Directories as part of ATS done accumulates

2019-03-06 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785398#comment-16785398
 ] 

Prabhu Joseph commented on YARN-9080:
-

Hi [~Rakesh_Shah], I have tested this on my test cluster, which had bucket 
directories older than a month.

Before the patch, cluster_timestamp and bucket directories older than the 
retention yarn.timeline-service.entity-group-fs-store.retain-seconds 
(604800, i.e. 7 days) are present:

{code}
[yarn@tparimi-tarunhdp26-3 ~]$ hadoop fs -ls -R /ats/done
drwxr-xr-x   - yarn hadoop  0 2018-10-24 15:03 /ats/done/1540393151330
drwxr-xr-x   - yarn hadoop  0 2018-10-24 15:03 
/ats/done/1540393151330/
drwxr-xr-x   - yarn hadoop  0 2018-10-31 15:15 
/ats/done/1540393151330//000
drwxr-xr-x   - yarn hadoop  0 2018-10-26 09:41 /ats/done/1540478614379
drwxr-xr-x   - yarn hadoop  0 2018-10-26 09:41 
/ats/done/1540478614379/
drwxr-xr-x   - yarn hadoop  0 2018-11-02 11:02 
/ats/done/1540478614379//000
drwxr-xr-x   - yarn  hadoop  0 2019-03-05 04:44 
/ats/done/1551701066597
drwxr-xr-x   - yarn  hadoop  0 2019-03-05 04:44 
/ats/done/1551701066597/
drwxr-xr-x   - yarn  hadoop  0 2019-03-05 04:46 
/ats/done/1551701066597//000
drwxrwx---   - ambari-qa hadoop  0 2019-03-05 04:44 
/ats/done/1551701066597//000/application_1551701066597_0001
{code}

After the patch, with the cleaner interval set to one minute 
(yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds = 60) 
and a restart of ATS, the older cluster_timestamp and bucket directories are 
removed as expected within a minute:

{code}
[yarn@tparimi-tarunhdp26-3 yarn]$ hadoop fs -ls -R /ats/done
drwxr-xr-x   - yarn  hadoop  0 2019-03-05 04:44 
/ats/done/1551701066597
drwxr-xr-x   - yarn  hadoop  0 2019-03-05 04:44 
/ats/done/1551701066597/
drwxr-xr-x   - yarn  hadoop  0 2019-03-05 04:46 
/ats/done/1551701066597//000
drwxrwx---   - ambari-qa hadoop  0 2019-03-05 04:44 
/ats/done/1551701066597//000/application_1551701066597_0001
{code}
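
For anyone reproducing the verification, the interval can be lowered through the standard Configuration mechanism. A sketch only; in practice the property goes into yarn-site.xml, and the fallback default shown is the retain-seconds value from above.

{code}
import org.apache.hadoop.conf.Configuration;

public class CleanerIntervalSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Equivalent of setting the property in yarn-site.xml for a test run.
    conf.setLong(
        "yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds",
        60L);
    System.out.println(conf.getLong(
        "yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds",
        604800L));
  }
}
{code}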


[~rohithsharma] Can you review the v4 patch for this jira?




> Bucket Directories as part of ATS done accumulates
> --
>
> Key: YARN-9080
> URL: https://issues.apache.org/jira/browse/YARN-9080
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-YARN-9080.patch, 0002-YARN-9080.patch, 
> 0003-YARN-9080.patch, YARN-9080-004.patch
>
>
> Older bucket directories (cluster_timestamp, bucket1 and bucket2) have been 
> observed accumulating under the ATS done directory. The cleanLogs part of 
> EntityLogCleaner removes only the app directories and not the bucket 
> directories.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9080) Bucket Directories as part of ATS done accumulates

2019-03-06 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9080:

Attachment: YARN-9080-004.patch

> Bucket Directories as part of ATS done accumulates
> --
>
> Key: YARN-9080
> URL: https://issues.apache.org/jira/browse/YARN-9080
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-YARN-9080.patch, 0002-YARN-9080.patch, 
> 0003-YARN-9080.patch, YARN-9080-004.patch
>
>
> Older bucket directories (cluster_timestamp, bucket1 and bucket2) have been 
> observed accumulating under the ATS done directory. The cleanLogs part of 
> EntityLogCleaner removes only the app directories and not the bucket 
> directories.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org