[jira] [Commented] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499847#comment-16499847
 ] 

genericqa commented on YARN-8388:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926306/YARN-8388.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3594e6a8a527 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9c4cbed |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20936/testReport/ |
| Max. process+thread count | 1411 (vs. ulimit of 1) |
|

[jira] [Commented] (YARN-8320) [Umbrella] Support CPU isolation for latency-sensitive (LS) service

2018-06-04 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499938#comment-16499938
 ] 

Bibin A Chundatt commented on YARN-8320:


Thank you [~cheersyang] and [~yangjiandan] for the design doc.

{quote}
When a container’s cpu_share_mode is EXCLUSIVE/RESERVED, the number of allocated
processors is allocateProcessorNum = container_vcore / Vcore_Ratio; the request
will be rejected if allocateProcessorNum <= 0.
{quote}
# IIUC, if the NM has no free slots to bind to, it rejects the container start 
request, which is then treated as a container failure. Couldn't the scheduler 
simply allocate the container to the same NodeManager again?
# When NM processors / NM vcores < 1 and share mode is used, have you considered 
*strictness per container*, i.e. applying the CFS period and quota in addition 
to the cpuset assignment? Otherwise, when no other process is using the CPU, a 
container can consume more than it is supposed to. (A rough sketch of the 
rejection rule quoted above is included at the end of this comment.)

Any thoughts on having CpuBindHandlerImpl include two allocators for the cgroup 
subsystems, one for cpu and another for cpuset?

Could you also consider the following in the design:

# Use a fixed set of directories for assignment in the Allocator, to reduce the 
overhead of creating and deleting them per container.
# Resource accounting could go wrong when containers are preempted: a 
kill/reject could be processed after the container has already started.
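
To make the rejection rule quoted above concrete, a minimal sketch (class and 
method names are illustrative, not taken from the patch):

{code:java}
// Illustrative sketch of the quoted rule, not actual patch code.
public class ProcessorAllocationCheck {

  /**
   * @param containerVcores vcores requested by the container
   * @param vcoreRatio      Vcore_Ratio from the design doc (vcores per processor)
   * @return number of processors to bind for EXCLUSIVE/RESERVED share mode
   */
  public static int allocateProcessorNum(int containerVcores, int vcoreRatio) {
    int allocateProcessorNum = containerVcores / vcoreRatio;
    if (allocateProcessorNum <= 0) {
      // e.g. containerVcores = 1, vcoreRatio = 2 -> 0 processors -> reject
      throw new IllegalArgumentException(
          "Rejecting request: " + containerVcores + " vcore(s) is less than "
              + "one whole processor at Vcore_Ratio " + vcoreRatio);
    }
    return allocateProcessorNum;
  }
}
{code}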



> [Umbrella] Support CPU isolation for latency-sensitive (LS) service
> ---
>
> Key: YARN-8320
> URL: https://issues.apache.org/jira/browse/YARN-8320
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Jiandan Yang 
>Priority: Major
> Attachments: CPU-isolation-for-latency-sensitive-services-v1.pdf, 
> CPU-isolation-for-latency-sensitive-services-v2.pdf, YARN-8320.001.patch
>
>
> Currently NodeManager uses “cpu.cfs_period_us”, “cpu.cfs_quota_us” and 
> “cpu.shares” to isolate CPU resources. However,
>  * the Linux Completely Fair Scheduler (CFS) is a throughput-oriented 
> scheduler with no support for differentiated latency, and
>  * the request latency of services running in containers can fluctuate 
> heavily when all containers share CPUs, which latency-sensitive services 
> cannot afford in our production environment.
> So we need more fine-grained CPU isolation.
> Here we propose a solution that uses cgroup cpuset to bind containers to 
> different processors; this is inspired by the isolation technique in the 
> [Borg system|http://schd.ws/hosted_files/lcccna2016/a7/CAT%20@%20Scale.pdf].
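
To make the cpuset idea concrete, a minimal sketch of binding a process to 
specific processors through the cgroup v1 filesystem; the mount point, group 
name, and pid are illustrative assumptions, not the proposed NodeManager 
implementation:

{code:java}
// Illustrative sketch only: bind a container process to specific processors
// via the cgroup v1 cpuset controller.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CpusetBindSketch {
  public static void main(String[] args) throws IOException {
    // Assumed mount point and a hypothetical per-container group name.
    Path group = Paths.get("/sys/fs/cgroup/cpuset/hadoop-yarn/container_01");
    Files.createDirectories(group);

    // cpuset requires both cpus and mems to be set before tasks can join.
    Files.write(group.resolve("cpuset.cpus"),
        "0-1".getBytes(StandardCharsets.UTF_8));   // pin to processors 0 and 1
    Files.write(group.resolve("cpuset.mems"),
        "0".getBytes(StandardCharsets.UTF_8));     // NUMA node 0

    // Move the container's process into the group (pid is illustrative).
    Files.write(group.resolve("tasks"),
        "12345".getBytes(StandardCharsets.UTF_8));
  }
}
{code}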






[jira] [Commented] (YARN-8155) Improve the logging in NMTimelinePublisher and TimelineCollectorWebService

2018-06-04 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499958#comment-16499958
 ] 

Rohith Sharma K S commented on YARN-8155:
-

Thanks Abhishek Modi for the patch; it looks reasonable to me. Would you add a 
similar change in TimelineServiceV2Publisher as well?

TimelineCollectorWebService
# Catching NotFoundException and converting it into a WebApplicationException 
changes the return code. Shouldn't we still retain the NOT_FOUND return code?

[~vrushalic] [~haibo.chen], would you take a look at this patch please? 

> Improve the logging in NMTimelinePublisher and TimelineCollectorWebService
> --
>
> Key: YARN-8155
> URL: https://issues.apache.org/jira/browse/YARN-8155
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8155.001.patch, YARN-8155.002.patch
>
>
> We see that NM logs are filled with large NotFoundException stack traces when 
> the collector has been removed from one NM while the other NMs are still 
> publishing entities.
>  
> This Jira is to improve the logging in the NM so that we log an informative 
> message instead.






[jira] [Created] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created YARN-8389:
--

 Summary: Improve the description of machine-list property in 
Federation docs
 Key: YARN-8389
 URL: https://issues.apache.org/jira/browse/YARN-8389
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation, federation
Affects Versions: 3.2.0, 3.1.1
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


The current example and the description seem to be a bit ambiguous.

http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Updated] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated YARN-8389:
---
Attachment: YARN-8389.1.patch

> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description seem to be a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Commented] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500170#comment-16500170
 ] 

Takanobu Asanuma commented on YARN-8389:


Uploaded the 1st patch.

> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description seem to be a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Commented] (YARN-8155) Improve the logging in NMTimelinePublisher and TimelineCollectorWebService

2018-06-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500230#comment-16500230
 ] 

Abhishek Modi commented on YARN-8155:
-

Thanks [~rohithsharma] for the quick review. I will submit an updated patch.

> Improve the logging in NMTimelinePublisher and TimelineCollectorWebService
> --
>
> Key: YARN-8155
> URL: https://issues.apache.org/jira/browse/YARN-8155
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8155.001.patch, YARN-8155.002.patch
>
>
> We see that NM logs are filled with large NotFoundException stack traces when 
> the collector has been removed from one NM while the other NMs are still 
> publishing entities.
>  
> This Jira is to improve the logging in the NM so that we log an informative 
> message instead.






[jira] [Resolved] (YARN-8345) NodeHealthCheckerService to differentiate between reason for UnusableNodes for client to act suitably on it

2018-06-04 Thread Kartik Bhatia (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik Bhatia resolved YARN-8345.
-
Resolution: Duplicate

> NodeHealthCheckerService to differentiate between reason for UnusableNodes 
> for client to act suitably on it
> ---
>
> Key: YARN-8345
> URL: https://issues.apache.org/jira/browse/YARN-8345
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Kartik Bhatia
>Priority: Major
>
> +*Current Scenario:*+ 
> NodeHealthCheckerService marks a node unhealthy based on two things: 
>  # the external health-check script
>  # directory status
> If a directory is marked as full (as per the DiskCheck configs in yarn-site), 
> the NodeManager marks the node as unhealthy. 
> Once a node is marked unhealthy, MapReduce relaunches all the map tasks that 
> ran on that node. This means even successful tasks are relaunched.
> +*Problem:*+
> We do not distinguish between the disk limit at which container launches 
> should stop on a node and the limit beyond which reducers can no longer read 
> data from that node.
> For example: 
> Consider a 3 TB disk. If we set the max disk utilisation percentage to 95% 
> (since launching a container requires approximately 0.15 TB for jobs in our 
> cluster) and there are a few nodes where disk utilisation is, say, 96%, the 
> threshold will be breached. These nodes will be marked unhealthy by the 
> NodeManager, and all successful mappers will be relaunched on other nodes. 
> Yet the remaining 4% of the disk (about 0.12 TB) is still enough for reducers 
> to read that data. This causes unnecessary delay in our jobs. (Mappers 
> launching again can preempt reducers if space is tight, and there are issues 
> with calculating headroom in the Capacity Scheduler as well.)
>  
> +*Correction:*+
> We need a state (say UNUSABLE_WRITE) that lets MapReduce know the node is 
> still good for reading data, so that successful mappers are not relaunched. 
> This can prevent the delay.
>   
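
For reference, the disk-utilization threshold the description refers to is set 
in yarn-site.xml via a property along these lines (the value 95 matches the 
example above; see yarn-default.xml for the shipped default):

{code:xml}
<!-- yarn-site.xml: a disk is considered full, and can eventually make the
     node unhealthy, once its utilization exceeds this percentage. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>95.0</value>
</property>
{code}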






[jira] [Commented] (YARN-8155) Improve the logging in NMTimelinePublisher and TimelineCollectorWebService

2018-06-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500234#comment-16500234
 ] 

Abhishek Modi commented on YARN-8155:
-

{quote}
Catching NotFoundException and converting it into a WebApplicationException 
changes the return code. Shouldn't we still retain the NOT_FOUND return code?
{quote}
I don't think we should throw a WebApplicationException with a Not Found 
status, since that status is meant for the case where the web application does 
not serve the requested page. IMO a WebApplicationException should only be 
thrown with an Internal Error status. (A small sketch of both options follows 
below.)

[~rohithsharma] thoughts?
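
A minimal sketch of the two options being discussed, assuming a JAX-RS-style 
handler; the class, helper method, and exception type below are illustrative, 
not the actual TimelineCollectorWebService code:

{code:java}
// Illustrative only -- not the actual TimelineCollectorWebService code.
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CollectorLookupExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(CollectorLookupExample.class);

  // Hypothetical stand-in for the NotFoundException discussed above.
  static class NotFoundException extends RuntimeException {
    NotFoundException(String msg) { super(msg); }
  }

  Object lookupCollector(String appId) {
    try {
      return findCollector(appId);   // hypothetical helper
    } catch (NotFoundException e) {
      // Option 1 (retain the original semantics): log a short message,
      // no stack trace, and map to 404 Not Found.
      LOG.warn("Collector for app {} not found: {}", appId, e.getMessage());
      throw new WebApplicationException(Response.Status.NOT_FOUND);
      // Option 2 (as proposed in the comment above): map to
      // 500 Internal Server Error instead:
      // throw new WebApplicationException(Response.Status.INTERNAL_SERVER_ERROR);
    }
  }

  private Object findCollector(String appId) {
    throw new NotFoundException("no collector registered for " + appId);
  }
}
{code}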

> Improve the logging in NMTimelinePublisher and TimelineCollectorWebService
> --
>
> Key: YARN-8155
> URL: https://issues.apache.org/jira/browse/YARN-8155
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8155.001.patch, YARN-8155.002.patch
>
>
> We see that NM logs are filled with large NotFoundException stack traces when 
> the collector has been removed from one NM while the other NMs are still 
> publishing entities.
>  
> This Jira is to improve the logging in the NM so that we log an informative 
> message instead.






[jira] [Created] (YARN-8390) Fix API incompatible changes in FairScheduler's AllocationFileLoaderService

2018-06-04 Thread Gergo Repas (JIRA)
Gergo Repas created YARN-8390:
-

 Summary: Fix API incompatible changes in FairScheduler's 
AllocationFileLoaderService
 Key: YARN-8390
 URL: https://issues.apache.org/jira/browse/YARN-8390
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.2.0
Reporter: Gergo Repas
Assignee: Gergo Repas


YARN-8191 introduced API-incompatible changes, which would break some classes 
in hive (e.g. 
https://github.com/apache/hive/blob/master/shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java),
 this ticket's goal is to fix the incompatible changes.






[jira] [Updated] (YARN-8390) Fix API incompatible changes in FairScheduler's AllocationFileLoaderService

2018-06-04 Thread Gergo Repas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated YARN-8390:
--
Attachment: YARN-8390.000.patch

> Fix API incompatible changes in FairScheduler's AllocationFileLoaderService
> ---
>
> Key: YARN-8390
> URL: https://issues.apache.org/jira/browse/YARN-8390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-8390.000.patch
>
>
> YARN-8191 introduced API-incompatible changes, which would break some classes 
> in hive (e.g. 
> https://github.com/apache/hive/blob/master/shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java),
>  this ticket's goal is to fix the incompatible changes.






[jira] [Updated] (YARN-8155) Improve the logging in NMTimelinePublisher and TimelineCollectorWebService

2018-06-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8155:

Attachment: YARN-8155.003.patch

> Improve the logging in NMTimelinePublisher and TimelineCollectorWebService
> --
>
> Key: YARN-8155
> URL: https://issues.apache.org/jira/browse/YARN-8155
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8155.001.patch, YARN-8155.002.patch, 
> YARN-8155.003.patch
>
>
> We see that NM logs are filled with large NotFoundException stack traces when 
> the collector has been removed from one NM while the other NMs are still 
> publishing entities.
>  
> This Jira is to improve the logging in the NM so that we log an informative 
> message instead.






[jira] [Commented] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500252#comment-16500252
 ] 

genericqa commented on YARN-8389:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
39m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8389 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926374/YARN-8389.1.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 645cea1e33b4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9efb4b7 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20937/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description seem to be a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Assigned] (YARN-6931) Make the aggregation interval in AppLevelTimelineCollector configurable

2018-06-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi reassigned YARN-6931:
---

Assignee: Abhishek Modi

> Make the aggregation interval in AppLevelTimelineCollector configurable
> ---
>
> Key: YARN-6931
> URL: https://issues.apache.org/jira/browse/YARN-6931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Abhishek Modi
>Priority: Minor
>
> We do application-level metrics aggregation in AppLevelTimelineCollector, but 
> the interval is hardcoded.
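
Presumably the change would amount to reading the interval from Configuration 
instead of a constant; a minimal sketch, with a hypothetical property name and 
an illustrative default:

{code:java}
// Illustrative only; the property name below is hypothetical, not an
// existing Hadoop configuration key.
import org.apache.hadoop.conf.Configuration;

public class AggregationIntervalExample {
  private static final String AGGREGATION_INTERVAL_MS =
      "yarn.timeline-service.app-collector.aggregation-interval-ms";
  private static final long DEFAULT_AGGREGATION_INTERVAL_MS = 15_000L;

  static long getAggregationIntervalMs(Configuration conf) {
    // Falls back to the previously hard-coded value when unset.
    return conf.getLong(AGGREGATION_INTERVAL_MS,
        DEFAULT_AGGREGATION_INTERVAL_MS);
  }
}
{code}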






[jira] [Assigned] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-06-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi reassigned YARN-6989:
---

Assignee: Abhishek Modi

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
>
> As noticed during discussions in YARN-6820, the web services in timeline 
> service v2 build the UGI from the user obtained by invoking getRemoteUser on 
> the HttpServletRequest. 
> It would be better to use getUserPrincipal instead of invoking getRemoteUser 
> on the HttpServletRequest. 
> Filing this jira to update the code. 
> Per Java EE documentations for 6 and 7, the behavior around getRemoteUser and 
> getUserPrincipal is listed at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}
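
As an illustration of the suggested direction (not the actual patch), building 
the caller UGI from getUserPrincipal rather than getRemoteUser could look 
roughly like this:

{code:java}
// Illustrative sketch only.
import java.security.Principal;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

public class CallerUgiExample {
  static UserGroupInformation getCallerUgi(HttpServletRequest req) {
    Principal principal = req.getUserPrincipal();   // instead of getRemoteUser()
    if (principal == null) {
      return null;   // no authenticated user
    }
    return UserGroupInformation.createRemoteUser(principal.getName());
  }
}
{code}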






[jira] [Assigned] (YARN-6121) Launch app collectors for unmanaged AMs'

2018-06-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi reassigned YARN-6121:
---

Assignee: Abhishek Modi

> Launch app collectors for unmanaged AMs'
> 
>
> Key: YARN-6121
> URL: https://issues.apache.org/jira/browse/YARN-6121
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Varun Saxena
>Assignee: Abhishek Modi
>Priority: Major
>
> Currently an app collector is launched whenever an AM container is launched 
> on a NM. This means for an unmanaged AM, app collector is never launched.






[jira] [Commented] (YARN-6121) Launch app collectors for unmanaged AMs'

2018-06-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500274#comment-16500274
 ] 

Abhishek Modi commented on YARN-6121:
-

Thanks [~varun_saxena] for filing this. I will start working on this.

> Launch app collectors for unmanaged AMs'
> 
>
> Key: YARN-6121
> URL: https://issues.apache.org/jira/browse/YARN-6121
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Varun Saxena
>Assignee: Abhishek Modi
>Priority: Major
>
> Currently an app collector is launched whenever an AM container is launched 
> on a NM. This means for an unmanaged AM, app collector is never launched.






[jira] [Assigned] (YARN-6904) [ATSv2] Fix findbugs warnings

2018-06-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi reassigned YARN-6904:
---

Assignee: Abhishek Modi

> [ATSv2] Fix findbugs warnings
> -
>
> Key: YARN-6904
> URL: https://issues.apache.org/jira/browse/YARN-6904
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-5355
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
>
> Many existing findbugs warnings are reported on branch YARN-5355: 
> [Jenkins|https://issues.apache.org/jira/browse/YARN-6130?focusedCommentId=16105786&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16105786]
> These need to be investigated and fixed one by one. 






[jira] [Commented] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500315#comment-16500315
 ] 

Haibo Chen commented on YARN-8240:
--

TestCapacityOverTimePolicy.testAllocation is flaky and unrelated to this patch. 
I'll upload a patch to address the other issues.

> Add queue-level control to allow all applications in a queue to opt-out
> ---
>
> Key: YARN-8240
> URL: https://issues.apache.org/jira/browse/YARN-8240
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8240-YARN-1011.00.patch
>
>







[jira] [Updated] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8240:
-
Attachment: YARN-8240-YARN-1011.01.patch

> Add queue-level control to allow all applications in a queue to opt-out
> ---
>
> Key: YARN-8240
> URL: https://issues.apache.org/jira/browse/YARN-8240
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8240-YARN-1011.00.patch, 
> YARN-8240-YARN-1011.01.patch
>
>







[jira] [Commented] (YARN-8390) Fix API incompatible changes in FairScheduler's AllocationFileLoaderService

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500419#comment-16500419
 ] 

genericqa commented on YARN-8390:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
 locked 75% of time. Unsynchronized access at 
AllocationFileLoaderService.java:[line 117] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926386/YARN-8390.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 430c13c4887b 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9efb4b7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https

[jira] [Updated] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-06-04 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8191:
-
Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

> Fair scheduler: queue deletion without RM restart
> -
>
> Key: YARN-8191
> URL: https://issues.apache.org/jira/browse/YARN-8191
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.1
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: Queue Deletion in Fair Scheduler.pdf, 
> YARN-8191.000.patch, YARN-8191.001.patch, YARN-8191.002.patch, 
> YARN-8191.003.patch, YARN-8191.004.patch, YARN-8191.005.patch, 
> YARN-8191.006.patch, YARN-8191.007.patch, YARN-8191.008.patch, 
> YARN-8191.009.patch, YARN-8191.010.patch, YARN-8191.011.patch, 
> YARN-8191.012.patch, YARN-8191.013.patch, YARN-8191.014.patch, 
> YARN-8191.015.patch, YARN-8191.016.patch, YARN-8191.017.patch
>
>
> The Fair Scheduler never cleans up queues even if they are deleted in the 
> allocation file, or were dynamically created and are never going to be used 
> again. Queues always remain in memory which leads to two following issues.
>  # Steady fairshares aren’t calculated correctly due to remaining queues
>  # WebUI shows deleted queues, which is confusing for users (YARN-4022).
> We want to support proper queue deletion without restarting the Resource 
> Manager:
>  # Static queues without any entries that are removed from fair-scheduler.xml 
> should be deleted from memory.
>  # Dynamic queues without any entries should be deleted.
>  # RM Web UI should only show the queues defined in the scheduler at that 
> point in time.






[jira] [Commented] (YARN-8390) Fix API incompatible changes in FairScheduler's AllocationFileLoaderService

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500437#comment-16500437
 ] 

Haibo Chen commented on YARN-8390:
--

+1. The findbugs issue is independent of this patch; we should probably fix it 
in a separate jira.

> Fix API incompatible changes in FairScheduler's AllocationFileLoaderService
> ---
>
> Key: YARN-8390
> URL: https://issues.apache.org/jira/browse/YARN-8390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-8390.000.patch
>
>
> YARN-8191 introduced API-incompatible changes, which would break some classes 
> in hive (e.g. 
> https://github.com/apache/hive/blob/master/shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java),
>  this ticket's goal is to fix the incompatible changes.






[jira] [Created] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-04 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-8391:


 Summary: Investigate AllocationFileLoaderService.reloadListener 
locking issue
 Key: YARN-8391
 URL: https://issues.apache.org/jira/browse/YARN-8391
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.2.0
Reporter: Haibo Chen


Per findbugs report in YARN-8390, there is some inconsistent locking of  
reloadListener






[jira] [Updated] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-04 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8391:
-
Description: 
Per findbugs report in YARN-8390, there is some inconsistent locking of  
reloadListener

 
h1. Warnings

Click on a warning row to see full context information.
h2. Multithreaded correctness Warnings
||Code||Warning||
|IS|Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
 locked 75% of time|
| |[Bug type IS2_INCONSISTENT_SYNC (click for 
details)|https://builds.apache.org/job/PreCommit-YARN-Build/20939/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html#IS2_INCONSISTENT_SYNC]
 
In class 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService
Field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener
Synchronized 75% of the time
Unsynchronized access at AllocationFileLoaderService.java:[line 117]
Synchronized access at AllocationFileLoaderService.java:[line 212]
Synchronized access at AllocationFileLoaderService.java:[line 228]
Synchronized access at AllocationFileLoaderService.java:[line 269]|
h1. Details
h2. IS2_INCONSISTENT_SYNC: Inconsistent synchronization

The fields of this class appear to be accessed inconsistently with respect to 
synchronization.  This bug report indicates that the bug pattern detector 
judged that
 * The class contains a mix of locked and unlocked accesses,
 * The class is *not* annotated as javax.annotation.concurrent.NotThreadSafe,
 * At least one locked access was performed by one of the class's own methods, 
and
 * The number of unsynchronized field accesses (reads and writes) was no more 
than one third of all accesses, with writes being weighed twice as high as reads

A typical bug matching this bug pattern is forgetting to synchronize one of the 
methods in a class that is intended to be thread-safe.

You can select the nodes labeled "Unsynchronized access" to show the code 
locations where the detector believed that a field was accessed without 
synchronization.

Note that there are various sources of inaccuracy in this detector; for 
example, the detector cannot statically detect all situations in which a lock 
is held.  Also, even when the detector is accurate in distinguishing locked vs. 
unlocked accesses, the code in question may still be correct.

  was:Per findbugs report in YARN-8390, there is some inconsistent locking of  
reloadListener
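
For readers unfamiliar with this FindBugs pattern, a minimal self-contained 
example of the kind of mixed access it flags, unrelated to the actual 
AllocationFileLoaderService code:

{code:java}
// Minimal illustration of IS2_INCONSISTENT_SYNC: the same field is accessed
// both with and without holding the object's lock.
public class ReloadListenerHolder {
  interface Listener { void onReload(); }

  private Listener reloadListener;                    // shared mutable field

  public synchronized void setListener(Listener l) {  // locked write
    this.reloadListener = l;
  }

  public void fireReload() {                          // UNLOCKED read -> warning
    if (reloadListener != null) {
      reloadListener.onReload();
    }
  }

  // A consistent version synchronizes the read path as well (or makes the
  // field volatile / an AtomicReference), so every access is guarded.
  public synchronized void fireReloadSafely() {
    if (reloadListener != null) {
      reloadListener.onReload();
    }
  }
}
{code}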


> Investigate AllocationFileLoaderService.reloadListener locking issue
> 
>
> Key: YARN-8391
> URL: https://issues.apache.org/jira/browse/YARN-8391
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Priority: Critical
>
> Per findbugs report in YARN-8390, there is some inconsistent locking of  
> reloadListener
>  
> h1. Warnings
> Click on a warning row to see full context information.
> h2. Multithreaded correctness Warnings
> ||Code||Warning||
> |IS|Inconsistent synchronization of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
>  locked 75% of time|
> | |[Bug type IS2_INCONSISTENT_SYNC (click for 
> details)|https://builds.apache.org/job/PreCommit-YARN-Build/20939/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html#IS2_INCONSISTENT_SYNC]
>  
> In class 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService
> Field 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener
> Synchronized 75% of the time
> Unsynchronized access at AllocationFileLoaderService.java:[line 117]
> Synchronized access at AllocationFileLoaderService.java:[line 212]
> Synchronized access at AllocationFileLoaderService.java:[line 228]
> Synchronized access at AllocationFileLoaderService.java:[line 269]|
> h1. Details
> h2. IS2_INCONSISTENT_SYNC: Inconsistent synchronization
> The fields of this class appear to be accessed inconsistently with respect to 
> synchronization.  This bug report indicates that the bug pattern detector 
> judged that
>  * The class contains a mix of locked and unlocked accesses,
>  * The class is *not* annotated as javax.annotation.concurrent.NotThreadSafe,
>  * At least one locked access was performed by one of the class's own 
> methods, and
>  * The number of unsynchronized field accesses (reads and writes) was no more 
> than one third of all accesses, with writes being weighed twice as high as 
> reads
> A typical bug matching this bug pattern is forgetting 

[jira] [Commented] (YARN-8390) Fix API incompatible changes in FairScheduler's AllocationFileLoaderService

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500451#comment-16500451
 ] 

Haibo Chen commented on YARN-8390:
--

Thanks [~grepas] for the quick fix. I have checked in the patch to trunk

> Fix API incompatible changes in FairScheduler's AllocationFileLoaderService
> ---
>
> Key: YARN-8390
> URL: https://issues.apache.org/jira/browse/YARN-8390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8390.000.patch
>
>
> YARN-8191 introduced API-incompatible changes, which would break some classes 
> in hive (e.g. 
> https://github.com/apache/hive/blob/master/shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java),
>  this ticket's goal is to fix the incompatible changes.






[jira] [Commented] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500454#comment-16500454
 ] 

Giovanni Matteo Fumarola commented on YARN-8389:


Thanks [~tasanuma0829] for the fix. LGTM +1.

> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description seem to be a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Updated] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-8389:
--
Description: 
The current example and the description for machine-list property seem to be a 
bit ambiguous.

http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:

  was:
The current example and the description seem to be a bit ambiguous.

http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:


> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description for machine-list property seem to be 
> a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Commented] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500464#comment-16500464
 ] 

Íñigo Goiri commented on YARN-8389:
---

The lines were also broken in the markdown; [^YARN-8389.1.patch] fixes it.
+1
Committing.

> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description for machine-list property seem to be 
> a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:






[jira] [Created] (YARN-8392) Allow multiple tags for anti-affinity placement policy in service specification

2018-06-04 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-8392:


 Summary: Allow multiple tags for anti-affinity placement policy in 
service specification
 Key: YARN-8392
 URL: https://issues.apache.org/jira/browse/YARN-8392
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


Currently the service client code is restricting a component's target tags to 
include only a single tag, the component name. I have a use case for two 
components having anti-affinity with themselves and with each other. The YARN 
placement policies support this, but the service framework isn't allowing it.
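
For context, an anti-affinity constraint in the service spec would look roughly 
like the following (the JSON layout is to the best of my understanding of the 
service API and may not be exact; the second entry in target_tags is what the 
current client-side validation rejects):

{code}
"components": [
  {
    "name": "comp-a",
    "number_of_containers": 2,
    "placement_policy": {
      "constraints": [
        {
          "type": "ANTI_AFFINITY",
          "scope": "NODE",
          "target_tags": ["comp-a", "comp-b"]
        }
      ]
    }
  }
]
{code}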






[jira] [Commented] (YARN-8390) Fix API incompatible changes in FairScheduler's AllocationFileLoaderService

2018-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500493#comment-16500493
 ] 

Hudson commented on YARN-8390:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14351 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14351/])
YARN-8390. Fix API incompatible changes in FairScheduler's (haibochen: rev 
ba12f87dcb0e406da57cdd1ad17677ac2367f425)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestAllocationFileLoaderService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


> Fix API incompatible changes in FairScheduler's AllocationFileLoaderService
> ---
>
> Key: YARN-8390
> URL: https://issues.apache.org/jira/browse/YARN-8390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8390.000.patch
>
>
> YARN-8191 introduced API-incompatible changes, which would break some classes 
> in hive (e.g. 
> https://github.com/apache/hive/blob/master/shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java),
>  this ticket's goal is to fix the incompatible changes.






[jira] [Commented] (YARN-8155) Improve the logging in NMTimelinePublisher and TimelineCollectorWebService

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500523#comment-16500523
 ] 

genericqa commented on YARN-8155:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8155 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926387/YARN-8155.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 86ab68d513b1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9

[jira] [Commented] (YARN-8155) Improve the logging in NMTimelinePublisher and TimelineCollectorWebService

2018-06-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500529#comment-16500529
 ] 

Abhishek Modi commented on YARN-8155:
-

The test that failed is unrelated to this patch.

> Improve the logging in NMTimelinePublisher and TimelineCollectorWebService
> --
>
> Key: YARN-8155
> URL: https://issues.apache.org/jira/browse/YARN-8155
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8155.001.patch, YARN-8155.002.patch, 
> YARN-8155.003.patch
>
>
> We see that NM logs are filled with large stack traces of NotFoundException 
> when the collector is removed from one of the NMs while other NMs are still 
> publishing entities.
>  
> This Jira is to improve the logging in the NM so that we log an informative 
> message instead.
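
A minimal sketch of the logging style this calls for, assuming slf4j and the 
JAX-RS NotFoundException; the class and method names below are illustrative 
placeholders, not the actual NMTimelinePublisher code:

{code:java}
import javax.ws.rs.NotFoundException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative helper only; not part of NMTimelinePublisher.
final class CollectorPublishLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CollectorPublishLoggingSketch.class);

  void publish(Runnable putEntitiesCall, String appId, int entityCount) {
    try {
      putEntitiesCall.run();
    } catch (NotFoundException e) {
      // One informative line at WARN instead of a full stack trace.
      LOG.warn("Timeline collector for {} is no longer registered; dropped {} "
          + "entities: {}", appId, entityCount, e.toString());
      // Keep the details, but only at DEBUG.
      LOG.debug("Full stack trace", e);
    }
  }
}
{code}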



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500531#comment-16500531
 ] 

Hudson commented on YARN-8389:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14352 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14352/])
YARN-8389. Improve the description of machine-list property in (inigoiri: rev 
61fc7f73f21b0949e27ef3893efda757d91a03f9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md


> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description of the machine-list property seem to 
> be a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500536#comment-16500536
 ] 

genericqa commented on YARN-8240:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
39s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  5s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 241 unchanged - 0 fixed = 242 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
26s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926395/YARN-8240-YARN-1011.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsit

[jira] [Commented] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500570#comment-16500570
 ] 

Haibo Chen commented on YARN-8388:
--

Two minor comments:

1) Let's add a comment above `when(cgroups.getPathForCGroup(any(), 
any())).thenReturn("1");` noting that the "1" will be passed to the sleep 
command. This is a little hacky, so it is worth a comment, I think (see the 
sketch below).

2) There is a misspelling: `aviod` -> `avoid`.
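
A hedged sketch of what the suggested comment could look like; the mocked call 
is the one quoted above, while the wrapper class and method are assumptions 
rather than the actual TestCGroupElasticMemoryController code:

{code:java}
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler;

// Illustrative setup helper, not the actual test code.
class CGroupMockSetupSketch {
  static CGroupsHandler mockCGroupsHandler() {
    CGroupsHandler cgroups = mock(CGroupsHandler.class);
    // The returned "1" doubles as the argument handed to the sleep command
    // that stands in for the real OOM listener, so the fake listener exits
    // after about one second instead of hanging. A bit hacky, hence the
    // comment requested in the review.
    when(cgroups.getPathForCGroup(any(), any())).thenReturn("1");
    return cgroups;
  }
}
{code}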

> TestCGroupElasticMemoryController.testNormalExit() hangs on Linux
> -
>
> Key: YARN-8388
> URL: https://issues.apache.org/jira/browse/YARN-8388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8388.000.patch
>
>
> YARN-8375 disables the unit test on Linux. But given that we will be running 
> the CGroupElasticMemoryController on Linux, we need to figure out why it is 
> hanging and ideally fix it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread Miklos Szegedi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-8388:
-
Attachment: YARN-8388.001.patch

> TestCGroupElasticMemoryController.testNormalExit() hangs on Linux
> -
>
> Key: YARN-8388
> URL: https://issues.apache.org/jira/browse/YARN-8388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8388.000.patch, YARN-8388.001.patch
>
>
> YARN-8375 disables the unit test on Linux. But given that we will be running 
> the CGroupElasticMemoryController on Linux, we need to figure out why it is 
> hanging and ideally fix it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6931) Make the aggregation interval in AppLevelTimelineCollector configurable

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500579#comment-16500579
 ] 

genericqa commented on YARN-6931:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-6931 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926405/YARN-6931.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux bde009ff7f26 3.13.0-139-generic #188-Ubuntu SMP Tu

[jira] [Commented] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500587#comment-16500587
 ] 

Miklos Szegedi commented on YARN-8240:
--

Thank you, [~haibochen], for the patch. In general this looks good; please 
address the outstanding checkstyle issue.

> Add queue-level control to allow all applications in a queue to opt-out
> ---
>
> Key: YARN-8240
> URL: https://issues.apache.org/jira/browse/YARN-8240
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8240-YARN-1011.00.patch, 
> YARN-8240-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8382) cgroup file leak in NM

2018-06-04 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500615#comment-16500615
 ] 

Miklos Szegedi commented on YARN-8382:
--

Committed to trunk. Thank you for the contribution, [~ziqian hu].

> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: we write a container with a shutdownHook that runs a piece of 
> code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8382-branch-2.8.3.001.patch, 
> YARN-8382-branch-2.8.3.002.patch, YARN-8382.001.patch, YARN-8382.002.patch
>
>
> As Jiandan said in YARN-6525, the NM may time out deleting a container's 
> cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that the cgroup file leak happens when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*.
>  
> A container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           ├─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may do some work (shutdownHook etc.) and does not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup files. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running, cgroup/tasks is still not 
> empty, and a cgroup file leak results.
>  
> We add a condition that 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must 
> be bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
> problem.
>  
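
A minimal sketch of the guard the description calls for, assuming the NM 
simply treats the SIGKILL delay as a lower bound for the cgroup delete 
timeout; the property names come from the description above, while the helper 
class and the default values (1000 ms / 250 ms) are illustrative assumptions, 
not the actual CGroupsHandlerImpl change:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative only; defaults are assumptions.
final class CgroupDeleteTimeoutSketch {
  static long effectiveDeleteTimeoutMs(Configuration conf) {
    long deleteTimeoutMs = conf.getLong(
        "yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms",
        1000L);
    long sigkillDelayMs = conf.getLong(
        "yarn.nodemanager.sleep-delay-before-sigkill.ms", 250L);
    // Never give up deleting the cgroup before the JVM has been SIGKILLed,
    // otherwise cgroup/tasks may still be non-empty and the delete leaks.
    return Math.max(deleteTimeoutMs, sigkillDelayMs);
  }
}
{code}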



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500617#comment-16500617
 ] 

Haibo Chen commented on YARN-8240:
--

Thanks for the review, [~miklos.szeg...@cloudera.com]. I updated the patch to 
address the checkstyle issue.

> Add queue-level control to allow all applications in a queue to opt-out
> ---
>
> Key: YARN-8240
> URL: https://issues.apache.org/jira/browse/YARN-8240
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8240-YARN-1011.00.patch, 
> YARN-8240-YARN-1011.01.patch, YARN-8240-YARN-1011.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8240:
-
Attachment: YARN-8240-YARN-1011.02.patch

> Add queue-level control to allow all applications in a queue to opt-out
> ---
>
> Key: YARN-8240
> URL: https://issues.apache.org/jira/browse/YARN-8240
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8240-YARN-1011.00.patch, 
> YARN-8240-YARN-1011.01.patch, YARN-8240-YARN-1011.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500650#comment-16500650
 ] 

Haibo Chen commented on YARN-8388:
--

+1 on the latest patch (02) pending Jenkins.

> TestCGroupElasticMemoryController.testNormalExit() hangs on Linux
> -
>
> Key: YARN-8388
> URL: https://issues.apache.org/jira/browse/YARN-8388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8388.000.patch, YARN-8388.001.patch, 
> YARN-8388.002.patch
>
>
> YARN-8375 disables the unit test on Linux. But given that we will be running 
> the CGroupElasticMemoryController on Linux, we need to figure out why it is 
> hanging and ideally fix it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread Miklos Szegedi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-8388:
-
Attachment: YARN-8388.002.patch

> TestCGroupElasticMemoryController.testNormalExit() hangs on Linux
> -
>
> Key: YARN-8388
> URL: https://issues.apache.org/jira/browse/YARN-8388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8388.000.patch, YARN-8388.001.patch, 
> YARN-8388.002.patch
>
>
> YARN-8375 disables the unit test on Linux. But given that we will be running 
> the CGroupElasticMemoryController on Linux, we need to figure out why it is 
> hanging and ideally fix it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8382) cgroup file leak in NM

2018-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500662#comment-16500662
 ] 

Hudson commented on YARN-8382:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14354 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14354/])
YARN-8382. cgroup file leak in NM. Contributed by Hu Ziqian. (miklos.szegedi: 
rev e2c172dc9faeb5472a32d7052e54d79d499c0a55)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java


> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: we write a container with a shutdownHook that runs a piece of 
> code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8382-branch-2.8.3.001.patch, 
> YARN-8382-branch-2.8.3.002.patch, YARN-8382.001.patch, YARN-8382.002.patch
>
>
> As Jiandan said in YARN-6525, the NM may time out deleting a container's 
> cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that the cgroup file leak happens when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*.
>  
> A container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           ├─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may do some work (shutdownHook etc.) and does not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup files. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running, cgroup/tasks is still not 
> empty, and a cgroup file leak results.
>  
> We add a condition that 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must 
> be bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
> problem.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.004.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500748#comment-16500748
 ] 

Sunil Govindan commented on YARN-8258:
--

Updating v4 patch with a test case to check filter ordering as well.

{{httpServer.getWebAppContext().getServletHandler()}} provides all the filter 
holders and filter mappings. UI2 copies these from the default context. For 
SPNEGO, the path spec has to be null to ensure that the SPNEGO filter comes 
after the Kerberos authentication filter.

[~vinodkv], could you please help to check this?
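
A rough sketch of the idea using plain Jetty 9 APIs; this is not the YARN-8258 
patch itself, and the helper class and parameter names are placeholders:

{code:java}
import org.eclipse.jetty.servlet.ServletHandler;
import org.eclipse.jetty.webapp.WebAppContext;

// Illustrative helper; not the actual RM/UI2 wiring.
final class Ui2FilterInheritanceSketch {
  static void inheritFilters(WebAppContext defaultContext,
                             WebAppContext ui2Context) {
    ServletHandler src = defaultContext.getServletHandler();
    ServletHandler dst = ui2Context.getServletHandler();
    // Copy the filter holders and mappings as-is so their registration order
    // is preserved, e.g. the SPNEGO filter keeps running after the Kerberos
    // authentication filter.
    dst.setFilters(src.getFilters());
    dst.setFilterMappings(src.getFilterMappings());
  }
}
{code}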

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500807#comment-16500807
 ] 

genericqa commented on YARN-8388:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
44s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926423/YARN-8388.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f53cee1c2cb3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea7b53f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20942/testReport/ |
| Max. process+thread count | 1411 (vs. ulimit of 1) |
|

[jira] [Commented] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500821#comment-16500821
 ] 

genericqa commented on YARN-8240:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
26s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
41s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
37s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8240 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926430/YARN-8240-YARN-1011.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b0dca8530960 3.13.0-139-generic

[jira] [Commented] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500830#comment-16500830
 ] 

genericqa commented on YARN-8388:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
55s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8388 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926431/YARN-8388.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 45bb7263908f 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e2c172d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20944/testReport/ |
| Max. process+thread count | 1436 (vs. ulimit of 1) |
| m

[jira] [Updated] (YARN-6677) Preempt opportunistic containers when root container cgroup goes over memory limit

2018-06-04 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6677:
-
Attachment: YARN-6677.02.patch

> Preempt opportunistic containers when root container cgroup goes over memory 
> limit
> --
>
> Key: YARN-6677
> URL: https://issues.apache.org/jira/browse/YARN-6677
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-6677.00.patch, YARN-6677.01.patch, 
> YARN-6677.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8388) TestCGroupElasticMemoryController.testNormalExit() hangs on Linux

2018-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500904#comment-16500904
 ] 

Hudson commented on YARN-8388:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14356 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14356/])
YARN-8388. TestCGroupElasticMemoryController.testNormalExit() hangs on 
(haibochen: rev 04cf699dd54aab3595eb80295652dcde9a2f4dd5)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/PlatformAssumptions.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupElasticMemoryController.java


> TestCGroupElasticMemoryController.testNormalExit() hangs on Linux
> -
>
> Key: YARN-8388
> URL: https://issues.apache.org/jira/browse/YARN-8388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8388.000.patch, YARN-8388.001.patch, 
> YARN-8388.002.patch
>
>
> YARN-8375 disables the unit test on Linux. But given that we will be running 
> the CGroupElasticMemoryController on Linux, we need to figure out why it is 
> hanging and ideally fix it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8240) Add queue-level control to allow all applications in a queue to opt-out

2018-06-04 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500929#comment-16500929
 ] 

Miklos Szegedi commented on YARN-8240:
--

+1

> Add queue-level control to allow all applications in a queue to opt-out
> ---
>
> Key: YARN-8240
> URL: https://issues.apache.org/jira/browse/YARN-8240
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8240-YARN-1011.00.patch, 
> YARN-8240-YARN-1011.01.patch, YARN-8240-YARN-1011.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-06-04 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500947#comment-16500947
 ] 

Robert Kanter commented on YARN-4677:
-

+1 LGTM

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-4677-branch-2.001.patch, 
> YARN-4677-branch-2.002.patch, YARN-4677-branch-2.003.patch, YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> If a scheduling effort happens within this window, the new container could 
> still get allocated on this node. An even worse case is if the scheduling 
> effort happens after the RMNodeResourceUpdateEvent is sent out but before it 
> is propagated to the SchedulerNode: then the total resource is lower than the 
> used resource and the available resource is a negative value. 
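
To make the last point concrete, a small hedged illustration of the symptom 
(not of the fix) using YARN's Resource arithmetic; the numbers are made up:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

// Illustrative values only.
final class NegativeAvailabilitySketch {
  public static void main(String[] args) {
    Resource total = Resource.newInstance(0, 0);    // after the resource update
    Resource used = Resource.newInstance(4096, 2);  // container not yet released
    // Prints something like <memory:-4096, vCores:-2>.
    System.out.println(Resources.subtract(total, used));
  }
}
{code}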



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-06-04 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-4677:

Target Version/s: 3.2.0, 3.1.1, 2.9.2, 3.0.x  (was: 2.7.1)

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-4677-branch-2.001.patch, 
> YARN-4677-branch-2.002.patch, YARN-4677-branch-2.003.patch, YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> If a scheduling effort happens within this window, the new container could 
> still get allocated on this node. An even worse case is if the scheduling 
> effort happens after the RMNodeResourceUpdateEvent is sent out but before it 
> is propagated to the SchedulerNode: then the total resource is lower than the 
> used resource and the available resource is a negative value. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6677) Preempt opportunistic containers when root container cgroup goes over memory limit

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500980#comment-16500980
 ] 

genericqa commented on YARN-6677:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 6 new + 145 unchanged - 0 fixed = 151 total (was 145) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
13s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.DefaultOOMHandler$ContainerCandidate
 defines compareTo(DefaultOOMHandler$ContainerCandidate) and uses 
Object.equals()  At DefaultOOMHandler.java:Object.equals()  At 
DefaultOOMHandler.java:[lines 261-275] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-6677 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926448/YARN-6677.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f2e9cac7f2da 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 04cf699

[jira] [Commented] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500992#comment-16500992
 ] 

Hudson commented on YARN-4677:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14357 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14357/])
YARN-4677. RMNodeResourceUpdateEvent update from scheduler can lead to 
(rkanter: rev 0cd145a44390bc1a01113dce4be4e629637c3e8a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-4677-branch-2.001.patch, 
> YARN-4677-branch-2.002.patch, YARN-4677-branch-2.003.patch, YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> So if a scheduling effort happens within this window, a new container can still 
> get allocated on this node. The even worse case is when the scheduling effort 
> happens after the RMNodeResourceUpdateEvent is sent out but before it is 
> propagated to the SchedulerNode: then the total resource is lower than the used 
> resource and the available resource becomes negative. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500994#comment-16500994
 ] 

genericqa commented on YARN-8258:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 38m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 56 unchanged - 0 fixed = 57 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSchedulingRequestUpdate
 |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8258 |
| JIRA Patch URL 

[jira] [Commented] (YARN-6677) Preempt opportunistic containers when root container cgroup goes over memory limit

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501010#comment-16501010
 ] 

Haibo Chen commented on YARN-6677:
--

I think the findbugs warning is bogus because the wrapper class is only used for 
sorting.
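For context, a simplified sketch of the pattern FindBugs complains about (EQ_COMPARETO_USE_OBJECT_EQUALS) and the conventional way to silence it, namely overriding equals/hashCode consistently with compareTo. The class and fields below are illustrative, not the actual DefaultOOMHandler code:

{code:java}
import java.util.Objects;

// Sketch only: a Comparable wrapper used purely for sorting, plus the
// equals/hashCode pair that would make FindBugs happy.
class ContainerCandidateSketch implements Comparable<ContainerCandidateSketch> {
  final boolean guaranteed;
  final long startTime;

  ContainerCandidateSketch(boolean guaranteed, long startTime) {
    this.guaranteed = guaranteed;
    this.startTime = startTime;
  }

  @Override
  public int compareTo(ContainerCandidateSketch other) {
    // Illustrative ordering: opportunistic before guaranteed, newer first.
    if (guaranteed != other.guaranteed) {
      return guaranteed ? 1 : -1;
    }
    return Long.compare(other.startTime, startTime);
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof ContainerCandidateSketch)) {
      return false;
    }
    ContainerCandidateSketch c = (ContainerCandidateSketch) o;
    return guaranteed == c.guaranteed && startTime == c.startTime;
  }

  @Override
  public int hashCode() {
    return Objects.hash(guaranteed, startTime);
  }
}
{code}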

> Preempt opportunistic containers when root container cgroup goes over memory 
> limit
> --
>
> Key: YARN-6677
> URL: https://issues.apache.org/jira/browse/YARN-6677
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-6677.00.patch, YARN-6677.01.patch, 
> YARN-6677.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8365) Revisit the record type used by Registry DNS for upstream resolution

2018-06-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501036#comment-16501036
 ] 

Hudson commented on YARN-8365:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14358 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14358/])
YARN-8365.  Set DNS query type according to client request. (eyang: 
rev 5cf37418bdc6ff09c0c1ae3ac8ac4b0867de0de4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java


> Revisit the record type used by Registry DNS for upstream resolution
> 
>
> Key: YARN-8365
> URL: https://issues.apache.org/jira/browse/YARN-8365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8365.001.patch
>
>
> YARN-7326 leveraged the ANY record type for upstream resolution, but some 
> implementations [don't support 
> ANY|https://tools.ietf.org/html/draft-ietf-dnsop-refuse-any-06] due to the 
> potential for abuse, namely Cloudflare. Docker Hub leverages Cloudflare for 
> image distribution, so when Registry DNS is used as the sole resolver, docker 
> image downloads are failing. 
> {code:java}
> [root@host ~]# docker run -u root -it centos bash
> Unable to find image 'centos:latest' locally
> latest: Pulling from library/centos
> 469cfcc7a4b3: Already exists
> docker: error pulling image configuration: Get 
> https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/e9/e934aafc22064b7322c0250f1e32e5ce93b2d19b356f4537f5864bd102e8531f/data?verify=1527265495-nG8jk%2Bya9qrdPVlXRKGMnOhSnV0%3D:
>  dial tcp: lookup production.cloudflare.docker.com on registry.dns.host:53: 
> no such host.
> {code}
> {code:java}
> [root@host~]# nslookup production.cloudflare.docker.com registry.dns.host
> Server:   registry.dns.host
> Address:  registry.dns.host#53
> Non-authoritative answer:
> production.cloudflare.docker.com  hinfo = "ANY obsoleted" "See 
> draft-ietf-dnsop-refuse-any"
> {code}
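For illustration, a hedged sketch of the general fix direction suggested by the commit message (forward the client's requested record type instead of ANY), written with dnsjava types. The surrounding method is hypothetical and this is not the actual RegistryDNS change:

{code:java}
import org.xbill.DNS.Lookup;
import org.xbill.DNS.Message;
import org.xbill.DNS.Name;
import org.xbill.DNS.Record;
import org.xbill.DNS.Type;

// Sketch only: use the record type the client asked for rather than Type.ANY,
// which some upstream resolvers (e.g. Cloudflare) refuse.
public class UpstreamLookupSketch {
  static Record[] resolveUpstream(Message clientQuery) {
    Record question = clientQuery.getQuestion();
    Name name = question.getName();
    int qtype = question.getType();   // e.g. Type.A or Type.AAAA from the client

    // Fall back to A only if the client itself asked for ANY (illustrative choice).
    Lookup lookup = new Lookup(name, qtype == Type.ANY ? Type.A : qtype);
    return lookup.run();
  }
}
{code}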



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6677) Preempt opportunistic containers when root container cgroup goes over memory limit

2018-06-04 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6677:
-
Attachment: YARN-6677.03.patch

> Preempt opportunistic containers when root container cgroup goes over memory 
> limit
> --
>
> Key: YARN-6677
> URL: https://issues.apache.org/jira/browse/YARN-6677
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-6677.00.patch, YARN-6677.01.patch, 
> YARN-6677.02.patch, YARN-6677.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6677) Preempt opportunistic containers when root container cgroup goes over memory limit

2018-06-04 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501039#comment-16501039
 ] 

Haibo Chen commented on YARN-6677:
--

Updated the patch to address all but one of the checkstyle issues.

> Preempt opportunistic containers when root container cgroup goes over memory 
> limit
> --
>
> Key: YARN-6677
> URL: https://issues.apache.org/jira/browse/YARN-6677
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-6677.00.patch, YARN-6677.01.patch, 
> YARN-6677.02.patch, YARN-6677.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-06-04 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-4677:

Fix Version/s: 3.0.x
   2.9.2
   3.1.1
   3.2.0

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 2.9.2, 3.0.x
>
> Attachments: YARN-4677-branch-2.001.patch, 
> YARN-4677-branch-2.002.patch, YARN-4677-branch-2.003.patch, YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> So if a scheduling effort happens within this window, a new container can still 
> get allocated on this node. The even worse case is when the scheduling effort 
> happens after the RMNodeResourceUpdateEvent is sent out but before it is 
> propagated to the SchedulerNode: then the total resource is lower than the used 
> resource and the available resource becomes negative. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-06-04 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-4677:

Hadoop Flags: Reviewed

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 2.9.2, 3.0.x
>
> Attachments: YARN-4677-branch-2.001.patch, 
> YARN-4677-branch-2.002.patch, YARN-4677-branch-2.003.patch, YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> So if a scheduling effort happens within this window, a new container can still 
> get allocated on this node. The even worse case is when the scheduling effort 
> happens after the RMNodeResourceUpdateEvent is sent out but before it is 
> propagated to the SchedulerNode: then the total resource is lower than the used 
> resource and the available resource becomes negative. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8393) timeline flow runs API createdtimestart/createdtimeend parameter does not work

2018-06-04 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-8393:


 Summary: timeline flow runs API createdtimestart/createdtimeend 
parameter does not work
 Key: YARN-8393
 URL: https://issues.apache.org/jira/browse/YARN-8393
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelinereader
Affects Versions: 3.2.0
Reporter: Haibo Chen


[http://vijayk-ats-4.gce.cloudera.com:8188/ws/v2/timeline/users/systest/flows/flow1/runs]
 output:
{code:java}
[{"metrics":[],"events":[],"createdtime":1516405275543,"idprefix":0,"id":"systest@flow1/12342","info":{"UID":"aftc!systest!flow1!12342","SYSTEM_INFO_FLOW_RUN_END_TIME":151640562,"SYSTEM_INFO_FLOW_NAME":"flow1","SYSTEM_INFO_FLOW_RUN_ID":12342,"SYSTEM_INFO_USER":"systest","FROM_ID":"aftc!systest!flow1!12342"},"isrelatedto":{},"relatesto":{},"type":"YARN_FLOW_RUN"},{"metrics":[],"events":[],"createdtime":1516223999363,"idprefix":0,"id":"systest@flow1/12341","info":{"UID":"aftc!systest!flow1!12341","SYSTEM_INFO_FLOW_RUN_END_TIME":1516405586650,"SYSTEM_INFO_FLOW_NAME":"flow1","SYSTEM_INFO_FLOW_RUN_ID":12341,"SYSTEM_INFO_USER":"systest","FROM_ID":"aftc!systest!flow1!12341"},"isrelatedto":{},"relatesto":{},"type":"YARN_FLOW_RUN"}]
{code}
createdtimestart parameter call (using the higher of the two createdtime values so 
that the other run gets filtered out):
 
[http://vijayk-ats-4.gce.cloudera.com:8188/ws/v2/timeline/users/systest/flows/flow1/runs?createdtimestart=1516405275543]

But the output did not get filtered.

When trying with an even higher timestamp, the expectation was that both runs 
would be filtered out, but only one got filtered out at this value.
 
[http://vijayk-ats-4.gce.cloudera.com:8188/ws/v2/timeline/users/systest/flows/flow1/runs?createdtimestart=1516405585543]
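For reference, a tiny sketch of the filtering semantics the report expects; the class and method names are illustrative only and this is not the timeline reader implementation:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the expected createdtimestart/createdtimeend behavior.
public class CreatedTimeFilterSketch {
  static List<Long> filterByCreatedTime(List<Long> createdTimes,
      Long createdTimeStart, Long createdTimeEnd) {
    return createdTimes.stream()
        .filter(t -> createdTimeStart == null || t >= createdTimeStart)
        .filter(t -> createdTimeEnd == null || t <= createdTimeEnd)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Long> runs = Arrays.asList(1516405275543L, 1516223999363L);
    // With createdtimestart=1516405275543 only the first run should remain.
    System.out.println(filterByCreatedTime(runs, 1516405275543L, null));
    // With an even higher start timestamp both runs should be filtered out.
    System.out.println(filterByCreatedTime(runs, 1516405585543L, null));
  }
}
{code}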



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501103#comment-16501103
 ] 

genericqa commented on YARN-8103:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
40s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
24s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
43s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
44s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 36s{color} | {color:orange} root: The patch generated 13 new + 483 unchanged 
- 27 fixed = 496 total (was 510) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
14s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:re

[jira] [Commented] (YARN-8389) Improve the description of machine-list property in Federation docs

2018-06-04 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501106#comment-16501106
 ] 

Takanobu Asanuma commented on YARN-8389:


Thanks for reviewing and committing it, [~giovanni.fumarola] and [~elgoiri]!

> Improve the description of machine-list property in Federation docs
> ---
>
> Key: YARN-8389
> URL: https://issues.apache.org/jira/browse/YARN-8389
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8389.1.patch
>
>
> The current example and the description for machine-list property seem to be 
> a bit ambiguous.
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Optional:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6677) Preempt opportunistic containers when root container cgroup goes over memory limit

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501123#comment-16501123
 ] 

genericqa commented on YARN-6677:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 145 unchanged - 0 fixed = 146 total (was 145) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.DefaultOOMHandler$ContainerCandidate
 defines compareTo(DefaultOOMHandler$ContainerCandidate) and uses 
Object.equals()  At DefaultOOMHandler.java:Object.equals()  At 
DefaultOOMHandler.java:[lines 262-276] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-6677 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926477/YARN-6677.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d51970ebffb5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f30f2dc

[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501219#comment-16501219
 ] 

Sunil Govindan commented on YARN-8258:
--

Cleaned up the test case by moving it to another class so that it covers all 
initializers when SPNEGO is also used.

Fixed the checkstyle issues as well. Attached a new patch.

cc [~vinodkv]

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.005.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-04 Thread Naganarasimha G R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501238#comment-16501238
 ] 

Naganarasimha G R commented on YARN-8103:
-

Thanks for the patch and apologies for the delayed response.
Major comments:
 
 * I agree with the approach of not storing and updating the attributes in 
RMNode and instead requesting NodeAttributesManager to share the information. 
But I was also wondering whether the API (RMNodeImpl.getAllNodeAttributes) is 
useful for the current scenarios. All the callers eventually convert it into a 
set of attributes and use it, so I would prefer to change the API to just 
return the set of attributes applicable to a node and, when needed, let the 
caller take care of sorting based on the prefix (which is not a current 
scenario anyway).


A few other comments:
 * hadoop-yarn/bin/yarn ln no 58: I think "client" was missing from earlier 
and we need to add it.
 * NodeAttributesCLI ln no 195: I think it is better to use null here instead 
of "handler" for readability.
 * NodeAttributesCLI ln no 88, 96: unused variables.
 * TestNodeAttributesCLI ln no 405: testListAttributes also covers the 
NodesToAttributes tests; maybe that can be captured as a separate case.

Some of the findbugs and checkstyle issues seem to be valid; can you have a 
look at them?

Kudos, NodeAttributesCLI has been handled well!

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add API interface for querying the attributes. CLI interface 
> for querying node attributes for each nodes and list all attributes in 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-04 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501253#comment-16501253
 ] 

Weiwei Yang commented on YARN-8103:
---

Hi [~bibinchundatt]

The patch overall looks good to me; some comments below.

ClusterCLI
* line 121: printClusterNodeAttributes() writer is not closed

NodeAttributesCLI
* line 299: nodestoattributes -> nodes2attributes
* line 302: attributestonodes -> attributes2nodes
* line 149, 374, 421: ByteArrayOutputStream is not closed
* line 588: buildNodeLabelsMapFromStr -> buildNodeAttributesListFromStr

NodeCLI
* line 349: why not call NodeAttribute#toString?

Thanks
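For reference, a minimal sketch of the try-with-resources shape that the "writer is not closed" comments point at. The method signature and parameter types are placeholders, not the actual YARN-8103 code:

{code:java}
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.util.Collection;

// Sketch only: the writer (and the wrapped stream) is closed even on exceptions.
// In a real CLI printing to System.out you may prefer to flush rather than close.
public class PrintAttributesSketch {
  static void printClusterNodeAttributes(OutputStream sysout,
      Collection<String> attributes) {
    try (PrintWriter writer = new PrintWriter(
        new OutputStreamWriter(sysout, StandardCharsets.UTF_8), true)) {
      for (String attribute : attributes) {
        writer.println(attribute);
      }
    }
  }
}
{code}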


> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add API interface for querying the attributes. CLI interface 
> for querying node attributes for each nodes and list all attributes in 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8103) Add CLI interface to query node attributes

2018-06-04 Thread Naganarasimha G R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501238#comment-16501238
 ] 

Naganarasimha G R edited comment on YARN-8103 at 6/5/18 3:53 AM:
-

Thanks for the patch and apologies for the delayed response.
 Major comments:
  
 * I agree with the approach of not storing and updating the attributes in 
RMNode and instead requesting NodeAttributesManager to share the information. 
But I was also wondering whether the API (RMNodeImpl.getAllNodeAttributes) is 
useful for the current scenarios. All the callers eventually convert it into a 
set of attributes and use it, so I would prefer to change the API to just 
return the set of attributes applicable to a node and, when needed, let the 
caller take care of sorting based on the prefix (which is not a current 
scenario anyway).

A few other comments:
 * hadoop-yarn/bin/yarn ln no 58: I think "client" was missing from earlier 
and we need to add it.
 * NodeAttributesCLI ln no 195: I think it is better to use null here instead 
of "handler" for readability.
 * NodeAttributesCLI ln no 88, 96: unused variables.
 * TestNodeAttributesCLI ln no 405: testListAttributes also covers the 
NodesToAttributes tests; maybe that can be captured as a separate case.
 * NodeAttributesCLI ln no 355: HashSet => HashSet
 * NodeAttributesCLI ln no 394: HashSet => HashSet
 * NodeCLI ln no 349: formatNodeAttribute: instead of having new formatting 
here, why not update NodeAttributePBImpl.toString?
 * NodeCLI ln no 317: IMO it would be better to wrap each attribute in a new 
line.
 * TestClusterCLI: does not capture any case for listing of attributes 
(basically, the multiline case needs to be covered).
 * BuilderUtils ln no 211: a space is required before Set.

Some of the findbugs and checkstyle issues seem to be valid; can you have a 
look at them?

Kudos, NodeAttributesCLI has been handled well!


was (Author: naganarasimha):
Thanks for the patch and apologies for the delayed response.
Major comments:
 
 * I agree with the approach of not storing and updating the attributes in 
RMNode and instead requesting NodeAttributesManager to share the information. 
But I was also wondering whether the API (RMNodeImpl.getAllNodeAttributes) is 
useful for the current scenarios. All the callers eventually convert it into a 
set of attributes and use it, so I would prefer to change the API to just 
return the set of attributes applicable to a node and, when needed, let the 
caller take care of sorting based on the prefix (which is not a current 
scenario anyway).


A few other comments:
 * hadoop-yarn/bin/yarn ln no 58: I think "client" was missing from earlier 
and we need to add it.
 * NodeAttributesCLI ln no 195: I think it is better to use null here instead 
of "handler" for readability.
 * NodeAttributesCLI ln no 88, 96: unused variables.
 * TestNodeAttributesCLI ln no 405: testListAttributes also covers the 
NodesToAttributes tests; maybe that can be captured as a separate case.

Some of the findbugs and checkstyle issues seem to be valid; can you have a 
look at them?

Kudos, NodeAttributesCLI has been handled well!

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add API interface for querying the attributes. CLI interface 
> for querying node attributes for each nodes and list all attributes in 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Vinod Kumar Vavilapalli (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501263#comment-16501263
 ] 

Vinod Kumar Vavilapalli commented on YARN-8258:
---

bq. While doing this, getFilterMappings helps to get the URL path associated 
with each filter name and UI2 also should use same except for authentication 
filter. In that case, UI2 has to add /*.
What is special about the authentication filter? Can you comment on why every 
filter can be copied as is? If it is special, let's add a java comment too.

WebApps is a generic utility in hadoop-yarn-common, so it's weird for it to 
have RM specific code. We should look at cleaning this class up by moving RM 
specific code into RM itself.

After that, can you make {{addFiltersForUI2Context()}} static and move to 
RMWebAppUtil?

Refactor the string "authentication" in AuthenticationFilterInitializer and 
reuse it.

The following can lose multiple filters in the array with the same name. Is 
that okay?
{code}
  Map<String, FilterMapping> filterMappings = new HashMap<>();
  for (FilterMapping filterMapping : filterMappingsArray) {
    filterMappings.put(filterMapping.getFilterName(), filterMapping);
  }
{code}
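If the duplicates do matter, one possible shape is a list per filter name. This is only a sketch, not the patch; it assumes FilterMapping is the Jetty type returned by getFilterMappings() and that the usual java.util imports (Map, HashMap, List, ArrayList) are in scope:

{code:java}
  // Sketch: preserve multiple mappings per filter name instead of overwriting them.
  Map<String, List<FilterMapping>> filterMappings = new HashMap<>();
  for (FilterMapping filterMapping : filterMappingsArray) {
    filterMappings
        .computeIfAbsent(filterMapping.getFilterName(), k -> new ArrayList<>())
        .add(filterMapping);
  }
{code}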

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501298#comment-16501298
 ] 

genericqa commented on YARN-8258:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
20s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m  
2s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8258 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926496/YARN-8258.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3d6202834a89 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8d31ddc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.o

[jira] [Commented] (YARN-8382) cgroup file leak in NM

2018-06-04 Thread Hu Ziqian (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501301#comment-16501301
 ] 

Hu Ziqian commented on YARN-8382:
-

[~miklos.szeg...@cloudera.com], thank you for reviewing this patch.

> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: we wrote a container with a shutdownHook that contains a 
> piece of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak happens; 
> when *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.x
>
> Attachments: YARN-8382-branch-2.8.3.001.patch, 
> YARN-8382-branch-2.8.3.002.patch, YARN-8382.001.patch, YARN-8382.002.patch
>
>
> As Jiandan said in YARN-6525, the NM may time out deleting the cgroup container 
> file, with logs like below:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
> cgroup file leak happens. 
>  
> One container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─\{java}(16100) 
>                           ├─\{java}(16101) 
>                           ├─\{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still do some work (shutdownHook etc.) and does not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup files. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running, cgroup/tasks is still not 
> empty, and a cgroup file leak results.
>  
> We add a condition that 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
> bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
> problem.
>  
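For illustration, a sketch of the kind of configuration check the description proposes. This is not the actual YARN-8382 patch, and the default values below are only placeholders:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: verify the cgroup delete timeout exceeds the SIGKILL delay.
public class CgroupTimeoutCheckSketch {
  static void validate(Configuration conf) {
    long deleteTimeoutMs = conf.getLong(
        "yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms", 1000L);
    long sigkillDelayMs = conf.getLong(
        "yarn.nodemanager.sleep-delay-before-sigkill.ms", 250L);
    if (deleteTimeoutMs <= sigkillDelayMs) {
      // Otherwise cgroup cleanup can give up before the container JVM has
      // received SIGKILL, which is the leak scenario described above.
      throw new IllegalStateException("cgroups.delete-timeout-ms ("
          + deleteTimeoutMs + "ms) should be larger than "
          + "sleep-delay-before-sigkill.ms (" + sigkillDelayMs + "ms)");
    }
  }
}
{code}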



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8382) cgroup file leak in NM

2018-06-04 Thread Hu Ziqian (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hu Ziqian updated YARN-8382:

Description: 
As Jiandan said in YARN-6562, the NM may time out deleting the cgroup container 
file, with logs like below:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

 

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens. 

 

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─\{java}(16100) 

                          ├─\{java}(16101) 

                          ├─\{java}(16102)

 

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still do some work (shutdownHook etc.) and does not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup files. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running, cgroup/tasks is still not 
empty, and a cgroup file leak results.

 

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 

  was:
As Jiandan said in YARN-6525, the NM may time out deleting the cgroup container 
file, with logs like below:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

 

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens. 

 

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─\{java}(16100) 

                          ├─\{java}(16101) 

                          ├─\{java}(16102)

 

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still do some work (shutdownHook etc.) and does not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup files. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running, cgroup/tasks is still not 
empty, and a cgroup file leak results.

 

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 


> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: we wrote a container with a shutdownHook that contains a 
> piece of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak happens; 
> when *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.x
>
> Attachments: YARN-8382-branch-2.8.3.001.patch, 
> YARN-8382-branch-2.8.3.002.patch, YARN-8382.001.patch, YARN-8382.002.patch
>
>
> As Jiandan said in YARN-6562, the NM may time out while deleting a container's 
> cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when *yarn.nodemanager.sleep-delay-before-sigkill.ms* 
> is set bigger than *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, 
> the cgroup file leak happens.
>  
> One container's process tree looks like the following:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           ├─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container's 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still do some work (shutdown hooks etc.) and does not exit until 
> it receives 
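
The reproduction sketched in the Environment field above boils down to a 
container JVM whose shutdown hook never returns, so kill -15 alone cannot empty 
the cgroup; a minimal, illustrative version (not taken from the attached 
patches):

public class NeverEndingShutdownHook {
  public static void main(String[] args) throws InterruptedException {
    // On SIGTERM (kill -15) the JVM runs shutdown hooks; this one loops
    // forever, so the process only goes away once the NM escalates to SIGKILL.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      while (true) {
        try {
          Thread.sleep(100);   // the "while(true) sleep(100)" from the report
        } catch (InterruptedException ignored) {
          // deliberately keep looping
        }
      }
    }));
    // Stand-in for the container's normal work until it is asked to stop.
    Thread.sleep(Long.MAX_VALUE);
  }
}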

[jira] [Updated] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8258:
-
Attachment: YARN-8258.006.patch

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.
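
A rough sketch of what inheriting those filters could look like with Jetty's 
ServletContextHandler API; the helper name and the catch-all "/*" mapping are 
illustrative assumptions, not the actual patch:

import java.util.EnumSet;
import javax.servlet.DispatcherType;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;

public final class FilterInheritance {

  /** Re-register every filter of the default context on the UI2 context. */
  public static void inheritFilters(ServletContextHandler defaultContext,
      ServletContextHandler ui2Context) {
    FilterHolder[] filters = defaultContext.getServletHandler().getFilters();
    if (filters == null) {
      return;
    }
    for (FilterHolder holder : filters) {
      // Copy the registration rather than sharing the holder, so the two
      // contexts do not share filter instances.
      FilterHolder copy = new FilterHolder();
      copy.setName(holder.getName());
      copy.setClassName(holder.getClassName());
      copy.setInitParameters(holder.getInitParameters());
      ui2Context.addFilter(copy, "/*", EnumSet.of(DispatcherType.REQUEST));
    }
  }
}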






[jira] [Commented] (YARN-8258) YARN webappcontext for UI2 should inherit all filters from default context

2018-06-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501336#comment-16501336
 ] 

Sunil Govindan commented on YARN-8258:
--

Thank you very much [~vinodkv]

Updating new patch addressing all comments.

> YARN webappcontext for UI2 should inherit all filters from default context
> --
>
> Key: YARN-8258
> URL: https://issues.apache.org/jira/browse/YARN-8258
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8258.001.patch, YARN-8258.002.patch, 
> YARN-8258.003.patch, YARN-8258.004.patch, YARN-8258.005.patch, 
> YARN-8258.006.patch
>
>
> Thanks [~ssath...@hortonworks.com] for finding this.
> Ideally, all filters from the default context have to be inherited by the UI2 
> context as well.






[jira] [Resolved] (YARN-6289) Fail to achieve data locality when running MapReduce and Spark on HDFS

2018-06-04 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved YARN-6289.
---
   Resolution: Duplicate
Fix Version/s: 2.9.0
   3.0.0

> Fail to achieve data locality when running MapReduce and Spark on HDFS
> -
>
> Key: YARN-6289
> URL: https://issues.apache.org/jira/browse/YARN-6289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-scheduling
> Environment: Hardware configuration
> CPU: 2 x Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz /15M Cache 6-Core 12-Thread 
> Memory: 128GB Memory (16x8GB) 1600MHz
> Disk: 600GBx2 3.5-inch with RAID-1
> Network bandwidth: 968Mb/s
> Software configuration
> Spark-1.6.2   Hadoop-2.7.1 
>Reporter: Huangkaixuan
>Priority: Major
> Fix For: 3.0.0, 2.9.0
>
> Attachments: Hadoop_Spark_Conf.zip, YARN-DataLocality.docx, 
> YARN-RackAwareness.docx
>
>
> When running a simple wordcount experiment on YARN, I noticed that the tasks 
> failed to achieve data locality, even though no other job was running on the 
> cluster at the same time. The experiment was done on a 7-node cluster (1 
> master, 6 data nodes/node managers), and the input of the wordcount job (both 
> Spark and MapReduce) is a single-block file in HDFS which is two-way 
> replicated (replication factor = 2). I ran wordcount on YARN 10 times. The 
> results show that only 30% of the tasks achieved data locality, which looks 
> like the result of random task placement. The experiment details are in the 
> attachments; feel free to reproduce the experiments.
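
A back-of-the-envelope check on the "random placement" reading, assuming each 
task is scheduled uniformly at random across the 6 DataNodes and the block has 
2 replicas:

    P(node-local) = replicas / DataNodes = 2 / 6 ≈ 33%

which is consistent with the roughly 30% locality rate observed over the 10 runs.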






[jira] [Updated] (YARN-6289) Fail to achieve data locality when running MapReduce and Spark on HDFS

2018-06-04 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-6289:
--
Fix Version/s: (was: 3.0.0)
   (was: 2.9.0)

> Fail to achieve data locality when running MapReduce and Spark on HDFS
> -
>
> Key: YARN-6289
> URL: https://issues.apache.org/jira/browse/YARN-6289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-scheduling
> Environment: Hardware configuration
> CPU: 2 x Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz /15M Cache 6-Core 12-Thread 
> Memory: 128GB Memory (16x8GB) 1600MHz
> Disk: 600GBx2 3.5-inch with RAID-1
> Network bandwidth: 968Mb/s
> Software configuration
> Spark-1.6.2   Hadoop-2.7.1 
>Reporter: Huangkaixuan
>Priority: Major
> Attachments: Hadoop_Spark_Conf.zip, YARN-DataLocality.docx, 
> YARN-RackAwareness.docx
>
>
> When running a simple wordcount experiment on YARN, I noticed that the tasks 
> failed to achieve data locality, even though no other job was running on the 
> cluster at the same time. The experiment was done on a 7-node cluster (1 
> master, 6 data nodes/node managers), and the input of the wordcount job (both 
> Spark and MapReduce) is a single-block file in HDFS which is two-way 
> replicated (replication factor = 2). I ran wordcount on YARN 10 times. The 
> results show that only 30% of the tasks achieved data locality, which looks 
> like the result of random task placement. The experiment details are in the 
> attachments; feel free to reproduce the experiments.






[jira] [Commented] (YARN-6289) Fail to achieve data locality when running MapReduce and Spark on HDFS

2018-06-04 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501352#comment-16501352
 ] 

Weiwei Yang commented on YARN-6289:
---

[~Huangkx6810], [~leftnoteasy], I have closed this as a dup of YARN-6344 as it 
is already resolved.

> Fail to achieve data locality when running MapReduce and Spark on HDFS
> -
>
> Key: YARN-6289
> URL: https://issues.apache.org/jira/browse/YARN-6289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-scheduling
> Environment: Hardware configuration
> CPU: 2 x Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz /15M Cache 6-Core 12-Thread 
> Memory: 128GB Memory (16x8GB) 1600MHz
> Disk: 600GBx2 3.5-inch with RAID-1
> Network bandwidth: 968Mb/s
> Software configuration
> Spark-1.6.2   Hadoop-2.7.1 
>Reporter: Huangkaixuan
>Priority: Major
> Attachments: Hadoop_Spark_Conf.zip, YARN-DataLocality.docx, 
> YARN-RackAwareness.docx
>
>
> When running a simple wordcount experiment on YARN, I noticed that the tasks 
> failed to achieve data locality, even though no other job was running on the 
> cluster at the same time. The experiment was done on a 7-node cluster (1 
> master, 6 data nodes/node managers), and the input of the wordcount job (both 
> Spark and MapReduce) is a single-block file in HDFS which is two-way 
> replicated (replication factor = 2). I ran wordcount on YARN 10 times. The 
> results show that only 30% of the tasks achieved data locality, which looks 
> like the result of random task placement. The experiment details are in the 
> attachments; feel free to reproduce the experiments.


