[jira] [Updated] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-19 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7766:

Attachment: YARN-7766.003.patch

> Introduce a new config property for YARN Service dependency tarball location
> 
>
> Key: YARN-7766
> URL: https://issues.apache.org/jira/browse/YARN-7766
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, client, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7766.001.patch, YARN-7766.002.patch, 
> YARN-7766.003.patch
>
>
> Introduce a new config property (something like _yarn.service.framework.path_, 
> in line with _mapreduce.application.framework.path_) for the YARN Service 
> dependency tarball location. This will give the user/cluster-admin the 
> flexibility to upload the dependency tarball to a location of their choice. If 
> this config property is not set, the YARN Service client will default to 
> uploading all dependency jars from the client host's classpath for every 
> service launch request (as it does today).
> Also, accept an optional destination HDFS location for the *-enableFastLaunch* 
> command, to specify the location where the user/cluster-admin wants to upload 
> the tarball. If not specified, let's default it to the location we use today. 
> The cluster-admin still needs to set _yarn.service.framework.path_ to this 
> default location, otherwise it will not be used. So the command line will 
> become something like this -
> {code:java}
> yarn app -enableFastLaunch [<destination-folder>]{code}
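A minimal illustration of the proposed flow (the property name and command are from this JIRA; the paths are hypothetical):

{code:xml}
<!-- yarn-site.xml: point YARN Service clients at a pre-uploaded tarball -->
<property>
  <name>yarn.service.framework.path</name>
  <value>/yarn-services/service-dep.tar.gz</value>
</property>
{code}

{code}
# one-time upload of the dependency tarball to a destination of the
# cluster-admin's choice (hypothetical path)
yarn app -enableFastLaunch /yarn-services
{code}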






[jira] [Commented] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-19 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333152#comment-16333152
 ] 

Gour Saha commented on YARN-7766:
-

[~jianhe], thank you for reviewing the patch. I incorporated your suggestion 
and uploaded the 003 patch.

> Introduce a new config property for YARN Service dependency tarball location
> 
>
> Key: YARN-7766
> URL: https://issues.apache.org/jira/browse/YARN-7766
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, client, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7766.001.patch, YARN-7766.002.patch, 
> YARN-7766.003.patch
>
>
> Introduce a new config property (something like _yarn.service.framework.path_, 
> in line with _mapreduce.application.framework.path_) for the YARN Service 
> dependency tarball location. This will give the user/cluster-admin the 
> flexibility to upload the dependency tarball to a location of their choice. If 
> this config property is not set, the YARN Service client will default to 
> uploading all dependency jars from the client host's classpath for every 
> service launch request (as it does today).
> Also, accept an optional destination HDFS location for the *-enableFastLaunch* 
> command, to specify the location where the user/cluster-admin wants to upload 
> the tarball. If not specified, let's default it to the location we use today. 
> The cluster-admin still needs to set _yarn.service.framework.path_ to this 
> default location, otherwise it will not be used. So the command line will 
> become something like this -
> {code:java}
> yarn app -enableFastLaunch [<destination-folder>]{code}






[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333145#comment-16333145
 ] 

genericqa commented on YARN-2185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  4s{color} 
| {color:red} root generated 1 new + 1241 unchanged - 0 fixed = 1242 total (was 
1241) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} root: The patch generated 0 new + 151 unchanged - 8 
fixed = 151 total (was 159) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-2185 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906899/YARN-2185.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 53de2762fdb1 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2ed9d61 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-

[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333142#comment-16333142
 ] 

Sunil G commented on YARN-7780:
---

Thanks [~kkaranasos]. Such an example will really help in the case of 
affinity/anti-affinity or cardinality, and one could easily try the feature.

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
>
> JIRA to track documentation for the feature.






[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333138#comment-16333138
 ] 

Konstantinos Karanasos commented on YARN-7780:
--

Thanks [~cheersyang] – will upload a first patch within the next few days and 
will let you know if I need help on specific parts.

I will certainly give some DS commands for people to easily try it out, good 
point.

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
>
> JIRA to track documentation for the feature.






[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333136#comment-16333136
 ] 

Konstantinos Karanasos commented on YARN-7778:
--

Sure [~cheersyang], go ahead. Let me know if you want to discuss any details.

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Priority: Major
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.
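A minimal sketch of the merge rule above (a hypothetical helper, not from any patch), using a max-cardinality constraint as the example:

{code:java}
// Levels are ordered cluster > application > scheduling request; a lower
// level may only tighten (reduce) the allowed cardinality, and the most
// restrictive value wins.
static int mergeMaxCardinality(int cluster, int application, int request) {
  if (application > cluster || request > application) {
    throw new IllegalArgumentException(
        "Lower-level constraint is less restrictive than a higher level");
  }
  return request;
}
{code}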






[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333135#comment-16333135
 ] 

Sunil G commented on YARN-7774:
---

Makes sense. Sorry, I missed that one. Could you also update the exception in 
the same block: though the maxCardinality check is != 0, the exception message 
still says != 1 in {{SingleConstraintAppPlacementAllocator}}. If any JIRAs are 
about to be committed, we can get this in; otherwise I'll file a trivial one 
later.

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> placement is biased against putting too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially with constraints, where a request can be satisfied only after a 
> subsequent request has been placed.
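A minimal sketch of the circular-iterator idea from the second bullet (illustrative only, not the actual patch):

{code:java}
import java.util.Iterator;
import java.util.List;

/**
 * Visits every node exactly once, but starts at a rotating offset so that
 * successive requests do not all see the nodes in the same order.
 */
class CircularNodeIterator<N> implements Iterator<N> {
  private final List<N> nodes;
  private final int start;
  private int served = 0;

  CircularNodeIterator(List<N> nodes, int start) {
    this.nodes = nodes;
    this.start = start;
  }

  @Override
  public boolean hasNext() {
    return served < nodes.size();
  }

  @Override
  public N next() {
    return nodes.get((start + served++) % nodes.size());
  }
}
{code}

Advancing the start offset by one for each request spreads placements across the cluster instead of biasing them toward the head of the node list.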






[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333132#comment-16333132
 ] 

Arun Suresh commented on YARN-7774:
---

[~sunilg], do take a look at [this 
comment|https://issues.apache.org/jira/browse/YARN-7774?focusedCommentId=16331286&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16331286].
As per that, the max cardinality is the cardinality the constraint satisfier 
will see BEFORE placing the container.

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> placement is biased against putting too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially with constraints, where a request can be satisfied only after a 
> subsequent request has been placed.






[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333127#comment-16333127
 ] 

Sunil G commented on YARN-7774:
---

A quick doubt: in {{SingleConstraintAppPlacementAllocator}}, why was the 
maxCardinality validation changed from 1 to 0?

 

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: YARN-6592
>
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> placement is biased against putting too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially with constraints, where a request can be satisfied only after a 
> subsequent request has been placed.






[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333125#comment-16333125
 ] 

Weiwei Yang commented on YARN-7780:
---

[~kkaranasos], please let me know if this task can be split and how we can help.

Also, please include some docs introducing the DS changes made in YARN-7745; 
that will be very useful in giving users a first impression of how this works.

Thanks.

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
>
> JIRA to track documentation for the feature.






[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-19 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333122#comment-16333122
 ] 

Robert Kanter commented on YARN-2185:
-

+1 LGTM

[~jlowe] any other comments?

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
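A minimal sketch of the streaming idea (illustrative only, using Apache commons-compress; not the actual patch, and skipping the symlink and path-sanitization handling a real localizer needs):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

// Unpack a .tar.gz directly from the source stream, so the archive is never
// staged on local disk before extraction.
void streamUnpack(InputStream remote, Path destDir) throws IOException {
  try (TarArchiveInputStream tar =
           new TarArchiveInputStream(new GzipCompressorInputStream(remote))) {
    TarArchiveEntry entry;
    while ((entry = tar.getNextTarEntry()) != null) {
      Path target = destDir.resolve(entry.getName());
      if (entry.isDirectory()) {
        Files.createDirectories(target);
      } else {
        Files.createDirectories(target.getParent());
        Files.copy(tar, target); // reads this entry's bytes from the stream
      }
    }
  }
}
{code}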






[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333121#comment-16333121
 ] 

Weiwei Yang commented on YARN-7774:
---

Hi [~asuresh], that sounds good to me. +1 on the latest patch. I will try more 
tests when this one gets in, and file JIRAs if I find any issues or 
improvements to be made. Thank you.

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> placement is biased against putting too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially with constraints, where a request can be satisfied only after a 
> subsequent request has been placed.






[jira] [Comment Edited] (YARN-7745) Allow DistributedShell to take a placement specification for containers it wants to launch

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333112#comment-16333112
 ] 

Arun Suresh edited comment on YARN-7745 at 1/20/18 1:42 AM:


[~sunilg], sure, will file test case JIRAs.
In the meantime, you can start using this, e.g., as follows:
{code}
$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
/hadoop-yarn-applications-distributedshell-3.1.0-SNAPSHOT.jar \
-shell_command sleep -shell_args 10 -placement_spec foo=3,NOTIN,NODE,foo
{code}
This requests 3 containers with anti-affinity to each other.
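A field-by-field reading of the spec (an informal interpretation of the example above, not a formal grammar from this JIRA):
{noformat}
foo = 3 , NOTIN , NODE , foo
tag   n   op      scope  target tag
{noformat}
i.e., allocate 3 containers tagged "foo", where each container must be NOTIN (anti-affine to) any NODE that already holds a container tagged "foo".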


was (Author: asuresh):
[~sunilg], sure, will file test case JIRAs.
In the meantime, you can start using this, e.g., as follows:
{code}
$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
$YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
 -shell_command sleep -shell_args 10 -placement_spec foo=3,NOTIN,NODE,foo
{code}
This requests 3 containers with anti-affinity to each other.

> Allow DistributedShell to take a placement specification for containers it 
> wants to launch
> --
>
> Key: YARN-7745
> URL: https://issues.apache.org/jira/browse/YARN-7745
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7745-YARN-6592.001.patch
>
>
> This is to add a '-placement_spec' option to the distributed shell client, 
> where the user can specify a stringified specification for how they want 
> containers to be placed.
> For example:
> {noformat}
> $ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
> $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
>  -shell_command sleep -shell_args 10 -placement_spec 
> {noformat}






[jira] [Commented] (YARN-7745) Allow DistributedShell to take a placement specification for containers it wants to launch

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333112#comment-16333112
 ] 

Arun Suresh commented on YARN-7745:
---

[~sunilg], sure, will file test case JIRAs.
In the meantime, you can start using this, e.g., as follows:
{code}
$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
$YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
 -shell_command sleep -shell_args 10 -placement_spec foo=3,NOTIN,NODE,foo
{code}
This requests 3 containers with anti-affinity to each other.

> Allow DistributedShell to take a placement specification for containers it 
> wants to launch
> --
>
> Key: YARN-7745
> URL: https://issues.apache.org/jira/browse/YARN-7745
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7745-YARN-6592.001.patch
>
>
> This is to add a '-placement_spec' option to the distributed shell client, 
> where the user can specify a stringified specification for how they want 
> containers to be placed.
> For example:
> {noformat}
> $ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
> $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
>  -shell_command sleep -shell_args 10 -placement_spec 
> {noformat}






[jira] [Comment Edited] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333097#comment-16333097
 ] 

Arun Suresh edited comment on YARN-7774 at 1/20/18 1:34 AM:


The test case error is unrelated. [~cheersyang] , [~kkaranasos] , can we move 
forward with the latest patch? Happy to file subsequent JIRAs for anything 
specific.


was (Author: asuresh):
The test case error is unrelated. [~Weiwei Yang] , [~kkaranasos] , can we move 
forward with the latest patch? Happy to file subsequent JIRAs for anything 
specific.

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> placement is biased against putting too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially with constraints, where a request can be satisfied only after a 
> subsequent request has been placed.






[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333097#comment-16333097
 ] 

Arun Suresh commented on YARN-7774:
---

The test case error is unrelated. [~Weiwei Yang] , [~kkaranasos] , can we move 
forward with the latest patch? Happy to file subsequent JIRAs for anything 
specific.

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator or 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> placement is biased against putting too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially with constraints, where a request can be satisfied only after a 
> subsequent request has been placed.






[jira] [Commented] (YARN-7778) Merging of constraints defined at different levels

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333091#comment-16333091
 ] 

Weiwei Yang commented on YARN-7778:
---

I volunteer to help with this task. [~kkaranasos], can I take it over? I would 
like to explore it some day next week.

> Merging of constraints defined at different levels
> --
>
> Key: YARN-7778
> URL: https://issues.apache.org/jira/browse/YARN-7778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Priority: Major
>
> When we have multiple constraints defined for a given set of allocation tags 
> at different levels (i.e., at the cluster, the application or the scheduling 
> request level), we need to merge those constraints.
> Defining constraint levels as cluster > application > scheduling request, 
> constraints defined at lower levels should only be more restrictive than 
> those of higher levels. Otherwise the allocation should fail.
> For example, if there is an application level constraint that allows no more 
> than 5 HBase containers per rack, a scheduling request can further restrict 
> that to 3 containers per rack but not to 7 containers per rack.






[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333090#comment-16333090
 ] 

Weiwei Yang commented on YARN-7763:
---

Hi [~sunilg]
{quote} i think *constraint* could be updated in *pcm* than in a util. So when 
a policy comes to support different level, we could operate from *pcm* better.
{quote}
Agreed, [~kkaranasos] made a similar comment too, please see YARN-7778. We can 
track further improvements to the handling of different levels of constraints 
in that JIRA.

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch, YARN-7763-YARN-6592.005.patch, 
> YARN-7763-YARN-6592.006.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method and both of processor/scheduler implementation 
> will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}






[jira] [Commented] (YARN-5473) Expose per-application over-allocation info in the Resource Manager

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333045#comment-16333045
 ] 

genericqa commented on YARN-5473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 7s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
59s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-1011 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
55s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  9s{color} | {color:orange} root: The patch generated 8 new + 1770 unchanged 
- 20 fixed = 1778 total (was 1790) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 8 new + 4 unchanged - 0 fixed = 12 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-yarn-server-router

[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333038#comment-16333038
 ] 

genericqa commented on YARN-2185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 41s{color} 
| {color:red} root generated 1 new + 1241 unchanged - 0 fixed = 1242 total (was 
1241) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} root: The patch generated 0 new + 151 unchanged - 8 
fixed = 151 total (was 159) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-2185 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906899/YARN-2185.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76596800d7e6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cce71dc |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Created] (YARN-7782) Enable user re-mapping for Docker containers in yarn-default.xml

2018-01-19 Thread Eric Yang (JIRA)
Eric Yang created YARN-7782:
---

 Summary: Enable user re-mapping for Docker containers in 
yarn-default.xml
 Key: YARN-7782
 URL: https://issues.apache.org/jira/browse/YARN-7782
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: security, yarn
Affects Versions: 2.9.0, 3.0.0
Reporter: Eric Yang
Assignee: Eric Yang
 Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1


In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
enforce the user and group for the running user. In YARN-6623, this translated 
to --user=test --group-add=group1. The code no longer enforces the group 
correctly for the launched process.

In addition, the implementation in YARN-6623 requires the user and group 
information to exist in the container to translate the username and group to 
uid/gid. For users on LDAP, there is no good way to populate the container 
with user and group information.
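For reference, the two forms discussed above as plain docker CLI flags (the uid/gid values here are hypothetical):

{code}
# numeric enforcement recommended in YARN-4266; no user/group lookup
# inside the image is required
docker run -u 1005:1006 ...

# name-based form from YARN-6623; requires the user and group to resolve
# inside the container image
docker run --user=test --group-add=group1 ...
{code}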






[jira] [Updated] (YARN-7782) Enable user re-mapping for Docker containers in yarn-default.xml

2018-01-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7782:

Description: In YARN-4266, the recommendation was to use -u [uid]:[gid] 
numeric values to enforce the user and group for the running user. In 
YARN-7430, the user remapping defaults to true, but yarn-default.xml is still 
set to false.  (was: In YARN-4266, the recommendation was to use -u [uid]:[gid] 
numeric values to enforce the user and group for the running user. In 
YARN-6623, this translated to --user=test --group-add=group1. The code no 
longer enforces the group correctly for the launched process.

In addition, the implementation in YARN-6623 requires the user and group 
information to exist in the container to translate the username and group to 
uid/gid. For users on LDAP, there is no good way to populate the container 
with user and group information.)

> Enable user re-mapping for Docker containers in yarn-default.xml
> 
>
> Key: YARN-7782
> URL: https://issues.apache.org/jira/browse/YARN-7782
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
>
> In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
> enforce the user and group for the running user. In YARN-7430, the user 
> remapping defaults to true, but yarn-default.xml is still set to false.






[jira] [Commented] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2018-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333023#comment-16333023
 ] 

Hudson commented on YARN-7537:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13527 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13527/])
YARN-7537 [Atsv2] load hbase configuration from filesystem rather than 
(vrushali: rev ec8f47e7fadbe62c0c39390d0a46cefd50e98492)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java


> [Atsv2] load hbase configuration from filesystem rather than URL
> 
>
> Key: YARN-7537
> URL: https://issues.apache.org/jira/browse/YARN-7537
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7537.01.patch, YARN-7537.02.patch
>
>
> Currently HBaseTimelineStorageUtils#getTimelineServiceHBaseConf loads hbase 
> configurations using a URL if *yarn.timeline-service.hbase.configuration.file* 
> is configured, but it is restricted to URLs only. This needs to be changed to 
> load from a file system. In a deployment, the hbase configuration can be kept 
> on a filesystem so that it can be utilized by all NodeManagers and the 
> ResourceManager.
> cc: [~vrushalic] [~varun_saxena]
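A minimal sketch of the filesystem-based loading described above (illustrative only, using the standard {{Configuration}}/{{FileSystem}} APIs; not necessarily the committed code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration loadHBaseConf(Configuration yarnConf, String confFile)
    throws java.io.IOException {
  Path path = new Path(confFile);               // e.g. hdfs://... or file://...
  FileSystem fs = path.getFileSystem(yarnConf); // resolve scheme to a FileSystem
  Configuration hbaseConf = new Configuration(false);
  hbaseConf.addResource(fs.open(path));         // read hbase-site.xml contents
  return hbaseConf;
}
{code}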






[jira] [Commented] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333002#comment-16333002
 ] 

Jian He commented on YARN-7766:
---

{code}
  public int actionDependency(String destinationFolder, boolean overwrite)
  throws IOException, YarnException {
String currentUser = RegistryUtils.currentUser();
LOG.info("Running command as user {}", currentUser);

Path dependencyLibTarGzip = fs.getDependencyTarGzip(true,
destinationFolder);
{code}
- For the above code, it feels like we don't need to pass in a 'true' flag 
(this avoids the chain of caller changes that passes in "false"). We can do the 
special-case handling right here, something like below:
{code}
// Fall back to the default, version-specific location when no
// destination folder was given.
if (destinationFolder == null) {
  destinationFolder = String.format(YarnServiceConstants.DEPENDENCY_DIR,
      VersionInfo.getVersion());
}
Path dependencyLibTarGzip = new Path(destinationFolder,
YarnServiceConstants.DEPENDENCY_TAR_GZ_FILE_NAME
+ YarnServiceConstants.DEPENDENCY_TAR_GZ_FILE_EXT);
{code}

> Introduce a new config property for YARN Service dependency tarball location
> 
>
> Key: YARN-7766
> URL: https://issues.apache.org/jira/browse/YARN-7766
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, client, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7766.001.patch, YARN-7766.002.patch
>
>
> Introduce a new config property (something like _yarn.service.framework.path_, 
> in line with _mapreduce.application.framework.path_) for the YARN Service 
> dependency tarball location. This will give the user/cluster-admin the 
> flexibility to upload the dependency tarball to a location of their choice. If 
> this config property is not set, the YARN Service client will default to 
> uploading all dependency jars from the client host's classpath for every 
> service launch request (as it does today).
> Also, accept an optional destination HDFS location for the *-enableFastLaunch* 
> command, to specify the location where the user/cluster-admin wants to upload 
> the tarball. If not specified, let's default it to the location we use today. 
> The cluster-admin still needs to set _yarn.service.framework.path_ to this 
> default location, otherwise it will not be used. So the command line will 
> become something like this -
> {code:java}
> yarn app -enableFastLaunch [<destination-folder>]{code}






[jira] [Updated] (YARN-2185) Use pipes when localizing archives

2018-01-19 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-2185:
-
Attachment: YARN-2185.008.patch

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.






[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-19 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332965#comment-16332965
 ] 

Miklos Szegedi commented on YARN-2185:
--

Thank you for the review [~rkanter]. I updated the patch.

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.






[jira] [Commented] (YARN-5473) Expose per-application over-allocation info in the Resource Manager

2018-01-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332963#comment-16332963
 ] 

Haibo Chen commented on YARN-5473:
--

Thanks [~snemeth] for the review! For 1) and 2), given the patch is already 
very large and there are only 1 or 2 constructors, I think we can leave the 
refactoring to another JIRA.

For 4), instead of adding three more methods, I used an anonymous HashMap 
subclass to make it more readable. 3), 5) and 6) are all addressed accordingly. 
Thanks again for the review.
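(For readers unfamiliar with the idiom: an anonymous {{HashMap}} subclass uses an instance-initializer block, the so-called double-brace form; the keys and values below are made up:)

{code:java}
import java.util.HashMap;
import java.util.Map;

Map<String, Long> expected = new HashMap<String, Long>() {{
  put("memoryMB", 1024L); // instance-initializer block of the anonymous subclass
  put("vcores", 2L);
}};
{code}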

> Expose per-application over-allocation info in the Resource Manager
> ---
>
> Key: YARN-5473
> URL: https://issues.apache.org/jira/browse/YARN-5473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-5473-YARN-1011.00.patch, 
> YARN-5473-YARN-1011.01.patch, YARN-5473-YARN-1011.02.patch, 
> YARN-5473-YARN-1011.prelim.patch
>
>
> When enabling over-allocation of nodes, the resources in the cluster change. 
> We need to surface this information for users to understand these changes.






[jira] [Updated] (YARN-5473) Expose per-application over-allocation info in the Resource Manager

2018-01-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5473:
-
Attachment: YARN-5473-YARN-1011.02.patch

> Expose per-application over-allocation info in the Resource Manager
> ---
>
> Key: YARN-5473
> URL: https://issues.apache.org/jira/browse/YARN-5473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-5473-YARN-1011.00.patch, 
> YARN-5473-YARN-1011.01.patch, YARN-5473-YARN-1011.02.patch, 
> YARN-5473-YARN-1011.prelim.patch
>
>
> When enabling over-allocation of nodes, the resources in the cluster change. 
> We need to surface this information for users to understand these changes.






[jira] [Commented] (YARN-7779) Display allocation tags in RM web UI and expose via REST API

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332954#comment-16332954
 ] 

Arun Suresh commented on YARN-7779:
---

Definitely a good idea. Thanks for taking this up [~cheersyang]

> Display allocation tags in RM web UI and expose via REST API
> 
>
> Key: YARN-7779
> URL: https://issues.apache.org/jira/browse/YARN-7779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> Propose to display node allocation tags in the RM. This will allow users to 
> check allocations w.r.t. the tags. It would be good to expose node allocation 
> tags from:
>  * Web UI: {{http://<rm-address>/cluster/nodes}}
>  * REST API: {{http://<rm-address>/ws/v1/cluster/nodes}}, 
> {{http://<rm-address>/ws/v1/cluster/node/<node-id>}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-19 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332948#comment-16332948
 ] 

Robert Kanter commented on YARN-2185:
-

Thanks for the update [~miklos.szeg...@cloudera.com].  
One last trivial thing:
 - {{downloadAndUnpack}} is missing the javadoc for the added {{throws 
YarnException}}

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
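
For context, a minimal sketch of the streaming idea using commons-compress (an 
illustration of the technique only, not the code in the attached patches):
{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.commons.compress.utils.IOUtils;

public class StreamingUnpack {
  // Unpack a .tar.gz directly from the download stream, so the archive
  // itself never has to be written to local disk first.
  public static void unpack(InputStream download, Path destDir) throws IOException {
    try (TarArchiveInputStream tar =
        new TarArchiveInputStream(new GzipCompressorInputStream(download))) {
      TarArchiveEntry entry;
      while ((entry = tar.getNextTarEntry()) != null) {
        // Real code must also validate entry names against path traversal.
        Path target = destDir.resolve(entry.getName()).normalize();
        if (entry.isDirectory()) {
          Files.createDirectories(target);
        } else {
          Files.createDirectories(target.getParent());
          try (OutputStream out = Files.newOutputStream(target)) {
            IOUtils.copy(tar, out); // copies only the current entry
          }
        }
      }
    }
  }
}
{code}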



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-19 Thread Gour Saha (JIRA)
Gour Saha created YARN-7781:
---

 Summary: Update YARN-Services-Examples.md to be in sync with the 
latest code
 Key: YARN-7781
 URL: https://issues.apache.org/jira/browse/YARN-7781
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha


Update YARN-Services-Examples.md to make the following additions/changes:

1. Add an additional URL and PUT Request JSON to support flex:

Update to flex up/down the number of containers (instances) of a component of a 
service (a client sketch follows after item 2 below)
PUT URL – http://localhost:9191/app/v1/services/hello-world
PUT Request JSON
{code}
{
  "components" : [ {
"name" : "hello",
"number_of_containers" : 3
  } ]
}
{code}

2. Modify all occurrences of /ws/ to /app/
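
For illustration, a minimal Java client issuing the flex request above (the 
host, port and service name are taken from the example URL; this is a sketch, 
not part of the YARN CLI):
{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexClient {
  public static void main(String[] args) throws Exception {
    String body =
        "{ \"components\": [ { \"name\": \"hello\", \"number_of_containers\": 3 } ] }";
    URL url = new URL("http://localhost:9191/app/v1/services/hello-world");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8)); // send the flex JSON
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}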



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332945#comment-16332945
 ] 

genericqa commented on YARN-7774:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 115 unchanged - 0 fixed = 117 total (was 115) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906882/YARN-7774-YARN-6592.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 655580a91755 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 27fa101 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19354/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19354/artifact/out/patch-unit-hadoop-

[jira] [Commented] (YARN-7732) Support Pluggable AM Simulator

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332944#comment-16332944
 ] 

genericqa commented on YARN-7732:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 65 
new + 49 unchanged - 3 fixed = 114 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-tools/hadoop-sls generated 4 new + 0 unchanged 
- 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
13s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-sls |
|  |  Dead store to now in 
org.apache.hadoop.yarn.sls.SLSRunner.startAMFromSynthGenerator()  At 
SLSRunner.java:org.apache.hadoop.yarn.sls.SLSRunner.startAMFromSynthGenerator() 
 At SLSRunner.java:[line 633] |
|  |  org.apache.hadoop.yarn.sls.synthetic.SynthJob.toString() concatenates 
strings using + in a loop  At SynthJob.java:in a loop  At SynthJob.java:[line 
175] |
|  |  Format string should use %n rather than n in 
org.apache.hadoop.yarn.sls.synthetic.SynthJob.toString()  At 
SynthJob.java:rather than n in 
org.apache.hadoop.yarn.sls.synthetic.SynthJob.toString()  At 
SynthJob.java:[line 175] |
|  |  org.apache.hadoop.yarn.sls.synthetic.SynthJob$SynthTask defines equals 
and uses Object.hashCode()  At SynthJob.java:Object.hashCode()  At 
SynthJob.java:[lines 256-260] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7732 |
| JIRA Patch URL | 
https://issues.apache.org/jir

[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332847#comment-16332847
 ] 

Arun Suresh commented on YARN-7774:
---

Updating patch

* Moved circular iterator to its own class
* The algorithm follows the scheme I mentioned in my previous comment. It will 
try the last satisfied node before checking the next node in the iteration.
* Fixed the tests. The {{TestContinuousScheduling}} failure is unrelated, and the 
test works fine for me.

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * Scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The Placement Algorithm should either shuffle the node iterator OR 
> use a circular iterator - to ensure a) more nodes are looked at and b) bias 
> against placing too many containers on the same node
> * Add a placement retry loop for rejected requests - since there are cases, 
> especially when constraints will be satisfied only after a subsequent request 
> has been placed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7774:
--
Attachment: YARN-7774-YARN-6592.005.patch

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch, YARN-7774-YARN-6592.005.patch
>
>
> JIRA to track the following minor changes:
> * Scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The Placement Algorithm should either shuffle the node iterator OR 
> use a circular iterator - to ensure a) more nodes are looked at and b) bias 
> against placing too many containers on the same node
> * Add a placement retry loop for rejected requests - since there are cases, 
> especially when constraints will be satisfied only after a subsequent request 
> has been placed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332831#comment-16332831
 ] 

genericqa commented on YARN-7777:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 27 unchanged - 1 fixed = 27 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:

[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332828#comment-16332828
 ] 

Jian He commented on YARN-5428:
---

The patch looks good to me overall. I am wondering if we should check the 
credential object size, because a user may accidentally point to a wrong file 
that is very large, and it would then stay in RM memory, the znode, etc. for a 
long time.
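
A minimal sketch of the kind of size guard being suggested (the limit and the 
method name are hypothetical):
{code:java}
import java.io.File;
import java.io.IOException;

public class DockerConfigSizeCheck {
  // Hypothetical cap; the actual limit would need to be agreed upon.
  private static final long MAX_CONFIG_BYTES = 64 * 1024;

  public static void validate(File dockerConfig) throws IOException {
    if (dockerConfig.length() > MAX_CONFIG_BYTES) {
      throw new IOException("Docker client config " + dockerConfig + " is "
          + dockerConfig.length() + " bytes, which exceeds the limit of "
          + MAX_CONFIG_BYTES + "; refusing to keep it in RM state");
    }
  }
}
{code}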

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428.006.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332736#comment-16332736
 ] 

Hudson commented on YARN-5094:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13525 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13525/])
YARN-5094. some YARN container events have timestamp of -1. (haibochen: rev 
4aca4ff759f773135f8a27dbaa9731196fac5233)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/event/LocalizationEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationEvent.java
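
Judging from the files touched, the shape of the fix is presumably to stamp the 
NM-side events at creation time; roughly (illustrative, not the literal diff):
{code:java}
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.event.AbstractEvent;

// AbstractEvent(TYPE) leaves the timestamp at -1; passing the wall-clock time
// through the two-argument constructor gives the event a real timestamp.
// (ContainerEventType lives in the same NM package as ContainerEvent.)
public class ContainerEvent extends AbstractEvent<ContainerEventType> {
  private final ContainerId containerId;

  public ContainerEvent(ContainerId containerId, ContainerEventType type) {
    super(type, System.currentTimeMillis());
    this.containerId = containerId;
  }

  public ContainerId getContainerID() {
    return containerId;
  }
}
{code}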


> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332727#comment-16332727
 ] 

Jian He commented on YARN-7777:
---

yeah, good point, missed that, updated the patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch, YARN-7777.02.patch
>
>
> A user name that has "\_" should be converted to use "-", because DNS names 
> don't allow "_".
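
For illustration, the conversion being described is essentially (a sketch, not 
the patch itself):
{code:java}
public class DnsUserName {
  // DNS labels (RFC 1123) allow only letters, digits and hyphens, so map
  // the underscore to a hyphen (and lower-case the name while at it).
  public static String toDnsLabel(String userName) {
    return userName.toLowerCase().replace('_', '-');
  }

  public static void main(String[] args) {
    System.out.println(toDnsLabel("dr_who")); // prints dr-who
  }
}
{code}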



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Attachment: YARN-7777.02.patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch, YARN-7777.02.patch
>
>
> A user name that has "\_" should be converted to use "-", because DNS names 
> don't allow "_".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-19 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332694#comment-16332694
 ] 

Carlo Curino edited comment on YARN-6648 at 1/19/18 6:37 PM:
-

[~botong] thanks for the updated patch, I think it is nicer to have them 
combined (easier to follow).

Here are a few questions/suggestions (some pretty minor, some more important):
 # in {{MemoryFederationStateStore.setSubClusterLastHeartbeat}} why do you go 
through {{getSubcluster}} instead of just doing 
{{membership.get(subClusterId).setLastHeartBeat(longHeartBeat)}} ?
 # In {{GPGUtils}} consider using {{DurationFormatUtils.formatDuration(long, 
string_format)}}, instead of the code you have.
 # In {{GlobalPolicyGenerator}}
 ## should we keep the string constants here, or have them in 
{{YarnConfiguration}} or other places where those are usually defined?
 ## Is the {{SubClusterCleanerService}} required by every Federation 
deployment, or is it something we might want to make configurable (runs only if 
turned on)? More generally, should we have a generic mechanism to "start 
services" in the GPG?
 # In {{SubClusterCleaner}}
 ## line 77, is there a way for us to "check" whether the format in the 
{{StateStore}} is local or UTC? Related is the code around line 100: you seem 
to doubt the format and be conservative about it, which might mean the 
clean-up could at times be delayed by many hours. Anything better than 
assuming things and/or being overly conservative?
 ## In {{SubClusterCleaner}} line 87, maybe a bit verbose? Should some of this 
be {{LOG.debug}} instead (if so, wrap it in the usual {{if(debugEnabled)}} 
check)?
 ## What do you do in case the subCluster {{isUnusable()}}?
 # In {{SubClusterCleanerService}}
 ## typo in Javadoc: GPE
 ## I assume we will have many similar "actions run on a schedule", can you 
make this class more generic (templatize it, so we can re-use it)?
 ## If the thread crashes, do we have something that restarts it? I see it 
throws {{Exception}}; is anyone restarting the service if it throws?


was (Author: curino):
[~botong] thanks for the updated patch, I think it is nicer to have them 
combined (easier to follow).

Here a few questions/suggestions (some pretty minor, some more important):
 # in {{MemoryFederationStateStore.setSubClusterLastHeartbeat}} why do you go 
through {{getSubcluster}} instead of just doing 
{{membership.get(subClusterId).setLastHeartBeat(longHeartBeat)}} ?
 # In {{GPGUtils}} consider using {{DurationFormatUtils.formatDuration(long, 
string_format)}}, instead of the code you have.
 # In {{GlobalPolicyGenerator}}
 ## should we keep the string constants here, or have them in 
{{YarnConfiguration}} or other places where those are usually defined?
 ## Is the {{SubClusterCleanerService}} required by every Federation 
deployment, or is it something we might want to make configurable (runs only if 
turned on). More generally, should we have a generic mechanism to "start 
services" in the GPG?
 # In {{SubClusterCleaner}}
 ## line 77, is there a way for us to "check" whether the format in the 
{{StateStore}} is local or UTC? Related is the code around line 100, you seem 
to doubt the format, and be conservative about it, which might mean the 
clean-up is at times could be delayed by many hours. Anything better than 
assuming things and/or being overly conservative?
 ## In {{SubClusterCleaner}} line 87, maybe a bit verbose? Should some of this 
be {{LOG.debug}} instead (if so, wrap it in the usual {{if(debugEnabled)}} 
check)?
 ## What do you do in case the subCluster {{isUnusable()}}?
 #In \{{SubClusterCleanerService }}
 ## type in Javadoc GPE
 ## I assume we will have many similar "actions run on a schedule", can you 
make this class more generic (templatize it, so we can re-use it)?
 ## If the threads crashes, do we have something that restarts it? I see it 
throws {{Exception}}, anyone restarting the service if it throws?

> [GPG] Add SubClusterCleaner in Global Policy Generator
> --
>
> Key: YARN-6648
> URL: https://issues.apache.org/jira/browse/YARN-6648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-6648-YARN-2915.v1.patch, 
> YARN-6648-YARN-7402.v2.patch, YARN-6648-YARN-7402.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332708#comment-16332708
 ] 

Arun Suresh commented on YARN-7774:
---

Thanks for the comments [~kkaranasos] and [~cheersyang].

bq. CircularIterator looks to be general enough and deserves its own class with 
a generic type, it can be moved to common package..
Will do - I had initially done that. The only issue was that we need the element 
to implement equals, which SchedulerNode does not (I had to explicitly get the 
nodeId from the SchedulerNode and compare it :)). But yeah, I'll move it to a 
different class, and that way I can put in some unit tests.

bq. it mean the second allocation can only be made after it iterates all nodes 
again? This algorithm doesn't seem to be affinity friendly.
Yup, I did consider that. My thought process was: given that anti-affinity will 
probably be used more often, and even if affinity is used - we've come across 
more use cases for RACK and label affinity than node affinity - both of these 
cases are served better by the CircularIterator.

bq. Do we really need the CircularIterator? It seems to me that you can have a 
normal iterator initialized outside the for loop and then each time 
hasNext()=false, you can re-initialize it. But maybe I am missing something
Yeah, it's mostly equivalent (we also need to record the starting element), but 
I did not want to clutter the main loop of the code with additional variables 
etc.; abstracting it out to a different class looked cleaner.

Regarding using the minCardinality > 0 check for anti-affinity:
Instead of looking at the minCardinality, I was thinking the following scheme 
would be efficient for both node-affinity as well as anti-affinity. If a 
SchedulingRequest has been placed on a node and there are more requests, then 
instead of initializing the CircularIterator from the next node, we initialize 
it starting from the CURRENT node. That way, for the next request (or if the 
previous request has numAllocations > 1), the just-placed node will be the 
first candidate. It will also not severely impact anti-affinity placement, 
since only a single previously considered node will be re-considered before 
moving on to the rest of the cluster. Makes sense?

bq. Do we clean up the black list for each tag? It seems that black-listing can 
change based on the allocations that have been done so far, so we might need to 
use it carefully.
The blacklists are not persistent - they only hang around until the max retries 
are completed, after which both the requests and the blacklists are discarded.
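
For reference, a minimal generic sketch of the circular iteration described in 
this thread (illustrative; the class in the patch may differ):
{code:java}
import java.util.Iterator;
import java.util.List;

/**
 * Iterates over a list starting at an arbitrary position and wraps around
 * exactly once, so every element is visited one time.
 */
public class CircularIterator<T> implements Iterator<T> {
  private final List<T> items;
  private final int start;
  private int visited = 0;

  public CircularIterator(List<T> items, int start) {
    this.items = items;
    this.start = start;
  }

  @Override
  public boolean hasNext() {
    return visited < items.size();
  }

  @Override
  public T next() {
    return items.get((start + visited++) % items.size());
  }
}
{code}
Tracking a visit count rather than comparing against the starting element 
sidesteps the missing {{equals}} on SchedulerNode mentioned above.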

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch
>
>
> JIRA to track the following minor changes:
> * Scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The Placement Algorithm should either shuffle the node iterator OR 
> use a circular iterator - to ensure a) more nodes are looked at and b) bias 
> against placing too many containers on the same node
> * Add a placement retry loop for rejected requests - since there are cases, 
> especially when constraints will be satisfied only after a subsequent request 
> has been placed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2018-01-19 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332702#comment-16332702
 ] 

Vrushali C commented on YARN-7537:
--

+1 on latest patch. Will commit shortly to trunk and branch-2

> [Atsv2] load hbase configuration from filesystem rather than URL
> 
>
> Key: YARN-7537
> URL: https://issues.apache.org/jira/browse/YARN-7537
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7537.01.patch, YARN-7537.02.patch
>
>
> Currently HBaseTimelineStorageUtils#getTimelineServiceHBaseConf loads hbase 
> configurations using a URL if *yarn.timeline-service.hbase.configuration.file* 
> is configured, but it is restricted to URLs only. This needs to be changed to 
> load from the file system. In deployment, the hbase configuration can be kept 
> on the filesystem so that it can be utilized by all the NodeManagers and the 
> ResourceManager.
> cc: [~vrushalic] [~varun_saxena]
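
A minimal sketch of what loading from a FileSystem path could look like 
(illustrative only; the method and class names here are made up):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HBaseConfLoader {
  // Resolve the configured file against whatever FileSystem it lives on
  // (HDFS, local, ...) instead of requiring a URL.
  public static Configuration load(Configuration yarnConf, String confFile)
      throws IOException {
    Path path = new Path(confFile);
    FileSystem fs = path.getFileSystem(yarnConf);
    Configuration hbaseConf = new Configuration(yarnConf);
    hbaseConf.addResource(fs.open(path), path.toString());
    return hbaseConf;
  }
}
{code}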



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-19 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332694#comment-16332694
 ] 

Carlo Curino commented on YARN-6648:


[~botong] thanks for the updated patch, I think it is nicer to have them 
combined (easier to follow).

Here are a few questions/suggestions (some pretty minor, some more important):
 # in {{MemoryFederationStateStore.setSubClusterLastHeartbeat}} why do you go 
through {{getSubcluster}} instead of just doing 
{{membership.get(subClusterId).setLastHeartBeat(longHeartBeat)}} ?
 # In {{GPGUtils}} consider using {{DurationFormatUtils.formatDuration(long, 
string_format)}} instead of the code you have (see the sketch after this list).
 # In {{GlobalPolicyGenerator}}
 ## should we keep the string constants here, or have them in 
{{YarnConfiguration}} or other places where those are usually defined?
 ## Is the {{SubClusterCleanerService}} required by every Federation 
deployment, or is it something we might want to make configurable (runs only if 
turned on)? More generally, should we have a generic mechanism to "start 
services" in the GPG?
 # In {{SubClusterCleaner}}
 ## line 77, is there a way for us to "check" whether the format in the 
{{StateStore}} is local or UTC? Related is the code around line 100: you seem 
to doubt the format and be conservative about it, which might mean the 
clean-up could at times be delayed by many hours. Anything better than 
assuming things and/or being overly conservative?
 ## In {{SubClusterCleaner}} line 87, maybe a bit verbose? Should some of this 
be {{LOG.debug}} instead (if so, wrap it in the usual {{if(debugEnabled)}} 
check)?
 ## What do you do in case the subCluster {{isUnusable()}}?
 # In {{SubClusterCleanerService}}
 ## typo in Javadoc: GPE
 ## I assume we will have many similar "actions run on a schedule", can you 
make this class more generic (templatize it, so we can re-use it)?
 ## If the thread crashes, do we have something that restarts it? I see it 
throws {{Exception}}; is anyone restarting the service if it throws?
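
Regarding item 2, the commons-lang utility mentioned works roughly like this:
{code:java}
import org.apache.commons.lang3.time.DurationFormatUtils;

public class DurationExample {
  public static void main(String[] args) {
    long heartbeatAgeMs = 5400000L; // sample value: 1.5 hours
    // formatDuration renders a millisecond duration with a pattern string.
    System.out.println(
        DurationFormatUtils.formatDuration(heartbeatAgeMs, "HH:mm:ss")); // 01:30:00
  }
}
{code}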

> [GPG] Add SubClusterCleaner in Global Policy Generator
> --
>
> Key: YARN-6648
> URL: https://issues.apache.org/jira/browse/YARN-6648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-6648-YARN-2915.v1.patch, 
> YARN-6648-YARN-7402.v2.patch, YARN-6648-YARN-7402.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332682#comment-16332682
 ] 

genericqa commented on YARN-6648:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 in YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6648 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906849/YARN-6648-YARN-7402.v3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c2ac292ea526 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7402 / 1702dfa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| 

[jira] [Created] (YARN-7780) Documentation for Placement Constraints

2018-01-19 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7780:
-

 Summary: Documentation for Placement Constraints
 Key: YARN-7780
 URL: https://issues.apache.org/jira/browse/YARN-7780
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Konstantinos Karanasos


JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6384) Add configuration property to set max CPU usage when strict-resource-usage is false with cgroups

2018-01-19 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332616#comment-16332616
 ] 

Miklos Szegedi commented on YARN-6384:
--

Thank you for the latest patch [~lolee_k] and for the review [~templedf]. It 
looks good in general; I have two comments.
{code:java}
102 if (this.maxResourceUsagePercentInStrictMode <= 0.0f) {
103 this.maxResourceUsagePercentInStrictMode =
104 
YarnConfiguration.DEFAULT_NM_LINUX_CONTAINER_CGROUPS_MAX_RESOURCE_USAGE_PERCENT;
105 }{code}
It would be nice to log a warning here saying that the setting was invalid and 
we reset it to the default.
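
Something along these lines (illustrative; assumes the class's existing LOG):
{code:java}
if (this.maxResourceUsagePercentInStrictMode <= 0.0f) {
  // Warn before silently overriding an invalid configuration value.
  LOG.warn("Invalid value " + this.maxResourceUsagePercentInStrictMode
      + " for the max resource usage percent; resetting to the default "
      + YarnConfiguration
          .DEFAULT_NM_LINUX_CONTAINER_CGROUPS_MAX_RESOURCE_USAGE_PERCENT);
  this.maxResourceUsagePercentInStrictMode =
      YarnConfiguration
          .DEFAULT_NM_LINUX_CONTAINER_CGROUPS_MAX_RESOURCE_USAGE_PERCENT;
}
{code}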

Also, even though the new configuration is named max-resource-usage-percent, it 
only applies to CPU for now. I think this is good and may prove future-proof; 
however, it would be nice to file a new JIRA to handle memory and other 
subsystems.

> Add configuration property to set max CPU usage when strict-resource-usage is 
> false with cgroups
> 
>
> Key: YARN-6384
> URL: https://issues.apache.org/jira/browse/YARN-6384
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: dengkai
>Assignee: dengkai
>Priority: Major
> Attachments: YARN-6384-0.patch, YARN-6384-1.patch, YARN-6384-2.patch, 
> YARN-6384-3.patch, YARN-6384-4.patch, YARN-6384-5.patch
>
>
> When using cgroups on YARN, if 
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is 
> false, a user may get much more CPU time than expected based on the vcores. 
> There should be an upper limit even when resource usage is not strict, e.g. a 
> percentage by which a user can exceed what is promised by the vcores. I think 
> it's important in a shared cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6384) Add configuration property to increase max CPU usage when strict-resource-usage is true with cgroups

2018-01-19 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6384:
-
Summary: Add configuration property to increase max CPU usage when 
strict-resource-usage is true with cgroups  (was: Add configuration property to 
set max CPU usage when strict-resource-usage is false with cgroups)

> Add configuration property to increase max CPU usage when 
> strict-resource-usage is true with cgroups
> 
>
> Key: YARN-6384
> URL: https://issues.apache.org/jira/browse/YARN-6384
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: dengkai
>Assignee: dengkai
>Priority: Major
> Attachments: YARN-6384-0.patch, YARN-6384-1.patch, YARN-6384-2.patch, 
> YARN-6384-3.patch, YARN-6384-4.patch, YARN-6384-5.patch
>
>
> When using cgroups on YARN, if 
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is 
> false, a user may get much more CPU time than expected based on the vcores. 
> There should be an upper limit even when resource usage is not strict, e.g. a 
> percentage by which a user can exceed what is promised by the vcores. I think 
> it's important in a shared cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7737) prelaunch.err file not found exception on container failure

2018-01-19 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332614#comment-16332614
 ] 

Zhe Zhang commented on YARN-7737:
-

+1, looks like a clear fix to me. Will wait for [~jhung] to take a look before 
committing.

> prelaunch.err file not found exception on container failure
> ---
>
> Key: YARN-7737
> URL: https://issues.apache.org/jira/browse/YARN-7737
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Keqiu Hu
>Priority: Major
> Attachments: YARN-7737.001.patch
>
>
> Hit this exception when a container failed:{noformat}2018-01-11 19:04:08,036 
> ERROR 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
>  Failed to get tail of the container's prelaunch error log file
> java.io.FileNotFoundException: File 
> /grid/b/tmp/userlogs/application_1515190594800_1766/container_e39_1515190594800_1766_01_02/prelaunch.err
>  does not exist
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:930)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.handleContainerExitWithFailure(ContainerLaunch.java:545)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.handleContainerExitCode(ContainerLaunch.java:511)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:319)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:93)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745){noformat}
> containerLogDir is picked on container launch via 
> {{LocalDirAllocator#getLocalPathForWrite}}, which is where it looks for 
> {{prelaunch.err}} when the container fails. But prelaunch.err (and 
> prelaunch.out) are created in the first log dir (in {{ContainerLaunch#call}}: 
> {noformat}exec.writeLaunchEnv(containerScriptOutStream, environment,
> localResources, launchContext.getCommands(),
> new Path(containerLogDirs.get(0)), user);{noformat}
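
The mismatch suggests reading the tail from the same first log dir that 
{{writeLaunchEnv}} wrote to; a sketch of the direction only (the class, method 
and file name below are hypothetical, not the attached patch):
{code:java}
import java.util.List;
import org.apache.hadoop.fs.Path;

class PrelaunchLogLocator {
  // Resolve prelaunch.err against the first log dir, where
  // ContainerLaunch#call actually creates it, instead of re-resolving
  // through LocalDirAllocator#getLocalPathForWrite.
  static Path prelaunchErr(List<String> containerLogDirs) {
    return new Path(containerLogDirs.get(0), "prelaunch.err");
  }
}
{code}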



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332604#comment-16332604
 ] 

Haibo Chen commented on YARN-5094:
--

Thanks [~rohithsharma] for the review. Will commit it shortly.

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332598#comment-16332598
 ] 

Rohith Sharma K S commented on YARN-5094:
-

I cross-confirmed that it only happens for NM-published events. All the 
entities published by the RM are associated with proper timestamps.
I will only be able to commit it tomorrow! [~haibochen] would you like to go 
ahead and commit it today?

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6384) Add configuration property to set max CPU usage when strict-resource-usage is false with cgroups

2018-01-19 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi reassigned YARN-6384:


Assignee: dengkai

> Add configuration property to set max CPU usage when strict-resource-usage is 
> false with cgroups
> 
>
> Key: YARN-6384
> URL: https://issues.apache.org/jira/browse/YARN-6384
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: dengkai
>Assignee: dengkai
>Priority: Major
> Attachments: YARN-6384-0.patch, YARN-6384-1.patch, YARN-6384-2.patch, 
> YARN-6384-3.patch, YARN-6384-4.patch, YARN-6384-5.patch
>
>
> When using cgroups on YARN, if 
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is 
> false, a user may get much more CPU time than expected based on the vcores. 
> There should be an upper limit even when resource usage is not strict, e.g. a 
> percentage by which a user can exceed what the vcores promise. I think it's 
> important in a shared cluster.
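For illustration, the kind of configuration this would enable (the second property name below is a sketch of the proposal, not a committed key):

{noformat}
<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
  <value>false</value>
</property>
<property>
  <!-- hypothetical: allow containers up to 200% of their vcore share -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.non-strict-resource-usage-percentage</name>
  <value>200</value>
</property>
{noformat}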



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332573#comment-16332573
 ] 

Rohith Sharma K S commented on YARN-5094:
-

Yes, we need this to get in. The patch looks fine to me. I am just wondering: 
is this only in the NM events, or should we modify the RM events as well?

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7745) Allow DistributedShell to take a placement specification for containers it wants to launch

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332556#comment-16332556
 ] 

Sunil G edited comment on YARN-7745 at 1/19/18 5:10 PM:


[~asuresh], is it possible to add a test case for this? I tried some sample 
commands based on this work and it looks fine.


was (Author: sunilg):
[~asuresh], is it possible to add a test case for this? Or could you share some 
of the commands you have used?

> Allow DistributedShell to take a placement specification for containers it 
> wants to launch
> --
>
> Key: YARN-7745
> URL: https://issues.apache.org/jira/browse/YARN-7745
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7745-YARN-6592.001.patch
>
>
> This is to add a '-placement_spec' option to the distributed shell client, 
> where the user can specify a stringified specification for how they want 
> containers to be placed.
> For example:
> {noformat}
> $ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
> $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
>  -shell_command sleep -shell_args 10 -placement_spec 
> {noformat}
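For illustration, a possible concrete invocation; the spec value below is a plausible sketch of a source=count,type,scope,target form, not the committed grammar:

{noformat}
$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
  $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
  -shell_command sleep -shell_args 10 \
  -placement_spec "zk=3,NOTIN,NODE,zk"
{noformat}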



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-19 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6648:
---
Attachment: YARN-6648-YARN-7402.v3.patch

> [GPG] Add SubClusterCleaner in Global Policy Generator
> --
>
> Key: YARN-6648
> URL: https://issues.apache.org/jira/browse/YARN-6648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-6648-YARN-2915.v1.patch, 
> YARN-6648-YARN-7402.v2.patch, YARN-6648-YARN-7402.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7745) Allow DistributedShell to take a placement specification for containers it wants to launch

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332556#comment-16332556
 ] 

Sunil G commented on YARN-7745:
---

[~asuresh], is it possible to add a test case for this? Or could you share some 
of the commands you have used?

> Allow DistributedShell to take a placement specification for containers it 
> wants to launch
> --
>
> Key: YARN-7745
> URL: https://issues.apache.org/jira/browse/YARN-7745
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Fix For: YARN-6592
>
> Attachments: YARN-7745-YARN-6592.001.patch
>
>
> This is to add a '-placement_spec' option to the distributed shell client, 
> where the user can specify a stringified specification for how they want 
> containers to be placed.
> For example:
> {noformat}
> $ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar \
> $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
>  -shell_command sleep -shell_args 10 -placement_spec 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-19 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6648:
---
Attachment: YARN-7102.v9.patch

> [GPG] Add SubClusterCleaner in Global Policy Generator
> --
>
> Key: YARN-6648
> URL: https://issues.apache.org/jira/browse/YARN-6648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-6648-YARN-2915.v1.patch, 
> YARN-6648-YARN-7402.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6648) [GPG] Add SubClusterCleaner in Global Policy Generator

2018-01-19 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6648:
---
Attachment: (was: YARN-7102.v9.patch)

> [GPG] Add SubClusterCleaner in Global Policy Generator
> --
>
> Key: YARN-6648
> URL: https://issues.apache.org/jira/browse/YARN-6648
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-6648-YARN-2915.v1.patch, 
> YARN-6648-YARN-7402.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7723) Avoid using docker volume --format option to compatible to older docker releases

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332547#comment-16332547
 ] 

Sunil G commented on YARN-7723:
---

This patch looks fine to me for working with older Docker versions. [~ebadger], 
if you don't have any more comments, I could commit this tomorrow. Thank you.

> Avoid using docker volume --format option to compatible to older docker 
> releases
> 
>
> Key: YARN-7723
> URL: https://issues.apache.org/jira/browse/YARN-7723
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-7723.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332545#comment-16332545
 ] 

Sunil G commented on YARN-7763:
---

[~cheersyang], in {{canSatisfyConstraints}} we can see that the constraint is 
pulled from the request/app/global level. Once different levels come into play, 
it gets even more complex. This method is suitable for the algorithm/allocator; 
however, I think the *constraint* could be resolved in *pcm* rather than in a 
util. That way, when a policy comes along to support different levels, we could 
handle it better from *pcm*.

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch, YARN-7763-YARN-6592.005.patch, 
> YARN-7763-YARN-6592.006.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7677) HADOOP_CONF_DIR should not be automatically put in task environment

2018-01-19 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger reassigned YARN-7677:
-

Assignee: Jim Brennan  (was: Eric Badger)

> HADOOP_CONF_DIR should not be automatically put in task environment
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Jim Brennan
>Priority: Major
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 
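For context, a sketch of the intended behavior: with the existing NM environment whitelist configured as below (the value deliberately omits HADOOP_CONF_DIR), a Docker image's own HADOOP_CONF_DIR should survive, which is not the case today:

{noformat}
<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_YARN_HOME,PATH,LANG,TZ</value>
</property>
{noformat}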



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332482#comment-16332482
 ] 

Haibo Chen edited comment on YARN-5094 at 1/19/18 4:13 PM:
---

+1 on fixing it. [~rohithsharma] [~sunilg] [~vrushalic] [~varun_saxena] want to 
take a look at the latest patch?


was (Author: haibochen):
+1 on fixing it. [~rohithsharma] [~sunilg] [~vrushalic] [~varun_saxena] want to 
take a look at the latest patch.

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332482#comment-16332482
 ] 

Haibo Chen commented on YARN-5094:
--

+1 on fixing it. [~rohithsharma] [~sunilg] [~vrushalic] [~varun_saxena] want to 
take a look at the latest patch.

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7516) Security check for untrusted docker image

2018-01-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332477#comment-16332477
 ] 

Eric Badger commented on YARN-7516:
---

Yes, sorry for the delay, [~eyang]. I recently found out I have carpal tunnel 
in my right hand and so my new brace has severely limited my productivity this 
week. I will try to get to this today.

> Security check for untrusted docker image
> -
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch, YARN-7516.010.patch, YARN-7516.011.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from Docker Hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, enforcing uid:gid 
> consistency between the docker container and the distributed file system.  
> There is a cloud use case for having the ability to run untrusted docker 
> images on the same cluster for testing.  
> The basic requirement for an untrusted container is to ensure all kernel and 
> root privileges are dropped, and that there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted docker images by checking the following:
> # If the docker image is from a public Docker Hub repository, the container is 
> automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in Docker Hub, and a white 
> list allows the private repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
> # If the docker image is from a private trusted registry, with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted registry, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.
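A sketch of how such a whitelist could look on the node (the key name is an assumption for illustration, not necessarily the final one):

{noformat}
# container-executor.cfg
[docker]
  # hypothetical trusted-registry whitelist; images from any other source
  # would be treated as untrusted (no volume mounts, capabilities dropped)
  docker.trusted.registries=private.registry.local:5000
{noformat}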



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4022) queue not remove from webpage(/cluster/scheduler) when delete queue in xxx-scheduler.xml

2018-01-19 Thread Szilard Nemeth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-4022:


Assignee: Szilard Nemeth

> queue not remove from webpage(/cluster/scheduler) when delete queue in 
> xxx-scheduler.xml
> 
>
> Key: YARN-4022
> URL: https://issues.apache.org/jira/browse/YARN-4022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: forrestchen
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: oct16-medium, scheduler
> Attachments: YARN-4022.001.patch, YARN-4022.002.patch, 
> YARN-4022.003.patch, YARN-4022.004.patch
>
>
> When I delete an existing queue by modifying the xxx-scheduler.xml, I can 
> still see the queue's information block in the web page (/cluster/scheduler), 
> though the 'Min Resources' items all become zero and there is no 'Max Running 
> Applications' item.
> I can still submit an application to the deleted queue and the application 
> will run in the 'root.default' queue instead, but submitting to a queue that 
> never existed will cause an exception.
> My expectation is that the deleted queue will not be displayed in the web 
> page and that submitting an application to the deleted queue will act just as 
> if the queue doesn't exist.
> PS: There's no application running in the queue I delete.
> Some related config in yarn-site.xml:
> {code}
> <property>
>   <name>yarn.scheduler.fair.user-as-default-queue</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.scheduler.fair.allow-undeclared-pools</name>
>   <value>false</value>
> </property>
> {code}
> a related question is here: 
> http://stackoverflow.com/questions/26488564/hadoop-yarn-why-the-queue-cannot-be-deleted-after-i-revise-my-fair-scheduler-xm
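For illustration, the "delete" described above amounts to removing a queue element from the allocation file, e.g. (queue name hypothetical):

{noformat}
<!-- fair-scheduler.xml: removing this element is the "delete"; the RM web
     UI nevertheless keeps showing the queue -->
<allocations>
  <queue name="deleted_queue">
    <minResources>1024 mb,1 vcores</minResources>
  </queue>
</allocations>
{noformat}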



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-19 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332356#comment-16332356
 ] 

Billie Rinaldi commented on YARN-:
--

I don't think patch 01 will be sufficient, since RegistryDNS uses the ZK path 
to create the hostname. I think the replacement should be performed in 
RegistryUtils.currentUser().
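A minimal sketch of that direction, assuming the substitution happens where the registry derives the short user name (the helper below is illustrative, not the actual RegistryUtils code):

{code:java}
// Hypothetical helper along the lines suggested for
// RegistryUtils.currentUser(): make the short user name DNS-safe before
// it becomes part of the ZK path and hence the RegistryDNS hostname.
static String dnsSafeUser(String shortUserName) {
  // DNS labels may not contain '_', so map it to '-'
  return shortUserName.replace('_', '-');
}
{code}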

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch
>
>
> user names that contain "\_" should have it converted to "-", because DNS 
> names don't allow "_"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7770) Support for setting application priority for Distributed Shell jobs

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332351#comment-16332351
 ] 

Sunil G commented on YARN-7770:
---

[~charanh], DS has an option called *--priority*; could you please check that? 
It was added earlier to support application priority from DS. Please help 
verify the same.
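For reference, a possible invocation using that option (flag spelling as in the comment above; the priority value is an example):

{noformat}
yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
  -jar $YARN_DS/hadoop-yarn-applications-distributedshell-$YARN_VERSION.jar \
  -shell_command sleep -shell_args 10 --priority 10
{noformat}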

> Support for setting application priority for Distributed Shell jobs
> ---
>
> Key: YARN-7770
> URL: https://issues.apache.org/jira/browse/YARN-7770
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications/distributed-shell
>Reporter: Charan Hebri
>Assignee: Sunil G
>Priority: Major
>
> Currently there isn't a way to submit a Distributed Shell job with an 
> application priority like how it is done via the property
> {noformat}
> mapred.job.priority{noformat}
> for MapReduce jobs. Creating this issue to track support for the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332158#comment-16332158
 ] 

genericqa commented on YARN-7537:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7537 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900150/YARN-7537.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 3565e43759ee 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/pro

[jira] [Commented] (YARN-7139) FairScheduler: finished applications are always restored to default queue

2018-01-19 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332146#comment-16332146
 ] 

Wilfred Spiegelenburg commented on YARN-7139:
-

Thank you [~snemeth] for the review and [~miklos.szeg...@cloudera.com] for the 
review and check-in.

> FairScheduler: finished applications are always restored to default queue
> -
>
> Key: YARN-7139
> URL: https://issues.apache.org/jira/browse/YARN-7139
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.1
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7139.01.patch, YARN-7139.02.patch, 
> YARN-7139.03.patch, YARN-7139.04.patch
>
>
> The queue an application gets submitted to is defined by the placement policy 
> in the FS. The placement policy returns the queue, and the application object 
> is updated. When an application is stored in the state store, the application 
> submission context is used, which has not been updated after the placement 
> rules have run. 
> This means that the original queue from the submission is still stored, which 
> is the incorrect queue. On restore we then read back the wrong queue and 
> display the wrong queue in the RM web UI.
> We should update the submission context after we have run the placement 
> policies to make sure that we store the correct queue for the application.
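A minimal sketch of the proposed fix, with assumed method names (the actual FS placement API may differ):

{code:java}
// Hypothetical: after the placement policy resolves the real queue, write
// it back into the ApplicationSubmissionContext so the state store
// persists the placed queue instead of the originally requested one.
String placedQueue = placementPolicy.assignAppToQueue(
    submissionContext.getQueue(), user);
submissionContext.setQueue(placedQueue);
// the state store later serializes submissionContext with the right queue
{code}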



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332140#comment-16332140
 ] 

genericqa commented on YARN-7763:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 111 unchanged - 19 fixed = 111 total (was 130) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
33s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7763 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906793/YARN-7763-YARN-6592.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06dd5af72efd 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 27fa101 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19350/testReport/ |
| Max. process+thread count | 894 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/Pre

[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332103#comment-16332103
 ] 

genericqa commented on YARN-7763:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 111 unchanged - 19 fixed = 112 total (was 130) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7763 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906782/YARN-7763-YARN-6592.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c40d50d7440 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 27fa101 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19348/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19348/ar

[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332096#comment-16332096
 ] 

Hudson commented on YARN-7753:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13521 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13521/])
YARN-7753. [UI2] Application logs has to be pulled from ATS 1.5 instead 
(rohithsharmaks: rev c5bbd6418ed1a7b78bf5bd6c1e0fad1dc9fab300)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/services/hosts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/config/default-config.js


> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7753.001.patch, YARN-7753.002.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1, as ATS v2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max_allocation calculation

2018-01-19 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332084#comment-16332084
 ] 

Szilard Nemeth commented on YARN-7528:
--

Hey [~gsohn]! 

Sure, I will discuss this one with Daniel.

Thanks for your help so far!

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max_allocation calculation
> 
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.
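For illustration, the mismatch that triggers the overflow (property names follow the resource-types configuration scheme; "resource1" is hypothetical): if the NM defines a small unit as below but the RM declares the resource without one, converting the RM-side Long.MAX_VALUE default maximum into the smaller unit overflows.

{noformat}
<!-- resource-types.xml on the NM: unit defined -->
<property>
  <name>yarn.resource-types</name>
  <value>resource1</value>
</property>
<property>
  <name>yarn.resource-types.resource1.units</name>
  <value>m</value>
</property>
<!-- RM side: same resource declared with no unit => overflow risk -->
{noformat}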



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7779) Display allocation tags in RM web UI and expose via REST API

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332033#comment-16332033
 ] 

Weiwei Yang commented on YARN-7779:
---

Hi [~kkaranasos], [~asuresh]

If this makes sense to you, I can help to submit a patch.

Thanks

> Display allocation tags in RM web UI and expose via REST API
> 
>
> Key: YARN-7779
> URL: https://issues.apache.org/jira/browse/YARN-7779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: RM
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> Propose to display node allocation tags on the RM. This will allow users to 
> check allocations w.r.t. the tags. It would be good to expose node allocation 
> tags from:  
>  * Web UI: {{http:///cluster/nodes}}
>  * REST API: {{http:///ws/v1/cluster/nodes}}, 
> {{http:///ws/v1/cluster/node/}}
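For illustration, a possible response fragment (field name and shape are a sketch, not a committed schema):

{noformat}
GET http://<rm-address>/ws/v1/cluster/nodes
{
  "nodeInfo": {
    "id": "host1.example.com:45454",
    "allocationTags": { "hbase-master": 1, "zookeeper": 2 }
  }
}
{noformat}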



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7779) Display allocation tags in RM web UI and expose via REST API

2018-01-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7779:
-

 Summary: Display allocation tags in RM web UI and expose via REST 
API
 Key: YARN-7779
 URL: https://issues.apache.org/jira/browse/YARN-7779
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: RM
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Propose to display node allocation tags on the RM. This will allow users to 
check allocations w.r.t. the tags. It would be good to expose node allocation 
tags from:  
 * Web UI: {{http:///cluster/nodes}}
 * REST API: {{http:///ws/v1/cluster/nodes}}, 
{{http:///ws/v1/cluster/node/}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332018#comment-16332018
 ] 

Weiwei Yang commented on YARN-7763:
---

Thanks [~kkaranasos]. I uploaded a v6 patch that fixes the position of the 
instance check; it also fixes a minor comment from the v5 patch to avoid 
confusion.

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch, YARN-7763-YARN-6592.005.patch, 
> YARN-7763-YARN-6592.006.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7763:
--
Attachment: YARN-7763-YARN-6592.006.patch

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch, YARN-7763-YARN-6592.005.patch, 
> YARN-7763-YARN-6592.006.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332012#comment-16332012
 ] 

genericqa commented on YARN-7753:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7753 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906787/YARN-7753.002.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux d0f31cde50ca 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9e4f52d |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 410 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19349/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch, YARN-7753.002.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1, as ATS v2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7763:
--
Attachment: YARN-7763-YARN-6592.005.patch

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch, YARN-7763-YARN-6592.005.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331994#comment-16331994
 ] 

genericqa commented on YARN-5094:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 39s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
5s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5094 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871021/YARN-5094.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e968668aff69 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9e4f52d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19347/testReport/ |
| Max. process+thread count | 408 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19347/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> some YARN container events have timestamp of -1
> 

[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331990#comment-16331990
 ] 

Rohith Sharma K S commented on YARN-7753:
-

+1, LGTM, pending Jenkins.

> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch, YARN-7753.002.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1.5, as ATSv2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7762) ATS uses timeline service config to identify local hostname

2018-01-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331984#comment-16331984
 ] 

Rohith Sharma K S commented on YARN-7762:
-

It is done intentionally, to perform the secure login from the configured 
hostname rather than from the local host. In the case of VIP machines, doing it 
as the patch does would disregard the VIP address. This reverts YARN-1590; 
please take a look at it.

> ATS uses timeline service config to identify local hostname
> ---
>
> Key: YARN-7762
> URL: https://issues.apache.org/jira/browse/YARN-7762
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: NITHIN MAHESH
>Priority: Major
> Attachments: YARN-7762.patch
>
>
> In ApplicationHistoryServer.doSecureLogin(), the local hostname is obtained by 
> calling getBindAddress(), which returns the hostname defined by the config 
> yarn.timeline-service.address. This is a bug; doSecureLogin should get the 
> local hostname directly instead.
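
A hedged sketch of the two approaches under discussion. The config keys and the 
{{Configuration}}/{{SecurityUtil}} calls below are standard Hadoop APIs, but 
the real ApplicationHistoryServer code differs in detail:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;

public class SecureLoginSketch {
  static void doSecureLogin(Configuration conf) throws Exception {
    // Current behavior (kept by YARN-1590): derive the login hostname from
    // the configured yarn.timeline-service.address, which preserves a VIP
    // hostname when one is configured.
    String host = conf.getSocketAddr("yarn.timeline-service.address",
        "0.0.0.0:10200", 10200).getHostName();
    // Behavior proposed in the attached patch: take the local hostname
    // directly, which would disregard a configured VIP address.
    // String host = java.net.InetAddress.getLocalHost().getHostName();
    SecurityUtil.login(conf, "yarn.timeline-service.keytab",
        "yarn.timeline-service.principal", host);
  }
}
{code}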



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331979#comment-16331979
 ] 

Konstantinos Karanasos commented on YARN-7763:
--

Thanks [~cheersyang]. Only one minor thing: the instanceof check has to be done 
after we call the transformer. If we do it before, the constraint might be a 
CardinalityConstraint or a TargetConstraint, both of which get transformed into 
a SingleConstraint. So we have to do the check right before the cast. Sorry for 
not clarifying it properly.
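
A short sketch of that ordering, assuming {{constraint}} is the 
{{PlacementConstraint}} in scope and using the transformer classes from the 
YARN-6592 branch (exact signatures may differ):

{code:java}
// Transform first: CardinalityConstraint and TargetConstraint are rewritten
// into SingleConstraint form by the transformer.
PlacementConstraint transformed =
    new PlacementConstraintTransformations.SingleConstraintTransformer(
        constraint).transform();
// Only after the transformation are the instanceof check, and the cast
// right after it, safe to perform.
if (transformed.getConstraintExpr() instanceof SingleConstraint) {
  SingleConstraint single =
      (SingleConstraint) transformed.getConstraintExpr();
  // ... match the constraint against the candidate node
}
{code}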

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-19 Thread Vasudevan Skm (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331970#comment-16331970
 ] 

Vasudevan Skm commented on YARN-7753:
-

Looks good. +1

> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch, YARN-7753.002.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1.5, as ATSv2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331969#comment-16331969
 ] 

Konstantinos Karanasos commented on YARN-7774:
--

Thanks [~asuresh] for the patch. Looks good overall; some comments:
 * Do we really need the CircularIterator? It seems to me that you could have a 
normal iterator initialized outside the for loop and re-initialize it each time 
hasNext() returns false (see the sketch after this list). But maybe I am 
missing something.
 * Regarding what [~cheersyang] mentioned about the algorithm not being 
affinity-friendly: you could check whether the constraint has minCardinality > 0 
and scope = NODE, and keep the iterator at the same place in such a case. But if 
you feel that is an over-optimization for the time being, I am fine tackling it 
in another JIRA. Up to you.
 * Do we clean up the blacklist for each tag? It seems that blacklisting can 
change based on the allocations that have been done so far, so we might need to 
use it carefully.
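
As a sketch of the first point, with plain strings standing in for the 
scheduler's node and request types, and assuming {{nodes}} and {{requests}} are 
{{java.util.List<String>}} instances already in scope:

{code:java}
// Re-initialize a plain java.util.Iterator whenever it is exhausted,
// instead of introducing a dedicated CircularIterator. Illustrative only.
Iterator<String> it = nodes.iterator();
for (String request : requests) {
  if (!it.hasNext()) {
    it = nodes.iterator(); // wrap around by re-initializing
  }
  String node = it.next();
  // ... attempt to place `request` on `node`
}
{code}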

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator OR 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> there is a bias against placing too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially when constraints will only be satisfied after a subsequent request 
> has been placed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331959#comment-16331959
 ] 

Sunil G commented on YARN-7753:
---

Updated the patch after fixing the v1 address issue. The log view now works for 
both running and finished containers. I tested it locally.

cc/ [~rohithsharma], please review the same.

> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch, YARN-7753.002.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1.5, as ATSv2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7753) [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2

2018-01-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7753:
--
Attachment: YARN-7753.002.patch

> [UI2] Application logs has to be pulled from ATS 1.5 instead of ATS2
> 
>
> Key: YARN-7753
> URL: https://issues.apache.org/jira/browse/YARN-7753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7753.001.patch, YARN-7753.002.patch
>
>
> Currently the UI tries to pull logs from ATS v2. Instead, they should be 
> pulled from ATS v1.5, as ATSv2 doesn't have a log story yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331941#comment-16331941
 ] 

Weiwei Yang commented on YARN-7763:
---

Thanks [~kkaranasos], I just uploaded the v4 patch to address your suggestions.

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7763:
--
Attachment: YARN-7763-YARN-6592.004.patch

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch, 
> YARN-7763-YARN-6592.004.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331924#comment-16331924
 ] 

genericqa commented on YARN-7766:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 345 unchanged - 2 fixed = 348 total (was 347) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m 
39s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
30s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7766 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906772/YARN-7766.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8a2b5335b2e8 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9e4f52d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YA

[jira] [Commented] (YARN-7763) Allow Constraints specified in the SchedulingRequest to override application level constraints

2018-01-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331917#comment-16331917
 ] 

Konstantinos Karanasos commented on YARN-7763:
--

Thanks [~cheersyang].
{quote}I did not do that because I think we should have something better than 
{{instanceof}} to tell which constraint we are dealing with. E.g., would a 
{{getType}} be possible?
{quote}
Without the instanceof, though, the cast will throw an exception in case the 
user adds a composite constraint.

We could add a getType later if we see that we have constraint types other than 
these two to deal with.
{quote}We need to define the behavior for how we merge constraints when there 
are several of them; we can have more discussion in a follow-up JIRA.
{quote}
Agreed. I just filed YARN-7778 so that we can do it later. Could you please add 
a TODO when you deal with the different levels here, mentioning that JIRA, so 
that we don't forget to perform the merging?

+1 otherwise.

> Allow Constraints specified in the SchedulingRequest to override application 
> level constraints
> --
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-7763-YARN-6592.001.patch, 
> YARN-7763-YARN-6592.002.patch, YARN-7763-YARN-6592.003.patch
>
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = 
> PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7778) Merging of constraints defined at different levels

2018-01-19 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-7778:


 Summary: Merging of constraints defined at different levels
 Key: YARN-7778
 URL: https://issues.apache.org/jira/browse/YARN-7778
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Konstantinos Karanasos


When we have multiple constraints defined for a given set of allocation tags at 
different levels (i.e., at the cluster, application, or scheduling-request 
level), we need to merge those constraints.

Defining the constraint levels as cluster > application > scheduling request, 
constraints defined at lower levels may only be more restrictive than those of 
higher levels; otherwise the allocation should fail.

For example, if an application-level constraint allows no more than 5 HBase 
containers per rack, a scheduling request can further restrict that to 3 
containers per rack, but not relax it to 7 containers per rack.
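
A minimal sketch of that rule for a max-cardinality constraint (hypothetical 
helper, not part of YARN):

{code:java}
public final class ConstraintMergeSketch {
  // A lower-level maximum may only tighten a higher-level one, never relax it.
  static int mergeMaxCardinality(int clusterMax, int appMax, int requestMax) {
    // Levels: cluster > application > scheduling request.
    if (appMax > clusterMax || requestMax > appMax) {
      throw new IllegalArgumentException(
          "lower-level constraint is less restrictive; allocation must fail");
    }
    return requestMax; // the most restrictive level wins
  }

  public static void main(String[] args) {
    // The example above: 5 HBase containers per rack at the application level
    // can be tightened to 3 by a scheduling request...
    System.out.println(mergeMaxCardinality(Integer.MAX_VALUE, 5, 3)); // 3
    // ...but relaxing it to 7 would fail:
    // mergeMaxCardinality(Integer.MAX_VALUE, 5, 7) throws.
  }
}
{code}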



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331894#comment-16331894
 ] 

Sunil G commented on YARN-5094:
---

The container finished time comes back as -1 when the new UI pulls container 
data from ATS v2. This needs to be fixed.

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have a timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?
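
A hedged sketch of where such a fix would likely land on the NM side; the 
actual publisher code (e.g., NMTimelinePublisher) differs in detail:

{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;

public class NmEventSketch {
  static TimelineEvent containerCreatedEvent() {
    TimelineEvent event = new TimelineEvent();
    event.setId("YARN_CONTAINER_CREATED");
    // Without an explicit timestamp, the event can surface with -1 in the
    // REST output, as in the JSON above.
    event.setTimestamp(System.currentTimeMillis());
    return event;
  }
}
{code}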



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331894#comment-16331894
 ] 

Sunil G edited comment on YARN-5094 at 1/19/18 8:25 AM:


The container finished time comes back as -1 when the new UI pulls container 
data from ATS v2. This needs to be fixed.

cc/ [~rohithsharma] [~haibochen]


was (Author: sunilg):
The container finished time comes back as -1 when the new UI pulls container 
data from ATS v2. This needs to be fixed.

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have a timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5094:
--
Affects Version/s: (was: YARN-2928)
   2.9.0
   3.0.0

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have a timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5094) some YARN container events have timestamp of -1

2018-01-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5094:
--
Priority: Critical  (was: Major)

> some YARN container events have timestamp of -1
> ---
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Critical
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch, YARN-5094.00.patch, 
> YARN-5094.02.patch
>
>
> Some events in the YARN container entities have a timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7774) Miscellaneous fixes to the PlacementProcessor

2018-01-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331868#comment-16331868
 ] 

Weiwei Yang commented on YARN-7774:
---

Hi [~asuresh]

In {{DefaultPlacementAlgorithm}}, for each SchedulingRequest, we iterate over 
the available nodes; each time we attempt one allocation on a node and then 
move on to the next node. Imagine the request asks for 2 allocations with 
affinity to the same node: does that mean the second allocation can only be 
made after we iterate over all the nodes again? This algorithm doesn't seem to 
be affinity-friendly.

{{CircularIterator}} looks general enough to deserve its own class with a 
generic type parameter; it could be moved to a common package. It would also be 
good to add some test cases for it. This doesn't have to be done in this patch, 
just a suggestion.

Hope this helps. Thanks
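
A sketch of such a generic, reusable class (illustrative; the version in the 
patch may differ):

{code:java}
import java.util.Iterator;
import java.util.NoSuchElementException;

// Restarts from the beginning of the underlying Iterable whenever the
// current pass is exhausted; never ends while the source is non-empty.
public class CircularIterator<T> implements Iterator<T> {
  private final Iterable<T> source;
  private Iterator<T> it;

  public CircularIterator(Iterable<T> source) {
    this.source = source;
    this.it = source.iterator();
  }

  @Override
  public boolean hasNext() {
    if (!it.hasNext()) {
      it = source.iterator(); // wrap around
    }
    return it.hasNext();
  }

  @Override
  public T next() {
    if (!hasNext()) {
      throw new NoSuchElementException("empty source");
    }
    return it.next();
  }
}
{code}

A caller still needs its own termination condition (for example, stop after one 
full pass with no successful placement), which is presumably what the version 
in the patch tracks.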

> Miscellaneous fixes to the PlacementProcessor
> -
>
> Key: YARN-7774
> URL: https://issues.apache.org/jira/browse/YARN-7774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7774-YARN-6592.001.patch, 
> YARN-7774-YARN-6592.002.patch, YARN-7774-YARN-6592.003.patch, 
> YARN-7774-YARN-6592.004.patch
>
>
> JIRA to track the following minor changes:
> * The scheduler must normalize requests that are made using the 
> {{attemptAllocationOnNode}} method.
> * Currently, the placement algorithm resets the node iterator for each 
> request. The placement algorithm should either shuffle the node iterator OR 
> use a circular iterator, to ensure that a) more nodes are looked at and b) 
> there is a bias against placing too many containers on the same node.
> * Add a placement retry loop for rejected requests, since there are cases, 
> especially when constraints will only be satisfied after a subsequent request 
> has been placed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org