[jira] [Commented] (YARN-7292) Retrospect Resource Profile Behavior for overriding capability

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366684#comment-16366684
 ] 

Wangda Tan commented on YARN-7292:
--

Thanks [~sunilg], and thanks to [~templedf] for the review.

[~sunilg], could you please commit the patch to branch-3.1 as well?

> Retrospect Resource Profile Behavior for overriding capability
> --
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?
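
For item 3), here is a minimal sketch of the client-side translation approach. It assumes a hypothetical client-side profile lookup table; the G2.8X values are illustrative, and only the Resource/ResourceRequest calls are real YARN APIs:

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ProfileTranslationSketch {
  public static void main(String[] args) {
    // Hypothetical client-side profile table; the G2.8X values (60 GB of
    // memory, 32 vcores) are illustrative, not authoritative.
    Map<String, Resource> profiles = new HashMap<>();
    profiles.put("G2.8X", Resource.newInstance(61440, 32));

    // The client/AM resolves the profile name to a concrete Resource and sets
    // it on the existing ResourceRequest fields; the RM never sees the name.
    Resource capability = profiles.get("G2.8X");
    ResourceRequest request = ResourceRequest.newInstance(
        Priority.newInstance(1), ResourceRequest.ANY, capability, 1);
    System.out.println("Requesting: " + request);
  }
}
{code}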






[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366683#comment-16366683
 ] 

Wangda Tan commented on YARN-7732:
--

Thanks [~youchen], in general the patch looks good. I'd like to do some basic 
validations using old traces by next Monday. If I don't get back by then, 
please feel free to commit the patch to trunk. Also, is there any 
compatibility issue for syn.json? I saw the description says:
{quote}See syn_generic.json for an equivalent of the previous syn.json in the 
new format.
{quote}
cc: [~curino]

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch
>
>
> Extract the MapReduce-specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set-up in SLSRunner had the MRAMSimulator type hard-coded; 
> for example, startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce".
> The container set-up was also only suitable for MapReduce: 
>  
> {code:java}
> // from SLSRunner.java:
> // https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only MapReduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which 
> simulates a long-running streaming service that maintains N containers for 
> the lifetime of the AM. See syn_stream.json.
>  
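
For illustration, here is a sketch of what pluggable AM-simulator dispatch might look like, keyed by the job type string from the trace. The AMSimulatorFactory interface and registry are hypothetical stand-ins, not the patch's actual API:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: simulators are registered per job type instead of
// hard-coding SLSUtils.DEFAULT_JOB_TYPE ("mapreduce").
interface AMSimulatorFactory {
  Runnable create(String user, String queue, long startMs, long finishMs);
}

public class SimulatorRegistrySketch {
  private static final Map<String, AMSimulatorFactory> REGISTRY =
      new HashMap<>();

  public static void register(String jobType, AMSimulatorFactory factory) {
    REGISTRY.put(jobType, factory);
  }

  public static Runnable startAM(String jobType, String user, String queue,
      long startMs, long finishMs) {
    AMSimulatorFactory factory = REGISTRY.get(jobType);
    if (factory == null) {
      throw new IllegalArgumentException("No simulator for type " + jobType);
    }
    return factory.create(user, queue, startMs, finishMs);
  }
}
{code}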






[jira] [Commented] (YARN-7292) Retrospect Resource Profile Behavior for overriding capability

2018-02-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366673#comment-16366673
 ] 

Hudson commented on YARN-7292:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13668 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13668/])
YARN-7292. Retrospect Resource Profile Behavior for overriding (sunilg: rev 
aae629913cee0157c945a2c7384c7bf398f10616)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerWithMultiResourceTypes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestOpportunisticContainerAllocationE2E.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ProfileCapability.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientContainerRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/RemoteRequestsTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/resource/MockResourceProfileManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestProfileCapability.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProfileCapabilityPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto


> Retrospect Resource Profile Behavior for overriding capability
> --
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?

[jira] [Updated] (YARN-7292) Retrospect Resource Profile Behavior for overriding capability

2018-02-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7292:
--
Summary: Retrospect Resource Profile Behavior for overriding capability  
(was: Revisit Resource Profile Behavior)

> Retrospect Resource Profile Behavior for overriding capability
> --
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?






[jira] [Commented] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366655#comment-16366655
 ] 

genericqa commented on YARN-7599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
43s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
38s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 24s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 87 unchanged - 
0 fixed = 90 total (was 87) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 230 unchanged - 0 fixed = 246 total (was 230) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  

[jira] [Updated] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-02-15 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7599:
---
Attachment: YARN-7599-YARN-7402.v1.patch

> [GPG] ApplicationCleaner in Global Policy Generator
> ---
>
> Key: YARN-7599
> URL: https://issues.apache.org/jira/browse/YARN-7599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-7599-YARN-7402.v1.patch
>
>
> In Federation, we need a cleanup service for the StateStore as well as the 
> YARN Registry. For the former, we need to remove old application records. For 
> the latter, failed and killed applications might leave records in the YARN 
> Registry (see YARN-6128). We plan to do both kinds of cleanup in the 
> ApplicationCleaner in the GPG.
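
As a rough illustration, a periodic cleaner along these lines might be scheduled as below; the interval, method body, and class name are assumptions for illustration only, not the patch's actual design:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of a periodic ApplicationCleaner runnable.
public class ApplicationCleanerSketch implements Runnable {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void start(long intervalHours) {
    scheduler.scheduleAtFixedRate(this, 0, intervalHours, TimeUnit.HOURS);
  }

  @Override
  public void run() {
    // 1. List application records in the FederationStateStore.
    // 2. Remove records of applications no longer known to any RM.
    // 3. Remove YARN Registry entries left behind by failed/killed apps.
  }
}
{code}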






[jira] [Commented] (YARN-5015) Unify restart policies across AM and container restarts

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366598#comment-16366598
 ] 

genericqa commented on YARN-5015:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 440 unchanged - 0 fixed = 441 total (was 440) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
26s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
20s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 40s{color} | 
{color:black} 

[jira] [Commented] (YARN-7799) YARN Service dependency follow up work

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366582#comment-16366582
 ] 

Wangda Tan commented on YARN-7799:
--

[~gsaha], do you plan to finish this patch soon, or is it fine to move it to 
3.2.0?

> YARN Service dependency follow up work
> --
>
> Key: YARN-7799
> URL: https://issues.apache.org/jira/browse/YARN-7799
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, resourcemanager
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Critical
>
> As per [~jianhe], these are some follow-up items that make sense to do after 
> YARN-7766. Quoting Jian's comment below -
> Currently, if the user doesn't supply a location when running yarn app 
> -enableFastLaunch, the jars will be put under this location:
> {code}
> hdfs:///yarn-services//service-dep.tar.gz
> {code}
> Since the API server is embedded in the RM, should the RM look for this 
> location too if "yarn.service.framework.path" is not specified?
> And if "yarn.service.framework.path" is not specified and the file still 
> doesn't exist at the above default location, I think the RM can try to upload 
> the jars to the above default location instead. Currently the RM uploads the 
> jars to the location defined by the code below. This folder is per-app and 
> also inconsistent with the CLI location.
> {code}
>   protected Path addJarResource(String serviceName,
>       Map<String, LocalResource> localResources)
>       throws IOException, SliderException {
>     Path libPath = fs.buildClusterDirPath(serviceName);
> {code}
> By doing this, the next time a submission request comes, the RM doesn't need 
> to upload the jars again.
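
A minimal sketch of the proposed fallback is below. The default path and method name are hypothetical; only the Configuration/FileSystem calls are real Hadoop APIs:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ServiceDepUploadSketch {
  // If yarn.service.framework.path is unset and the default location is
  // empty, upload the dependency tarball there once so that later
  // submissions can reuse it. The default path string is hypothetical.
  static Path ensureServiceDeps(Configuration conf, Path localTarball)
      throws IOException {
    String configured = conf.get("yarn.service.framework.path");
    if (configured != null) {
      return new Path(configured);
    }
    FileSystem fs = FileSystem.get(conf);
    Path defaultLocation = new Path("/yarn-services/service-dep.tar.gz");
    if (!fs.exists(defaultLocation)) {
      fs.copyFromLocalFile(localTarball, defaultLocation);
    }
    return defaultLocation;
  }
}
{code}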






[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366573#comment-16366573
 ] 

genericqa commented on YARN-7934:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 34 unchanged - 2 fixed = 34 total (was 36) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
28s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7934 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910848/YARN-7934.v3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05af080ffd7a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8013475 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/19717/artifact/out/whitespace-eol.txt
 |
| javadoc | 

[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366560#comment-16366560
 ] 

genericqa commented on YARN-7935:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 43s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 51 unchanged - 0 fixed = 55 total (was 51) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
44s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 55s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910849/YARN-7935.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 88927504e5c3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8013475 |
| maven | version: 

[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366559#comment-16366559
 ] 

genericqa commented on YARN-7707:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-7402 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7707 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910843/YARN-7707-YARN-7402.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366556#comment-16366556
 ] 

Sunil G commented on YARN-7292:
---

+1, committing shortly.

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?






[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366548#comment-16366548
 ] 

genericqa commented on YARN-7732:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-tools/hadoop-sls: The patch generated 0 new + 
50 unchanged - 1 fixed = 50 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
9s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910844/YARN-7732.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 361d6a34d163 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8013475 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19718/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/19718/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 467 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 

[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366545#comment-16366545
 ] 

Wangda Tan commented on YARN-7292:
--

Test failures are not related to the patch. 

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?






[jira] [Updated] (YARN-5015) Unify restart policies across AM and container restarts

2018-02-15 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-5015:

Attachment: YARN-5015.03.patch

> Unify restart policies across AM and container restarts
> ---
>
> Key: YARN-5015
> URL: https://issues.apache.org/jira/browse/YARN-5015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Chandni Singh
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5015.01.patch, YARN-5015.02.patch, 
> YARN-5015.03.patch
>
>
> We support AM restart and container restarts - however the two have slightly 
> different capabilities. We should unify them. There's no reason for them to 
> be different.






[jira] [Updated] (YARN-5015) Unify restart policies across AM and container restarts

2018-02-15 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-5015:

Attachment: (was: YARN-5015.03.patch)

> Unify restart policies across AM and container restarts
> ---
>
> Key: YARN-5015
> URL: https://issues.apache.org/jira/browse/YARN-5015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Chandni Singh
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5015.01.patch, YARN-5015.02.patch
>
>
> We support AM restart and container restarts - however the two have slightly 
> different capabilities. We should unify them. There's no reason for them to 
> be different.






[jira] [Updated] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7935:
---
Attachment: YARN-7935.1.patch

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, 
> which, when launched through the Docker runtime, is different from the 
> NodeManager's hostname (NM_HOST, which is available as an env var during 
> container launch). The container's hostname can be exposed to applications 
> via an env var CONTAINER_HOSTNAME. Another potential candidate is the 
> container's IP, but this can be addressed in a separate JIRA.
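
From the application side, consuming the proposed variable could look like the sketch below, assuming CONTAINER_HOSTNAME is set by the Docker runtime (NM_HOST is the existing env var):

{code:java}
// Illustrative sketch: prefer the proposed CONTAINER_HOSTNAME inside a
// Docker container; fall back to the existing NM_HOST otherwise.
String host = System.getenv("CONTAINER_HOSTNAME");
if (host == null || host.isEmpty()) {
  host = System.getenv("NM_HOST");
}
java.net.InetAddress bindAddr = java.net.InetAddress.getByName(host);
{code}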






[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-02-15 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366527#comment-16366527
 ] 

Young Chen commented on YARN-7707:
--

Hey [~botong], thanks for taking a look so quickly!

1/2. I'll get that fixed in the next patch.

3. That was because TestPolicyGenerator sets up an RM to check calls against an 
RM endpoint, and RMWebServices injects the timeline service into the web 
service. The Router is set up similarly.

4. Right now it extracts the queues into a mapping of queue type -> list of 
queues. Although the non-leaf queues are not used in policy generation right 
now, the extraction requires traversing the entire tree, so it seemed a cleaner 
design choice not to hard-code for leaf queues. In the future we may have 
policies generated for non-leaf queues, as well as some sort of policy 
inheritance.
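
For reference, a sketch of the traversal described in 4., with a hypothetical QueueNode type standing in for the actual scheduler info records:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: extract every queue into a map of
// queue type -> list of queue names in a single traversal of the tree.
class QueueNode {
  String name;
  List<QueueNode> children = new ArrayList<>();
  boolean isLeaf() { return children.isEmpty(); }
}

class QueueExtractorSketch {
  static Map<String, List<String>> extractQueues(QueueNode root) {
    Map<String, List<String>> byType = new HashMap<>();
    byType.put("leaf", new ArrayList<>());
    byType.put("parent", new ArrayList<>());
    visit(root, byType);
    return byType;
  }

  private static void visit(QueueNode node,
      Map<String, List<String>> byType) {
    byType.get(node.isLeaf() ? "leaf" : "parent").add(node.name);
    for (QueueNode child : node.children) {
      visit(child, byType);
    }
  }
}
{code}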

> [GPG] Policy generator framework
> 
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.






[jira] [Commented] (YARN-7707) [GPG] Policy generator framework

2018-02-15 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366518#comment-16366518
 ] 

Botong Huang commented on YARN-7707:


Thanks [~youchen] for the patch. A few comments: 
1. Please remove {{PolicyGeneratorService}} and schedule the runnable directly 
in {{PolicyGenerator}}, similar to {{SubClusterCleaner}}.
2. It is better to be able to disable {{PolicyGenerator}} from config. In fact, 
let's make the default DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS be -1. When 
the configured value is not positive, do not schedule the service at all, 
again similar to {{SubClusterCleaner}} (see the sketch after this list). 
3. Why do we need the hadoop-yarn-server-timelineservice dependency in GPG's pom 
file? 
4. In {{PolicyGeneratorService.extractQueues()}}, we are extracting all (leaf 
and non-leaf) queues. Do we need to generate policy for non-leaf queues? My 
understanding is that no applications will be running in non-leaf queues. 
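A minimal sketch of the scheduling behavior suggested in point 2. Only the DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS name comes from the comment above; the config key and wiring are illustrative assumptions.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

class PolicyGeneratorScheduling {
  // Hypothetical key; a non-positive interval means "do not schedule at all".
  static final String GPG_POLICY_GENERATOR_INTERVAL_MS =
      "yarn.federation.gpg.policy.generator.interval-ms";
  static final long DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS = -1;

  static void maybeSchedule(Configuration conf, Runnable policyGenerator) {
    long intervalMs = conf.getLong(GPG_POLICY_GENERATOR_INTERVAL_MS,
        DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS);
    if (intervalMs <= 0) {
      return; // disabled by config, same pattern as SubClusterCleaner
    }
    ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(policyGenerator, 0, intervalMs,
        TimeUnit.MILLISECONDS);
  }
}
{code}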

> [GPG] Policy generator framework
> 
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.









[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366516#comment-16366516
 ] 

Carlo Curino commented on YARN-7934:


[~subru] thanks for the quick review. I have addressed the javadoc comments 
issue in patch v3.

Regarding consumers, you are correct that they are not in this patch, but in 
YARN-7403. This is by design: the purpose of this patch is to commit to trunk 
the most basic refactoring needed while we develop the algorithms and bigger 
pieces in the YARN-7402 feature branch (to limit churn on the touch points of 
the branch work).
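As an illustration of the kind of refactoring described (class and method names here are made up, not the ones in the patch), widening visibility lets a federation-global subclass override one step while reusing the rest of the mechanics:

{code:java}
class PreemptionCalculator {
  // Previously private in spirit; made protected so subclasses can override.
  protected float idealAssigned(float guaranteed, float used, float pending) {
    return Math.min(guaranteed, used + pending);
  }

  final float computePreemptable(float guaranteed, float used, float pending) {
    // Shared mechanics stay in the base class.
    return Math.max(0, used - idealAssigned(guaranteed, used, pending));
  }
}

class GlobalFederationPreemptionCalculator extends PreemptionCalculator {
  private final float subClusterHeadroom;

  GlobalFederationPreemptionCalculator(float subClusterHeadroom) {
    this.subClusterHeadroom = subClusterHeadroom;
  }

  @Override
  protected float idealAssigned(float guaranteed, float used, float pending) {
    // A global algorithm (e.g., YARN-7403) can bias the same computation
    // with federation-wide information without forking the calculator.
    return Math.min(super.idealAssigned(guaranteed, used, pending),
        subClusterHeadroom);
  }
}
{code}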

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch, 
> YARN-7934.v3.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  






[jira] [Updated] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7934:
---
Attachment: YARN-7934.v3.patch

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch, 
> YARN-7934.v3.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  






[jira] [Comment Edited] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-15 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16362108#comment-16362108
 ] 

Bibin A Chundatt edited comment on YARN-6858 at 2/16/18 1:20 AM:
-

Thanks [~Naganarasimha] for the patch.

Few comments.

{code}
107 initNodeLabelStore(getConfig());
  protected void initNodeLabelStore(Configuration conf) throws Exception {
// TODO to generalize and make use of the FileSystemNodeLabelsStore
  }
{code}
# Had an offline discussion with Sunil G; we thought of using separate stores for 
NodeLabels and Attributes, enabled separately.
# The event type used for registration is wrong; it should be AttributeType:
{code}
112 if (null != dispatcher) {
113   dispatcher.register(NodeLabelsStoreEventType.class,
114   new ForwardingEventHandler());
115 }
{code}
# Param name mismatch in the following method:
{code}
  /**
   * @param nodeAttributeMappings
   * @param newAttributesToBeAdded
   * @return Map<String, Map<NodeAttribute, AttributeValue>>, node -> Map(
   * NodeAttribute -> AttributeValue)
   * @throws IOException, on invalid mapping in the current request or against
   *   already existing NodeAttributes.
   */
  protected Map<String, Map<NodeAttribute, AttributeValue>> validate(
  Map nodeAttributeMapping,
  Map newAttributesToBeAdded,
  boolean isRemoveOperation) throws IOException
{code}
# The event type is wrong in {{ForwardingEventHandler}}.
# Rename internalUpdateLabelsOnNodes to internalUpdateAttributesOnNodes.
# Currently the manager doesn't provide a way to filter out nodes of central vs. 
distributed type. I think we should provide that too.



> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch, 
> YARN-6858-YARN-3409.004.patch, YARN-6858-YARN-3409.005.patch, 
> YARN-6858-YARN-3409.006.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Commented] (YARN-5015) Unify restart policies across AM and container restarts

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366487#comment-16366487
 ] 

genericqa commented on YARN-5015:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 440 unchanged - 0 fixed = 441 total (was 440) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 35s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
48s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  9s{color} | 
{color:black} {color} |

[jira] [Updated] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-15 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Attachment: YARN-7732.05.patch

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container setup was also only suitable for mapreduce: 
>  
> {code:java}
> // Source: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
>  
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator that simulates 
> a long-running streaming service maintaining N containers for the lifetime of 
> the AM. See syn_stream.json.
>  






[jira] [Updated] (YARN-7707) [GPG] Policy generator framework

2018-02-15 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7707:
-
Attachment: YARN-7707-YARN-7402.03.patch

> [GPG] Policy generator framework
> 
>
> Key: YARN-7707
> URL: https://issues.apache.org/jira/browse/YARN-7707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Young Chen
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-7707-YARN-7402.01.patch, 
> YARN-7707-YARN-7402.02.patch, YARN-7707-YARN-7402.03.patch
>
>
> This JIRA tracks the development of a generic framework for querying 
> sub-clusters for metrics, running policies, and updating them in the 
> FederationStateStore.






[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366436#comment-16366436
 ] 

genericqa commented on YARN-7446:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
42s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7446 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910824/YARN-7446.003.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 445d0f3bf62e 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0b489e5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19715/testReport/ |
| Max. process+thread count | 408 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19715/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container.  In reality, the container has no 
> way to use root privileges unless there is a sticky bit or sudoers in the image 
> for the specified user to gain privileges again.

[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR

2018-02-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366420#comment-16366420
 ] 

Hudson commented on YARN-7677:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13667 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13667/])
YARN-7677. Docker image cannot set HADOOP_CONF_DIR. Contributed by Jim Brennan 
(jlowe: rev 8013475d447a8377b5aed858208bf8b91dd32366)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/AuxiliaryServiceHelper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/Apps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DelegatingLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DefaultLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java


> Docker image cannot set HADOOP_CONF_DIR
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Eric Badger
>Assignee: Jim Brennan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7677.001.patch, YARN-7677.002.patch, 
> YARN-7677.003.patch, YARN-7677.004.patch, YARN-7677.005.patch
>
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 
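A sketch of the whitelist semantics the fix aims for (not the actual ContainerLaunch code; the method name is made up): a whitelisted variable only receives the NodeManager's default when the container, e.g. the Docker image, has not already defined it.

{code:java}
import java.util.HashMap;
import java.util.Map;

class WhitelistEnvSketch {
  // Inject a default only when the variable is absent, so a Docker image's
  // preset HADOOP_CONF_DIR is no longer clobbered.
  static void addWhitelistedVar(Map<String, String> containerEnv,
      String name, String nmDefault) {
    containerEnv.putIfAbsent(name, nmDefault);
  }

  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    env.put("HADOOP_CONF_DIR", "/opt/image-conf"); // preset by the image
    addWhitelistedVar(env, "HADOOP_CONF_DIR", "/etc/hadoop/conf");
    System.out.println(env.get("HADOOP_CONF_DIR")); // prints /opt/image-conf
  }
}
{code}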






[jira] [Updated] (YARN-7939) Yarn Service: add support to upgrade a component instance

2018-02-15 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7939:

Issue Type: Sub-task  (was: Task)
Parent: YARN-7054

> Yarn Service: add support to upgrade a component instance 
> 
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> Will add support to upgrade a single component instance first and then 
> iteratively add other APIs and features.
>  






[jira] [Created] (YARN-7939) Yarn Service: add support to upgrade a component instance

2018-02-15 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-7939:
---

 Summary: Yarn Service: add support to upgrade a component 
instance 
 Key: YARN-7939
 URL: https://issues.apache.org/jira/browse/YARN-7939
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Chandni Singh
Assignee: Chandni Singh


Yarn core supports in-place upgrade of containers. A yarn service can leverage 
that to provide in-place upgrade of component instances. Please see YARN-7512 
for details.

Will add support to upgrade a single component instance first and then 
iteratively add other APIs and features.

 






[jira] [Commented] (YARN-7920) Simplify configuration for PlacementConstraints

2018-02-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366408#comment-16366408
 ] 

Hudson commented on YARN-7920:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13666 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13666/])
YARN-7920. Simplify configuration for PlacementConstraints. Contributed by 
Wangda Tan (kkaranasos: rev 0b489e564ce5a50324a530e29c18aa8a75276c50)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/PlacementConstraintProcessor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/AbstractPlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/DisabledPlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementProcessor.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/PlacementProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSchedulingRequestUpdate.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocationAsync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientPlacementConstraints.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java


> Simplify configuration for PlacementConstraints
> ---
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete the scheduling-request.allowed in 
> CS, and update placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled, and add a new 
> placement-constraints.handler, which defaults to none; other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me), b. 
> scheduler. 
> - And add a new PlacementProcessor just to pass SchedulingRequest to the 
> scheduler without any modifications.
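A sketch of what the proposed config could look like, using the handler values named above; the exact key name is an assumption until the patch settles it.

{code:java}
import org.apache.hadoop.conf.Configuration;

class PlacementHandlerConfigSketch {
  // Assumed key following the proposal; "none" would be the default value.
  static final String PLACEMENT_CONSTRAINTS_HANDLER =
      "yarn.resourcemanager.placement-constraints.handler";

  static Configuration routeToScheduler(Configuration conf) {
    // Hand SchedulingRequests straight to the scheduler instead of the
    // external processor.
    conf.set(PLACEMENT_CONSTRAINTS_HANDLER, "scheduler");
    return conf;
  }
}
{code}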


[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-15 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7813:
-
Attachment: YARN-7813.005.branch-2.8.patch

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, 
> YARN-7813.005.branch-2.8.patch, YARN-7813.005.branch-3.0.patch, 
> YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.
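A sketch of the inheritance rule described above, with an assumed property layout (the real capacity-scheduler key may differ): a queue without its own setting recursively takes its parent's.

{code:java}
import org.apache.hadoop.conf.Configuration;

class IntraQueuePreemptionLookupSketch {
  // Assumed property suffix; queuePath looks like "root.a.b".
  static boolean intraQueuePreemptionDisabled(Configuration csConf,
      String queuePath) {
    String key = "yarn.scheduler.capacity." + queuePath
        + ".intra-queue-preemption.disable_preemption";
    if (csConf.get(key) != null) {
      return csConf.getBoolean(key, false);
    }
    int lastDot = queuePath.lastIndexOf('.');
    if (lastDot < 0) {
      return false; // root default: intra-queue preemption enabled
    }
    // No explicit setting: inherit from the parent queue.
    return intraQueuePreemptionDisabled(csConf,
        queuePath.substring(0, lastDot));
  }
}
{code}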






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-02-15 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366395#comment-16366395
 ] 

Eric Yang commented on YARN-7221:
-

[~ebadger] My apologies, I know the current patch isn't working.  I will upload 
a new version to fix the username remapping issue.  I will omit any changes 
required for making the container read-only from the next updates for this jira.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch
>
>
> When a docker container is running with privileges, the majority of use cases 
> involve some program starting as root and then dropping privileges to another 
> user, e.g., httpd starts privileged to bind to port 80, then drops privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  With 
> this parameter combination, the user will not have access to become root.  
> All docker exec commands will drop to the uid:gid user instead of 
> granting privileges.  A user can gain root privileges if the container file system 
> contains files that give the user extra power, but this type of image is 
> considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control of 
> which images should be run with --privileged, and who has sudo rights to use 
> privileged container images.  As a result, we should check for sudo access and 
> then decide to parameterize --privileged=true OR --user=uid:gid.  This will 
> avoid leading developers down the wrong path.
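A sketch of the mutually exclusive flag handling described above (illustrative only; the real runtime builds a DockerRunCommand and checks sudo access via configured ACLs, both elided here):

{code:java}
import java.util.ArrayList;
import java.util.List;

class DockerRunArgsSketch {
  static List<String> buildUserArgs(boolean privileged, String uid,
      String gid, boolean userHasSudoAccess) {
    List<String> args = new ArrayList<>();
    if (privileged) {
      if (!userHasSudoAccess) {
        // Security check: only users with sudo access may run privileged.
        throw new IllegalArgumentException(
            "user lacks sudo access for privileged containers");
      }
      args.add("--privileged=true"); // no --user: the container keeps root
    } else {
      args.add("--user=" + uid + ":" + gid);
    }
    return args;
  }
}
{code}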






[jira] [Commented] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-15 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366385#comment-16366385
 ] 

Eric Yang commented on YARN-7446:
-

- Patch updated to fix a missing free.
- Patch updated to drop group-add for privileged containers.


> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container.  In reality, the container has no 
> way to use root privileges unless there is a sticky bit or sudoers in the image 
> for the specified user to gain privileges again.  To avoid this dropping and 
> reacquiring of root privileges, we can avoid specifying both flags.  When 
> privileged mode is enabled, the --user flag should be omitted.  When 
> non-privileged mode is enabled, the --user flag is supplied.






[jira] [Commented] (YARN-7916) Remove call to docker logs on failure in container-executor

2018-02-15 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366376#comment-16366376
 ] 

Eric Badger commented on YARN-7916:
---

+1 (non-binding) looks good to me

> Remove call to docker logs on failure in container-executor
> ---
>
> Key: YARN-7916
> URL: https://issues.apache.org/jira/browse/YARN-7916
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7916.001.patch
>
>
> If a Docker container fails with a non-zero exit code, container-executor 
> attempts to run a {{docker logs --tail=250 container_name}} to provide more 
> details on why the container failed. While the idea is good, the current 
> implementation will fail for most containers as they are leveraging a launch 
> script whose output will be redirected to a file. The {{--tail}} option 
> throws an error if no log output is available for the container, resulting in 
> the docker logs command returning rc=1 in most cases.
> I propose we remove this code from container-executor. Alternative approaches 
> to handle logging can be explored as part of supporting an image's entrypoint.






[jira] [Updated] (YARN-7446) Docker container privileged mode and --user flag contradict each other

2018-02-15 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7446:

Attachment: YARN-7446.003.patch

> Docker container privileged mode and --user flag contradict each other
> --
>
> Key: YARN-7446
> URL: https://issues.apache.org/jira/browse/YARN-7446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7446.001.patch, YARN-7446.002.patch, 
> YARN-7446.003.patch
>
>
> In the current implementation, when privileged=true, the --user flag is also 
> passed to docker when launching the container.  In reality, the container has no 
> way to use root privileges unless there is a sticky bit or sudoers in the image 
> for the specified user to gain privileges again.  To avoid this dropping and 
> reacquiring of root privileges, we can avoid specifying both flags.  When 
> privileged mode is enabled, the --user flag should be omitted.  When 
> non-privileged mode is enabled, the --user flag is supplied.






[jira] [Commented] (YARN-7834) [GQ] Rebalance queue configuration for load-balancing and locality affinities

2018-02-15 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366372#comment-16366372
 ] 

Carlo Curino commented on YARN-7834:


The uploaded patch provides a Linear Programming (LP) implementation of this 
algorithm, leveraging the ojAlgo solver (which already ships with Hadoop, 
contains a pure-Java solver, and has hooks to leverage a more powerful 
external solver such as Gurobi or CPLEX).

The formulation is designed to:
 # Guarantee that all queues are allocated fully.
 # Guarantee that no sub-cluster is allocated more capacity than it can take.
 # Maximize load-balancing (as a primary objective).
 # Maximize queue-to-sub-cluster affinity (as a secondary objective), subject 
to not impacting load-balancing by more than a configurable delta (zero by 
default).

The reason behind 3/4 being in a primary-secondary relationship (instead of a 
weighted linear combination) is that, in our production experience, 
load-balancing is the most pressing concern; locality is optimized only as a 
secondary objective.
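Under assumed notation (these symbols are not from the patch): let x_{q,j} be the capacity of queue q mapped to sub-cluster j, d_q the demand of q, c_j the capacity of sub-cluster j, and a_{q,j} the historical affinity of q for j. The two-stage formulation sketched above can then be written roughly as:

{code}
Stage 1 (primary, load balance): minimize B subject to
    \sum_j x_{q,j} = d_q          \forall q    (all queues fully allocated)
    \sum_q x_{q,j} \le c_j        \forall j    (sub-cluster capacity)
    \sum_q x_{q,j} \le B c_j      \forall j    (balance, proportional to size)
    x_{q,j} \ge 0

Stage 2 (secondary, affinity): with B* the optimum of Stage 1, maximize
    \sum_{q,j} a_{q,j} x_{q,j}
subject to the same constraints, with B replaced by B* + \delta
(\delta = 0 by default).
{code}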

> [GQ] Rebalance queue configuration for load-balancing and locality affinities
> -
>
> Key: YARN-7834
> URL: https://issues.apache.org/jira/browse/YARN-7834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7834.v1.patch
>
>
> This Jira tracks algorithmic work, which will run in the GPG and will 
> rebalance the mapping of queues to sub-clusters. The current design supports 
> both balancing the "load" across sub-clusters (proportionally to their size) 
> and, as a second objective, maximizing the affinity between queues and the 
> sub-clusters where they historically have most demand.






[jira] [Updated] (YARN-7834) [GQ] Rebalance queue configuration for load-balancing and locality affinities

2018-02-15 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7834:
---
Attachment: YARN-7834.v1.patch

> [GQ] Rebalance queue configuration for load-balancing and locality affinities
> -
>
> Key: YARN-7834
> URL: https://issues.apache.org/jira/browse/YARN-7834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7834.v1.patch
>
>
> This Jira tracks algorithmic work, which will run in the GPG and will 
> rebalance the mapping of queues to sub-clusters. The current design supports 
> both balancing the "load" across sub-clusters (proportionally to their size) 
> and, as a second objective, maximizing the affinity between queues and the 
> sub-clusters where they historically have most demand.






[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366358#comment-16366358
 ] 

genericqa commented on YARN-7813:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
32s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 813 unchanged - 1 fixed = 822 total (was 814) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  6s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
19s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366351#comment-16366351
 ] 

Subru Krishnan commented on YARN-7934:
--

Thanks [~curino] for the patch, it looks fairly straightforward. I have only 
one nit: can you add Javadocs for the new public and protected methods 
(especially so that overriding expectations are clear)? Also, I don't see any 
consumers for the public methods; is that in a subsequent patch?

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  






[jira] [Updated] (YARN-7920) Simplify configuration for PlacementConstraints

2018-02-15 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7920:
-
Summary: Simplify configuration for PlacementConstraints  (was: Cleanup 
configuration of PlacementConstraints)

> Simplify configuration for PlacementConstraints
> ---
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete the scheduling-request.allowed in 
> CS, and update placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled, and add a new 
> placement-constraints.handler, which defaults to none; other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me), b. 
> scheduler. 
> - And add a new PlacementProcessor just to pass SchedulingRequest to the 
> scheduler without any modifications.






[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366343#comment-16366343
 ] 

genericqa commented on YARN-7934:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 34 unchanged - 2 fixed = 34 total (was 36) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7934 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910793/YARN-7934.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1ae08a3b288a 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / da59acd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19713/testReport/ |
| Max. process+thread count | 884 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Resolved] (YARN-7725) [GQ] Compute global "ideal allocation" including locality biases

2018-02-15 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino resolved YARN-7725.

   Resolution: Duplicate
Fix Version/s: yarn-7403

The newer version of YARN-7403 subsumes this task.

> [GQ] Compute global "ideal allocation" including locality biases
> 
>
> Key: YARN-7725
> URL: https://issues.apache.org/jira/browse/YARN-7725
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Fix For: yarn-7403
>
>
> This JIRA tracks an algorithmic effort to compute the global ideal 
> allocation. We also take into account the locality demand/availability gap, 
> and map the global allocation down to the sub-cluster level, computing the 
> delta+ and delta- for each queue in each sub-cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-02-15 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7403:
---
Description: This JIRA tracks the algorithmic effort to combine the local 
queue views of capacity guarantee/use/demand and compute the global ideal 
allocation, and the respective local allocations. This will inform the RMs in 
each sub-cluster on how to allocate more containers to each queue (allowing 
for temporary over/under allocations that are locally excessive, but globally 
correct).  (was: This JIRA tracks algorithmic effort to combine the local queue 
views of capacity guarantee/use/demand and compute the global amount of 
preemption, and based on that, "where" (in which sub-cluster) preemption will 
be enacted.)

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, YARN-7403.v1.patch, global-queues-preemption.PNG
>
>
> This JIRA tracks the algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global ideal allocation, and 
> the respective local allocations. This will inform the RMs in each 
> sub-cluster on how to allocate more containers to each queue (allowing for 
> temporary over/under allocations that are locally excessive, but globally 
> correct).
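As a hedged illustration of the delta+/delta- mapping described above; the 
class and method names here are hypothetical, not taken from the actual 
YARN-7403 patches:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class GlobalQueueDeltaSketch {
  /**
   * For one queue: given the globally computed local ideal allocation per
   * sub-cluster and the current local usage, return delta+ / delta- per
   * sub-cluster (positive: the local RM may allocate more containers;
   * negative: the queue is a candidate for preemption there).
   */
  static Map<String, Long> deltas(Map<String, Long> localIdeal,
      Map<String, Long> localUsed) {
    Map<String, Long> delta = new HashMap<>();
    for (Map.Entry<String, Long> e : localIdeal.entrySet()) {
      long used = localUsed.getOrDefault(e.getKey(), 0L);
      delta.put(e.getKey(), e.getValue() - used);
    }
    return delta;
  }
}
{code}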



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5015) Unify restart policies across AM and container restarts

2018-02-15 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-5015:

Attachment: YARN-5015.03.patch

> Unify restart policies across AM and container restarts
> ---
>
> Key: YARN-5015
> URL: https://issues.apache.org/jira/browse/YARN-5015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Chandni Singh
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5015.01.patch, YARN-5015.02.patch, 
> YARN-5015.03.patch
>
>
> We support AM restart and container restarts - however the two have slightly 
> different capabilities. We should unify them. There's no reason for them to 
> be different.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-02-15 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366240#comment-16366240
 ] 

Eric Badger commented on YARN-7221:
---

[~eyang], I meant: how did you test your patch such that it works? I don't see 
any way that passing the uid:gid pair to {{sudo}} will work unless that pair 
just so happens to be a valid username of some different user. 

bq. Are we good with blocking the localized directory for privileged containers 
with read-only?
Yes, as specified in YARN-7904, all mounts should be read-only for trusted, 
privileged containers. However, this cannot work until YARN-7654 is implemented 
and committed, so that we no longer require writing symlinks via the 
launch_container.sh script. 

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not be able to become 
> the root user.  Every docker exec command will drop to the uid:gid user 
> instead of being granted privileges.  The user can gain root privileges if 
> the container file system contains files that give the user extra power, but 
> this type of image is considered dangerous.  A non-privileged user can launch 
> a container with special bits to acquire the same level of root power.  
> Hence, we lose control of which images should be run with --privileged, and 
> of who has sudo rights to use privileged container images.  As a result, we 
> should check for sudo access and then decide to parameterize 
> --privileged=true OR --user=uid:gid.  This will avoid leading developers down 
> the wrong path.
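A minimal sketch of the check-then-parameterize logic proposed in the 
description; the helper and class names are hypothetical, not the actual 
YARN-7221 patch:

{code:java}
import java.util.List;

public class PrivilegedLaunchSketch {
  /** Hypothetical sudo check; the real patch would wire this to the OS. */
  static boolean userHasSudo(String user) {
    return false; // placeholder
  }

  /** Pick exactly one of --privileged=true or --user=uid:gid, as proposed. */
  static void parameterize(List<String> dockerArgs, String user,
      boolean privilegedRequested, int uid, int gid) {
    if (privilegedRequested) {
      if (!userHasSudo(user)) {
        throw new SecurityException(user
            + " is not allowed to run privileged containers");
      }
      dockerArgs.add("--privileged=true"); // no --user, so root stays usable
    } else {
      dockerArgs.add("--user=" + uid + ":" + gid);
    }
  }
}
{code}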



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-02-15 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366224#comment-16366224
 ] 

Eric Yang commented on YARN-7221:
-

[~ebadger], I tested with:
{code:java}
docker run -it --privileged -v /usr/local/hadoop-3.0.0-alpha2-SNAPSHOT:/mnt:ro 
centos:7 bash
[root@8062ce155bfa /]# cd /mnt
[root@8062ce155bfa mnt]# touch s
touch: cannot touch 's': Read-only file system
{code}
 
This seems to work: the localized directory appears as read-only even with a 
privileged container.  Are we good with blocking the localized directory for 
privileged containers with read-only?

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch
>
>
> When a docker container is running with privileges, the majority use case is 
> to have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run a privileged container.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this parameter combination, the user will not be able to become 
> the root user.  Every docker exec command will drop to the uid:gid user 
> instead of being granted privileges.  The user can gain root privileges if 
> the container file system contains files that give the user extra power, but 
> this type of image is considered dangerous.  A non-privileged user can launch 
> a container with special bits to acquire the same level of root power.  
> Hence, we lose control of which images should be run with --privileged, and 
> of who has sudo rights to use privileged container images.  As a result, we 
> should check for sudo access and then decide to parameterize 
> --privileged=true OR --user=uid:gid.  This will avoid leading developers down 
> the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366215#comment-16366215
 ] 

Carlo Curino commented on YARN-7934:


Patch v2 attempts to please the YETUS gods.

This patch does not change any of the behavior; it just defines hooks to be 
used by sub-classes in YARN-7403, hence it doesn't require any new tests.
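For illustration, a hedged sketch of the sub-classing pattern this refactoring 
enables; the class and method names below are made up, not the actual hooks in 
the patch:

{code:java}
class CapacitySchedulerPreemptionCalculator { // hypothetical name
  // A previously private computation pulled out as a protected hook so that
  // a federation-aware subclass can override it.
  protected long idealAssigned(long guaranteed, long used, long pending) {
    return Math.min(guaranteed, used + pending);
  }
}

class FederationGlobalCalculator extends CapacitySchedulerPreemptionCalculator {
  @Override
  protected long idealAssigned(long guaranteed, long used, long pending) {
    // A YARN-7403-style subclass would consult the globally computed ideal
    // allocation here instead of relying on purely local state.
    return super.idealAssigned(guaranteed, used, pending);
  }
}
{code}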

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-15 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7934:
---
Attachment: YARN-7934.v2.patch

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-15 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7813:
-
Attachment: YARN-7813.005.branch-3.0.patch

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, 
> YARN-7813.005.branch-3.0.patch, YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.
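A hedged sketch of per-queue configuration with inheritance, mirroring how the 
existing inter-queue disable_preemption setting works; the intra-queue property 
name below is an assumption, not necessarily what the final patch uses:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IntraQueuePreemptionConfigSketch {
  public static void main(String[] args) {
    Configuration csConf = new Configuration();
    // The parent queue opts out of intra-queue preemption ...
    csConf.setBoolean("yarn.scheduler.capacity.root.a"
        + ".intra-queue-preemption.disable_preemption", true);
    // ... and a child such as root.a.a1, having no setting of its own,
    // would inherit the parent's value.
  }
}
{code}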



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366187#comment-16366187
 ] 

genericqa commented on YARN-5028:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 76 unchanged - 0 fixed = 78 total (was 76) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
|   | hadoop.yarn.server.resourcemanager.TestApplicationCleanup |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMHAForAsyncScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910758/YARN-5028.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f91293d1c34c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366173#comment-16366173
 ] 

genericqa commented on YARN-7919:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 16s{color} | {color:orange} root: The patch generated 1 new + 29 unchanged - 
9 fixed = 30 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} 

[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-15 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366168#comment-16366168
 ] 

Konstantinos Karanasos commented on YARN-7920:
--

Patch looks good, thanks [~leftnoteasy].

There are a few checkstyle issues left. Anyway, I will fix them and commit in a 
bit, because I will not be available to commit this tomorrow.

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete scheduling-request.allowed in CS, 
> and update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled, and add a new 
> placement-constraints.handler, which defaults to none; the other acceptable 
> values are a) external-processor (since "algorithm" is too generic to me) and 
> b) scheduler. 
> - And add a new PlacementProcessor that just passes the SchedulingRequest to 
> the scheduler without any modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2018-02-15 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7190:

Release Note: Ensure only NM classpath in 2.x gets TSv2 related hbase jars, 
not the user classpath.

> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>Priority: Major
> Fix For: YARN-5355_branch2, 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7190-YARN-5355_branch2.01.patch, 
> YARN-7190-YARN-5355_branch2.02.patch, YARN-7190-YARN-5355_branch2.03.patch, 
> YARN-7190.01.patch, YARN-7190.02.patch
>
>
> [~jlowe] had a good observation about the user classpath in hadoop 2.x 
> getting extra jars brought in with TSv2.  If users start picking up Hadoop 
> 2.x's version of the HBase jars instead of the ones they shipped with their 
> job, it could be a problem.
> So when TSv2 is to be used in 2.x, the hbase-related jars should go onto 
> only the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366161#comment-16366161
 ] 

genericqa commented on YARN-7919:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 47s{color} | {color:orange} root: The patch generated 1 new + 29 unchanged - 
9 fixed = 30 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} 

[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366157#comment-16366157
 ] 

genericqa commented on YARN-7813:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 14s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 919 unchanged - 1 fixed = 928 total (was 920) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
27s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 33s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| 

[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-15 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366151#comment-16366151
 ] 

Eric Payne commented on YARN-7813:
--

Attached {{YARN-7813.005.patch}}. If the pre-commit passes, this should be the 
one.

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7938) Yarn task attempt listener bind port range

2018-02-15 Thread David Johnson (JIRA)
David Johnson created YARN-7938:
---

 Summary: Yarn task attempt listener bind port range
 Key: YARN-7938
 URL: https://issues.apache.org/jira/browse/YARN-7938
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 2.7.5
Reporter: David Johnson


Currently, the YARN task attempt listener binds to a random high port 
({{.setPort(0)}}).

For highly locked-down environments, this can require a large number of open 
ports between systems, which is troublesome.

I'd recommend either creating a new configuration item, or tying it to 
MR_AM_JOB_CLIENT_PORT_RANGE.
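For illustration, a hedged sketch of the existing MR AM client port-range knob 
and what a matching knob for the task attempt listener could look like; the 
second property name is hypothetical (it is what this JIRA proposes to add):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PortRangeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Existing knob (MRJobConfig.MR_AM_JOB_CLIENT_PORT_RANGE):
    conf.set("yarn.app.mapreduce.am.job.client.port-range", "50000-50050");
    // Hypothetical new knob proposed here for the task attempt listener:
    conf.set("yarn.app.mapreduce.am.job.task.listener.port-range",
        "50051-50100");
  }
}
{code}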



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7935:
---
Attachment: (was: YARN-7935.patch)

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> Some applications (like Spark) need to bind to the container's hostname, 
> which, when launched through the Docker runtime, is different from the 
> NodeManager's hostname (NM_HOST, which is available as an env var during 
> container launch). The container's hostname can be exposed to applications 
> via an env var, CONTAINER_HOSTNAME. Another potential candidate is the 
> container's IP, but this can be addressed in a separate jira.
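A minimal sketch of how an application could consume the proposed env var, 
falling back to NM_HOST when not running under the Docker runtime:

{code:java}
public class ContainerHostnameSketch {
  public static void main(String[] args) {
    // CONTAINER_HOSTNAME is the env var proposed in this JIRA; NM_HOST is
    // already set by the NodeManager at container launch.
    String host = System.getenv("CONTAINER_HOSTNAME");
    if (host == null) {
      host = System.getenv("NM_HOST");
    }
    System.out.println("binding to " + host);
  }
}
{code}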



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7935:
---
Attachment: YARN-7935.patch

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, 
> which, when launched through the Docker runtime, is different from the 
> NodeManager's hostname (NM_HOST, which is available as an env var during 
> container launch). The container's hostname can be exposed to applications 
> via an env var, CONTAINER_HOSTNAME. Another potential candidate is the 
> container's IP, but this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7936) Add default service AM Xmx

2018-02-15 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16366042#comment-16366042
 ] 

Gour Saha commented on YARN-7936:
-

We tested the patch in our cluster and it fixed this issue. The patch looks 
good to me. +1 for commit.

> Add default service AM Xmx
> --
>
> Key: YARN-7936
> URL: https://issues.apache.org/jira/browse/YARN-7936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7936.1.patch
>
>
> We were seeing issues in the Service AM where memory usage would go beyond 
> 1gb and the AM container would get killed due to the pmem check. We were not 
> setting Xmx explicitly, and hence it was getting defaulted to 32gb (based on 
> the java version we were using and the available memory on the host). Hence 
> even minor GC cycles were not kicking in before usage went beyond 1gb.
> We need to default Xmx to a reasonable value, of course allowing it to be 
> overridden by the service owner.
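As a hedged illustration of the kind of defaulting being proposed; the method 
name and the 0.75 ratio are assumptions for illustration, not taken from the 
actual patch:

{code:java}
public class DefaultAmXmxSketch {
  static String jvmOpts(String userOpts, long containerMemMB) {
    if (userOpts != null && userOpts.contains("-Xmx")) {
      return userOpts; // the service owner's override wins
    }
    long xmxMB = (long) (containerMemMB * 0.75); // headroom for the pmem check
    return (userOpts == null ? "" : userOpts + " ") + "-Xmx" + xmxMB + "m";
  }

  public static void main(String[] args) {
    System.out.println(jvmOpts(null, 1024)); // -> "-Xmx768m" for a 1gb AM
  }
}
{code}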



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7936) Add default service AM Xmx

2018-02-15 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7936:

Description: 
We were seeing issues in the Service AM where memory usage would go beyond 1gb 
and the AM container would get killed due to the pmem check. We were not 
setting Xmx explicitly, and hence it was getting defaulted to 32gb (based on 
the java version we were using and the available memory on the host). Hence 
even minor GC cycles were not kicking in before usage went beyond 1gb.

We need to default Xmx to a reasonable value, of course allowing it to be 
overridden by the service owner.

  was:We were seeing issues in the Service AM where memory usage would go 
beyond 1gb and the AM container would get killed due to the pmem check. We were 
not setting Xmx explicitly, and hence it was getting defaulted to 32gb (based 
on the java version we were using and the available memory on the host). Hence 
even minor GC cycles were not kicking in before usage went beyond 1gb.


> Add default service AM Xmx
> --
>
> Key: YARN-7936
> URL: https://issues.apache.org/jira/browse/YARN-7936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7936.1.patch
>
>
> We were seeing issues in the Service AM where memory usage would go beyond 
> 1gb and the AM container would get killed due to the pmem check. We were not 
> setting Xmx explicitly, and hence it was getting defaulted to 32gb (based on 
> the java version we were using and the available memory on the host). Hence 
> even minor GC cycles were not kicking in before usage went beyond 1gb.
> We need to default Xmx to a reasonable value, of course allowing it to be 
> overridden by the service owner.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7936) Add default service AM Xmx

2018-02-15 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7936:

Description: We were seeing issues in the Service AM where memory usage would 
go beyond 1gb and the AM container would get killed due to the pmem check. We 
were not setting Xmx explicitly, and hence it was getting defaulted to 32gb 
(based on the java version we were using and the available memory on the host). 
Hence even minor GC cycles were not kicking in before usage went beyond 1gb.

> Add default service AM Xmx
> --
>
> Key: YARN-7936
> URL: https://issues.apache.org/jira/browse/YARN-7936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7936.1.patch
>
>
> We were seeing issues in the Service AM where memory usage would go beyond 
> 1gb and the AM container would get killed due to the pmem check. We were not 
> setting Xmx explicitly, and hence it was getting defaulted to 32gb (based on 
> the java version we were using and the available memory on the host). Hence 
> even minor GC cycles were not kicking in before usage went beyond 1gb.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7919:
-
Attachment: YARN-7919.04.patch

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7919:
-
Attachment: YARN-7919.03.patch

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7919:
-
Attachment: (was: YARN-7346.03.patch)

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7919:
-
Attachment: YARN-7346.03.patch

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365996#comment-16365996
 ] 

genericqa commented on YARN-7919:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7919 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910775/YARN-7346.03.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19708/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365852#comment-16365852
 ] 

genericqa commented on YARN-5028:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 34m  
5s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
1m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 101 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
4s{color} | {color:red} The patch has 1152 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
0m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
14s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910758/YARN-5028.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36c09b87c4f5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b27ab7d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| mvninstall | 

[jira] [Updated] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-15 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7813:
-
Attachment: YARN-7813.005.patch

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.
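
For illustration, a per-queue knob could mirror the existing inter-queue one in 
capacity-scheduler.xml (the intra-queue property name below is hypothetical; 
the queue path and values are examples):

{code:xml}
<!-- Existing per-queue switch for inter-queue (cross-queue) preemption -->
<property>
  <name>yarn.scheduler.capacity.root.myqueue.disable_preemption</name>
  <value>true</value>
</property>

<!-- Hypothetical analogous switch for intra-queue preemption; when unset,
     the value would be inherited from the parent queue -->
<property>
  <name>yarn.scheduler.capacity.root.myqueue.intra-queue-preemption.disable_preemption</name>
  <value>true</value>
</property>
{code}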



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-15 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365788#comment-16365788
 ] 

Gergo Repas commented on YARN-5028:
---

[~yufeigu] Thanks for the suggestions, I addressed all of them in v003 (the 
above two points in the test, removing the null-check around setting 
AMContainerSpec, and copying the whole AMContainerSpec over into the new 
application submission context).
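
For illustration, a minimal sketch of the trimming idea (the helper below is 
hypothetical, not the actual patch; it keeps only fields needed to serve the 
status of a completed application):

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.util.Records;

public final class SubmissionContextTrimmer {
  // Sketch only: copy the lightweight status fields and deliberately drop
  // the AMContainerSpec payload (launch command, tokens, environment),
  // which dominates the size of the persisted state.
  static ApplicationSubmissionContext trimForCompletedApp(
      ApplicationSubmissionContext original) {
    ApplicationSubmissionContext trimmed =
        Records.newRecord(ApplicationSubmissionContext.class);
    trimmed.setApplicationId(original.getApplicationId());
    trimmed.setApplicationName(original.getApplicationName());
    trimmed.setQueue(original.getQueue());
    trimmed.setPriority(original.getPriority());
    trimmed.setApplicationType(original.getApplicationType());
    // Not copied: original.getAMContainerSpec()
    return trimmed;
  }
}
{code}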

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-15 Thread Gergo Repas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated YARN-5028:
--
Attachment: YARN-5028.003.patch

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR

2018-02-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365736#comment-16365736
 ] 

Jason Lowe commented on YARN-7677:
--

That makes sense. Agreed, it is probably safer to leave the CLASSPATH in the 
batch of user vars.

+1 for the latest patch. I'll commit this later today if there are no 
objections.

> Docker image cannot set HADOOP_CONF_DIR
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Eric Badger
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-7677.001.patch, YARN-7677.002.patch, 
> YARN-7677.003.patch, YARN-7677.004.patch, YARN-7677.005.patch
>
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 
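
For context, the whitelist being bypassed is the one driven by 
yarn.nodemanager.env-whitelist in yarn-site.xml, roughly as below (a sketch; 
the exact default list varies by release):

{code:xml}
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <!-- Variables listed here should only be inherited from the NM's
       environment when the container image has not already set them. -->
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME</value>
</property>
{code}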



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7677) Docker image cannot set HADOOP_CONF_DIR

2018-02-15 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365721#comment-16365721
 ] 

Jim Brennan commented on YARN-7677:
---

[~jlowe], do you agree about the CLASSPATH for Windows?  Let me know if you 
want me to add it back.

 

> Docker image cannot set HADOOP_CONF_DIR
> ---
>
> Key: YARN-7677
> URL: https://issues.apache.org/jira/browse/YARN-7677
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Eric Badger
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-7677.001.patch, YARN-7677.002.patch, 
> YARN-7677.003.patch, YARN-7677.004.patch, YARN-7677.005.patch
>
>
> Currently, {{HADOOP_CONF_DIR}} is being put into the task environment whether 
> it's set by the user or not. It completely bypasses the whitelist and so 
> there is no way for a task to not have {{HADOOP_CONF_DIR}} set. This causes 
> problems in the Docker use case where Docker containers will set up their own 
> environment and have their own {{HADOOP_CONF_DIR}} preset in the image 
> itself. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365565#comment-16365565
 ] 

Sunil G commented on YARN-7328:
---

Jenkins seems fine. I can help commit this tomorrow if there are no 
objections. cc: [~leftnoteasy]

> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Attachments: YARN-7328_trunk.001.patch, YARN-7328_trunk.002.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 
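
For illustration, the overlap looks like this in yarn-site.xml (values are 
hypothetical):

{code:xml}
<!-- Established properties for node resources -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>

<!-- Resource-type properties that currently slip past the check and
     silently override the values above; per this issue they should be
     rejected, as yarn.nodemanager.resource-types.memory already is -->
<property>
  <name>yarn.nodemanager.resource-types.memory-mb</name>
  <value>16384</value>
</property>
<property>
  <name>yarn.nodemanager.resource-types.vcores</name>
  <value>16</value>
</property>
{code}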



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365552#comment-16365552
 ] 

genericqa commented on YARN-7328:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 8 new + 13 unchanged - 0 fixed = 21 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910739/YARN-7328_trunk.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux e83a676a08c2 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread lovekesh bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365480#comment-16365480
 ] 

lovekesh bansal commented on YARN-7328:
---

[~leftnoteasy] Uploading the second patch with the test cases. Please review. 
Also, I'm not sure why, but the auto build was not triggered after the first 
upload; am I missing something?

> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Attachments: YARN-7328_trunk.001.patch, YARN-7328_trunk.002.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread lovekesh bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lovekesh bansal updated YARN-7328:
--
Attachment: YARN-7328_trunk.002.patch

> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Attachments: YARN-7328_trunk.001.patch, YARN-7328_trunk.002.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7901) Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy

2018-02-15 Thread lovekesh bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365476#comment-16365476
 ] 

lovekesh bansal commented on YARN-7901:
---

[~leftnoteasy] Can you please review? 
Thanks. 

> Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7901
> URL: https://issues.apache.org/jira/browse/YARN-7901
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: YARN-7901_trunk.001.patch
>
>
> In the splitIndividualAny method, while creating the ResourceRequest, we are 
> not setting the profile capability: 
> ResourceRequest.newInstance(originalResourceRequest.getPriority(),
>  originalResourceRequest.getResourceName(),
>  originalResourceRequest.getCapability(),
>  originalResourceRequest.getNumContainers(),
>  originalResourceRequest.getRelaxLocality(),
>  originalResourceRequest.getNodeLabelExpression(),
>  originalResourceRequest.getExecutionTypeRequest());
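
A sketch of the kind of fix this suggests (illustrative only; it assumes the 
profile is carried on ResourceRequest via getProfileCapability and 
setProfileCapability, as in the resource-profiles API of this era):

{code:java}
// Sketch: after building the split request, carry the profile
// capability over from the original request as well.
ResourceRequest split = ResourceRequest.newInstance(
    originalResourceRequest.getPriority(),
    originalResourceRequest.getResourceName(),
    originalResourceRequest.getCapability(),
    originalResourceRequest.getNumContainers(),
    originalResourceRequest.getRelaxLocality(),
    originalResourceRequest.getNodeLabelExpression(),
    originalResourceRequest.getExecutionTypeRequest());
split.setProfileCapability(originalResourceRequest.getProfileCapability());
{code}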



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7901) Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy

2018-02-15 Thread lovekesh bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lovekesh bansal updated YARN-7901:
--
Attachment: (was: YARN-7328_trunk.002.patch)

> Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7901
> URL: https://issues.apache.org/jira/browse/YARN-7901
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: YARN-7901_trunk.001.patch
>
>
> In the splitIndividualAny method, while creating the ResourceRequest, we are 
> not setting the profile capability: 
> ResourceRequest.newInstance(originalResourceRequest.getPriority(),
>  originalResourceRequest.getResourceName(),
>  originalResourceRequest.getCapability(),
>  originalResourceRequest.getNumContainers(),
>  originalResourceRequest.getRelaxLocality(),
>  originalResourceRequest.getNodeLabelExpression(),
>  originalResourceRequest.getExecutionTypeRequest());



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7901) Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy

2018-02-15 Thread lovekesh bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lovekesh bansal updated YARN-7901:
--
Attachment: YARN-7328_trunk.002.patch

> Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7901
> URL: https://issues.apache.org/jira/browse/YARN-7901
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Minor
> Fix For: 3.0.1
>
> Attachments: YARN-7901_trunk.001.patch
>
>
> In the splitIndividualAny method, while creating the ResourceRequest, we are 
> not setting the profile capability: 
> ResourceRequest.newInstance(originalResourceRequest.getPriority(),
>  originalResourceRequest.getResourceName(),
>  originalResourceRequest.getCapability(),
>  originalResourceRequest.getNumContainers(),
>  originalResourceRequest.getRelaxLocality(),
>  originalResourceRequest.getNodeLabelExpression(),
>  originalResourceRequest.getExecutionTypeRequest());



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365459#comment-16365459
 ] 

genericqa commented on YARN-7292:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 319 unchanged - 17 fixed = 321 total (was 336) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
27s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 21s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
52s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}209m 

[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365453#comment-16365453
 ] 

genericqa commented on YARN-7292:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 319 unchanged - 17 fixed = 321 total (was 336) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
51s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 54s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
28s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365444#comment-16365444
 ] 

genericqa commented on YARN-7920:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 15 new + 408 unchanged - 0 fixed = 423 total (was 408) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 53s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. 

[jira] [Comment Edited] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread lovekesh bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365387#comment-16365387
 ] 

lovekesh bansal edited comment on YARN-7328 at 2/15/18 11:13 AM:
-

Thanks [~leftnoteasy] Ya I'll add test cases shortly.


was (Author: lovekesh.bansal):
[~leftnoteasy] Ya I'll add test cases shortly.

> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Attachments: YARN-7328_trunk.001.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread lovekesh bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365387#comment-16365387
 ] 

lovekesh bansal commented on YARN-7328:
---

[~leftnoteasy] Ya I'll add test cases shortly.

> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Attachments: YARN-7328_trunk.001.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7933) [atsv2 read acls] Add TimelineWriter#writeDomain

2018-02-15 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-7933:
---

Assignee: Rohith Sharma K S

> [atsv2 read acls] Add TimelineWriter#writeDomain 
> -
>
> Key: YARN-7933
> URL: https://issues.apache.org/jira/browse/YARN-7933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>Priority: Major
>
>  
> Add an API TimelineWriter#writeDomain for writing the domain info 
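
A sketch of what such an API could look like (illustrative only; the names and 
types below are hypothetical, not the committed signature):

{code:java}
import java.io.IOException;

// Hypothetical shape of the proposed writer-side API: persist a domain
// (owner plus reader/writer ACL strings) so that the reader can later
// enforce access checks on entities belonging to that domain.
public interface TimelineDomainWriter {
  void writeDomain(String clusterId, String domainId, String owner,
      String readers, String writers) throws IOException;
}
{code}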



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365342#comment-16365342
 ] 

Wangda Tan commented on YARN-7328:
--

[~lovekesh.bansal], in general the patch looks good, could you please add test 
cases?

Thanks,

 

> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Attachments: YARN-7328_trunk.001.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7937) Fix http method name in Cluster Application Timeout Update API example request

2018-02-15 Thread Charan Hebri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charan Hebri reassigned YARN-7937:
--

Assignee: Charan Hebri

> Fix http method name in Cluster Application Timeout Update API example request
> --
>
> Key: YARN-7937
> URL: https://issues.apache.org/jira/browse/YARN-7937
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs, documentation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Charan Hebri
>Assignee: Charan Hebri
>Priority: Minor
> Attachments: YARN-7937.001.patch
>
>
> In the section Cluster Application Timeout Update API, 
> https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_Timeout_Update_API
> the example requests for both XML and JSON formats show "GET" as the HTTP 
> method. This should actually be a PUT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-15 Thread Bilwa S T (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365333#comment-16365333
 ] 

Bilwa S T commented on YARN-7905:
-

Thanks [~bibinchundatt] for reporting the issue. I have attached a patch. 
Please review.

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory permissions when the umask is 027 at node manager 
> start-up. For a localized path such as
> /filecache/0/200
> the directory permission of /filecache/0 ends up as 750, which causes 
> application failures.
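
A sketch of the kind of fix implied here, modeled on YARN-6708 (class and 
method names are illustrative):

{code:java}
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public final class PublicCacheDirs {
  // Sketch: create a public-cache parent directory with an explicit 0755
  // and then set the permission again, because mkdir applies the process
  // umask (an umask of 027 would otherwise leave the directory at 750).
  static void createDir(FileContext lfs, Path dir) throws Exception {
    FsPermission perm = new FsPermission((short) 0755);
    lfs.mkdir(dir, perm, false);
    lfs.setPermission(dir, perm);
  }
}
{code}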



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-15 Thread Bilwa S T (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-7905:

Attachment: YARN-7905-001.patch

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory permissions when the umask is 027 at node manager 
> start-up. For a localized path such as
> /filecache/0/200
> the directory permission of /filecache/0 ends up as 750, which causes 
> application failures.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7937) Fix http method name in Cluster Application Timeout Update API example request

2018-02-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365300#comment-16365300
 ] 

genericqa commented on YARN-7937:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7937 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910707/YARN-7937.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 7a54d23f7d50 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8f66aff |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 301 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19704/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix http method name in Cluster Application Timeout Update API example request
> --
>
> Key: YARN-7937
> URL: https://issues.apache.org/jira/browse/YARN-7937
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs, documentation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Charan Hebri
>Priority: Minor
> Attachments: YARN-7937.001.patch
>
>
> In the section Cluster Application Timeout Update API, 
> https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_Timeout_Update_API
> the example requests for both XML and JSON formats show "GET" as the HTTP 
> method. This should actually be a PUT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7937) Fix http method name in Cluster Application Timeout Update API example request

2018-02-15 Thread Charan Hebri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charan Hebri updated YARN-7937:
---
Attachment: YARN-7937.001.patch

> Fix http method name in Cluster Application Timeout Update API example request
> --
>
> Key: YARN-7937
> URL: https://issues.apache.org/jira/browse/YARN-7937
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs, documentation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Charan Hebri
>Priority: Minor
> Attachments: YARN-7937.001.patch
>
>
> In the section Cluster Application Timeout Update API, 
> https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_Timeout_Update_API
> the example requests for both XML and JSON formats show "GET" as the HTTP 
> method. This should actually be a PUT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7937) Fix http method name in Cluster Application Timeout Update API example request

2018-02-15 Thread Charan Hebri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charan Hebri updated YARN-7937:
---
Description: 
In the section Cluster Application Timeout Update API, 
https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_Timeout_Update_API
the example requests for both XML and JSON formats show "GET" as the HTTP 
method. This should actually be a PUT.

  was:In the section Cluster Application Timeout Update API, the example requests 
for both XML and JSON formats show "GET" as the HTTP method. This should 
actually be a PUT.


> Fix http method name in Cluster Application Timeout Update API example request
> --
>
> Key: YARN-7937
> URL: https://issues.apache.org/jira/browse/YARN-7937
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs, documentation
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Charan Hebri
>Priority: Minor
>
> In the section Cluster Application Timeout Update API, 
> https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_Timeout_Update_API
> the example requests for both XML and JSON formats show "GET" as the HTTP 
> method. This should actually be a PUT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7937) Fix http method name in Cluster Application Timeout Update API example request

2018-02-15 Thread Charan Hebri (JIRA)
Charan Hebri created YARN-7937:
--

 Summary: Fix http method name in Cluster Application Timeout 
Update API example request
 Key: YARN-7937
 URL: https://issues.apache.org/jira/browse/YARN-7937
 Project: Hadoop YARN
  Issue Type: Bug
  Components: docs, documentation
Affects Versions: 3.0.0, 2.9.0
Reporter: Charan Hebri


In the section Cluster Application Timeout Update API, the example requests for 
both XML and JSON formats show "GET" as the HTTP method. This should actually 
be a PUT.
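
For illustration, the corrected JSON example would read along these lines 
(endpoint as documented in ResourceManagerRest.html; the body values are 
hypothetical):

{code}
PUT http://rm-http-address:port/ws/v1/cluster/apps/{appid}/timeout
Content-Type: application/json
Accept: application/json

{
  "timeout": {
    "type": "LIFETIME",
    "expiryTime": "2018-03-01T00:00:00.000+0530"
  }
}
{code}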



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7292) Revisit Resource Profile Behavior

2018-02-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7292:
-
Attachment: YARN-7292.007.patch

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?
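
To make options 3) and 4) concrete, below is a minimal client-side sketch of 
the "translate and override" approach. The profile map and the resolve() 
helper are hypothetical illustrations; only Resource, ResourceRequest and 
Priority mirror the real YARN records:

{code:java}
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ProfileTranslationSketch {

  // Translate a profile name (e.g. "G2.8X") plus optional overrides into a
  // concrete Resource, so the RM only ever sees final resource fields.
  static Resource resolve(Map<String, Resource> profiles, String name,
      Resource override) {
    Resource base = profiles.get(name);
    if (base == null) {
      throw new IllegalArgumentException("Unknown profile: " + name);
    }
    long memory = (override != null && override.getMemorySize() > 0)
        ? override.getMemorySize() : base.getMemorySize();
    int vcores = (override != null && override.getVirtualCores() > 0)
        ? override.getVirtualCores() : base.getVirtualCores();
    return Resource.newInstance(memory, vcores);
  }

  static ResourceRequest toRequest(Map<String, Resource> profiles,
      String profileName, Resource override) {
    // No profile name travels inside the ResourceRequest; only the
    // translated capability does.
    return ResourceRequest.newInstance(Priority.newInstance(0),
        ResourceRequest.ANY, resolve(profiles, profileName, override), 1);
  }
}
{code}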



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365250#comment-16365250
 ] 

Wangda Tan commented on YARN-7292:
--

Rebased to the latest trunk (ver.007).

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7920:
-
Attachment: YARN-7920.006.patch

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> A better approach might be to delete scheduling-request.allowed from CS and 
> update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; the other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me) and 
> b. scheduler. 
> - Add a new PlacementProcessor that just passes each SchedulingRequest to the 
> scheduler without any modifications.
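
For illustration, the proposed yarn-site.xml entry might look like the sketch 
below; the property name and value set are only the proposal in this JIRA, 
not a finalized key:

{code:xml}
<!-- Hypothetical entry per the proposal above; the exact property name
     and prefix were still under discussion on this JIRA. -->
<property>
  <name>yarn.resourcemanager.placement-constraints.handler</name>
  <!-- Proposed values: none (default), external-processor, scheduler. -->
  <value>scheduler</value>
</property>
{code}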



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365246#comment-16365246
 ] 

Wangda Tan commented on YARN-7920:
--

Attached ver.6 patch to fix the ASF warning. The failed test case is not 
related; it is tracked by YARN-7918.

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7920.001.patch, YARN-7920.002.patch, 
> YARN-7920.003.patch, YARN-7920.004.patch, YARN-7920.005.patch, 
> YARN-7920.006.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> A better approach might be to delete scheduling-request.allowed from CS and 
> update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; the other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me) and 
> b. scheduler. 
> - Add a new PlacementProcessor that just passes each SchedulingRequest to the 
> scheduler without any modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org