[jira] [Commented] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-10-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189295#comment-16189295
 ] 

Yufei Gu commented on YARN-6625:


Hi [~wangda], I didn't see any change in patch v5. Did you upload the right 
patch?

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6625.001.patch, YARN-6625.002.patch, 
> YARN-6625.003.patch, YARN-6625.004.patch, YARN-6625.branch-2.005.patch
>
>
> The tracking URL given at the command line should work whether the cluster 
> is secured or not. The tracking URLs look like http://node-2.abc.com:47014, 
> and the AM web server is supposed to redirect them to an RM address such as 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do so because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.






[jira] [Updated] (YARN-7209) [YARN-3368] CSS changes in new YARN-UI

2017-10-02 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-7209:
---
Description: 
# Fix missing CSS background for breadcrumbs.
# Reduce panel border radius by a few pixels.

  was: After the latest style changes in YARN-UI, the CSS background for 
breadcrumbs, which was there initially, is missing.


> [YARN-3368] CSS changes in new YARN-UI
> --
>
> Key: YARN-7209
> URL: https://issues.apache.org/jira/browse/YARN-7209
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> # Fix missing CSS background for breadcrumbs.
> # Reduce panel border radius by a few pixels.






[jira] [Updated] (YARN-7209) [YARN-3368] CSS changes in new YARN-UI

2017-10-02 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-7209:
---
Summary: [YARN-3368] CSS changes in new YARN-UI  (was: [YARN-3368] CSS 
background missing for breadcrumbs in new YARN-UI)

> [YARN-3368] CSS changes in new YARN-UI
> --
>
> Key: YARN-7209
> URL: https://issues.apache.org/jira/browse/YARN-7209
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> After the latest style changes in YARN-UI, the CSS background for 
> breadcrumbs, which was there initially, is missing.






[jira] [Updated] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-10-02 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-7258:

Attachment: YARN-7258.001.patch

> Add Node and Rack Hints to Opportunistic Scheduler
> --
>
> Key: YARN-7258
> URL: https://issues.apache.org/jira/browse/YARN-7258
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-7258.001.patch
>
>
> Currently, the Opportunistic Scheduler ignores the node and rack information 
> and allocates strictly on the least-loaded node (based on queue length) at 
> the time it receives the request. This JIRA is to track the changes needed 
> to allow the OpportunisticContainerAllocator to take the node/rack name as 
> hints (a sketch of the proposed selection order follows the quoted 
> description).
> The flow would be:
> # If the requested node is found in the top K least-loaded nodes, allocate 
> on that node.
> # Else, allocate on the least-loaded node on the same rack from the top K 
> least-loaded nodes.
> # Else, allocate on the least-loaded node.
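A minimal sketch of that three-step fallback (a hypothetical helper, not the 
actual OpportunisticContainerAllocator code; {{topK}} is assumed to be the 
least-loaded nodes sorted by ascending queue length):

{code:java}
import java.util.List;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.NodeId;

// Hypothetical helper illustrating the selection order described above.
static NodeId selectNode(List<NodeId> topK, NodeId requestedNode,
    String requestedRack, Map<NodeId, String> rackOf) {
  // 1. The requested node is among the top K least-loaded nodes: honor it.
  if (requestedNode != null && topK.contains(requestedNode)) {
    return requestedNode;
  }
  // 2. Otherwise, take the least-loaded node on the requested rack (topK is
  //    already sorted by ascending queue length, so the first match wins).
  if (requestedRack != null) {
    for (NodeId node : topK) {
      if (requestedRack.equals(rackOf.get(node))) {
        return node;
      }
    }
  }
  // 3. Otherwise, fall back to the globally least-loaded node.
  return topK.get(0);
}
{code}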






[jira] [Commented] (YARN-7283) Nodemanager can't start

2017-10-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189247#comment-16189247
 ] 

Jonathan Hung commented on YARN-7283:
-

Hi [~Sorey], have you configured 
{{yarn.nodemanager.resource.memory-mb}}/{{yarn.nodemanager.resource.cpu-vcores}}
 on the NM or 
{{yarn.scheduler.minimum-allocation-mb}}/{{yarn.scheduler.minimum-allocation-vcores}}
 on the RM? {{yarn.nodemanager.resource.memory-mb}} must be at least 
{{yarn.scheduler.minimum-allocation-mb}} (and likewise for vcores).
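For reference, a minimal sketch of that constraint via the 
{{YarnConfiguration}} constants backing these properties (the values are 
illustrative, not recommendations):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustrative values only: the NM's advertised resources must meet the
// scheduler minimums, otherwise the RM rejects the NM's registration with
// the "doesn't satisfy minimum allocations" SHUTDOWN seen in the log below.
Configuration conf = new YarnConfiguration();
// NM side: yarn.nodemanager.resource.memory-mb / .cpu-vcores
conf.setInt(YarnConfiguration.NM_PMEM_MB, 8192);
conf.setInt(YarnConfiguration.NM_VCORES, 8);
// RM side: yarn.scheduler.minimum-allocation-mb / -vcores
conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, 1024);
conf.setInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES, 1);
{code}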

> Nodemanager can't start
> ---
>
> Key: YARN-7283
> URL: https://issues.apache.org/jira/browse/YARN-7283
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Nguyen Xuan Tinh
>
> I installed Hadoop in pseudo-distributed mode following 
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
>  . Then when I ran start-all.sh I got:
> 26177 SecondaryNameNode
> 26355 ResourceManager
> 12211 Jps
> 25814 NameNode
> 25976 DataNode
> So I looked at the NodeManager log:
> {code:java}
> Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved 
> SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, 
> Message from ResourceManager: NodeManager from  ubuntu doesn't satisfy 
> minimum allocations, Sending SHUTDOWN signal to the NodeManager.
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:278)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
>   ... 6 more
> 2017-10-03 08:47:49,883 INFO org.apache.hadoop.service.AbstractService: 
> Service NodeManager failed in state STARTED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN 
> signal from Resourcemanager ,Registration of NodeManager failed, Message from 
> ResourceManager: NodeManager from  ubuntu doesn't satisfy minimum 
> allocations, Sending SHUTDOWN signal to the NodeManager.
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN 
> signal from Resourcemanager ,Registration of NodeManager failed, Message from 
> ResourceManager: NodeManager from  ubuntu doesn't satisfy minimum 
> allocations, Sending SHUTDOWN signal to the NodeManager.
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved 
> SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, 
> Message from ResourceManager: NodeManager from  ubuntu doesn't satisfy 
> minimum allocations, Sending SHUTDOWN signal to the NodeManager.
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:278)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
>   ... 6 more
> {code}
> How can I resolve this? Please help me. Thanks for reading.






[jira] [Created] (YARN-7283) Nodemanager can't start

2017-10-02 Thread Nguyen Xuan Tinh (JIRA)
Nguyen Xuan Tinh created YARN-7283:
--

 Summary: Nodemanager can't start
 Key: YARN-7283
 URL: https://issues.apache.org/jira/browse/YARN-7283
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.4
Reporter: Nguyen Xuan Tinh


I installed Hadoop in pseudo-distributed mode following 
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
 . Then when I ran start-all.sh I got:
26177 SecondaryNameNode
26355 ResourceManager
12211 Jps
25814 NameNode
25976 DataNode
So I looked at the NodeManager log:
{code:java}
Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved 
SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, 
Message from ResourceManager: NodeManager from  ubuntu doesn't satisfy minimum 
allocations, Sending SHUTDOWN signal to the NodeManager.
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:278)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
... 6 more
2017-10-03 08:47:49,883 INFO org.apache.hadoop.service.AbstractService: Service 
NodeManager failed in state STARTED; cause: 
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN 
signal from Resourcemanager ,Registration of NodeManager failed, Message from 
ResourceManager: NodeManager from  ubuntu doesn't satisfy minimum allocations, 
Sending SHUTDOWN signal to the NodeManager.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN 
signal from Resourcemanager ,Registration of NodeManager failed, Message from 
ResourceManager: NodeManager from  ubuntu doesn't satisfy minimum allocations, 
Sending SHUTDOWN signal to the NodeManager.
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:203)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:272)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:496)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:543)
Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved 
SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, 
Message from ResourceManager: NodeManager from  ubuntu doesn't satisfy minimum 
allocations, Sending SHUTDOWN signal to the NodeManager.
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:278)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:197)
... 6 more

{code}
How can I resolve this? Please help me. Thanks for reading.






[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189200#comment-16189200
 ] 

Hudson commented on YARN-7226:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13009 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13009/])
YARN-7226. Whitelisted variables do not support delayed variable (sidharta: rev 
7eb846869cdb63743f1c9eca2ba91d57ad08)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DefaultLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DelegatingLinuxContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch, 
> YARN-7226.006.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.
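To make the described difference concrete, a small sketch (illustrative only; 
the emission logic here is an assumption, not the committed patch) contrasting 
the broken literal handling with the delayed, shell-expanded handling:

{code:java}
// Sketch: two ways the NM could materialize the user's value after the
// {{PWD}} -> $PWD rewrite. Variable names mirror the example above.
String name = "HADOOP_COMMON_HOME";
String value = "$PWD/my-private-hadoop/";
// Broken (whitelisted vars today): the value reaches the container
// verbatim, so it sees the literal characters "$PWD/my-private-hadoop/".
String literalEnvEntry = name + "=" + value;
// Desired (as for non-whitelisted vars): emit a shell export into
// launch_container.sh so the container's shell expands $PWD at launch:
//   export HADOOP_COMMON_HOME="$PWD/my-private-hadoop/"
String shellExport = "export " + name + "=\"" + value + "\"";
{code}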






[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189195#comment-16189195
 ] 

Hadoop QA commented on YARN-2162:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 40m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 347 unchanged - 6 fixed = 347 total (was 353) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 12s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | 

[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-10-02 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189191#comment-16189191
 ] 

Sidharta Seethana commented on YARN-7226:
-

[~jlowe]

+1 to the latest patch - committed to trunk and branch-3.0. Could you please 
add a branch-2 version? Thanks.

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch, 
> YARN-7226.006.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.






[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189163#comment-16189163
 ] 

Hadoop QA commented on YARN-7269:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:
 The patch generated 6 new + 23 unchanged - 0 fixed = 29 total (was 23) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890070/YARN-7269.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 50869df4cb7e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 015abcd |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17739/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17739/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
|
| Console output | 

[jira] [Commented] (YARN-7278) LinuxContainer in docker mode will be failed when nodemanager restart, because timeout for docker is too slow.

2017-10-02 Thread zhengchenyu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189158#comment-16189158
 ] 

zhengchenyu commented on YARN-7278:
---

[~ebadger] 
Yes, you are right. I changed the affected versions to 2.8.0. 
In fact, our company's Hadoop version is 2.7.1; we added the Docker-mode 
feature of LinuxContainer to our version.

> LinuxContainer in docker mode will be failed when nodemanager restart, 
> because timeout for docker is too slow.
> --
>
> Key: YARN-7278
> URL: https://issues.apache.org/jira/browse/YARN-7278
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0
> Environment: CentOS
>Reporter: zhengchenyu
> Fix For: 2.9.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, NodeManager recovery is turned on, and we use LinuxContainer 
> in Docker mode.
> Containers may fail when the NodeManager restarts; the exception is below:
> {code}
> [2017-09-29T15:47:14.433+08:00] [INFO] 
> containermanager.monitor.ContainersMonitorImpl.run(ContainersMonitorImpl.java 
> 472) [Container Monitor] : Memory usage of ProcessTree 120523 for 
> container-id container_1506600355508_0023_01_04: -1B of 10 GB physical 
> memory used; -1B of 31 GB virtual memory used
> [2017-09-29T15:47:15.219+08:00] [ERROR] 
> containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java
>  93) [ContainersLauncher #1] : Unable to recover container 
> container_1506600355508_0023_01_04
> java.io.IOException: Timeout while waiting for exit code from 
> container_1506600355508_0023_01_04
> [2017-09-29T15:47:15.220+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1142) 
> [AsyncDispatcher event handler] : Container 
> container_1506600355508_0023_01_04 transitioned from RUNNING to 
> EXITED_WITH_FAILURE
> [2017-09-29T15:47:15.221+08:00] [INFO] 
> containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java
>  440) [AsyncDispatcher event handler] : Cleaning up container 
> container_1506600355508_0023_01_04
> {code}
> I guess the process is done, but 2 seconds later (the variable is msecLeft), 
> the *.pid.exitcode file wasn't created. Then I changed the variable to 2ms, 
> and the container succeeded when the NodeManager restarted.
> So I think it is too short for the Docker container to complete the work.
> In Docker mode of LinuxContainer, the NM monitors the real task, which is 
> launched by the "docker run" command. Then the "docker wait" command waits 
> for the exit code, and "docker rm" deletes the Docker container. Lastly, 
> container-executor writes the exit code. So if some Docker command is slow 
> enough, the NM won't be able to monitor the container. In fact, docker rm is 
> always slow.
> I think the exit code of docker rm doesn't matter to the real task, so I 
> think we could move the write of "*.pid.exitcode" before the docker rm 
> command, or monitor the docker wait process rather than the real task (a 
> rough sketch of that ordering follows).
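A rough sketch of the proposed ordering (illustrative Java only; the real 
logic lives in container-executor, and the container id and paths below are 
made up):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DockerExitOrderSketch {
  public static void main(String[] args) throws Exception {
    String cid = "container_x";  // made-up container id
    // "docker wait" blocks until the container exits, then prints its code.
    Process wait = new ProcessBuilder("docker", "wait", cid).start();
    String exitCode;
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(wait.getInputStream(), StandardCharsets.UTF_8))) {
      String line = r.readLine();
      exitCode = (line == null) ? "-1" : line.trim();
    }
    // Proposed change: persist <pid>.exitcode *before* the slow "docker rm",
    // so a recovering NM finds it within its timeout (msecLeft).
    Files.write(Paths.get("/tmp/" + cid + ".pid.exitcode"),
        exitCode.getBytes(StandardCharsets.UTF_8));
    // "docker rm" may be slow; its exit status does not affect the task.
    new ProcessBuilder("docker", "rm", cid).start().waitFor();
  }
}
{code}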






[jira] [Commented] (YARN-2037) Add work preserving restart support for Unmanaged AMs

2017-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189155#comment-16189155
 ] 

Hudson commented on YARN-2037:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13008 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13008/])
YARN-2037. Add work preserving restart support for Unmanaged AMs. (subru: rev 
d4d2fd1acd2fdddf04f45e67897804eea30d79a1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingUnmanagedAM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationMasterProtocol.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java


> Add work preserving restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch, YARN-2037.v2.patch, 
> YARN-2037.v3.patch, YARN-2037.v4.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 






[jira] [Updated] (YARN-7278) LinuxContainer in docker mode will be failed when nodemanager restart, because timeout for docker is too slow.

2017-10-02 Thread zhengchenyu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-7278:
--
Affects Version/s: (was: 2.7.1)
   2.8.0

> LinuxContainer in docker mode will be failed when nodemanager restart, 
> because timeout for docker is too slow.
> --
>
> Key: YARN-7278
> URL: https://issues.apache.org/jira/browse/YARN-7278
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0
> Environment: CentOS
>Reporter: zhengchenyu
> Fix For: 2.9.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, NodeManager recovery is turned on, and we use LinuxContainer 
> in Docker mode.
> Containers may fail when the NodeManager restarts; the exception is below:
> {code}
> [2017-09-29T15:47:14.433+08:00] [INFO] 
> containermanager.monitor.ContainersMonitorImpl.run(ContainersMonitorImpl.java 
> 472) [Container Monitor] : Memory usage of ProcessTree 120523 for 
> container-id container_1506600355508_0023_01_04: -1B of 10 GB physical 
> memory used; -1B of 31 GB virtual memory used
> [2017-09-29T15:47:15.219+08:00] [ERROR] 
> containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java
>  93) [ContainersLauncher #1] : Unable to recover container 
> container_1506600355508_0023_01_04
> java.io.IOException: Timeout while waiting for exit code from 
> container_1506600355508_0023_01_04
> [2017-09-29T15:47:15.220+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1142) 
> [AsyncDispatcher event handler] : Container 
> container_1506600355508_0023_01_04 transitioned from RUNNING to 
> EXITED_WITH_FAILURE
> [2017-09-29T15:47:15.221+08:00] [INFO] 
> containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java
>  440) [AsyncDispatcher event handler] : Cleaning up container 
> container_1506600355508_0023_01_04
> {code}
> I guess the process is done, but 2 seconds later (the variable is msecLeft), 
> the *.pid.exitcode file wasn't created. Then I changed the variable to 2ms, 
> and the container succeeded when the NodeManager restarted.
> So I think it is too short for the Docker container to complete the work.
> In Docker mode of LinuxContainer, the NM monitors the real task, which is 
> launched by the "docker run" command. Then the "docker wait" command waits 
> for the exit code, and "docker rm" deletes the Docker container. Lastly, 
> container-executor writes the exit code. So if some Docker command is slow 
> enough, the NM won't be able to monitor the container. In fact, docker rm is 
> always slow.
> I think the exit code of docker rm doesn't matter to the real task, so I 
> think we could move the write of "*.pid.exitcode" before the docker rm 
> command, or monitor the docker wait process rather than the real task.






[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-10-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189149#comment-16189149
 ] 

Wangda Tan commented on YARN-2497:
--

Thanks [~templedf] for updating the patch. It looks good to me in general; a 
few minor suggestions:

Remove the unused method from Resources:
{code}
  public static boolean isAnyMajorResourceNonZero(ResourceCalculator rc,
  Resource resource) {
return rc.isAnyMajorResourceNonZero(resource);
  }
{code}

Naming:
- getAppNodeLabelExpression -> getAppNodeLabelExpressionForDisplay
- getAmNodeLabelExpression -> getAmNodeLabelExpressionForDisplay

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2497.007.patch, YARN-2497.008.patch, 
> YARN-2497.009.patch, YARN-2499.WIP01.patch
>
>







[jira] [Commented] (YARN-2037) Add work preserving restart support for Unmanaged AMs

2017-10-02 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189147#comment-16189147
 ] 

Subru Krishnan commented on YARN-2037:
--

Thanks [~botong] for addressing the comments. I committed it to trunk, but 
compilation of {{TestWorkPreservingUnmanagedAM}} fails in branch-2. Can you 
kindly provide a patch for branch-2?

> Add work preserving restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch, YARN-2037.v2.patch, 
> YARN-2037.v3.patch, YARN-2037.v4.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 






[jira] [Updated] (YARN-7202) End-to-end UT for api-server

2017-10-02 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.003.patch

Revised the negative test to work without PowerMock. Unfortunately, given the 
way ServiceClient is initialized, an extra method is required to override 
ServiceClient for negative tests.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch
>
>







[jira] [Commented] (YARN-5140) NM usercache fill up with burst of jobs leading to rapid temp IO FS fill up and potentially NM outage

2017-10-02 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189144#comment-16189144
 ] 

Chen He commented on YARN-5140:
---

Hi [~okalinin], this is an interesting issue. According to the description, if 
I understand correctly, could we avoid multiple NM crashes by reducing 
"yarn.nodemanager.localizer.cache.cleanup.interval-ms" and increasing 
"yarn.nodemanager.localizer.cache.target-size-mb"?

> NM usercache fill up with burst of jobs leading to rapid temp IO FS fill up 
> and potentially NM outage
> -
>
> Key: YARN-5140
> URL: https://issues.apache.org/jira/browse/YARN-5140
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Linux RHEL 6.7, Hadoop 2.7.0
>Reporter: Oleksandr Kalinin
>Priority: Minor
>
> A burst or rapid rate of submitted jobs with substantial NM usercache 
> resource localization footprint may lead to rapid fill up of the NM local 
> temporary IO FS (/tmp by default) with negative consequences in terms of 
> stability.
> The core issue seems to be the fact that NM continues to localize the 
> resources beyond the maximum local cache size 
> (yarn.nodemanager.localizer.cache.target-size-mb , default 10G). Since 
> maximum local cache size is effectively not taken into account when 
> localizing new resources (note that default cache cleanup interval is 10 min 
> controlled by yarn.nodemanager.localizer.cache.cleanup.interval-ms), this 
> basically leads to a sort of self-destruction scenario: once /tmp FS 
> utilization reaches the 90% threshold, the NM automatically deregisters from 
> the RM, effectively leading to an NM outage.
> This issue may take many NMs offline at the same time and thus is quite 
> critical in terms of platform stability.






[jira] [Updated] (YARN-2037) Add work preserving restart support for Unmanaged AMs

2017-10-02 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2037:
-
Summary: Add work preserving restart support for Unmanaged AMs  (was: Add 
restart support for Unmanaged AMs)

> Add work preserving restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch, YARN-2037.v2.patch, 
> YARN-2037.v3.patch, YARN-2037.v4.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 






[jira] [Updated] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6625:
-
Attachment: YARN-6625.branch-2.005.patch

Attached ver.005 patch against branch-2. In the new patch I changed the unit 
test to use Mockito to stub {{isValidUrl}}. The major reason is that I kept 
running into issues using {{org.mortbay.jetty}} to implement the same logic 
as {{org.eclipse.jetty}}. [~yufeigu], I'm not sure if you have any experience 
with {{org.mortbay.jetty}}. Could you help review this patch? Thanks.
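For context, a minimal sketch of that stubbing approach (the class and method 
names are taken from this comment and are assumptions about the patch, not 
verified against it):

{code:java}
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter;

// Spy the real filter and stub out the network-dependent URL validation so
// the branch-2 unit test does not need a live jetty endpoint behind it.
AmIpFilter filter = spy(new AmIpFilter());
doReturn(true).when(filter).isValidUrl(anyString());
{code}

(If {{isValidUrl}} is not public, the test would need to live in the same 
package for this to compile.)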

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6625.001.patch, YARN-6625.002.patch, 
> YARN-6625.003.patch, YARN-6625.004.patch, YARN-6625.branch-2.005.patch
>
>
> The tracking URL given at the command line should work whether the cluster 
> is secured or not. The tracking URLs look like http://node-2.abc.com:47014, 
> and the AM web server is supposed to redirect them to an RM address such as 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do so because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.






[jira] [Reopened] (YARN-6625) yarn application -list returns a tracking URL for AM that doesn't work in secured and HA environment

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened YARN-6625:
--

> yarn application -list returns a tracking URL for AM that doesn't work in 
> secured and HA environment
> 
>
> Key: YARN-6625
> URL: https://issues.apache.org/jira/browse/YARN-6625
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6625.001.patch, YARN-6625.002.patch, 
> YARN-6625.003.patch, YARN-6625.004.patch
>
>
> The tracking URL given at the command line should work whether the cluster 
> is secured or not. The tracking URLs look like http://node-2.abc.com:47014, 
> and the AM web server is supposed to redirect them to an RM address such as 
> http://node-1.abc.com:8088/proxy/application_1494544954891_0002/, but it 
> fails to do so because the connection is rejected when the AM talks to the 
> RM admin service to get the HA status.






[jira] [Commented] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-10-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189119#comment-16189119
 ] 

Jonathan Hung commented on YARN-7241:
-

The 002 branch-2 patch fixes the compilation errors.

> Merge YARN-5734 to trunk/branch-2
> -
>
> Key: YARN-7241
> URL: https://issues.apache.org/jira/browse/YARN-7241
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7241.001.patch, YARN-7241.002.patch, 
> YARN-7241.003.patch, YARN-7241.004.patch, YARN-7241.005.patch, 
> YARN-7241.006.patch, YARN-7241-branch-2.001.patch, 
> YARN-7241-branch-2.002.patch
>
>
> Ticket for jenkins pre-commit for full diff.






[jira] [Updated] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-10-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7241:

Attachment: YARN-7241-branch-2.002.patch

> Merge YARN-5734 to trunk/branch-2
> -
>
> Key: YARN-7241
> URL: https://issues.apache.org/jira/browse/YARN-7241
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7241.001.patch, YARN-7241.002.patch, 
> YARN-7241.003.patch, YARN-7241.004.patch, YARN-7241.005.patch, 
> YARN-7241.006.patch, YARN-7241-branch-2.001.patch, 
> YARN-7241-branch-2.002.patch
>
>
> Ticket for jenkins pre-commit for full diff.






[jira] [Updated] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7269:
-
Attachment: YARN-7269.003.patch

Thanks [~jianhe] for reviewing the patch; uploaded the ver.3 patch.

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
> Attachments: YARN-7269.001.patch, YARN-7269.002.patch, 
> YARN-7269.003.patch
>
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives following exception
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}






[jira] [Commented] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189079#comment-16189079
 ] 

Hadoop QA commented on YARN-7241:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 47s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 18 new + 603 unchanged - 1 fixed = 621 total (was 604) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
7s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 43s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Commented] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-10-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189067#comment-16189067
 ] 

Yufei Gu commented on YARN-7270:


Thanks [~sunilg] for the review.
# I'm not sure either, but I think if we want to test class {{Resource}}, 
{{TestResource}} is the best place to do that. 
# Can you elaborate why non-static is better?



> Resource#getVirtualCores() does unsafe casting from long to int.
> 
>
> Key: YARN-7270
> URL: https://issues.apache.org/jira/browse/YARN-7270
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7270.001.patch
>
>
> Class {{Resource}} has three subclasses (FixedValueResource, 
> LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
> long-to-int casting nicely; the other two do not. This bug was introduced by 
> the resource type feature and causes several unit test failures. For example:
> {code}
> Error Message
> expected:<> but was:<>
> Stacktrace
> java.lang.AssertionError: expected:<> but 
> was:<>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
> {code}
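
For context on the fix shape: FixedValueResource clamps instead of letting the 
long wrap around when narrowed to int. A minimal sketch of that idea, assuming 
a hypothetical helper name (not necessarily what the patch uses):

{code}
public final class ResourceCastUtil {
  private ResourceCastUtil() {}

  /**
   * Narrows a long resource value to int, clamping to Integer.MAX_VALUE
   * instead of silently wrapping around on overflow.
   */
  public static int castToIntSafely(long value) {
    return value > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) value;
  }

  public static void main(String[] args) {
    long big = 3L * 1024 * 1024 * 1024;        // 3 GiB, larger than MAX_INT
    System.out.println((int) big);             // plain cast wraps to -1073741824
    System.out.println(castToIntSafely(big));  // clamps to 2147483647
  }
}
{code}

The unit test failure quoted above is exactly the symptom of the plain cast: 
the wrapped value no longer equals the expected resource.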



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7259) Add size-based rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7259:

Fix Version/s: 3.0.0-beta1

> Add size-based rolling policy to LogAggregationIndexedFileController
> 
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.
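
For illustration, "size-based rolling" here means the writer starts a new 
aggregated log file once the current one crosses a configured threshold. A 
minimal sketch of such a check, with an assumed config key and default (the 
real names and defaults are whatever the patch defines):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SizeBasedRollingPolicy {
  // Hypothetical key/default, for illustration only.
  static final String ROLL_SIZE_KEY = "log-aggregation.roll-size-bytes";
  static final long DEFAULT_ROLL_SIZE = 1024L * 1024 * 1024; // 1 GiB

  private final long rollSizeBytes;

  public SizeBasedRollingPolicy(Configuration conf) {
    this.rollSizeBytes = conf.getLong(ROLL_SIZE_KEY, DEFAULT_ROLL_SIZE);
  }

  /** True once the current aggregated file has grown past the limit. */
  public boolean shouldRollOver(FileSystem fs, Path currentLogFile)
      throws IOException {
    return fs.getFileStatus(currentLogFile).getLen() >= rollSizeBytes;
  }
}
{code}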



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7259) Add size-based rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189061#comment-16189061
 ] 

Xuan Gong commented on YARN-7259:
-

pushed to branch-3.0 as well

> Add size-based rolling policy to LogAggregationIndexedFileController
> 
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat

2017-10-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7194:

Fix Version/s: 3.0.0-beta1

> Log aggregation status is always Failed with the newly added log aggregation 
> IndexedFileFormat
> --
>
> Key: YARN-7194
> URL: https://issues.apache.org/jira/browse/YARN-7194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: YARN-7194.1.patch, YARN-7194.2.patch, YARN-7194.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7194) Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat

2017-10-02 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189060#comment-16189060
 ] 

Xuan Gong commented on YARN-7194:
-

pushed to branch-3.0 as well

> Log aggregation status is always Failed with the newly added log aggregation 
> IndexedFileFormat
> --
>
> Key: YARN-7194
> URL: https://issues.apache.org/jira/browse/YARN-7194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7194.1.patch, YARN-7194.2.patch, YARN-7194.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189056#comment-16189056
 ] 

Yufei Gu edited comment on YARN-2162 at 10/2/17 11:35 PM:
--

Ran SLS with 400 nodes, 200 apps, and 2k containers. Both the baseline and the 
YARN-2162 patch were run 20 times. The result is in the uploaded file 
test-400nm-200app-2k_NODE_UPDATE.timecost.svg. Please ignore the "Resource Type 
0" label; it is actually the YARN-2162 patch. There is no obvious performance 
regression; the patch version even performs a little better, within normal 
fluctuation.


was (Author: yufeigu):
Run SLS with 400 nodes, 200 apps, and 2k containers. Both base line and patch 
YARN-2162 runs 20 times. 
!test-400nm-200app-2k_NODE_UPDATE.timecost.svg|thumbnail!. Please ignore the 
"Resource Type 0", it is the yarn-2162 patch actually. Don't see obvious 
performance regression. The patch version even performs a little better due to 
normal fluctuation.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 
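
To make the benefit concrete: with a percentage-based setting, the absolute 
cap is derived from the live cluster capacity, so it tracks cluster growth 
without config changes. A sketch using YARN's Resources utility (the 
percentage plumbing into the Fair Scheduler is illustrative, not the patch 
itself):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class PercentageMaxResources {
  /** Derives an absolute cap from a percentage of current cluster capacity. */
  public static Resource fromPercentage(double percent, Resource clusterCapacity) {
    return Resources.multiply(clusterCapacity, percent / 100.0);
  }

  public static void main(String[] args) {
    Resource cluster = Resource.newInstance(409600, 100); // 400 GB, 100 vcores
    // "maxResources = 50%" -> 200 GB / 50 vcores today; recomputed
    // automatically if the cluster later doubles in size.
    System.out.println(fromPercentage(50.0, cluster));
  }
}
{code}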



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7072) Add a new log aggregation file format controller

2017-10-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7072:

Fix Version/s: 3.0.0-beta1

> Add a new log aggregation file format controller
> 
>
> Key: YARN-7072
> URL: https://issues.apache.org/jira/browse/YARN-7072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: YARN-7072-branch-2.001.patch, YARN-7072-trunk.001.patch, 
> YARN-7072.trunk.002.patch, YARN-7072-trunk.003.patch, 
> YARN-7072-trunk.004.patch, YARN-7072-trunk.005.patch, 
> YARN-7072-trunk.006.patch, YARN-7072-trunk.007.patch, 
> YARN-7072-trunk.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189056#comment-16189056
 ] 

Yufei Gu edited comment on YARN-2162 at 10/2/17 11:34 PM:
--

Run SLS with 400 nodes, 200 apps, and 2k containers. Both base line and patch 
YARN-2162 runs 20 times. 
!test-400nm-200app-2k_NODE_UPDATE.timecost.svg|thumbnail!. Please ignore the 
"Resource Type 0", it is the yarn-2162 patch actually. Don't see obvious 
performance regression. The patch version even performs a little better due to 
normal fluctuation.


was (Author: yufeigu):
Run SLS with 400 nodes, 200 apps, and 2k containers. Both base line and patch 
YARN-2162 runs 20 times. !attachment-name.jpg|thumbnail!. Please ignore the 
"Resource Type 0", it is the yarn-2162 patch actually. Don't see obvious 
performance regression. The patch version even performs a little better due to 
normal fluctuation.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189056#comment-16189056
 ] 

Yufei Gu commented on YARN-2162:


Run SLS with 400 nodes, 200 apps, and 2k containers. Both base line and patch 
YARN-2162 runs 20 times. !attachment-name.jpg|thumbnail!. Please ignore the 
"Resource Type 0", it is the yarn-2162 patch actually. Don't see obvious 
performance regression. The patch version even performs a little better due to 
normal fluctuation.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2960) Add documentation for the YARN shared cache

2017-10-02 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189050#comment-16189050
 ] 

Chris Trezzo commented on YARN-2960:


The patch should be good to go; please let me know if you see any issues. If I 
get a +1, I plan to commit this patch to trunk, branch-3.0, and branch-2. Thanks!

> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch
>
>
> Add documentation around the architecture, api's and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-2162:
---
Attachment: test-400nm-200app-2k_NODE_UPDATE.timecost.svg

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed in 
> terms of absolute numbers X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties in terms of percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs to a separate log file

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189028#comment-16189028
 ] 

Hadoop QA commented on YARN-6550:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 31s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 24 unchanged - 1 fixed = 25 total (was 25) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 29 new + 161 unchanged - 0 fixed = 190 total (was 161) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 24s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater |
|   | org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | YARN-6550 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889396/YARN-6550.branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 012d3844b1d6 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 1eecf8a |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/17736/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17736/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7259) Add size-based rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189027#comment-16189027
 ] 

Wangda Tan commented on YARN-7259:
--

Committed to branch-2/trunk, thanks Xuan!

> Add size-based rolling policy to LogAggregationIndexedFileController
> 
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7259) Add size-based rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7259:
-
Fix Version/s: 3.1.0
   2.9.0

> Add size-based rolling policy to LogAggregationIndexedFileController
> 
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-10-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189024#comment-16189024
 ] 

Jian He commented on YARN-7269:
---

Should YarnConfiguration always be checked for compatibility?

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
> Attachments: YARN-7269.001.patch, YARN-7269.002.patch
>
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives following exception
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}
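
For background, AmIpFilter throws this when it cannot resolve any proxy host 
to redirect to. The proxy list normally comes from the configuration; a 
minimal sketch of obtaining it HA-aware via WebAppUtils, which is what the AM 
filter initializer consults (treat the exact behavior as an assumption to 
verify against the patch):

{code}
import java.util.List;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.webapp.util.WebAppUtils;

public class ProxyHostsExample {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // With RM HA enabled this should yield one proxy host:port per RM;
    // otherwise, the single configured proxy (or RM web app) address.
    List<String> proxies = WebAppUtils.getProxyHostsAndPortsForAmFilter(conf);
    System.out.println("Proxy candidates for redirection: " + proxies);
  }
}
{code}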



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2037) Add restart support for Unmanaged AMs

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189023#comment-16189023
 ] 

Hadoop QA commented on YARN-2037:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 187 unchanged - 3 fixed = 188 total (was 190) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 42s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-2037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890043/YARN-2037.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7b4e351ec4f9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c62ff7 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-2960) Add documentation for the YARN shared cache

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189017#comment-16189017
 ] 

Hadoop QA commented on YARN-2960:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-2960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890058/YARN-2960-trunk-001.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 2fa37742efbd 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c62ff7 |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17737/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch
>
>
> Add documentation around the architecture, api's and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7259) Add size-based rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189006#comment-16189006
 ] 

Hudson commented on YARN-7259:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13006 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13006/])
YARN-7259. Add size-based rolling policy to (wangda: rev 
280080fad01304c85a9ede4d4f7b707eb36c0155)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/TestLogAggregationIndexFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/IndexedFileAggregatedLogsBlock.java


> Add size-based rolling policy to LogAggregationIndexedFileController
> 
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188988#comment-16188988
 ] 

Hadoop QA commented on YARN-7226:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 160 unchanged - 3 fixed = 160 total (was 163) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
59s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890045/YARN-7226.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 671d74ce8d31 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c62ff7 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17735/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17735/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Whitelisted variables do not support 

[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188981#comment-16188981
 ] 

Hadoop QA commented on YARN-7226:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 160 unchanged - 3 fixed = 160 total (was 163) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
3s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890045/YARN-7226.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a4c510de8e65 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c62ff7 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17734/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17734/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Whitelisted variables do not support 

[jira] [Resolved] (YARN-1016) Define a HDFS based repository that allows YARN services to share resources

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo resolved YARN-1016.

Resolution: Duplicate

Resolving this as a duplicate of YARN-1492. Please let me know if you think 
otherwise.

> Define a HDFS based repository that allows YARN services to share resources
> ---
>
> Key: YARN-1016
> URL: https://issues.apache.org/jira/browse/YARN-1016
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Kam Kasravi
>
> YARN services, both short and long lived, can benefit from a resource repo 
> rather than packaging resources within the YARN client to be extracted and 
> used by the Application Master and (later) the containers. Standardizing a 
> resource repo will provide performance benefits as well. The repo should be 
> similar to Maven or Ivy repos so discovery and versioning are built-in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1492) truly shared cache for jars (jobjar/libjar)

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-1492:
---
Release Note: The YARN Shared Cache provides the facility to upload and 
manage shared application resources to HDFS in a safe and scalable manner. YARN 
applications can leverage resources uploaded by other applications or previous 
runs of the same application without having to re-upload and localize 
identical files multiple times. This will save network resources and reduce 
YARN application startup time.
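
A minimal sketch of what client-side usage looks like, assuming the 
SharedCacheClient API (method shapes simplified from memory; the documentation 
added in YARN-2960 is the authoritative reference):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.SharedCacheClient;

public class SharedCacheExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    SharedCacheClient client = SharedCacheClient.createSharedCacheClient();
    client.init(conf);
    client.start();
    try {
      ApplicationId appId =
          ApplicationId.newInstance(System.currentTimeMillis(), 1);
      // The checksum of the local resource serves as the cache key.
      String key = client.getFileChecksum(new Path("file:///tmp/job.jar"));
      // use() returns the path of the cached copy, or null on a miss
      // (in which case the app uploads the resource as usual).
      Path cached = client.use(appId, key);
      if (cached != null) {
        // Register `cached` as a LocalResource instead of re-uploading,
        // then release the claim once the app no longer needs it.
        client.release(appId, key);
      }
    } finally {
      client.stop();
    }
  }
}
{code}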

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: YARN-1492
> URL: https://issues.apache.org/jira/browse/YARN-1492
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Chris Trezzo
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf, shared_cache_design_v4.pdf, 
> shared_cache_design_v5.pdf, shared_cache_design_v6.pdf, 
> YARN-1492-all-trunk-v1.patch, YARN-1492-all-trunk-v2.patch, 
> YARN-1492-all-trunk-v3.patch, YARN-1492-all-trunk-v4.patch, 
> YARN-1492-all-trunk-v5.patch
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7259) Add size-based rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7259:
-
Summary: Add size-based rolling policy to 
LogAggregationIndexedFileController  (was: Add rolling policy to 
LogAggregationIndexedFileController)

> Add size-based rolling policy to LogAggregationIndexedFileController
> 
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7259) Add rolling policy to LogAggregationIndexedFileController

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7259:
-
Summary: Add rolling policy to LogAggregationIndexedFileController  (was: 
add rolling policy to LogAggregationIndexedFileController)

> Add rolling policy to LogAggregationIndexedFileController
> -
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We roll over the log files based on size. This only happens when partial 
> log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2960) Add documentation for the YARN shared cache

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-2960:
---
Attachment: YARN-2960-trunk-001.patch

Trunk v1 attached.

> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch
>
>
> Add documentation around the architecture, api's and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-10-02 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188942#comment-16188942
 ] 

Eric Yang commented on YARN-7202:
-

The combination of TestYarnNativeServices and TestApiServer provides 
end-to-end unit test coverage.  After reviewing HADOOP-9122, it doesn't appear 
that PowerMock can help in Hadoop unit tests.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-10-02 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188935#comment-16188935
 ] 

Jason Lowe commented on YARN-7102:
--

Thanks for updating the patch!

This now grabs the RMNodeImpl write lock three times in quick succession for 
updateNodeHeartbeatResponseForCleanup, 
updateNodeHeartbeatResponseForUpdatedContainers, and 
setLastNodeHeartBeatResponse.  I think it would be simpler and more efficient 
to have one method, setAndUpdateNodeHeartbeatResponse(), that does it all with 
only one write lock acquisition.
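
A sketch of that consolidation (the class and helper internals below are 
stand-ins; only the locking shape matters):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative stand-in for RMNodeImpl; helper bodies elided.
class RMNodeSketch {
  private final ReentrantReadWriteLock.WriteLock writeLock =
      new ReentrantReadWriteLock().writeLock();
  private Object latestNodeHeartBeatResponse;

  /** One method, one write-lock acquisition, instead of three locked calls. */
  public void setAndUpdateNodeHeartbeatResponse(Object response) {
    writeLock.lock();
    try {
      updateResponseForCleanupLocked(response);           // was ...ForCleanup
      updateResponseForUpdatedContainersLocked(response); // was ...ForUpdatedContainers
      latestNodeHeartBeatResponse = response;             // was setLastNodeHeartBeatResponse
    } finally {
      writeLock.unlock();
    }
  }

  private void updateResponseForCleanupLocked(Object response) { /* elided */ }
  private void updateResponseForUpdatedContainersLocked(Object response) { /* elided */ }
}
{code}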


> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, 
> YARN-7102.v6.patch, YARN-7102.v7.patch, YARN-7102.v8.patch
>
>
> ResponseId overflow problem in NM-RM heartbeat. This is the same as the AM-RM 
> heartbeat issue in YARN-6640; please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7134) AppSchedulingInfo has a dependency on capacity scheduler

2017-10-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188920#comment-16188920
 ] 

Wangda Tan commented on YARN-7134:
--

[~templedf], SchedulingMode and AppSchedulingInfo are both internal classes 
of the scheduler. I don't think this blocks anything. Downgrading to Major.

> AppSchedulingInfo has a dependency on capacity scheduler
> 
>
> Key: YARN-7134
> URL: https://issues.apache.org/jira/browse/YARN-7134
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sunil G
>Priority: Blocker
>
> The common scheduling code should be independent of all scheduler 
> implementations.  YARN-6040 introduced capacity scheduler's 
> {{SchedulingMode}} into {{AppSchedulingInfo}}.
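
One standard way to break a dependency like this (purely illustrative; not 
necessarily the fix that will land here) is to move the shared concept into 
the scheduler-agnostic package and have the capacity scheduler depend on it, 
rather than the other way around:

{code}
// Before: common code imports a capacity-scheduler type (the problem).
// After (sketch): the concept lives in the common scheduler package,
// and both AppSchedulingInfo and the capacity scheduler import it,
// so no scheduler implementation leaks into shared code.
package org.apache.hadoop.yarn.server.resourcemanager.scheduler;

public enum SchedulingMode {
  RESPECT_PARTITION_EXCLUSIVITY,
  IGNORE_PARTITION_EXCLUSIVITY
}
{code}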



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7134) AppSchedulingInfo has a dependency on capacity scheduler

2017-10-02 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7134:
-
Priority: Major  (was: Blocker)

> AppSchedulingInfo has a dependency on capacity scheduler
> 
>
> Key: YARN-7134
> URL: https://issues.apache.org/jira/browse/YARN-7134
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sunil G
>
> The common scheduling code should be independent of all scheduler 
> implementations.  YARN-6040 introduced capacity scheduler's 
> {{SchedulingMode}} into {{AppSchedulingInfo}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188898#comment-16188898
 ] 

Vrushali C commented on YARN-7190:
--

Thanks [~varun_saxena], let's go ahead with share/hadoop/yarn/timelineservice 
and share/hadoop/yarn/timelineservice/lib as the locations for the timeline 
service specific jars and their dependencies.

> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in Hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of the HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is to be used in 2.x, the HBase-related jars should go onto only 
> the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-10-02 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1612#comment-1612
 ] 

Jonathan Hung commented on YARN-7241:
-

Attached 001 branch-2 patch.

> Merge YARN-5734 to trunk/branch-2
> -
>
> Key: YARN-7241
> URL: https://issues.apache.org/jira/browse/YARN-7241
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7241.001.patch, YARN-7241.002.patch, 
> YARN-7241.003.patch, YARN-7241.004.patch, YARN-7241.005.patch, 
> YARN-7241.006.patch, YARN-7241-branch-2.001.patch
>
>
> Ticket for jenkins pre-commit for full diff.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7241) Merge YARN-5734 to trunk/branch-2

2017-10-02 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7241:

Attachment: YARN-7241-branch-2.001.patch

> Merge YARN-5734 to trunk/branch-2
> -
>
> Key: YARN-7241
> URL: https://issues.apache.org/jira/browse/YARN-7241
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7241.001.patch, YARN-7241.002.patch, 
> YARN-7241.003.patch, YARN-7241.004.patch, YARN-7241.005.patch, 
> YARN-7241.006.patch, YARN-7241-branch-2.001.patch
>
>
> Ticket for jenkins pre-commit for full diff.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188878#comment-16188878
 ] 

Hadoop QA commented on YARN-7102:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 1 new + 344 unchanged 
- 2 fixed = 345 total (was 346) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 36s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.sls.nodemanager.TestNMSimulator |
|   | hadoop.yarn.sls.TestReservationSystemInvariants |
|   | hadoop.yarn.sls.TestSLSRunner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890006/YARN-7102.v8.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be9a3bd9ee59 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-10-02 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7226:
-
Attachment: YARN-7226.006.patch

Rebased the patch on trunk.  I can provide patches for branch-2 and branch-2.8 
if necessary once we're happy with the trunk version.

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch, 
> YARN-7226.006.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.
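
As a toy illustration of the distinction the description draws (the class and 
the println stand in for the actual ContainerLaunch script generation, which 
this sketch does not claim to reproduce):

{code}
// With delayed expansion, the literal "$PWD/..." value is written into the
// launch script and the shell resolves $PWD in the container's working
// directory; pre-resolving or escaping it on the NM side breaks that.
class DelayedExpansionDemo {
  public static void main(String[] args) {
    String userValue = "$PWD/my-private-hadoop/";
    System.out.println("export HADOOP_COMMON_HOME=\"" + userValue + "\"");
  }
}
{code}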



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2037) Add restart support for Unmanaged AMs

2017-10-02 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-2037:
---
Attachment: YARN-2037.v4.patch

> Add restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch, YARN-2037.v2.patch, 
> YARN-2037.v3.patch, YARN-2037.v4.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7211) Task in SLS does't work

2017-10-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188822#comment-16188822
 ] 

Yufei Gu commented on YARN-7211:


Hi [~botong], can you help look at this issue? I saw you made some changes to 
MockAM in YARN-6640. There is an AM simulator inside SLS as well. We might need 
to do similar stuff there. Thanks. 

> Task in SLS does't work
> ---
>
> Key: YARN-7211
> URL: https://issues.apache.org/jira/browse/YARN-7211
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Blocker
>
> {code}
> java.lang.reflect.UndeclaredThrowableException
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1980)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.sendContainerRequest(MRAMSimulator.java:339)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.middleStep(AMSimulator.java:201)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:94)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
> Invalid responseId in AllocateRequest from application attempt: 
> appattempt_1505762809623_0001_01, expect responseId to be 0, but get 1
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:377)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:343)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:340)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   ... 6 more
> Exception in thread "pool-4-thread-8" java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:103)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Seems like it was broken by YARN-6640. SLS works after reverting YARN-6640, 
> as I tested.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-10-02 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188820#comment-16188820
 ] 

Sidharta Seethana commented on YARN-7226:
-

[~jlowe], recent changes to trunk (YARN-6550) seem to have modified 
{{ContainerLaunch.java}} and the latest patch from this JIRA no longer applies 
cleanly - could you please take a look?

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1492) truly shared cache for jars (jobjar/libjar)

2017-10-02 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188808#comment-16188808
 ] 

Chris Trezzo commented on YARN-1492:


Please let me know if you have any concerns about this. Thanks!

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: YARN-1492
> URL: https://issues.apache.org/jira/browse/YARN-1492
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Chris Trezzo
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf, shared_cache_design_v4.pdf, 
> shared_cache_design_v5.pdf, shared_cache_design_v6.pdf, 
> YARN-1492-all-trunk-v1.patch, YARN-1492-all-trunk-v2.patch, 
> YARN-1492-all-trunk-v3.patch, YARN-1492-all-trunk-v4.patch, 
> YARN-1492-all-trunk-v5.patch
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-1492) truly shared cache for jars (jobjar/libjar)

2017-10-02 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188801#comment-16188801
 ] 

Chris Trezzo edited comment on YARN-1492 at 10/2/17 8:50 PM:
-

[~asuresh] [~subru] I have set the target version for this jira back to 2.9.0. 
The only jira that is left for this first phase is the documentation patch 
(YARN-2960) and the startup script patch (YARN-4858). Both should be able to 
make 2.9.0. The rest of the feature is already in branch-2. I have split out 
some of the major features that still need to be finished in the shared cache 
into a phase 2 jira (YARN-7282). That being said, the core parts of this 
feature are committed and ready to be used in deployments that do not need 
phase 2 features.


was (Author: ctrezzo):
[~asuresh] [~subru] I have set the target version for this jira back to 2.9.0. 
The only jira that is left for this first phase is the documentation patch and 
YARN-4858. Both should be able to make 2.9.0. The rest of the feature is 
already in branch-2. I have split out some of the major features that still 
need to be finished in the shared cache into a phase 2 jira (YARN-7282). That 
being said, the core parts of this feature are committed and ready to be used 
in deployments that do not need phase 2 features.

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: YARN-1492
> URL: https://issues.apache.org/jira/browse/YARN-1492
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Chris Trezzo
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf, shared_cache_design_v4.pdf, 
> shared_cache_design_v5.pdf, shared_cache_design_v6.pdf, 
> YARN-1492-all-trunk-v1.patch, YARN-1492-all-trunk-v2.patch, 
> YARN-1492-all-trunk-v3.patch, YARN-1492-all-trunk-v4.patch, 
> YARN-1492-all-trunk-v5.patch
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1492) truly shared cache for jars (jobjar/libjar)

2017-10-02 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188801#comment-16188801
 ] 

Chris Trezzo commented on YARN-1492:


[~asuresh] [~subru] I have set the target version for this jira back to 2.9.0. 
The only jira that is left for this first phase is the documentation patch and 
YARN-4858. Both should be able to make 2.9.0. The rest of the feature is 
already in branch-2. I have split out some of the major features that still 
need to be finished in the shared cache into a phase 2 jira (YARN-7282). That 
being said, the core parts of this feature are committed and ready to be used 
in deployments that do not need phase 2 features.

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: YARN-1492
> URL: https://issues.apache.org/jira/browse/YARN-1492
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Chris Trezzo
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf, shared_cache_design_v4.pdf, 
> shared_cache_design_v5.pdf, shared_cache_design_v6.pdf, 
> YARN-1492-all-trunk-v1.patch, YARN-1492-all-trunk-v2.patch, 
> YARN-1492-all-trunk-v3.patch, YARN-1492-all-trunk-v4.patch, 
> YARN-1492-all-trunk-v5.patch
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1492) truly shared cache for jars (jobjar/libjar)

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-1492:
---
Target Version/s: 2.9.0  (was: 3.1.0)

> truly shared cache for jars (jobjar/libjar)
> ---
>
> Key: YARN-1492
> URL: https://issues.apache.org/jira/browse/YARN-1492
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.0.4-alpha
>Reporter: Sangjin Lee
>Assignee: Chris Trezzo
> Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, 
> shared_cache_design_v3.pdf, shared_cache_design_v4.pdf, 
> shared_cache_design_v5.pdf, shared_cache_design_v6.pdf, 
> YARN-1492-all-trunk-v1.patch, YARN-1492-all-trunk-v2.patch, 
> YARN-1492-all-trunk-v3.patch, YARN-1492-all-trunk-v4.patch, 
> YARN-1492-all-trunk-v5.patch
>
>
> Currently there is the distributed cache that enables you to cache jars and 
> files so that attempts from the same job can reuse them. However, sharing is 
> limited with the distributed cache because it is normally on a per-job basis. 
> On a large cluster, sometimes copying of jobjars and libjars becomes so 
> prevalent that it consumes a large portion of the network bandwidth, not to 
> speak of defeating the purpose of "bringing compute to where data is". This 
> is wasteful because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared 
> cache so that multiple jobs from multiple users can share and cache jars. 
> This JIRA is to open the discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5727) Improve YARN shared cache support for LinuxContainerExecutor

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo reassigned YARN-5727:
--

Assignee: (was: Chris Trezzo)

> Improve YARN shared cache support for LinuxContainerExecutor
> 
>
> Key: YARN-5727
> URL: https://issues.apache.org/jira/browse/YARN-5727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
> Attachments: YARN-5727-Design-v1.pdf
>
>
> When running LinuxContainerExecutor in a secure mode 
> ({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
> to {{false}}), all localized files are owned by the user that owns the 
> container which localized the resource. This presents a problem for the 
> shared cache when a YARN application requests a resource to be uploaded to 
> the shared cache that has a non-public visibility. The shared cache uploader 
> (running as the node manager user) does not have access to the localized 
> files and can not compute the checksum of the file or upload it to the cache. 
> The solution should ideally satisfy the following three requirements:
> # Localized files should still be safe/secure. Other users that run 
> containers should not be able to modify, or delete the publicly localized 
> files of others.
> # The node manager user should be able to access these files for the purpose 
> of checksumming and uploading to the shared cache without being a privileged 
> user.
> # The solution should avoid making unnecessary copies of the localized files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5727) Improve YARN shared cache support for LinuxContainerExecutor

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-5727:
---
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7282

> Improve YARN shared cache support for LinuxContainerExecutor
> 
>
> Key: YARN-5727
> URL: https://issues.apache.org/jira/browse/YARN-5727
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5727-Design-v1.pdf
>
>
> When running LinuxContainerExecutor in a secure mode 
> ({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
> to {{false}}), all localized files are owned by the user that owns the 
> container which localized the resource. This presents a problem for the 
> shared cache when a YARN application requests a resource to be uploaded to 
> the shared cache that has a non-public visibility. The shared cache uploader 
> (running as the node manager user) does not have access to the localized 
> files and can not compute the checksum of the file or upload it to the cache. 
> The solution should ideally satisfy the following three requirements:
> # Localized files should still be safe/secure. Other users that run 
> containers should not be able to modify, or delete the publicly localized 
> files of others.
> # The node manager user should be able to access these files for the purpose 
> of checksumming and uploading to the shared cache without being a privileged 
> user.
> # The solution should avoid making unnecessary copies of the localized files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2781) support more flexible policy for uploading in shared cache

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-2781:
---
Issue Type: Sub-task  (was: New Feature)
Parent: YARN-7282

> support more flexible policy for uploading in shared cache
> --
>
> Key: YARN-2781
> URL: https://issues.apache.org/jira/browse/YARN-2781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sangjin Lee
>
> Today all resources are always uploaded as long as the client wants to upload 
> it. We may want to implement a feature where the shared cache manager can 
> instruct the node managers not to upload under some circumstances.
> Some examples may be uploading a resource if it is seen more than N number of 
> times.
> This doesn't need to be included in the first version of the shared cache.
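
Purely as a hypothetical sketch of the "seen more than N times" example above 
(all names invented, not actual SCM code):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class SeenCountUploadPolicy {
  private final int threshold;
  private final Map<String, AtomicInteger> seen = new ConcurrentHashMap<>();

  SeenCountUploadPolicy(int threshold) {
    this.threshold = threshold;
  }

  // Returns true once a resource checksum has been seen more than N times.
  boolean shouldUpload(String checksum) {
    return seen.computeIfAbsent(checksum, k -> new AtomicInteger())
        .incrementAndGet() > threshold;
  }
}
{code}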



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6097) Add support for directories in the Shared Cache

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-6097:
---
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7282

> Add support for directories in the Shared Cache
> ---
>
> Key: YARN-6097
> URL: https://issues.apache.org/jira/browse/YARN-6097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>
> Add support for directories in the shared cache.
> If a LocalResource URL points to a directory, the directory structure is 
> preserved during localization on the node manager. Currently, the shared 
> cache does not support directories and will fail to upload the URL to the 
> cache if shouldBeUploadedToSharedCache is set to true.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2663) Race condintion in shared cache CleanerTask during deletion of resource

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-2663:
---
Parent Issue: YARN-7282  (was: YARN-1492)

> Race condintion in shared cache CleanerTask during deletion of resource
> ---
>
> Key: YARN-2663
> URL: https://issues.apache.org/jira/browse/YARN-2663
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Priority: Blocker
>
> In CleanerTask, store.removeResource(key) and 
> removeResourceFromCacheFileSystem(path) do not happen together in atomic 
> fashion.
> Since resources could be uploaded with different file names, the SCM could 
> receive a notification to add a resource to the SCM between the two 
> operations. Thus, we have a scenario where the cleaner service deletes the 
> entry from the scm, receives a notification from the uploader (adding the 
> entry back into the scm) and then deletes the file from HDFS.
> Cleaner code that deletes resource:
> {code}
>   if (store.isResourceEvictable(key, resource)) {
> try {
>   /*
>* TODO: There is a race condition between store.removeResource(key)
>* and removeResourceFromCacheFileSystem(path) operations because 
> they
>* do not happen atomically and resources can be uploaded with
>* different file names by the node managers.
>*/
>   // remove the resource from scm (checks for appIds as well)
>   if (store.removeResource(key)) {
> // remove the resource from the file system
> boolean deleted = removeResourceFromCacheFileSystem(path);
> if (deleted) {
>   resourceStatus = ResourceStatus.DELETED;
> } else {
>   LOG.error("Failed to remove path from the file system."
>   + " Skipping this resource: " + path);
>   resourceStatus = ResourceStatus.ERROR;
> }
>   } else {
> // we did not delete the resource because it contained application
> // ids
> resourceStatus = ResourceStatus.PROCESSED;
>   }
> } catch (IOException e) {
>   LOG.error(
>   "Failed to remove path from the file system. Skipping this 
> resource: "
>   + path, e);
>   resourceStatus = ResourceStatus.ERROR;
> }
>   } else {
> resourceStatus = ResourceStatus.PROCESSED;
>   }
> {code}
> Uploader code that uploads resource:
> {code}
>   // create the temporary file
>   tempPath = new Path(directoryPath, getTemporaryFileName(actualPath));
>   if (!uploadFile(actualPath, tempPath)) {
> LOG.warn("Could not copy the file to the shared cache at " + 
> tempPath);
> return false;
>   }
>   // set the permission so that it is readable but not writable
>   // TODO should I create the file with the right permission so I save the
>   // permission call?
>   fs.setPermission(tempPath, FILE_PERMISSION);
>   // rename it to the final filename
>   Path finalPath = new Path(directoryPath, actualPath.getName());
>   if (!fs.rename(tempPath, finalPath)) {
> LOG.warn("The file already exists under " + finalPath +
> ". Ignoring this attempt.");
> deleteTempFile(tempPath);
> return false;
>   }
>   // notify the SCM
>   if (!notifySharedCacheManager(checksumVal, actualPath.getName())) {
> // the shared cache manager rejected the upload (as it is likely
> // uploaded under a different name
> // clean up this file and exit
> fs.delete(finalPath, false);
> return false;
>   }
> {code}
> One solution is to have the UploaderService always rename the resource file 
> to the checksum of the resource plus the extension. With this fix we will 
> never receive a notification for the resource before the delete from the FS 
> has happened, because the rename on the node manager will fail. If the node 
> manager uploads the file after it is deleted from the FS, we are ok and the 
> resource will simply get added back to the scm once a notification is 
> received.
> The classpath at the MapReduce layer is still usable because we leverage 
> links to preserve the original client file name.
> The downside is that now the shared cache files in HDFS are less readable. 
> This could be mitigated with an added admin command to the SCM that gives a 
> list of filenames associated with a checksum or vice versa.
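
A hedged sketch of the rename-to-checksum idea, reusing the fs, directoryPath, 
and checksumVal names from the uploader snippet above (the extension handling 
is illustrative only):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ChecksumRenameSketch {
  // Name the final file by its checksum so a concurrent re-upload of the same
  // content cannot re-register a path the cleaner is about to delete.
  static boolean moveToFinalName(FileSystem fs, Path directoryPath, Path tempPath,
      String checksumVal, String extension) throws IOException {
    Path finalPath = new Path(directoryPath, checksumVal + extension);
    if (!fs.rename(tempPath, finalPath)) {
      // same content already present under this name; drop our temp copy
      fs.delete(tempPath, false);
      return false;
    }
    return true;
  }
}
{code}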



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Updated] (YARN-2774) shared cache service should authorize calls properly

2017-10-02 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-2774:
---
Parent Issue: YARN-7282  (was: YARN-1492)

> shared cache service should authorize calls properly
> 
>
> Key: YARN-2774
> URL: https://issues.apache.org/jira/browse/YARN-2774
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sangjin Lee
>
> The shared cache manager (SCM) services should authorize calls properly.
> Currently, the uploader service (done in YARN-2186) does not authorize calls 
> to notify the SCM of newly uploaded resources. Proper security/authorization 
> needs to be done in this RPC call. Also, the use/release calls (done in 
> YARN-2188) and the scmAdmin commands (done in YARN-2189) are not properly 
> authorized, nor is the SCM UI done in YARN-2203.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7282) Shared Cache Phase 2

2017-10-02 Thread Chris Trezzo (JIRA)
Chris Trezzo created YARN-7282:
--

 Summary: Shared Cache Phase 2
 Key: YARN-7282
 URL: https://issues.apache.org/jira/browse/YARN-7282
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Chris Trezzo


Phase 2 will address more features that need to be built as part of the shared 
cache project. See YARN-1492 for the first release of the shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188752#comment-16188752
 ] 

Hudson commented on YARN-3661:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13004 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13004/])
YARN-3661. Basic Federation UI. (Contributed by Inigo Goiri via curino) (carlo 
curino: rev ceca9694f9a0c78d07cab2c382036f175183e67b)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NavBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterView.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebApp.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterController.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/NodesPage.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutPage.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsPage.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md


> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch, YARN-3661-015.patch
>
>
> The UIs provided by each RM give a correct "local" view of what is 
> running in a sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2037) Add restart support for Unmanaged AMs

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188743#comment-16188743
 ] 

Hadoop QA commented on YARN-2037:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 186 unchanged - 3 fixed = 186 total (was 189) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-2037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889993/YARN-2037.v3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d158b620c80b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 563dcdf |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Commented] (YARN-3661) Basic Federation UI

2017-10-02 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188736#comment-16188736
 ] 

Carlo Curino commented on YARN-3661:


Thanks [~elgoiri] and [~giovanni.fumarola] for code and reviews! Thanks 
[~subru] for reviews as well.  
LGTM, I committed this to trunk. 

Please provide an updated patch for branch-2  and re-open the issue if you want 
this version of the UI in branch-2 as well.

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch, YARN-3661-015.patch
>
>
> The UIs provided by each RM give a correct "local" view of what is 
> running in a sub-cluster. In the context of federation we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7278) LinuxContainer in docker mode will be failed when nodemanager restart, because timeout for docker is too slow.

2017-10-02 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188731#comment-16188731
 ] 

Eric Badger commented on YARN-7278:
---

The affects version is set to 2.7.1. So is this a bug related to 
DockerContainerExecutor? DockerContainerExecutor has been deprecated in 2.9 and 
removed in 3.0. If this is a problem with DockerLinuxContainerRuntime, then the 
affects version shouldn't be set to 2.7.1. 

> LinuxContainer in docker mode will be failed when nodemanager restart, 
> because timeout for docker is too slow.
> --
>
> Key: YARN-7278
> URL: https://issues.apache.org/jira/browse/YARN-7278
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
> Environment: CentOS
>Reporter: zhengchenyu
> Fix For: 2.9.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, NodeManager recovery is turned on, and we use LinuxContainer 
> in docker mode.
> Containers may fail when the NodeManager restarts; the exception is below:
> {code}
> [2017-09-29T15:47:14.433+08:00] [INFO] 
> containermanager.monitor.ContainersMonitorImpl.run(ContainersMonitorImpl.java 
> 472) [Container Monitor] : Memory usage of ProcessTree 120523 for 
> container-id container_1506600355508_0023_01_04: -1B of 10 GB physical 
> memory used; -1B of 31 GB virtual memory used
> [2017-09-29T15:47:15.219+08:00] [ERROR] 
> containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java
>  93) [ContainersLauncher #1] : Unable to recover container 
> container_1506600355508_0023_01_04
> java.io.IOException: Timeout while waiting for exit code from 
> container_1506600355508_0023_01_04
> [2017-09-29T15:47:15.220+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1142) 
> [AsyncDispatcher event handler] : Container 
> container_1506600355508_0023_01_04 transitioned from RUNNING to 
> EXITED_WITH_FAILURE
> [2017-09-29T15:47:15.221+08:00] [INFO] 
> containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java
>  440) [AsyncDispatcher event handler] : Cleaning up container 
> container_1506600355508_0023_01_04
> {code}
> I guess the process is done, but 2 seconds later (the variable is msecLeft) 
> the *.pid.exitcode still wasn't created. Then I changed the variable to 2ms, 
> and the container succeeded when the NodeManager restarted.
> So I think the timeout is too short for the docker container to complete the 
> work.
> In docker mode of LinuxContainer, the NM monitors the real task, which is 
> launched by the "docker run" command. Then the "docker wait" command waits 
> for the exit code, and then "docker rm" deletes the docker container. Lastly, 
> container-executor writes the exit code. So if some docker command is slow 
> enough, the NM can't monitor the container. In fact, docker rm is always slow.
> I think the exit code of docker rm doesn't matter to the real task, so I 
> think we could move the writing of "*.pid.exitcode" before the docker rm 
> command. Or monitor the docker wait process instead of the real task.
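
A rough sketch of the proposed reordering (paths and names are illustrative; 
the real logic lives in the native container-executor, not Java):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

class ExitCodeFirstSketch {
  static void finishContainer(String containerId, int exitCode) throws IOException {
    // 1. persist the exit code as soon as "docker wait" returns, so a
    //    recovering NM can find it within its timeout window
    Files.write(Paths.get("/tmp", containerId + ".pid.exitcode"),
        Integer.toString(exitCode).getBytes("UTF-8"));
    // 2. only then run the slow cleanup; its duration no longer matters
    new ProcessBuilder("docker", "rm", containerId).inheritIO().start();
  }
}
{code}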



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7134) AppSchedulingInfo has a dependency on capacity scheduler

2017-10-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188698#comment-16188698
 ] 

Andrew Wang commented on YARN-7134:
---

Thanks Sunil!

> AppSchedulingInfo has a dependency on capacity scheduler
> 
>
> Key: YARN-7134
> URL: https://issues.apache.org/jira/browse/YARN-7134
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sunil G
>Priority: Blocker
>
> The common scheduling code should be independent of all scheduler 
> implementations.  YARN-6040 introduced capacity scheduler's 
> {{SchedulingMode}} into {{AppSchedulingInfo}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-10-02 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188674#comment-16188674
 ] 

Jason Lowe commented on YARN-7117:
--

bq. The current CS code has a bug in that it allows "." in the queue names. Do 
you think we should fix this for 3.0? 

That's unfortunate, but I do not think it has to be a show-stopper.  A couple 
of ways to fix it:

1) Preclude the use of '.' in auto-queue names, then we always know the last 
word when we split with '.' is the child queue and the rest is the parent queue.

or

2) The parsing of the specified queue names becomes trickier.  Rather than the 
code blindly assuming it can split the name on '.' to get queue names, it has 
to check to see if the parent exists.  If it doesn't, then it adds the next 
chunk from the split and sees if that's a valid parent queue, etc.  There may be 
some issues with ambiguity, but I suspect there would be other problems with 
getting queue configs parsed properly if there were truly ambiguous resolutions.
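
A rough sketch of option 2, with queueExists standing in for whatever lookup 
the scheduler exposes (purely illustrative, and deliberately ignoring the 
ambiguity caveat above):

{code}
import java.util.Arrays;
import java.util.function.Predicate;

class QueuePathSketch {
  // Grow the candidate parent one '.'-separated chunk at a time until it
  // matches an existing queue; whatever remains is the auto-created leaf.
  static String[] splitParentAndLeaf(String fullName, Predicate<String> queueExists) {
    String[] parts = fullName.split("\\.");
    StringBuilder parent = new StringBuilder(parts[0]);
    int i = 1;
    while (i < parts.length - 1 && !queueExists.test(parent.toString())) {
      parent.append('.').append(parts[i++]);
    }
    String leaf = String.join(".", Arrays.copyOfRange(parts, i, parts.length));
    return new String[] { parent.toString(), leaf };
  }
}
{code}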

bq. This could be done though it might be a backward in-compatible change. 

Which part has the compatibility concern?  There aren't any existing semantics 
of auto-queues since they don't exist yet, so I'm confused about where the 
compatibility issue lies.


> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf
>
>
> Currently Capacity Scheduler doesn't support auto creation of queues when 
> doing queue mapping. We saw more and more use cases which has complex queue 
> mapping policies configured to handle application to queues mapping. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However update {{capacity-scheduler.xml}} and 
> {{RMAdmin:refreshQueues}} needs to be done when new user/group onboard. One 
> of the option to solve the problem is automatically create queues when new 
> user/group arrives.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7281) Auto inject AllocationRequestId in AMRMClient.ContainerRequest when not supplied

2017-10-02 Thread Botong Huang (JIRA)
Botong Huang created YARN-7281:
--

 Summary: Auto inject AllocationRequestId in 
AMRMClient.ContainerRequest when not supplied
 Key: YARN-7281
 URL: https://issues.apache.org/jira/browse/YARN-7281
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Botong Huang
Assignee: Botong Huang
Priority: Minor


AllocationRequestId was introduced in YARN-4879 to simplify the resource 
allocation protocol inside the AM-RM heartbeat. Many new features (e.g. YARN 
Federation) are or will be built preferring an AllocationRequestId to be 
present. 

This Jira modifies AMRMClient so that when the AM does not supply an 
AllocationRequestId, one is auto-generated in the constructor of 
AMRMClient.ContainerRequest. 
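
A minimal sketch of the intended behavior; the counter and the assumption that 
0 means "not supplied" are illustrative, not the actual patch:

{code}
import java.util.concurrent.atomic.AtomicLong;

public class ContainerRequestSketch {
  // Per-client source of fresh ids for requests that arrive without one.
  private static final AtomicLong NEXT_ID = new AtomicLong(1);
  private final long allocationRequestId;

  public ContainerRequestSketch(long suppliedAllocationRequestId) {
    // A caller-supplied id wins; otherwise draw the next id from the counter.
    this.allocationRequestId = suppliedAllocationRequestId != 0
        ? suppliedAllocationRequestId
        : NEXT_ID.getAndIncrement();
  }

  public long getAllocationRequestId() {
    return allocationRequestId;
  }
}
{code}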



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-10-02 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7102:
---
Attachment: YARN-7102.v8.patch

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102.v1.patch, YARN-7102.v2.patch, 
> YARN-7102.v3.patch, YARN-7102.v4.patch, YARN-7102.v5.patch, 
> YARN-7102.v6.patch, YARN-7102.v7.patch, YARN-7102.v8.patch
>
>
> ResponseId overflow problem in NM-RM heartbeat. This is the same as the 
> AM-RM heartbeat issue in YARN-6640; please refer to YARN-6640 for details. 
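
For context, a minimal sketch of a wrap-around-safe responseId sequence in the 
spirit of the YARN-6640 fix (illustrative only, not the literal patch):

{code}
// Incrementing past Integer.MAX_VALUE wraps to 0 instead of overflowing to a
// negative value, so the heartbeat sequence check never gets stuck waiting
// for an impossible "lastId + 1".
public final class ResponseIdSketch {
  static int next(int responseId) {
    return responseId == Integer.MAX_VALUE ? 0 : responseId + 1;
  }

  public static void main(String[] args) {
    // The successor of MAX_INT is 0, not MIN_INT (run with -ea).
    assert next(Integer.MAX_VALUE) == 0;
    assert next(0) == 1;
  }
}
{code}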



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7211) Task in SLS doesn't work

2017-10-02 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7211:
---
Description: 
{code}
java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1980)
at 
org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.sendContainerRequest(MRAMSimulator.java:339)
at 
org.apache.hadoop.yarn.sls.appmaster.AMSimulator.middleStep(AMSimulator.java:201)
at 
org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:94)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
Invalid responseId in AllocateRequest from application attempt: 
appattempt_1505762809623_0001_01, expect responseId to be 0, but get 1
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:377)
at 
org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:343)
at 
org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:340)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
... 6 more
Exception in thread "pool-4-thread-8" java.lang.NullPointerException
at 
org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:103)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
Seems like it is broken by YARN-6640; SLS works after reverting YARN-6640, as 
I tested.

  was:
{code}
java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1980)
at 
org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.sendContainerRequest(MRAMSimulator.java:339)
at 
org.apache.hadoop.yarn.sls.appmaster.AMSimulator.middleStep(AMSimulator.java:201)
at 
org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:94)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
Invalid responseId in AllocateRequest from application attempt: 
appattempt_1505762809623_0001_01, expect responseId to be 0, but get 1
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:377)
at 
org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:343)
at 
org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:340)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
... 6 more
Exception in thread "pool-4-thread-8" java.lang.NullPointerException
at 
org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:103)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
Seems like it is broken by YARN-6640.


> Task in SLS doesn't work
> ---
>
> Key: YARN-7211
> URL: https://issues.apache.org/jira/browse/YARN-7211
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Blocker
>
> {code}
> java.lang.reflect.UndeclaredThrowableException
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1980)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.sendContainerRequest(MRAMSimulator.java:339)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.middleStep(AMSimulator.java:201)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:94)
>   at 
> 

[jira] [Updated] (YARN-7211) Task in SLS doesn't work

2017-10-02 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7211:
---
Priority: Blocker  (was: Major)

> Task in SLS doesn't work
> ---
>
> Key: YARN-7211
> URL: https://issues.apache.org/jira/browse/YARN-7211
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Blocker
>
> {code}
> java.lang.reflect.UndeclaredThrowableException
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1980)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.sendContainerRequest(MRAMSimulator.java:339)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.middleStep(AMSimulator.java:201)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:94)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException: 
> Invalid responseId in AllocateRequest from application attempt: 
> appattempt_1505762809623_0001_01, expect responseId to be 0, but get 1
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:377)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:343)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator$1.run(MRAMSimulator.java:340)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
>   ... 6 more
> Exception in thread "pool-4-thread-8" java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:103)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Seems like it is broken by YARN-6640.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3661) Basic Federation UI

2017-10-02 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188598#comment-16188598
 ] 

Giovanni Matteo Fumarola commented on YARN-3661:


Thanks [~elgoiri] for the patch.
LGTM +1.

> Basic Federation UI 
> 
>
> Key: YARN-3661
> URL: https://issues.apache.org/jira/browse/YARN-3661
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Íñigo Goiri
> Attachments: YARN-3661-000.patch, YARN-3661-001.patch, 
> YARN-3661-002.patch, YARN-3661-003.patch, YARN-3661-004.patch, 
> YARN-3661-005.patch, YARN-3661-006.patch, YARN-3661-007.patch, 
> YARN-3661-008.patch, YARN-3661-009.patch, YARN-3661-010.patch, 
> YARN-3661-011.patch, YARN-3661-012.patch, YARN-3661-013.patch, 
> YARN-3661-014.patch, YARN-3661-015.patch
>
>
> The UIs provided by each RM provide a correct "local" view of what is 
> running in a sub-cluster. In the context of federation, we need new 
> UIs that can track load, jobs, and users across sub-clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-10-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188588#comment-16188588
 ] 

Andrew Wang commented on YARN-4859:
---

3.0.0 release information is kept up to date here:

https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release

I'm trying to avoid slipping the GA date, so we need all blockers in within the 
next month.

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Yufei Gu
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck at scheduled stage when using 
> FairScheduler. I came across this when working on YARN-4827 (documentation 
> for configuring ReservationSystem for FairScheduler)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188571#comment-16188571
 ] 

Jian He commented on YARN-7198:
---

[~aw], sorry to ping again. I know you must be busy, but as you know, Andrew 
is pushing for GA and we don't want to miss the train. So we'd like to get 
this patch committed by Wednesday. You are welcome to raise issues when you 
get time to look at it.


> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2037) Add restart support for Unmanaged AMs

2017-10-02 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-2037:
---
Attachment: YARN-2037.v3.patch

> Add restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch, YARN-2037.v2.patch, 
> YARN-2037.v3.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7271) Add a yarn application cost calculation framework in TimelineService v2

2017-10-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188527#comment-16188527
 ] 

Vrushali C commented on YARN-7271:
--

Ah, thanks [~eepayne], that does seem along the lines of what I had in mind as 
well. Will take a closer look and then perhaps close this jira out. 

> Add a yarn application cost calculation framework in TimelineService v2
> ---
>
> Key: YARN-7271
> URL: https://issues.apache.org/jira/browse/YARN-7271
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>
> Timeline Service v2 captures information about a yarn application. From this 
> info, we would like to calculate the "cost" of a yarn application. This 
> would be rolled up to the flow level as well (and user and queue level 
> eventually).
> We need a way to accept machine cost (TCO per day) and enable this 
> calculation. This will help in chargeback for yarn apps. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7271) Add a yarn application cost calculation framework in TimelineService v2

2017-10-02 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188450#comment-16188450
 ] 

Eric Payne commented on YARN-7271:
--

[~vrushalic], the RM has a built-in calculation that keeps track of memory and 
vcore usage. I'm linking YARN-415 to see if it meets your needs.
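
For reference, a back-of-the-envelope sketch of the kind of chargeback math 
this jira is after, priced against the memory-seconds and vcore-seconds that 
YARN-415 aggregates; the even memory/vcore split of the TCO is purely an 
assumption:

{code}
// Hypothetical cost model, not an existing API: split a node's daily TCO
// evenly between its memory and its vcores, then price an application's
// aggregated usage against those unit rates.
public final class AppCostSketch {
  static double costOf(long memorySecondsMb, long vcoreSeconds,
      double tcoPerDay, long nodeMemoryMb, int nodeVcores) {
    double daySeconds = 24 * 60 * 60;
    double perMbSecond = (tcoPerDay / 2) / (nodeMemoryMb * daySeconds);
    double perVcoreSecond = (tcoPerDay / 2) / (nodeVcores * daySeconds);
    return memorySecondsMb * perMbSecond + vcoreSeconds * perVcoreSecond;
  }
}
{code}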

> Add a yarn application cost calculation framework in TimelineService v2
> ---
>
> Key: YARN-7271
> URL: https://issues.apache.org/jira/browse/YARN-7271
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>
> Timeline Service v2 captures information about a yarn application. From this 
> info, we would like to calculate the "cost" of a yarn application. This 
> would be rolled up to the flow level as well (and user and queue level 
> eventually).
> We need a way to accept machine cost (TCO per day) and enable this 
> calculation. This will help in chargeback for yarn apps. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7245) In Cap Sched UI, Max AM Resource column in Active Users Info section should be per-user

2017-10-02 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188018#comment-16188018
 ] 

Eric Payne commented on YARN-7245:
--

bq. This is bad. We ideally need user based max-am-limit.
[~sunilg], the value for {{Max Application Master Resources Per User}} exists 
and is used by the scheduler. However, the per-user section under {{Active 
Users Info}} displays the value for the whole queue instead of the per-user 
value. This is a problem in the GUI only.

> In Cap Sched UI, Max AM Resource column in Active Users Info section should 
> be per-user
> ---
>
> Key: YARN-7245
> URL: https://issues.apache.org/jira/browse/YARN-7245
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Eric Payne
> Attachments: CapSched UI Showing Inaccurate Per User Max AM 
> Resource.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7245) In Cap Sched UI, Max AM Resource column in Active Users Info section should be per-user

2017-10-02 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reassigned YARN-7245:


Assignee: Eric Payne

> In Cap Sched UI, Max AM Resource column in Active Users Info section should 
> be per-user
> ---
>
> Key: YARN-7245
> URL: https://issues.apache.org/jira/browse/YARN-7245
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: CapSched UI Showing Inaccurate Per User Max AM 
> Resource.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7205) Log improvements for the ResourceUtils

2017-10-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187993#comment-16187993
 ] 

Sunil G commented on YARN-7205:
---

{{17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found}}

This log is emitted when we try to load *resource-types.xml* and the conf 
object complains it's not there. It can't be moved to debug, as it's a common 
informational message used by the Configuration class. Other than that, all 
other logs are pushed to debug.
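
As a side note, a deployment that only wants to silence that one line can do 
it in log4j configuration rather than in code; a hedged example, assuming the 
logger name matches the {{conf.Configuration}} prefix in the message:

{code}
# Hypothetical log4j.properties snippet: raise the Configuration logger to
# WARN so the "resource-types.xml not found" INFO line is suppressed. Note
# this hides all INFO output from that class, not just this one message.
log4j.logger.org.apache.hadoop.conf.Configuration=WARN
{code}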

> Log improvements for the ResourceUtils
> --
>
> Key: YARN-7205
> URL: https://issues.apache.org/jira/browse/YARN-7205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Sunil G
> Attachments: YARN-7205.001.patch, YARN-7205.002.patch
>
>
> I've seen below logs printed at the service client console after the merge, 
> can this be moved to debug level log ? cc  [~sunilg], [~leftnoteasy]
> {code}
> 17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'. Falling back to memory and vcores as resources.
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> memory-mb, units = Mi, type = COUNTABLE
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> vcores, units = , type = COUNTABLE
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7205) Log improvements for the ResourceUtils

2017-10-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187792#comment-16187792
 ] 

Daniel Templeton commented on YARN-7205:


I hear ya, but given that resource types can't actually be turned off, I don't 
see the point in trying to make it look like it can.  As you said, the admin 
can choose to configure additional resources or not.  I don't think it helps to 
add yet another config property to control whether that config file is read.  
Since these logs are all at debug level now, I don't think it's a big issue in 
any case.

> Log improvements for the ResourceUtils
> --
>
> Key: YARN-7205
> URL: https://issues.apache.org/jira/browse/YARN-7205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Sunil G
> Attachments: YARN-7205.001.patch, YARN-7205.002.patch
>
>
> I've seen below logs printed at the service client console after the merge, 
> can this be moved to debug level log ? cc  [~sunilg], [~leftnoteasy]
> {code}
> 17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'. Falling back to memory and vcores as resources.
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> memory-mb, units = Mi, type = COUNTABLE
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> vcores, units = , type = COUNTABLE
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7205) Log improvements for the ResourceUtils

2017-10-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187765#comment-16187765
 ] 

Sunil G commented on YARN-7205:
---

bq. Resource types and resource profiles are distinct features--each makes 
sense without the other.
Yes, that's correct. However, I see that the resource types file will be 
loaded by default by the RM without any toggle-off option. I am in line with 
making this default ON, as the admin has to place resource-types.xml manually, 
but a lot of additional logging will come with it. If everyone is fine with 
that, I think it's fine to have this default ON. Otherwise we need to think 
about introducing a new config to switch on support for more resource types.
The other comments are fine; I'll take care of them in the next patch.

> Log improvements for the ResourceUtils
> --
>
> Key: YARN-7205
> URL: https://issues.apache.org/jira/browse/YARN-7205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Sunil G
> Attachments: YARN-7205.001.patch, YARN-7205.002.patch
>
>
> I've seen below logs printed at the service client console after the merge, 
> can this be moved to debug level log ? cc  [~sunilg], [~leftnoteasy]
> {code}
> 17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'. Falling back to memory and vcores as resources.
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> memory-mb, units = Mi, type = COUNTABLE
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> vcores, units = , type = COUNTABLE
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6855) CLI Proto Modifications to support Node Attributes

2017-10-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187736#comment-16187736
 ] 

Sunil G commented on YARN-6855:
---

[~naganarasimha...@apache.org], could you please help check the shaded client 
issue?

> CLI Proto Modifications to support Node Attributes
> --
>
> Key: YARN-6855
> URL: https://issues.apache.org/jira/browse/YARN-6855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6855-YARN-3409.001.patch, 
> YARN-6855-YARN-3409.002.patch, YARN-6855-YARN-3409.003.patch, 
> YARN-6855-YARN-3409.004.patch, YARN-6855-YARN-3409.005.patch, 
> YARN-6855-yarn-3409.006.patch, YARN-6855-YARN-3409.006.patch
>
>
> This jira focuses only on the proto modifications required for the CLI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6855) CLI Proto Modifications to support Node Attributes

2017-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187729#comment-16187729
 ] 

Hadoop QA commented on YARN-6855:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 1s{color} | {color:green} yarn-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
17s{color} | {color:green} yarn-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} yarn-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} yarn-3409 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 13m 
59s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} yarn-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} yarn-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 12 new + 95 unchanged - 0 fixed = 107 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m  
2s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | 

[jira] [Commented] (YARN-7205) Log improvements for the ResourceUtils

2017-10-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187724#comment-16187724
 ] 

Daniel Templeton commented on YARN-7205:


I don't see why it makes sense to load the resource types config files only if 
resource profiles are enabled.  Resource types and resource profiles are 
distinct features--each makes sense without the other.  Also, the additional 
log message after {{addResourcesFileToConf()}} is redundant.

For this message: "Couldn't find node resources file named:resource-types.xml", 
can we make it "Couldn't find node resources file: resource-types.xml"?

> Log improvements for the ResourceUtils
> --
>
> Key: YARN-7205
> URL: https://issues.apache.org/jira/browse/YARN-7205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Sunil G
> Attachments: YARN-7205.001.patch, YARN-7205.002.patch
>
>
> I've seen below logs printed at the service client console after the merge, 
> can this be moved to debug level log ? cc  [~sunilg], [~leftnoteasy]
> {code}
> 17/09/15 10:26:32 INFO conf.Configuration: resource-types.xml not found
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Unable to find 
> 'resource-types.xml'. Falling back to memory and vcores as resources.
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> memory-mb, units = Mi, type = COUNTABLE
> 17/09/15 10:26:32 INFO resource.ResourceUtils: Adding resource type - name = 
> vcores, units = , type = COUNTABLE
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7280) Rescan fair-scheduler.xml every n seconds

2017-10-02 Thread Andras Piros (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187708#comment-16187708
 ] 

Andras Piros commented on YARN-7280:


Thanks [~yufeigu]!

> Rescan fair-scheduler.xml every n seconds
> -
>
> Key: YARN-7280
> URL: https://issues.apache.org/jira/browse/YARN-7280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Andras Piros
>Assignee: Andras Piros
>
> Currently, the {{FairScheduler}} configuration file {{fair-scheduler.xml}} is 
> loaded only on ResourceManager startup.
> We need:
> * a mechanism to rescan / reload-if-changed this often-changing piece of RM 
> configuration
> * from classpath / configuration file:
> ** configuration file wins over classpath
> * rescan every {{n}} seconds, where:
> ** {{n <= 0}} means don't rescan
> ** {{n > 0}} means rescan every {{n}} seconds
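
A minimal sketch of the described rescan loop, assuming a simple 
modification-time check; names are illustrative, not the actual FairScheduler 
code:

{code}
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AllocFileRescanSketch {
  private final File allocFile;
  private volatile long lastModified;

  public AllocFileRescanSketch(File allocFile) {
    this.allocFile = allocFile;
    this.lastModified = allocFile.lastModified();
  }

  // Every rescanSeconds, reload fair-scheduler.xml if its modification time
  // changed; rescanSeconds <= 0 disables rescanning, matching the semantics
  // described above.
  public void start(long rescanSeconds) {
    if (rescanSeconds <= 0) {
      return;
    }
    ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleWithFixedDelay(() -> {
      long current = allocFile.lastModified();
      if (current != lastModified) {
        lastModified = current;
        reload();
      }
    }, rescanSeconds, rescanSeconds, TimeUnit.SECONDS);
  }

  private void reload() {
    // Re-parse fair-scheduler.xml and swap in the new queue configuration.
  }
}
{code}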



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org