[jira] [Commented] (YARN-6510) Fix warning - procfs stat file is not in the expected format: YARN-3344 is not enough

2017-04-26 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984304#comment-15984304
 ] 

Wilfred Spiegelenburg commented on YARN-6510:
-

The only limit in the underlying OS structures seems to be on the length of the name 
(a maximum of 16 characters, including the terminating null). Nothing at the OS level 
prevents an empty name. We should therefore be able to handle an empty name as well, 
even if it is not a particularly useful case to support.
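
A minimal sketch of the parsing approach under discussion, assuming we split the stat 
line on the first '(' and the last ')' rather than relying on a single regex; this is 
an illustrative helper, not the actual ProcfsBasedProcessTree code:

{code}
// Illustrative only: split a /proc/<pid>/stat line on the first '(' and the last ')'
// so that names containing parentheses, e.g. "(ib_fmr(mlx4_0))", or an empty name
// "()", are still handled.
final class ProcStatParser {
  static String[] splitStatLine(String line) {
    int open = line.indexOf('(');
    int close = line.lastIndexOf(')');
    if (open < 0 || close < open) {
      return null; // not in the expected format
    }
    String pid = line.substring(0, open).trim();
    String name = line.substring(open + 1, close); // may legitimately be empty
    String[] rest = line.substring(close + 1).trim().split("\\s+");
    String[] fields = new String[rest.length + 2];
    fields[0] = pid;
    fields[1] = name;
    System.arraycopy(rest, 0, fields, 2, rest.length);
    return fields;
  }
}
{code}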

> Fix warning - procfs stat file is not in the expected format: YARN-3344 is 
> not enough
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344 we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parentheses in the name, which causes the pattern 
> matching to fail.






[jira] [Commented] (YARN-6521) Yarn Log-aggregation Transform Enable not to Spam the NameNode

2017-04-26 Thread zhangyubiao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984297#comment-15984297
 ] 

zhangyubiao commented on YARN-6521:
---

OK: "apps incr" means the number of applications increases, and "container incr" means 
the number of containers increases.
"Transform the log cli service" means that, even without log aggregation enabled, we 
can still get the app logs with the command "yarn logs -applicationId applicationXXX".





> Yarn Log-aggregation Transform Enable not to Spam the NameNode
> --
>
> Key: YARN-6521
> URL: https://issues.apache.org/jira/browse/YARN-6521
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangyubiao
>
> Nowadays we have a large cluster; as the apps and containers increase, we split 
> out a namespace to store app logs. 
> But the growth of the log-aggregation directory /tmp/app-logs also makes that 
> namespace respond slowly.  
> We want to change yarn.log-aggregation-enable from true to false, but transform the 
> yarn log cli service so that it can still get the app logs.






[jira] [Commented] (YARN-5007) Remove deprecated constructors of MiniYARNCluster and MiniMRYarnCluster

2017-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984333#comment-15984333
 ] 

Hudson commented on YARN-5007:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11633 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11633/])
Revert "YARN-5007. Remove deprecated constructors of MiniYARNCluster and 
(aajisaka: rev 8a99eba96d7db6031e0c4a6ae5d8f8a842572a8d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/ProtocolHATestBase.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRTimelineEventHandling.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnCluster.java


> Remove deprecated constructors of MiniYARNCluster and MiniMRYarnCluster
> ---
>
> Key: YARN-5007
> URL: https://issues.apache.org/jira/browse/YARN-5007
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>  Labels: oct16-easy
> Attachments: YARN-5007.01.patch, YARN-5007.02.patch, 
> YARN-5007.03.patch
>
>
> MiniYarnCluster has a deprecated constructor which is called by the other 
> constructors and it causes javac warnings during the build.






[jira] [Created] (YARN-6531) Sanity check appStateData size before saving to Zookeeper

2017-04-26 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-6531:
--

 Summary: Sanity check appStateData size before saving to Zookeeper
 Key: YARN-6531
 URL: https://issues.apache.org/jira/browse/YARN-6531
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Critical


An application with a large application submission context could cause the store to 
Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
ZkStateStore will retry the configured number of times and will throw 
ConnectionLossException after the configured limit.
This could cause the ResourceManager to switch from active to standby, and other 
submitted applications would not get saved to ZK.
Solution: validate the {{ApplicationStateData}} size before saving and reject the 
application so that the ResourceManager is not impacted.
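
A rough sketch of the proposed validation, under the assumption that it runs before 
the store event is dispatched; the class and constant names below are illustrative, 
not the eventual patch:

{code}
// Illustrative only: reject an application whose serialized state cannot fit in a
// znode, instead of letting the ZK state store exhaust its retries and fail the RM over.
final class AppStateSizeCheck {
  // ZooKeeper's default jute.maxbuffer is 1 MB; keep some headroom for framing.
  private static final int MAX_STATE_BYTES = 1024 * 1024;

  /** Returns true if the serialized ApplicationStateData fits in a znode. */
  static boolean fitsInZnode(byte[] serializedAppStateData) {
    return serializedAppStateData != null
        && serializedAppStateData.length <= MAX_STATE_BYTES;
  }
}
{code}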






[jira] [Updated] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6531:
---
Summary: Check appStateData size before saving to Zookeeper  (was: Sanity 
check appStateData size before saving to Zookeeper)

> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-26 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2113:
--
Attachment: YARN-2113.0011.patch

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984406#comment-15984406
 ] 

Naganarasimha G R commented on YARN-6531:
-

[~bibinchundatt], agreed, a bad app in a public-cluster kind of setup would be 
riskier. But the catch is that the limit belongs to a pluggable state store, so we 
need to ensure there is a cleaner way to do the check before we send an event to the 
state store.

> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Updated] (YARN-6419) Support to launch native-service deployment from new YARN UI

2017-04-26 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6419:
---
Attachment: Screenshot-deploy-service-add-component-form-input.png
Screenshot-deploy-new-service-json-input.png
Screenshot-deploy-new-service-form-input.png

> Support to launch native-service deployment from new YARN UI
> 
>
> Key: YARN-6419
> URL: https://issues.apache.org/jira/browse/YARN-6419
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: Screenshot-deploy-new-service-form-input.png, 
> Screenshot-deploy-new-service-json-input.png, 
> Screenshot-deploy-service-add-component-form-input.png, YARN-6419.001.patch, 
> YARN-6419.002.patch, YARN-6419.003.patch, YARN-6419.004.patch, 
> YARN-6419-yarn-native-services.001.patch
>
>







[jira] [Commented] (YARN-6517) Fix warnings from Spotbugs in hadoop-yarn-common

2017-04-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984472#comment-15984472
 ] 

Weiwei Yang commented on YARN-6517:
---

Hi [~wilfreds]

Appreciate you checking for duplication between your work and this one, and thanks 
for your comments. Quite helpful.

bq. A lot of indentation changes to re-use a return statement in both changes...
bq. If we simplify the change to return an empty set, we could pass 0 to the 
HashSet constructor given that we don't reuse it.

Both are good suggestions; I've done that in the latest patch. Thanks.

bq. Checking procfsDir for null will hide the case of an incorrect 
configuration. This currently could only happen in test cases but it could hide 
a broken test and would be careful doing this.

As you said, it would be better to have some smarter logic to handle the possibility 
of a null {{procfsDir}}. I removed this check from this patch since the patch only 
aims to fix the findbugs warning.

Thanks
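
For context, a minimal sketch of the kind of guard this warning asks for, assuming the 
directory listing is the nullable call; the names here are illustrative and the real 
patch may differ:

{code}
// Illustrative only: File.list()/listFiles() may return null (e.g. an unreadable
// directory), which is what spotbugs flags; guard it and return an empty collection
// instead of dereferencing the result.
import java.io.File;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

final class ProcessListSketch {
  static Set<String> listPids(File procfsDir) {
    String[] entries = procfsDir.list();
    if (entries == null) {
      return Collections.emptySet();
    }
    Set<String> pids = new HashSet<>(entries.length);
    for (String entry : entries) {
      if (entry.matches("[0-9]+")) { // only numeric entries are process dirs
        pids.add(entry);
      }
    }
    return pids;
  }
}
{code}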

> Fix warnings from Spotbugs in hadoop-yarn-common
> 
>
> Key: YARN-6517
> URL: https://issues.apache.org/jira/browse/YARN-6517
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6517.001.patch
>
>
> There are 2 findbugs warnings in the hadoop-yarn-common project since the switch 
> to spotbugs:
> # Possible null pointer dereference in 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogValue.getPendingLogFilesToUpload(File)
>  due to return value of called method
> # Possible null pointer dereference in 
> org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.getProcessList() due to 
> return value of called method
> see more in 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html]






[jira] [Updated] (YARN-6517) Fix warnings from Spotbugs in hadoop-yarn-common

2017-04-26 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-6517:
--
Attachment: YARN-6517.002.patch

> Fix warnings from Spotbugs in hadoop-yarn-common
> 
>
> Key: YARN-6517
> URL: https://issues.apache.org/jira/browse/YARN-6517
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6517.001.patch, YARN-6517.002.patch
>
>
> There are 2 findbugs warnings in the hadoop-yarn-common project since the switch 
> to spotbugs:
> # Possible null pointer dereference in 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogValue.getPendingLogFilesToUpload(File)
>  due to return value of called method
> # Possible null pointer dereference in 
> org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.getProcessList() due to 
> return value of called method
> see more in 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html]






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984550#comment-15984550
 ] 

Hadoop QA commented on YARN-2113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 27 new + 167 unchanged - 3 fixed = 194 total (was 170) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-2113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865097/YARN-2113.0011.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8e65fde10e6c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93fa48f |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15739/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15739/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15739/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yar

[jira] [Created] (YARN-6532) Allocated container metrics per applciation should be exposed using Yarn & ATS rest APIs

2017-04-26 Thread Ravi Teja Chilukuri (JIRA)
Ravi Teja Chilukuri created YARN-6532:
-

 Summary: Allocated container metrics per applciation should be 
exposed using Yarn & ATS rest APIs
 Key: YARN-6532
 URL: https://issues.apache.org/jira/browse/YARN-6532
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: ATSv2, resourcemanager, restapi
Reporter: Ravi Teja Chilukuri


Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
ATS rest APIs.
But we don't have the allocatedContainers exposed per application.

This metric can be exposed as an additional param in the existing rest APIs.

*RM:*  http://<rm http address:port>/ws/v1/cluster/apps/{appid}
*ATS:* http(s)://<timeline server http address:port>/ws/v1/applicationhistory/apps/{appid}


This would be essential for application types like TEZ, where there container 
re-use. 






[jira] [Updated] (YARN-6532) Allocated container metrics per applciation should be exposed using Yarn & ATS rest APIs

2017-04-26 Thread Ravi Teja Chilukuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Teja Chilukuri updated YARN-6532:
--
Priority: Minor  (was: Major)

> Allocated container metrics per applciation should be exposed using Yarn & 
> ATS rest APIs
> 
>
> Key: YARN-6532
> URL: https://issues.apache.org/jira/browse/YARN-6532
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: ATSv2, resourcemanager, restapi
>Reporter: Ravi Teja Chilukuri
>Priority: Minor
>
> Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
> ATS rest APIs.
> But we don't have the allocatedContainers exposed per application.
> This metric can be exposed as an additional param in the existing rest APIs.
> *RM:*  http://<rm http address:port>/ws/v1/cluster/apps/{appid}
> *ATS:* http(s)://<timeline server http address:port>/ws/v1/applicationhistory/apps/{appid}
> This would be essential for application types like TEZ, where there container 
> re-use. 






[jira] [Updated] (YARN-6532) Allocated container metrics per application should be exposed using Yarn & ATS rest APIs

2017-04-26 Thread Ravi Teja Chilukuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Teja Chilukuri updated YARN-6532:
--
Summary: Allocated container metrics per application should be exposed 
using Yarn & ATS rest APIs  (was: Allocated container metrics per applciation 
should be exposed using Yarn & ATS rest APIs)

> Allocated container metrics per application should be exposed using Yarn & 
> ATS rest APIs
> 
>
> Key: YARN-6532
> URL: https://issues.apache.org/jira/browse/YARN-6532
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: ATSv2, resourcemanager, restapi
>Reporter: Ravi Teja Chilukuri
>Priority: Minor
>
> Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
> ATS rest APIs.
> But we don't have the allocatedContainers exposed per application.
> This metric can be exposed as an additional param in the existing rest APIs.
> *RM:*  http://<rm http address:port>/ws/v1/cluster/apps/{appid}
> *ATS:* http(s)://<timeline server http address:port>/ws/v1/applicationhistory/apps/{appid}
> This would be essential for application types like TEZ, where there container 
> re-use. 






[jira] [Updated] (YARN-6532) Allocated container metrics per application should be exposed using Yarn & ATS rest APIs

2017-04-26 Thread Ravi Teja Chilukuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Teja Chilukuri updated YARN-6532:
--
Description: 
Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
ATS rest APIs.
But we don't have the allocatedContainers exposed per application.

This metric can be exposed as an additional param in the existing rest APIs.

*RM:*  http://<rm http address:port>/ws/v1/cluster/apps/{appid}
*ATS:* http(s)://<timeline server http address:port>/ws/v1/applicationhistory/apps/{appid}

This would be essential for application types like TEZ which have container 
re-use. 

  was:
Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
ATS rest APIs.
But we don't have the allocatedContainers exposed per application.

This metric can be exposed as an additional param in the existing rest APIs.

*RM:*  http://<rm http address:port>/ws/v1/cluster/apps/{appid}
*ATS:* http(s)://<timeline server http address:port>/ws/v1/applicationhistory/apps/{appid}


This would be essential for application types like TEZ, where there container 
re-use. 


> Allocated container metrics per application should be exposed using Yarn & 
> ATS rest APIs
> 
>
> Key: YARN-6532
> URL: https://issues.apache.org/jira/browse/YARN-6532
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: ATSv2, resourcemanager, restapi
>Reporter: Ravi Teja Chilukuri
>Priority: Minor
>
> Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
> ATS rest APIs.
> But we don't have the allocatedContainers exposed per application.
> This metric can be exposed as an additional param in the existing rest APIs.
> *RM:*  http://<rm http address:port>/ws/v1/cluster/apps/{appid}
> *ATS:* http(s)://<timeline server http address:port>/ws/v1/applicationhistory/apps/{appid}
> This would be essential for application types like TEZ which have container 
> re-use. 






[jira] [Updated] (YARN-6419) Support to launch native-service deployment from new YARN UI

2017-04-26 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6419:
---
Attachment: YARN-6419-yarn-native-services.002.patch

YARN-6419 v2 patch rebased on {{yarn-native-services}} branch.
Fixes JSON parse error handling, unique component name check, and some code 
cleaning.

cc [~sunilg] [~jianhe]

> Support to launch native-service deployment from new YARN UI
> 
>
> Key: YARN-6419
> URL: https://issues.apache.org/jira/browse/YARN-6419
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: Screenshot-deploy-new-service-form-input.png, 
> Screenshot-deploy-new-service-json-input.png, 
> Screenshot-deploy-service-add-component-form-input.png, YARN-6419.001.patch, 
> YARN-6419.002.patch, YARN-6419.003.patch, YARN-6419.004.patch, 
> YARN-6419-yarn-native-services.001.patch, 
> YARN-6419-yarn-native-services.002.patch
>
>







[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-04-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984830#comment-15984830
 ] 

Rohith Sharma K S commented on YARN-6523:
-

+1 for the issue. I am not sure why all apps' tokens are sent to the NM rather than 
sending only those of the applications which are running on that node. The 2nd and 
3rd approaches have to deal with renewal of credentials, and it can never be known 
whether the credentials have been renewed. In the 1st approach, however, node 
heartbeat response time has to be traded off for it.

How about keeping app credentials in RMNodeImpl? That is, the proposal is to let 
RMNodeImpl maintain a copy of the credentials, and these are sent in the heartbeat 
response. This way, RMNodeImpl maintains the app credentials for the applications 
running on the node. In case of a credential renewal, an event is triggered to 
RMNodeImpl to change its credentials. There would be corner cases, though, where 
updated credentials are missed for a couple of heartbeats. cc:/ [~jlowe]

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications might be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster with 
> 8GB RAM configured for the RM.






[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-04-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984853#comment-15984853
 ] 

Sunil G commented on YARN-6523:
---

Yes. Its ideal to keep a copy on each RMNodeImpl itself. I think RMNodeImpl 
could pull for renewed tokens from {{DelegationTokenRenewer]} at regular 
intervals if there is a concern over increased number of events to be fired to 
RMNodeImpl regarding token renewal. I still feel an event publishing model will 
be more concrete.

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications might be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster with 
> 8GB RAM configured for the RM.






[jira] [Commented] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984860#comment-15984860
 ] 

Rohith Sharma K S commented on YARN-6531:
-

It is a dupe of YARN-5006.

> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Commented] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984867#comment-15984867
 ] 

Bibin A Chundatt commented on YARN-6531:


[~naganarasimha...@apache.org]

Since the limit is on the Zookeeper side, my point is that the check should be part of 
ZkStateStore. Similar to the zk retry-times limit validation, we can have it in 
ZkStateStore once the event is processed. Since the StateStore dispatchers are 
separate, I think the cost of having the check after the event is sent should also be 
fine. The application can then be rejected directly from the {{NEW_SAVING}} state. 


> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Resolved] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-6531.
-
Resolution: Duplicate

> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-04-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984874#comment-15984874
 ] 

Bibin A Chundatt commented on YARN-5006:


[~luhuichun]
Any progress on this patch? If not started can take over.

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: luhuichun
>Priority: Critical
>
> A client submits a job, and this job adds 1 file into the DistributedCache. When the 
> job is submitted, the ResourceManager stores ApplicationStateData into zk. The 
> ApplicationStateData exceeds the limit size of the znode, and the RM exits with code 1.
> The related code in RMStateStore.java :
> {code}
>   private static class StoreAppTransition
>   implements SingleArcTransition<RMStateStore, RMStateStoreEvent> {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:860)
> at 
> org.apach

[jira] [Commented] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984890#comment-15984890
 ] 

Bibin A Chundatt commented on YARN-6531:


[~rohithsharma]
Thank you for pointing out the jira.  Will fix this as part of YARN-5006.

> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Commented] (YARN-6531) Check appStateData size before saving to Zookeeper

2017-04-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984878#comment-15984878
 ] 

Rohith Sharma K S commented on YARN-6531:
-

I would suggest that further discussion be done in YARN-5006. There were already 
discussions in that JIRA, and those can be continued. 

> Check appStateData size before saving to Zookeeper
> --
>
> Key: YARN-6531
> URL: https://issues.apache.org/jira/browse/YARN-6531
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> An application with a large application submission context could cause the store to 
> Zookeeper to fail due to the znode size limit. The Zookeeper znode limit exception 
> thrown is {{org.apache.zookeeper.KeeperException$ConnectionLossException}}. 
> ZkStateStore will retry the configured number of times and will throw 
> ConnectionLossException after the configured limit.
> This could cause the ResourceManager to switch from active to standby, and other 
> submitted applications would not get saved to ZK.
> Solution: validate the {{ApplicationStateData}} size before saving and reject the 
> application so that the ResourceManager is not impacted.






[jira] [Comment Edited] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-04-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984874#comment-15984874
 ] 

Bibin A Chundatt edited comment on YARN-5006 at 4/26/17 2:21 PM:
-

[~luhuichun]

Any progress on this patch? Hoping the implementation has not been started yet. 
Assigning it to myself. Will be adding the limit as part of this jira. YARN-3935 and 
YARN-6426 (progress available) are available for the compression part. 




was (Author: bibinchundatt):
[~luhuichun]
Any progress on this patch? If not started can take over.

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: luhuichun
>Priority: Critical
>
> A client submits a job, and this job adds 1 file into the DistributedCache. When the 
> job is submitted, the ResourceManager stores ApplicationStateData into zk. The 
> ApplicationStateData exceeds the limit size of the znode, and the RM exits with code 1.
> The related code in RMStateStore.java :
> {code}
>   private static class StoreAppTransition
>   implements SingleArcTransition<RMStateStore, RMStateStoreEvent> {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(Sta

[jira] [Commented] (YARN-6419) Support to launch native-service deployment from new YARN UI

2017-04-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984893#comment-15984893
 ] 

Sunil G commented on YARN-6419:
---

Thanks [~akhilpb] for the patch and the screenshots. Looks good.

Few comments:
# Could jqueryAjax be moved to the AbstractAdapter class if it is reusable?
# Is config-tooltip.js still used?
# Rename config-upload-modal.js to upload-config.js.
# Maybe deploy-service.hbs and service-config-table.hbs need some more refactoring?
# Are we still using yarn-services.hbs? If not, please remove it.

> Support to launch native-service deployment from new YARN UI
> 
>
> Key: YARN-6419
> URL: https://issues.apache.org/jira/browse/YARN-6419
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: Screenshot-deploy-new-service-form-input.png, 
> Screenshot-deploy-new-service-json-input.png, 
> Screenshot-deploy-service-add-component-form-input.png, YARN-6419.001.patch, 
> YARN-6419.002.patch, YARN-6419.003.patch, YARN-6419.004.patch, 
> YARN-6419-yarn-native-services.001.patch, 
> YARN-6419-yarn-native-services.002.patch
>
>







[jira] [Commented] (YARN-6197) CS Leaf queue am usage gets updated for unmanaged AM

2017-04-26 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984898#comment-15984898
 ] 

Bibin A Chundatt commented on YARN-6197:


[~leftnoteasy]
Thank you for the reply. Agree with you.
Closing as there is no need to fix.

> CS Leaf queue am usage gets updated for unmanaged AM
> 
>
> Key: YARN-6197
> URL: https://issues.apache.org/jira/browse/YARN-6197
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> In {{LeafQueue#activateApplication()}}, for an unmanaged AM the am_usage is updated 
> with the scheduler minimum allocation size. The cluster resource/AM limit headroom 
> for other apps in the queue will get reduced.
> Solution: the FicaScheduler unManagedAM flag can be used to check the AM type.
> Based on the flag, the queue usage needs to be updated accordingly during activation 
> and removal.
> Thoughts??






[jira] [Commented] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-04-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984919#comment-15984919
 ] 

Jason Lowe commented on YARN-6523:
--

I don't know the full story behind the SystemCredentialsForApps thing.  Looks 
like something that was put in for Slider and other long-running services where 
the initial tokens can expire.  It would be good to get input from [~vinodkv] 
and [~jianhe] since they were more involved in this.

I agree it seems silly for every node in the cluster to get _all_ apps HDFS 
credentials on _every heartbeat_.  I suspect this was the simplest thing to 
implement, but it's far from efficient.  Going to the other extreme of just 
sending the app credentials only once for just the apps that could be active on 
the node is a lot more complicated.  It's true that RMNodeImpl is tracking what 
applications are on the node, but this is _reactive_ tracking to what the node 
is already doing.  There are some scenarios where the updated tokens need to be 
on the node _before_ the container launch request arrives at the node and 
therefore the app becomes active in the node's RMNodeImpl.  For example, a 
Slider app runs for months.  The initial tokens at app submit time have long 
expired, so the RM has had to re-fetch the tokens.  Then suddenly the Slider 
app wants to launch a container on a node it's never touched before.  The 
node's RMNodeImpl doesn't know the app is active until a container starts 
running on it, but the container can't localize without the updated tokens that 
the node has never received yet.  So we'd need to send the credentials when the 
scheduler allocates an app's container on the node for the first time and then 
also when any of the app's credentials are updated (e.g.: when a token is 
replaced with a refreshed version).  And then there's handling lost heartbeats, 
node reconnect, etc.  In short, efficient delta is a lot more complicated.

Rather than going straight to the complicated, fully optimal implementation we 
could do something in-between.  For example, we could have a sequence number 
associated with the system credentials.  Nodes would send the last sequence 
number that they have received, and if it matches the current sequence number 
then the RM does _not_ send them in the heartbeat response.  If the sequence 
numbers don't match then the RM sends the current sequence number along with 
the system credentials.  It's still sending all the credentials instead of 
optimal deltas, but at least they're only being sent when the node needs the 
updated version.  And yes, we should precompute the 
SystemCredentialsForAppsProto once when the credentials change and re-send the 
same object to any node that needs the updated credentials rather than recreate 
the same object over and over and over.  That should drastically cut down on 
the number of objects related to system credentials in heartbeats and how often 
we're sending them.
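
A small sketch of the in-between approach described above, with illustrative names 
(this is not actual YARN code): the RM keeps one precomputed credentials payload plus 
a sequence number, and only includes the payload in a heartbeat response when the 
node's last-seen sequence number is stale.

{code}
// Illustrative only: precompute the credentials payload once per change and resend
// it only to nodes whose last acknowledged sequence number is out of date.
final class SystemCredentialsCache {
  private long seqNo = 0;
  private byte[] payload = new byte[0];

  /** Called once whenever any application's tokens are added or refreshed. */
  synchronized void update(byte[] newPayload) {
    payload = newPayload;
    seqNo++;
  }

  /** Called per heartbeat; returns null when the node is already up to date. */
  synchronized byte[] payloadIfStale(long nodeLastSeenSeqNo) {
    return nodeLastSeenSeqNo == seqNo ? null : payload;
  }

  synchronized long currentSeqNo() {
    return seqNo;
  }
}
{code}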


> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications might be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster with 
> 8GB RAM configured for the RM.






[jira] [Comment Edited] (YARN-6523) RM requires large memory in sending out security tokens as part of Node Heartbeat in large cluster

2017-04-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984853#comment-15984853
 ] 

Sunil G edited comment on YARN-6523 at 4/26/17 2:16 PM:


Yes. Its ideal to keep a copy of apps tokens on each RMNodeImpl itself. I think 
RMNodeImpl could pull renewed tokens from {{DelegationTokenRenewer}} at regular 
intervals if there is a concern over increased number of events to be fired to 
RMNodeImpl regarding token renewal. I still feel an event publishing model will 
be more concrete.


was (Author: sunilg):
Yes. Its ideal to keep a copy on each RMNodeImpl itself. I think RMNodeImpl 
could pull for renewed tokens from {{DelegationTokenRenewer]} at regular 
intervals if there is a concern over increased number of events to be fired to 
RMNodeImpl regarding token renewal. I still feel an event publishing model will 
be more concrete.

> RM requires large memory in sending out security tokens as part of Node 
> Heartbeat in large cluster
> --
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Critical
>
> Currently, as part of the heartbeat response, the RM sets all applications' tokens 
> even though not all applications may be active on the node. On top of that, 
> NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6532) Allocated container metrics per application should be exposed using Yarn & ATS rest APIs

2017-04-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984923#comment-15984923
 ] 

Rohith Sharma K S commented on YARN-6532:
-

AllocatedMB/AllocatedVCores is a snapshot of the resources used by running 
containers. Similarly, runningContainers gives the count of currently running 
containers for an application. Are you expecting the total allocated container 
count for an application? 

> Allocated container metrics per application should be exposed using Yarn & 
> ATS rest APIs
> 
>
> Key: YARN-6532
> URL: https://issues.apache.org/jira/browse/YARN-6532
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: ATSv2, resourcemanager, restapi
>Reporter: Ravi Teja Chilukuri
>Priority: Minor
>
> Currently *allocatedMB* and *allocatedVCores* are being exposed by the RM and 
> ATS rest APIs.
> But we don't have the allocatedContainers exposed per application.
> This metric can be exposed as an additional param in the existing rest APIs.
> *RM:*  http:///ws/v1/cluster/apps/{appid}
> *ATS:* http(s):// address:port>/ws/v1/applicationhistory/apps/{appid}
> This would be essential for application types like TEZ which have container 
> re-use. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-04-26 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt reassigned YARN-5006:
--

Assignee: Bibin A Chundatt  (was: luhuichun)

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> A client submits a job, and this job adds 1 file into the DistributedCache. When the 
> job is submitted, the ResourceManager stores ApplicationStateData into ZK. 
> The ApplicationStateData exceeds the znode size limit, and the RM exits with status 1.
> The related code in RMStateStore.java:
> {code}
>   private static class StoreAppTransition
>   implements SingleArcTransition<RMStateStore, RMStateStoreEvent> {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$Forward

[jira] [Resolved] (YARN-6197) CS Leaf queue am usage gets updated for unmanaged AM

2017-04-26 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt resolved YARN-6197.

Resolution: Not A Problem

> CS Leaf queue am usage gets updated for unmanaged AM
> 
>
> Key: YARN-6197
> URL: https://issues.apache.org/jira/browse/YARN-6197
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> In {{LeafQueue#activateApplication()}}, for an unmanaged AM the am_usage is updated 
> with the scheduler minimum allocation size. The cluster resource/AM limit headroom 
> for other apps in the queue will get reduced.
> Solution: the FicaScheduler unManagedAM flag can be used to check the AM type.
> Based on the flag, the queue usage needs to be updated during activation and removal.
> Thoughts??



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6520) Fix warnings from Spotbugs in hadoop-yarn-client

2017-04-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984977#comment-15984977
 ] 

Weiwei Yang commented on YARN-6520:
---

Thanks [~Naganarasimha] for your help !

> Fix warnings from Spotbugs in hadoop-yarn-client
> 
>
> Key: YARN-6520
> URL: https://issues.apache.org/jira/browse/YARN-6520
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6520.001.patch
>
>
> There are 2 findbugs warnings in hadoop-yarn-client since the switch to spotbugs
> # Possible exposure of partially initialized object in 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getTimelineDelegationToken()
> # Useless condition: it's known that isAppFinished == false at this point
> See more info from 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-warnings.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6520) Fix warnings from Spotbugs in hadoop-yarn-client

2017-04-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984977#comment-15984977
 ] 

Weiwei Yang edited comment on YARN-6520 at 4/26/17 3:17 PM:


Thanks [~Naganarasimha] for your help! It seems the Jenkins job still cannot be 
triggered; please let me know if I need to submit the patch again. Thank you!


was (Author: cheersyang):
Thanks [~Naganarasimha] for your help !

> Fix warnings from Spotbugs in hadoop-yarn-client
> 
>
> Key: YARN-6520
> URL: https://issues.apache.org/jira/browse/YARN-6520
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6520.001.patch
>
>
> There are 2 findbugs warnings in hadoop-yarn-client since the switch to spotbugs
> # Possible exposure of partially initialized object in 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getTimelineDelegationToken()
> # Useless condition: it's known that isAppFinished == false at this point
> See more info from 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-warnings.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6519) Fix warnings from Spotbugs in hadoop-yarn-server-resourcemanager

2017-04-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984978#comment-15984978
 ] 

Weiwei Yang commented on YARN-6519:
---

Thanks [~Naganarasimha] for your help !

> Fix warnings from Spotbugs in hadoop-yarn-server-resourcemanager
> 
>
> Key: YARN-6519
> URL: https://issues.apache.org/jira/browse/YARN-6519
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6519.001.patch, YARN-6519.002.patch
>
>
> There are 8 findbugs warnings in hadoop-yarn-server-resourcemanager since the 
> switch to spotbugs
> # 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager$1.compare(CSQueue,
>  CSQueue) incorrectly handles float value
> # org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType.index 
> field is public and mutable
> # 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.EMPTY_CONTAINER_LIST
>  is a mutable collection which should be package protected
> # 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.EMPTY_CONTAINER_LIST
>  is a mutable collection which should be package protected
> # 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.queueMetrics
>  is a mutable collection
> # 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.cleanupStaledPreemptionCandidates(long)
>  makes inefficient use of keySet iterator instead of entrySet iterator
> # 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.transferStateFromAttempt(RMAppAttempt)
>  makes inefficient use of keySet iterator instead of entrySet iterator
> # 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.cleanupPreemptionList()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> See more from 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6518) Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice

2017-04-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984981#comment-15984981
 ] 

Weiwei Yang commented on YARN-6518:
---

Thanks [~Naganarasimha] for your review. Not sure why the Jenkins job still does 
not work.

> Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice
> 
>
> Key: YARN-6518
> URL: https://issues.apache.org/jira/browse/YARN-6518
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6518.001.patch
>
>
> There is 1 findbugs warning in hadoop-yarn-server-timelineservice since the 
> switch to spotbugs
> # Possible null pointer dereference in 
> org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl.getEntities(File,
>  String, TimelineEntityFilters, TimelineDataToRetrieve) due to return value 
> of called method
> See more in 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-warnings.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6306) NMClient API change for container upgrade

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985016#comment-15985016
 ] 

Hadoop QA commented on YARN-6306:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 4 new + 
37 unchanged - 0 fixed = 41 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 12 new + 158 unchanged - 0 fixed = 170 total (was 158) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
20s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
|  |  Unchecked/unconfirmed cast from 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEvent 
to 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ReInitializeContainerEvevnt
 in 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$ReInitializeContainerTransition.transition(NMClientAsyncImpl$StatefulContainer,
 NMClientAsyncImpl$ContainerEvent)  At 
NMClientAsyncImpl.java:org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ReInitializeContainerEvevnt
 in 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$ReInitializeContainerTransition.transition(NMClientAsyncImpl$StatefulContainer,
 NMClientAsyncImpl$ContainerEvent)  At NMClientAsyncImpl.java:[line 691] |
|  |  Switch statement found in 
org.apache.hadoop.yarn.client.api.impl.NMClientImpl.restartCommitOrRollbackContainer(ContainerId,
 NMClient$UpgradeOp) where default case is missing  At 
NMClientImpl.java:NMClient$UpgradeOp) where default case is missing  At 
NMClientImpl.java:[li

[jira] [Commented] (YARN-6405) Improve configuring services through REST API

2017-04-26 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985023#comment-15985023
 ] 

Billie Rinaldi commented on YARN-6405:
--

I'm going to commit patch 7, now that the precommit build has completed. It 
shows a few errors, but these are due to the recent merge from trunk rather 
than being introduced in this patch. We can clean up those issues in a separate 
ticket.

> Improve configuring services through REST API
> -
>
> Key: YARN-6405
> URL: https://issues.apache.org/jira/browse/YARN-6405
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: unique-names-test.patch, 
> YARN-6405.yarn-native-services.01.patch, 
> YARN-6405.yarn-native-services.02.patch, 
> YARN-6405.yarn-native-services.03.patch, 
> YARN-6405.yarn-native-services.04.patch, 
> YARN-6405.yarn-native-services.05.patch, 
> YARN-6405.yarn-native-services.06.patch, 
> YARN-6405.yarn-native-services.07.patch
>
>
> YARN-4793 defined various ways that users can configure services through the 
> configuration section of the REST API.  But some semantics are not yet 
> supported in the back-end server (AM).  YARN-6255 has done some work; this 
> jira is to complete the task of supporting configuring services through the REST API



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-26 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985032#comment-15985032
 ] 

Eric Payne commented on YARN-2113:
--

[~sunilg], the latest patch seems to work almost as I expect. However, I still 
have one question.

When the {{preemption-order}} property is set to {{USERLIMIT_FIRST}}, priority 
preemption will not happen. For example:
- {{preemption-order = USERLIMIT_FIRST}}
- {{user1}} starts {{app1}} at priority 0 and consumes the whole cluster
- {{user1}} starts {{app2}} at priority 1

In this scenario, no preemption occurs.  Is that expected? I don't expect this 
behavior, since if {{preemption-order = PRIORITY_FIRST}}, user-limit preemption 
will still happen as expected when all apps are the same priority.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6533) Race condition in writing service record to registry in yarn native services

2017-04-26 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-6533:


 Summary: Race condition in writing service record to registry in 
yarn native services
 Key: YARN-6533
 URL: https://issues.apache.org/jira/browse/YARN-6533
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The ServiceRecord is written twice, once when the container is initially 
registered and again in the Docker provider once the IP has been obtained for 
the container. These occur asynchronously, so the more important record (the 
one with the IP) can be overwritten by the initial record. Only one record 
needs to be written, so we can stop writing the initial record when the Docker 
provider is being used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6197) CS Leaf queue am usage gets updated for unmanaged AM

2017-04-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985045#comment-15985045
 ] 

Varun Saxena commented on YARN-6197:


Sorry I had missed this.

[~leftnoteasy] makes a fair point about AM limit doubling up as a way to 
control the number of concurrent applications launched for each queue.
As for the point about avoiding all resources in a queue being consumed by AMs: 
even the current solution of charging a default minimum value is not quite 
correct, as an unmanaged AM may actually be using more resources. So the most 
effective way of accounting for those resources would be to deduct the max 
resources required for this AM from the resources of the node where the AM is 
launched, while configuring the NM capability.
Assuming that's done, the main remaining issue would be controlling apps.
However, I wonder whether a configurable running-app limit at the queue level 
wouldn't be a better way to control the number of concurrent apps than using the 
AM limit? 

> CS Leaf queue am usage gets updated for unmanaged AM
> 
>
> Key: YARN-6197
> URL: https://issues.apache.org/jira/browse/YARN-6197
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> In {{LeafQueue#activateApplication()}}, for an unmanaged AM the am_usage is updated 
> with the scheduler minimum allocation size. The cluster resource/AM limit headroom 
> for other apps in the queue will get reduced.
> Solution: the FicaScheduler unManagedAM flag can be used to check the AM type.
> Based on the flag, the queue usage needs to be updated during activation and removal.
> Thoughts??



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6533) Race condition in writing service record to registry in yarn native services

2017-04-26 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6533:
-
Attachment: YARN-6533-yarn-native-services.001.patch

This patch does not write the initial ServiceRecord (without the IP) in the 
case of the Docker provider, to prevent this initial registration from 
asynchronously occurring after the ServiceRecord with the IP is written. It 
also logs and retries the writing of the ServiceRecord containing the IP if an 
IOException is thrown, instead of just logging the exception.
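
As a rough illustration of the retry behaviour (a hypothetical sketch under assumed names, not the patch itself), the write of the record containing the IP could be wrapped in a small retry loop that logs each IOException and backs off before trying again:

{code}
import java.io.IOException;

// Hypothetical sketch; the registry call is stubbed out behind a tiny interface.
final class ServiceRecordRetryWriter {
  interface RecordWrite {
    void run() throws IOException;
  }

  // Attempts the write up to maxAttempts times, logging each failure and
  // sleeping a fixed backoff between attempts instead of giving up on the first error.
  static boolean writeWithRetry(RecordWrite write, int maxAttempts, long backoffMs)
      throws InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        write.run();   // e.g. publish the ServiceRecord that contains the container IP
        return true;
      } catch (IOException e) {
        System.err.println("Attempt " + attempt + " to write service record failed: " + e);
        if (attempt < maxAttempts) {
          Thread.sleep(backoffMs);
        }
      }
    }
    return false;
  }
}
{code}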

> Race condition in writing service record to registry in yarn native services
> 
>
> Key: YARN-6533
> URL: https://issues.apache.org/jira/browse/YARN-6533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-6533-yarn-native-services.001.patch
>
>
> The ServiceRecord is written twice, once when the container is initially 
> registered and again in the Docker provider once the IP has been obtained for 
> the container. These occur asynchronously, so the more important record (the 
> one with the IP) can be overwritten by the initial record. Only one record 
> needs to be written, so we can stop writing the initial record when the 
> Docker provider is being used.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985151#comment-15985151
 ] 

Hadoop QA commented on YARN-3839:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 325 unchanged - 17 fixed = 327 total (was 342) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
20s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
27s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 38s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Te

[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-04-26 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985169#comment-15985169
 ] 

Soumabrata Chakraborty commented on YARN-6475:
--

Hi [~miklos.szeg...@cloudera.com]
It turns out the project had about 5 method-length violations in src/main 
and the remaining ones in src/test.
I fixed all of the violations in src/main.
I left the ones in src/test alone, as many had lots of asserts and I felt that 
refactoring those would in fact reduce readability.

Will raise a PR shortly.

> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985172#comment-15985172
 ] 

Sunil G commented on YARN-2113:
---

[~eepayne]

In this example, if the *used* resources for {{user1}} are under the queue limit, the UL 
will be the same as the queue capacity (given we have one user). As per the new check, if 
{{USERLIMIT_FIRST}} is configured, we will not preempt containers if usage is 
coming in under the user limit. Hence there was no preemption.
But if the demand is within this user itself (based on priority), we have to 
preempt. I think I need to revisit the new logic that was added, to consider local 
demand. I'll get back.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-04-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985180#comment-15985180
 ] 

ASF GitHub Bot commented on YARN-6475:
--

GitHub user soumabrata-chakraborty opened a pull request:

https://github.com/apache/hadoop/pull/218

YARN-6475: Refactor long methods in hadoop-yarn-server-nodemanager

Refactor all methods in hadoop-yarn-server-nodemanager that exceed 150 
lines in length.  Also fixes the method length related Checkstyle violations.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/soumabrata-chakraborty/hadoop YARN-6475

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/218.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #218


commit b2a01a19676d8fa70b6dc906f453c8ddd950bddf
Author: Soumabrata Chakraborty 
Date:   2017-04-25T18:26:29Z

Refactor code for methods above 150 lines in length

commit 4621373c878307e7f0a223638501cdd43c9358dd
Author: Soumabrata Chakraborty 
Date:   2017-04-26T17:03:41Z

Refactor code for methods above 150 lines in length




> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues

2017-04-26 Thread Soumabrata Chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985183#comment-15985183
 ] 

Soumabrata Chakraborty commented on YARN-6475:
--

Hi [~miklos.szeg...@cloudera.com]
Raised a PR for this issue - https://github.com/apache/hadoop/pull/218

> Fix some long function checkstyle issues
> 
>
> Key: YARN-6475
> URL: https://issues.apache.org/jira/browse/YARN-6475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Soumabrata Chakraborty
>Priority: Trivial
>  Labels: newbie
>
> I am talking about these two:
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441:
>   @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength]
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159:
>   @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6521) Yarn Log-aggregation Transform Enable not to Spam the NameNode

2017-04-26 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985200#comment-15985200
 ] 

Haibo Chen commented on YARN-6521:
--

Thanks [~piaoyu zhang] for the clarification! It seems like a configuration problem 
to me; that is, the configuration change is not propagated to all NMs. I tested 
it in a CDH YARN cluster but could not reproduce the problem. Can you please specify 
which version of Hadoop you were running? 

> Yarn Log-aggregation Transform Enable not to Spam the NameNode
> --
>
> Key: YARN-6521
> URL: https://issues.apache.org/jira/browse/YARN-6521
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangyubiao
>
> Nowadays we have a large cluster; as the apps and containers increase, we split 
> off a namespace to store app logs. 
> But the growth of the log-aggregation directory /tmp/app-logs also makes the 
> namespace respond slowly.  
> We want to change yarn.log-aggregation-enable from true to false, but transform the 
> yarn log CLI service so it can still get the app logs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5894) fixed license warning caused by de.ruedigermoeller:fst:jar:2.24

2017-04-26 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985224#comment-15985224
 ] 

Haibo Chen commented on YARN-5894:
--

Looks like objenesis is needed only for android per 
https://github.com/RuedigerMoeller/fast-serialization/blob/aceaad0075b2e1ef796597a1098aeb39fbea7fc9/pom.xml#L141

I am going to exclude that as well and see if it causes any problems.

> fixed license warning caused by de.ruedigermoeller:fst:jar:2.24
> ---
>
> Key: YARN-5894
> URL: https://issues.apache.org/jira/browse/YARN-5894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: YARN-5894.00.patch
>
>
> The artifact de.ruedigermoeller:fst:jar:2.24, that ApplicationHistoryService 
> depends on,  shows its license being LGPL 2.1 in our license checking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6521) Yarn Log-aggregation Transform Enable not to Spam the NameNode

2017-04-26 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985230#comment-15985230
 ] 

Robert Kanter commented on YARN-6521:
-

I think what [~piaoyu zhang] is asking is that even when log aggregation is disabled 
(because it was creating too many small files and causing a problem for the NN), 
he still wants to be able to get logs from the YARN CLI logs command, which 
requires log aggregation.  

[~piaoyu zhang], instead of disabling log aggregation, you could periodically 
run the Archive Logs tool 
(https://hadoop.apache.org/docs/r2.8.0/hadoop-archive-logs/HadoopArchiveLogs.html),
 which combines all of the log files for an Application into one file.  The 
YARN CLI logs command and JHS can still read these files.  It does require 
Hadoop 2.8.0+ though.

> Yarn Log-aggregation Transform Enable not to Spam the NameNode
> --
>
> Key: YARN-6521
> URL: https://issues.apache.org/jira/browse/YARN-6521
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangyubiao
>
> Nowadays we have a large cluster; as the apps and containers increase, we split 
> off a namespace to store app logs. 
> But the growth of the log-aggregation directory /tmp/app-logs also makes the 
> namespace respond slowly.  
> We want to change yarn.log-aggregation-enable from true to false, but transform the 
> yarn log CLI service so it can still get the app logs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6520) Fix warnings from Spotbugs in hadoop-yarn-client

2017-04-26 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985234#comment-15985234
 ] 

Naganarasimha G R commented on YARN-6520:
-

[~cheersyang], I have triggered the build again manually, let's check this time!

> Fix warnings from Spotbugs in hadoop-yarn-client
> 
>
> Key: YARN-6520
> URL: https://issues.apache.org/jira/browse/YARN-6520
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: findbugs
> Attachments: YARN-6520.001.patch
>
>
> There are 2 findbugs warnings in hadoop-yarn-client since the switch to spotbugs
> # Possible exposure of partially initialized object in 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getTimelineDelegationToken()
> # Useless condition: it's known that isAppFinished == false at this point
> See more info from 
> [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-warnings.html]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6496) Rethink on configurations published into ATSv2

2017-04-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985238#comment-15985238
 ] 

Varun Saxena commented on YARN-6496:


[~rohithsharma], these are still effectively key-value pairs. 
Can't we flatten out the JSON representation to send key-value pairs? What I 
mean is that the KVs in the example above could be:
{code}
properties.rohith.test.properties: "inside-properties"
env.NUM_SHARDS: "insde-env"
files.type: "HADOOP_XML_TEMPLATE",
files.src_file: "hdfs://yclouddev/tmp/conf/core-site.xml",
files.dest_file: "/etc/hadoop/conf/core-site.xml"
{code}

Do you have to reconstruct the JSON representation after reading it back from 
ATSv2 Reader?

> Rethink on configurations published into ATSv2
> --
>
> Key: YARN-6496
> URL: https://issues.apache.org/jira/browse/YARN-6496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The Slider configuration model is a bit different from Hadoop's. Slider has a 
> configuration object with several distinct fields, such as properties, 
> environment and files, which carry more functionality such as fileName, fileType, 
> srcFile and destFile. 
> {code}
> "configuration" : {
> "properties": {
> "rohith.test.properties": "inside-properties"
> },
> "env": {
> "NUM_SHARDS": "insde-env"
> },
> "files" : [
> {
> "type": "HADOOP_XML_TEMPLATE",
> "src_file": "hdfs://yclouddev/tmp/conf/core-site.xml",
> "dest_file": "/etc/hadoop/conf/core-site.xml"
> }]
> }
> {code}
> Timeline entity config is modeled as a flattened map of String to String, 
> which is not flexible enough to satisfy the above use case. We need to rethink 
> how it can be modeled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5894) fixed license warning caused by de.ruedigermoeller:fst:jar:2.24

2017-04-26 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5894:
-
Attachment: YARN-5894.01.patch

Updated the patch to also exclude objenesis

> fixed license warning caused by de.ruedigermoeller:fst:jar:2.24
> ---
>
> Key: YARN-5894
> URL: https://issues.apache.org/jira/browse/YARN-5894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: YARN-5894.00.patch, YARN-5894.01.patch
>
>
> The artifact de.ruedigermoeller:fst:jar:2.24, that ApplicationHistoryService 
> depends on,  shows its license being LGPL 2.1 in our license checking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6518) Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985269#comment-15985269
 ] 

Hadoop QA commented on YARN-6518:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6518 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864721/YARN-6518.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9a3b2d3f5625 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93fa48f |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15744/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15744/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15744/console |
| Powered by | Apache Yetus 0.5.0-SNAPSH

[jira] [Commented] (YARN-6517) Fix warnings from Spotbugs in hadoop-yarn-common

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985272#comment-15985272
 ] 

Hadoop QA commented on YARN-6517:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 52 unchanged - 1 fixed = 52 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6517 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865114/YARN-6517.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1dc87e40f101 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93fa48f |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15742/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15742/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15742/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix warnings from Spotbugs in hadoop-yarn-common

[jira] [Updated] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-26 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-3839:
---
Attachment: YARN-3839.006.patch

> Quit throwing NMNotYetReadyException
> 
>
> Key: YARN-3839
> URL: https://issues.apache.org/jira/browse/YARN-3839
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
> Attachments: YARN-3839.001.patch, YARN-3839.002.patch, 
> YARN-3839.003.patch, YARN-3839.004.patch, YARN-3839.005.patch, 
> YARN-3839.006.patch
>
>
> Quit throwing NMNotYetReadyException when NM has not yet registered with the 
> RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-26 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985281#comment-15985281
 ] 

Manikandan R commented on YARN-3839:


[~jlowe], thanks for the manual trigger.

Although the JUnit failures (except TestContainerManagerSecurity) are unrelated, I 
verified them in my working copy; TestContainerManagerSecurity also ran 
successfully. Since the TestContainerManagerSecurity failure was a timeout, it 
should pass on the next run.

Fixed the checkstyle issue and attached a new patch.

> Quit throwing NMNotYetReadyException
> 
>
> Key: YARN-3839
> URL: https://issues.apache.org/jira/browse/YARN-3839
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
> Attachments: YARN-3839.001.patch, YARN-3839.002.patch, 
> YARN-3839.003.patch, YARN-3839.004.patch, YARN-3839.005.patch, 
> YARN-3839.006.patch
>
>
> Quit throwing NMNotYetReadyException when NM has not yet registered with the 
> RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6515) Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985282#comment-15985282
 ] 

Hadoop QA commented on YARN-6515:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 0 new + 0 unchanged - 5 fixed = 0 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864692/YARN-6515.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 84b0eef219f9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93fa48f |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15743/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15743/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15743/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (YARN-6521) Yarn Log-aggregation Transform Enable not to Spam the NameNode

2017-04-26 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985289#comment-15985289
 ] 

Junping Du commented on YARN-6521:
--

[~piaoyu zhang], I think what [~rkanter] mentioned above should at least 
partially work for you. If that is still not enough, I am concerned that disabling 
log aggregation while serving some apps' aggregated logs on demand will bring many 
other issues, such as how to manage the local container logs and how to make sure 
an app's logs still exist when the log CLI runs. 
An alternative is to set a TTL on aggregated logs so that only the latest 
finished/running apps are kept. The policy can be flexible: time based, 
e.g. the latest 15 days; app-count based, e.g. the latest 1000 apps; or even size 
based (considering long-running apps).
CC [~xgong], [~jlowe].
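
For illustration, here is a minimal sketch of what such flexible retention policies could look like; the interface and policy classes below are assumptions for discussion only, not existing YARN code:

{code}
// Hypothetical sketch only: none of these types exist in YARN today.
// A policy decides whether an application's aggregated log directory may be deleted.
interface AggregatedLogRetentionPolicy {
  boolean shouldDelete(long appFinishTimeMs, int appsFinishedAfterThisOne, long totalLogBytes);
}

// Time based: keep only apps that finished within the last N days.
class TimeBasedPolicy implements AggregatedLogRetentionPolicy {
  private final long maxAgeMs;
  TimeBasedPolicy(long maxAgeDays) { this.maxAgeMs = maxAgeDays * 24L * 3600L * 1000L; }
  @Override
  public boolean shouldDelete(long appFinishTimeMs, int appsFinishedAfterThisOne, long totalLogBytes) {
    return System.currentTimeMillis() - appFinishTimeMs > maxAgeMs;
  }
}

// App-count based: keep only the latest N finished apps.
class CountBasedPolicy implements AggregatedLogRetentionPolicy {
  private final int maxApps;
  CountBasedPolicy(int maxApps) { this.maxApps = maxApps; }
  @Override
  public boolean shouldDelete(long appFinishTimeMs, int appsFinishedAfterThisOne, long totalLogBytes) {
    return appsFinishedAfterThisOne >= maxApps;
  }
}
{code}

A size-based policy would follow the same shape, comparing the total bytes of the logs kept so far against a configured ceiling.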

> Yarn Log-aggregation Transform Enable not to Spam the NameNode
> --
>
> Key: YARN-6521
> URL: https://issues.apache.org/jira/browse/YARN-6521
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangyubiao
>
> Nowadays we have a large cluster; as the number of apps and containers 
> increases, we split off a separate namespace to store app logs. 
> But the growth of the log-aggregation directory /tmp/app-logs also makes that 
> namespace respond slowly.  
> We want to change yarn.log-aggregation-enable from true to false, but keep the 
> yarn log CLI service able to fetch the app logs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6521) Yarn Log-aggregation Transform Enable not to Spam the NameNode

2017-04-26 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985307#comment-15985307
 ] 

Robert Kanter commented on YARN-6521:
-

[~djp], having policies for that is kind of what I was getting at in YARN-6356. 
 YARN-6356 suggests adding a separate retain-seconds config for succeeded jobs 
vs failed jobs so you could, for example, keep succeeded job logs for 1 day 
(because you don't really care about them) and failed job logs for 1 week 
(because you need time to look at them).  But generalizing that to be more 
flexible is a good idea.

> Yarn Log-aggregation Transform Enable not to Spam the NameNode
> --
>
> Key: YARN-6521
> URL: https://issues.apache.org/jira/browse/YARN-6521
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangyubiao
>
> Nowadays we have a large cluster; as the number of apps and containers 
> increases, we split off a separate namespace to store app logs. 
> But the growth of the log-aggregation directory /tmp/app-logs also makes that 
> namespace respond slowly.  
> We want to change yarn.log-aggregation-enable from true to false, but keep the 
> yarn log CLI service able to fetch the app logs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6520) Fix warnings from Spotbugs in hadoop-yarn-client

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985323#comment-15985323
 ] 

Hadoop QA commented on YARN-6520:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 0 new + 0 unchanged - 2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 27s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864720/YARN-6520.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2ac9b7e4281e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93fa48f |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15745/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15745/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15745/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15745/console |

[jira] [Commented] (YARN-5301) NM mount cpu cgroups failed on some systems

2017-04-26 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985324#comment-15985324
 ] 

Daniel Templeton commented on YARN-5301:


My concern was that swallowing the exception might leave things in a bad state. 
 I tested on MacOS, and it looks like everything works out in the end.  My one 
final quibble, though, is that the comment about unit tests is a little 
misleading.  I just exercised this code path by running a node manager on 
MacOS, not through a unit test.

> NM mount cpu cgroups failed on some systems
> ---
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Miklos Szegedi
> Attachments: YARN-5301.000.patch, YARN-5301.001.patch, 
> YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch, 
> YARN-5301.005.patch, YARN-5301.006.patch
>
>
> On Ubuntu with Linux kernel 3.19, NM startup fails if automatic cgroup mounting 
> is enabled. Commands tried:
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu   (fails)
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu   
> (succeeds)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6510) Fix profs stat file warning caused by process names that includes parenthesis

2017-04-26 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6510:
-
Summary: Fix profs stat file warning caused by process names that includes 
parenthesis  (was: Fix warning - procfs stat file is not in the expected 
format: YARN-3344 is not enough)

> Fix profs stat file warning caused by process names that includes parenthesis
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344 we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parenthesis in the name which causes the pattern 
> matching to fail



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6510) Fix profs stat file warning caused by process names that includes parenthesis

2017-04-26 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985345#comment-15985345
 ] 

Haibo Chen commented on YARN-6510:
--

Thanks for the clarification. +1. checking this in.

> Fix profs stat file warning caused by process names that includes parenthesis
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344 we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parenthesis in the name which causes the pattern 
> matching to fail



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-04-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985361#comment-15985361
 ] 

Varun Saxena commented on YARN-2962:


bq. Why are you removing the rm1.close() and rm2.close() in 
TestZKRMStateStore.testFencing()?
I am actually adding them.

Fixed the other comments.
Sorry, I was on leave and could not update the patch earlier.

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.008.patch, YARN-2962.008.patch, YARN-2962.009.patch, 
> YARN-2962.010.patch, YARN-2962.011.patch, YARN-2962.01.patch, 
> YARN-2962.04.patch, YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes, even though 
> individually they were all small.
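
For intuition, a tiny sketch of the kind of znode bucketing such a limit implies; the path layout and split width here are purely illustrative and not necessarily what the patch implements:

{code}
// Illustrative only: spread application znodes across parent buckets derived from
// the tail of the application id, so that no single parent accumulates every child.
public final class ZnodeBucketing {
  static String bucketedAppPath(String rootPath, String appId, int splitDigits) {
    String bucket = appId.substring(appId.length() - splitDigits);
    return rootPath + "/" + splitDigits + "/" + bucket + "/" + appId;
  }

  public static void main(String[] args) {
    // application_1493200000000_0007 -> /rmstore/ZKRMStateRoot/RMAppRoot/2/07/application_1493200000000_0007
    System.out.println(bucketedAppPath("/rmstore/ZKRMStateRoot/RMAppRoot",
        "application_1493200000000_0007", 2));
  }
}
{code}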



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-04-26 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2962:
---
Attachment: YARN-2962.011.patch

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.008.patch, YARN-2962.008.patch, YARN-2962.009.patch, 
> YARN-2962.010.patch, YARN-2962.011.patch, YARN-2962.01.patch, 
> YARN-2962.04.patch, YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes, even though 
> individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6510) Fix profs stat file warning caused by process names that includes parenthesis

2017-04-26 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6510:
-
Fix Version/s: 3.0.0-alpha3
   2.9.0

> Fix profs stat file warning caused by process names that includes parenthesis
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344 we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parenthesis in the name which causes the pattern 
> matching to fail



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6510) Fix profs stat file warning caused by process names that includes parenthesis

2017-04-26 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985444#comment-15985444
 ] 

Haibo Chen commented on YARN-6510:
--

Thanks for your contribution, [~wilfreds]! I have committed it to branch-2 and 
trunk.

> Fix profs stat file warning caused by process names that includes parenthesis
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344 we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parenthesis in the name which causes the pattern 
> matching to fail



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5301) NM mount cpu cgroups failed on some systems

2017-04-26 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5301:
-
Attachment: YARN-5301.007.patch

Thank you for the review [~templedf]. I updated the patch.

> NM mount cpu cgroups failed on some systems
> ---
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Miklos Szegedi
> Attachments: YARN-5301.000.patch, YARN-5301.001.patch, 
> YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch, 
> YARN-5301.005.patch, YARN-5301.006.patch, YARN-5301.007.patch
>
>
> On Ubuntu with Linux kernel 3.19, NM startup fails if automatic cgroup mounting 
> is enabled. Commands tried:
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu   (fails)
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu   
> (succeeds)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985476#comment-15985476
 ] 

Hadoop QA commented on YARN-5411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
16s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
42s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 20 new + 206 unchanged - 0 fixed = 226 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router
 |
|  |  
org.apache.hadoop.yarn.server.router.clientrmproxy.ClientRMProxyService$RequestInterceptorChainWrapper.finalize()
 is public; should be protected  At ClientRMProxyService.java:protected  At 
ClientRMProxyService.java:[lines 535-536] |
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-5411 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865044/YARN-5411-YARN-2915.v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall

[jira] [Updated] (YARN-5067) Support specifying resources for AM containers in SLS

2017-04-26 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5067:
---
Attachment: YARN-5067.002.patch

Rebased.

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch
>
>
> Currently the resource of application masters in SLS is hardcoded to mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6510) Fix profs stat file warning caused by process names that includes parenthesis

2017-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985542#comment-15985542
 ] 

Hudson commented on YARN-6510:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11635 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11635/])
YARN-6510. Fix profs stat file warning caused by process names that (haibochen: 
rev 4f3ca0396a810f54f7fd0489a224c1bb13143aa4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java
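
For reference, a minimal sketch of the parsing idea behind the fix; this is illustrative only and not the committed ProcfsBasedProcessTree change. The command name in /proc/<pid>/stat is delimited by the first '(' and the last ')', so any parentheses inside the name survive:

{code}
// Illustrative sketch: split a /proc/<pid>/stat line whose command name may itself
// contain parentheses, e.g. "(ib_fmr(mlx4_0))".
public final class ProcStatNameParser {
  static String[] split(String statLine) {
    int open = statLine.indexOf('(');
    int close = statLine.lastIndexOf(')');
    if (open < 0 || close < open) {
      throw new IllegalArgumentException("Unexpected stat format: " + statLine);
    }
    String pid = statLine.substring(0, open).trim();
    String name = statLine.substring(open + 1, close); // may contain '(' and ')'
    String rest = statLine.substring(close + 1).trim(); // state, ppid, pgrpId, ...
    return new String[] { pid, name, rest };
  }

  public static void main(String[] args) {
    String line = "2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908";
    System.out.println(split(line)[1]); // prints: ib_fmr(mlx4_0)
  }
}
{code}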


> Fix profs stat file warning caused by process names that includes parenthesis
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344 we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parenthesis in the name which causes the pattern 
> matching to fail



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2017-04-26 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985613#comment-15985613
 ] 

Subru Krishnan commented on YARN-5411:
--

Thanks [~giovanni.fumarola] for the patch. I looked at it & please find my 
comments below:

  * There are quite a few Yetus warnings, kindly fix them.
  * I saw some TODOs; can you either address them or open a follow-up JIRA and 
reference it in a comment?
  * {{LRUCacheHashMap}} should be moved to yarn-common as it's generally useful.
  * I feel we should drop the _proxy_ suffix in package names.
  * I am not a fan of adding more configs in {{YarnConfiguration}}. Can we 
reuse the RM config, e.g. for the number of worker threads?
  * There's a typo in {{YarnConfiguration}}, ROUTER_PREFIX should be 
YARN_FEDERATION_PREFIX + "router".
  * In {{ClientRMProxyService}}, please add code comments on the usage of 
LRUCache.
  * We should refactor & reuse 
*ClientRMProxyService::getInterceptorClassNames/createRequestInterceptorChain* 
with {{AMRMProxyService}}?
  * *getPipelines* should have only test visibility in {{ClientRMProxyService}}.
  * In {{ClientRequestInterceptor::init}}, it may be more efficient to use the 
UGI directly, as we seem to be translating it twice now.
  * Rename *clientRM* --> *clientRMProxy* in 
{{DefaultClientRequestInterceptor}}.

Feedback on tests:

  * We should run the {{ClientRMProxyService}} with the 
{{DefaultClientRequestInterceptor}} directly using the 
[TestClientRMService|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java]
 in the RM module.
  * {{MockResourceManagerFacade}} - we already added one in YARN-2884 and I 
have already suggested to reuse it in YARN-5531. Can you also refactor into 
common module and reuse?
  * {{MockClientRequestInterceptor}} should extend 
DefaultClientRequestInterceptor and only override init.
  * *testRequestInterceptorChainCreation* - remove the switch case as it's 
confusing.
  * I didn't grok *testUsersChainMapWithLRUCache*: the pipeline count should 
not increase since we are not using different users, and even if it did 
increase, shouldn't _test1_ be evicted? (A minimal LRU-map sketch follows below 
for reference.)
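
For reference, an LRU map of the kind {{LRUCacheHashMap}} implies can be a thin wrapper over java.util.LinkedHashMap in access order; a minimal sketch (illustrative, not the patch code):

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered LRU map: once the size limit is exceeded, the least recently
// accessed entry is evicted, which is the behavior the eviction question above assumes.
public class LruMap<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public LruMap(int maxSize) {
    super(16, 0.75f, true /* access order */);
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxSize;
  }
}
{code}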

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5411-YARN-2915.v1.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client request to appropriate ResourceManager(s). This 
> JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 3) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern like we did in YARN-2884 to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6534) ResourceManager failed due to TimelineClient try to init SSLFactory even https is not enabled

2017-04-26 Thread Junping Du (JIRA)
Junping Du created YARN-6534:


 Summary: ResourceManager failed due to TimelineClient try to init 
SSLFactory even https is not enabled
 Key: YARN-6534
 URL: https://issues.apache.org/jira/browse/YARN-6534
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha3
Reporter: Junping Du
Priority: Blocker


In a non-secured cluster, the RM consistently fails because 
TimelineServiceV1Publisher tries to init TimelineClient with an SSLFactory 
without checking whether https is in use.

{noformat}
2017-04-26 21:09:10,683 FATAL resourcemanager.ResourceManager 
(ResourceManager.java:main(1457)) - Error starting ResourceManager
org.apache.hadoop.service.ServiceStateException: java.io.FileNotFoundException: 
/etc/security/clientKeys/all.jks (No such file or directory)
at 
org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:131)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.AbstractSystemMetricsPublisher.serviceInit(AbstractSystemMetricsPublisher.java:59)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.serviceInit(TimelineServiceV1Publisher.java:67)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:344)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1453)
Caused by: java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No 
such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:168)
at 
org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:86)
at 
org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:179)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.getSSLFactory(TimelineConnector.java:176)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.serviceInit(TimelineConnector.java:106)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 11 more
{noformat}
CC [~rohithsharma] and [~gtCarrera9]
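
A minimal sketch of the kind of guard this implies, assuming the fix should simply skip SSLFactory setup when the cluster is not configured for HTTPS (this is not the committed change):

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

final class TimelineSslGuard {
  // Returns null for plain-HTTP clusters, so no keystore/truststore is ever touched
  // and the FileNotFoundException above cannot happen.
  static SSLFactory maybeCreateSslFactory(Configuration conf)
      throws GeneralSecurityException, IOException {
    if (!YarnConfiguration.useHttps(conf)) {
      return null;
    }
    SSLFactory factory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    factory.init();
    return factory;
  }
}
{code}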



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5881) Enable configuration of queue capacity in terms of absolute resources

2017-04-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985661#comment-15985661
 ] 

Wangda Tan commented on YARN-5881:
--

Thanks [~sunilg] for the POC patch. The general approach looks good; a few 
structural/cosmetic suggestions:

1) For the quotas (like effective min/max, configured min/max, etc.) I think it 
is better to put all of them in one class, because that handles the following 
better: a. for a label that does not exist, it can return a zero resource instead 
of null; b. it can have a separate lock.

What I'm thinking now is moving the common methods / fields from ResourceUsage to 
a common base class, and leaving all type-specific methods (such as setCachedUsed) 
in the subclass.

We can then create a new class (such as QueueResourceQuotas) which extends the 
base class and adds methods to read/write the queue resource quota types.

The downside of the approach is that the resources associated with one partition 
are determined by ResourceUsage#ResourceType. Without lots of changes, each 
partition will include many unnecessary resource types (for example, used_resource 
is unnecessary for ResourceQuotas). That does not look like a big problem to me, 
since accessing items in an array is very fast and the number of partitions is 
limited.

2) Can we move {{loadResourceConstraintByLabelsFromConf}} from {{CSQueueUtils}} 
to AbstractCSQueue? It is more related to the queue and its parent queue, and 
with this change many of the input parameters of 
{{loadResourceConstraintByLabelsFromConf}} can be removed.

3) For setting the effective min/max Resource, do you think we can do it only 
inside {{LeafQueue/ParentQueue#updateClusterResource}}? Right now 
{{computeEffectiveResourcesForAllQueues}} creates a separate recursive call; I 
think it would be better to keep all queue-resource-related changes inside 
{{LeafQueue/ParentQueue#updateClusterResource}}.

Please let me know your thoughts. For the next patch, it would be good to 
update all scheduler paths to use getEffective*.
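
A rough sketch of the structural suggestion in 1), with illustrative names only (the final patch may look different):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.yarn.api.records.Resource;

// Shared base: per-partition storage that returns a zero resource for labels
// that do not exist, instead of null.
abstract class AbstractResourceUsage {
  private final Map<String, Map<Integer, Resource>> byPartition = new ConcurrentHashMap<>();

  protected Resource get(String partition, int type) {
    Map<Integer, Resource> m = byPartition.get(partition);
    Resource r = (m == null) ? null : m.get(type);
    return (r == null) ? Resource.newInstance(0, 0) : r;
  }

  protected void set(String partition, int type, Resource r) {
    byPartition.computeIfAbsent(partition, p -> new ConcurrentHashMap<>()).put(type, r);
  }
}

// Quota-specific subclass holding the effective min/max per label.
class QueueResourceQuotas extends AbstractResourceUsage {
  private static final int EFFECTIVE_MIN = 0, EFFECTIVE_MAX = 1;

  Resource getEffectiveMinResource(String label) { return get(label, EFFECTIVE_MIN); }
  void setEffectiveMinResource(String label, Resource r) { set(label, EFFECTIVE_MIN, r); }
  Resource getEffectiveMaxResource(String label) { return get(label, EFFECTIVE_MAX); }
  void setEffectiveMaxResource(String label, Resource r) { set(label, EFFECTIVE_MAX, r); }
}
{code}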

> Enable configuration of queue capacity in terms of absolute resources
> -
>
> Key: YARN-5881
> URL: https://issues.apache.org/jira/browse/YARN-5881
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sean Po
>Assignee: Wangda Tan
> Attachments: 
> YARN-5881.Support.Absolute.Min.Max.Resource.In.Capacity.Scheduler.design-doc.v1.pdf,
>  YARN-5881.v0.patch, YARN-5881.v1.patch
>
>
> Currently, Yarn RM supports the configuration of queue capacity in terms of a 
> proportion to cluster capacity. In the context of Yarn being used as a public 
> cloud service, it makes more sense if queues can be configured absolutely. 
> This will allow administrators to set usage limits more concretely and 
> simplify customer expectations for cluster allocation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6534) ResourceManager failed due to TimelineClient try to init SSLFactory even https is not enabled

2017-04-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6534:
-
Target Version/s: YARN-5355  (was: 3.0.0-alpha3)

> ResourceManager failed due to TimelineClient try to init SSLFactory even 
> https is not enabled
> -
>
> Key: YARN-6534
> URL: https://issues.apache.org/jira/browse/YARN-6534
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Junping Du
>Priority: Blocker
>
> In a non-secured cluster, the RM consistently fails because 
> TimelineServiceV1Publisher tries to init TimelineClient with an SSLFactory 
> without checking whether https is in use.
> {noformat}
> 2017-04-26 21:09:10,683 FATAL resourcemanager.ResourceManager 
> (ResourceManager.java:main(1457)) - Error starting ResourceManager
> org.apache.hadoop.service.ServiceStateException: 
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:131)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.AbstractSystemMetricsPublisher.serviceInit(AbstractSystemMetricsPublisher.java:59)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.serviceInit(TimelineServiceV1Publisher.java:67)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:344)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1453)
> Caused by: java.io.FileNotFoundException: /etc/security/clientKeys/all.jks 
> (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:168)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:86)
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:179)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineConnector.getSSLFactory(TimelineConnector.java:176)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineConnector.serviceInit(TimelineConnector.java:106)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> ... 11 more
> {noformat}
> CC [~rohithsharma] and [~gtCarrera9]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6534) ResourceManager failed due to TimelineClient try to init SSLFactory even https is not enabled

2017-04-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6534:
-
Target Version/s: YARN-5355, 3.0.0-alpha3  (was: YARN-5355)

> ResourceManager failed due to TimelineClient try to init SSLFactory even 
> https is not enabled
> -
>
> Key: YARN-6534
> URL: https://issues.apache.org/jira/browse/YARN-6534
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Junping Du
>Priority: Blocker
>
> In a non-secured cluster, the RM consistently fails because 
> TimelineServiceV1Publisher tries to init TimelineClient with an SSLFactory 
> without checking whether https is in use.
> {noformat}
> 2017-04-26 21:09:10,683 FATAL resourcemanager.ResourceManager 
> (ResourceManager.java:main(1457)) - Error starting ResourceManager
> org.apache.hadoop.service.ServiceStateException: 
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:131)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.AbstractSystemMetricsPublisher.serviceInit(AbstractSystemMetricsPublisher.java:59)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.serviceInit(TimelineServiceV1Publisher.java:67)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:344)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1453)
> Caused by: java.io.FileNotFoundException: /etc/security/clientKeys/all.jks 
> (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:168)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:86)
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:179)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineConnector.getSSLFactory(TimelineConnector.java:176)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineConnector.serviceInit(TimelineConnector.java:106)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> ... 11 more
> {noformat}
> CC [~rohithsharma] and [~gtCarrera9]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-6344) Add parameter for rack locality delay in CapacityScheduler

2017-04-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened YARN-6344:
--

> Add parameter for rack locality delay in CapacityScheduler
> --
>
> Key: YARN-6344
> URL: https://issues.apache.org/jira/browse/YARN-6344
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6344.001.patch, YARN-6344.002.patch, 
> YARN-6344.003.patch, YARN-6344.004.patch, YARN-6344-branch-2.8.patch
>
>
> When relaxing locality from node to rack, the {{node-locality-parameter}} is 
> used: when scheduling opportunities for a scheduler key are more than the 
> value of this parameter, we relax locality and try to assign the container to 
> a node in the corresponding rack.
> On the other hand, when relaxing locality to off-switch (i.e., assign the 
> container anywhere in the cluster), we are using a {{localityWaitFactor}}, 
> which is computed as the number of outstanding requests for a specific 
> scheduler key divided by the size of the cluster. 
> In case of applications that request containers in big batches (e.g., 
> traditional MR jobs), and for relatively small clusters, the 
> localityWaitFactor does not affect relaxing locality much.
> However, in case of applications that request containers in small batches, 
> this load factor takes a very small value, which leads to assigning 
> off-switch containers too soon. This situation is even more pronounced in big 
> clusters.
> For example, if an application requests only one container per request, the 
> locality will be relaxed after a single missed scheduling opportunity.
> The purpose of this JIRA is to rethink the way we are relaxing locality for 
> off-switch assignments.
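
To make the arithmetic above concrete, a simplified sketch (illustrative names, 
not the actual CapacityScheduler fields):
{code}
// Simplified sketch of the off-switch relaxation described above.
float localityWaitFactor =
    Math.min((float) outstandingRequests / clusterNodes, 1.0f);
boolean relaxToOffSwitch =
    missedOpportunities >= clusterNodes * localityWaitFactor;

// Example: outstandingRequests = 1 on a 100-node cluster gives
// localityWaitFactor = 0.01, so the threshold is 100 * 0.01 = 1 and
// locality is relaxed after a single missed scheduling opportunity.
{code}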






[jira] [Commented] (YARN-6344) Add parameter for rack locality delay in CapacityScheduler

2017-04-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985667#comment-15985667
 ] 

Wangda Tan commented on YARN-6344:
--

Thanks [~Huangkx6810] for testing and getting back to us.

I just reopened the JIRA so Jenkins can run tests for the 2.8 patch, will 
commit once Jenkins comes back.

> Add parameter for rack locality delay in CapacityScheduler
> --
>
> Key: YARN-6344
> URL: https://issues.apache.org/jira/browse/YARN-6344
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6344.001.patch, YARN-6344.002.patch, 
> YARN-6344.003.patch, YARN-6344.004.patch, YARN-6344-branch-2.8.patch
>
>
> When relaxing locality from node to rack, the {{node-locality-parameter}} is 
> used: when scheduling opportunities for a scheduler key are more than the 
> value of this parameter, we relax locality and try to assign the container to 
> a node in the corresponding rack.
> On the other hand, when relaxing locality to off-switch (i.e., assign the 
> container anywhere in the cluster), we are using a {{localityWaitFactor}}, 
> which is computed as the number of outstanding requests for a specific 
> scheduler key divided by the size of the cluster. 
> In case of applications that request containers in big batches (e.g., 
> traditional MR jobs), and for relatively small clusters, the 
> localityWaitFactor does not affect relaxing locality much.
> However, in case of applications that request containers in small batches, 
> this load factor takes a very small value, which leads to assigning 
> off-switch containers too soon. This situation is even more pronounced in big 
> clusters.
> For example, if an application requests only one container per request, the 
> locality will be relaxed after a single missed scheduling opportunity.
> The purpose of this JIRA is to rethink the way we are relaxing locality for 
> off-switch assignments.






[jira] [Commented] (YARN-6374) Improve test coverage and add utility classes for common Docker operations

2017-04-26 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985754#comment-15985754
 ] 

Sidharta Seethana commented on YARN-6374:
-

[~shaneku...@gmail.com], thanks for the updated patch. Here is some feedback : 

{code}
dockerOp.disableFailureLogging();
{code}

Why is failure logging disabled for all operations that use this method? So far 
this has only been the case for signaling containers.

{code}
LOG.debug("Running docker rm. ContainerId: " + containerId);
{code}

Could you enclose such debug log lines with checks to see if debug logging is 
enabled? 
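
For example, the usual guard:
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Running docker rm. ContainerId: " + containerId);
}
{code}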

{code}
public static void executeDockerRm(String containerId,
{code}

I see the value in {{executeDockerCommand}} - but the other methods (e.g. 
{{executeDockerRm}}) might not actually be beneficial. Instead of  
{{DockerCommandExecutor.executeDockerRm(containerId, …)}}, it would be 
{{DockerCommandExecutor.executeDockerCommand(new DockerRmCommand(containerId), 
... )}} . I am not sure there is much point to these additional wrappers. In 
fact, this would mean additional work every time a new subclass of 
{{DockerCommand}} is added. 

{code}
  public DockerContainerStatus getContainerStatus(String containerId,
  Configuration conf,
  PrivilegedOperationExecutor privilegedOperationExecutor) {

{code}

Is there a reason this method isn’t static like the utility methods in the 
{{DockerCommandExecutor}} class? What do you think about simplifying this by 
combining both of these into one class (i.e. is there a reason 
{{getContainerStatus}} can’t be bundled with {{DockerCommandExecutor}}?)

{code}
  public DockerLinuxContainerRuntime(PrivilegedOperationExecutor
  privilegedOperationExecutor, CGroupsHandler cGroupsHandler,
  DockerContainerStatusHandler dockerContainerStatusHandler) {
{code}

So far, everything docker related has been encapsulated within the runtime 
class ( and this includes the {{DockerClient}} which {{DockerCommandExecutor}} 
uses). It seems counter-intuitive to inject something as specific as a 
{{DockerContainerStatusHandler}} as a dependency for 
{{DockerLinuxContainerRuntime}} itself.  

{code}
public class MockDockerContainerStatusHandler
extends DockerContainerStatusHandler {
.
.
.
protected String executeStatusCommand(String containerId,
  Configuration conf,
  PrivilegedOperationExecutor privilegedOperationExecutor)
  throws ContainerExecutionException {
{code}
What would be the reason to execute the inspect if the objective is to return a 
mocked/injected status? 

{code}
  @Test
  public void testExecuteDockerRm() throws Exception {
DockerCommandExecutor
{code}

Same question as earlier - is it necessary to add all these wrappers?

{code}
@Test
  public void testGetContainerStatusForCreated()
{code}

There seem to be a lot of these test methods to test one if-else tree in 
{{getContainerStatus}}. Wouldn’t it be easier to list all the distinct enum 
values and test them in a loop as opposed to writing 8 functions?
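
Something along these lines would cover all the states in one method (a sketch 
only; {{mockStatus}} and {{getName}} are assumed stand-ins for the real helpers 
in the patch):
{code}
@Test
public void testGetContainerStatusForAllStates() throws Exception {
  for (DockerContainerStatus expected : DockerContainerStatus.values()) {
    mockStatus(expected.getName()); // hypothetical injection helper
    assertEquals(expected,
        handler.getContainerStatus(containerId, conf,
            privilegedOperationExecutor));
  }
}
{code}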

{code}
  @Test
  public void testRemoveContainerOnExit() throws Exception {
{code}

A lot of new tests here to test argument concatenation in the 
{{DockerRunCommand}} class - do you think these can be consolidated into fewer 
methods? Some of this is already tested in the {{TestDockerContainerRuntime}} 
class. 

> Improve test coverage and add utility classes for common Docker operations
> --
>
> Key: YARN-6374
> URL: https://issues.apache.org/jira/browse/YARN-6374
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6374.001.patch, YARN-6374.002.patch
>
>
> Currently, it is tedious to execute Docker-related operations due to the 
> plumbing needed to define the DockerCommand, write the command file, 
> configure the privileged operation, and finally execute the command and 
> validate the result. Obtaining the current status of a Docker container can 
> also be improved. Finally, test coverage is lacking for the Docker commands. 






[jira] [Updated] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-26 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.012.patch

[~leftnoteasy]
bq. updateUserWeights: Should we re-count 
Yes. Thanks for catching this.
bq. count users weight is an O(n)  ... I don't want this patch blocked by the 
comment, we can revisit it once we identify any perf issues.
OK
bq. I suggest you can move the check ... to LeafQueue#setupQueueConfigs.
Done. Good idea. Thanks.

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}






[jira] [Created] (YARN-6535) Program need to exit when SLS finishes.

2017-04-26 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6535:
--

 Summary: Program need to exit when SLS finishes. 
 Key: YARN-6535
 URL: https://issues.apache.org/jira/browse/YARN-6535
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler-load-simulator
Affects Versions: 3.0.0-alpha2
Reporter: Yufei Gu
Assignee: Yufei Gu


The program needs to exit when SLS finishes, except in unit tests.
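
The usual pattern for this kind of fix is a guard flag that unit tests can 
flip; a minimal sketch (illustrative names, not the actual patch):
{code}
// Sketch only: exit the JVM at the end of a standalone run, but let unit
// tests disable the exit so the test JVM survives.
private static boolean exitAtTheFinish = true;

@VisibleForTesting
static void avoidExitAtTheFinish() {
  exitAtTheFinish = false;
}

// ...at the end of the simulation:
if (exitAtTheFinish) {
  System.exit(0); // terminates lingering non-daemon threads
}
{code}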






[jira] [Updated] (YARN-6535) Program needs to exit when SLS finishes.

2017-04-26 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6535:
---
Summary: Program needs to exit when SLS finishes.   (was: Program need to 
exit when SLS finishes. )

> Program needs to exit when SLS finishes. 
> -
>
> Key: YARN-6535
> URL: https://issues.apache.org/jira/browse/YARN-6535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The program needs to exit when SLS finishes, except in unit tests.






[jira] [Updated] (YARN-6535) Program needs to exit when SLS finishes.

2017-04-26 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6535:
---
Attachment: YARN-6535.001.patch

> Program needs to exit when SLS finishes. 
> -
>
> Key: YARN-6535
> URL: https://issues.apache.org/jira/browse/YARN-6535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6535.001.patch
>
>
> The program needs to exit when SLS finishes, except in unit tests.






[jira] [Commented] (YARN-6344) Add parameter for rack locality delay in CapacityScheduler

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985789#comment-15985789
 ] 

Hadoop QA commented on YARN-6344:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
47s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 103 unchanged - 2 fixed = 106 total (was 105) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  7s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5970e82 |
| JIRA Issue | YARN-6344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863501/YARN-6344-branch-2.8.patch
 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  findbugs  ch

[jira] [Commented] (YARN-5301) NM mount cpu cgroups failed on some systems

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985792#comment-15985792
 ] 

Hadoop QA commented on YARN-5301:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-5301 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5301 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865203/YARN-5301.007.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15750/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NM mount cpu cgroups failed on some systems
> ---
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Miklos Szegedi
> Attachments: YARN-5301.000.patch, YARN-5301.001.patch, 
> YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch, 
> YARN-5301.005.patch, YARN-5301.006.patch, YARN-5301.007.patch
>
>
> On Ubuntu with Linux kernel 3.19, NM start failed if auto-mounting of cgroups 
> is enabled. Try the commands:
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu  fail
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu  succ






[jira] [Commented] (YARN-6517) Fix warnings from Spotbugs in hadoop-yarn-common

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985824#comment-15985824
 ] 

Hadoop QA commented on YARN-6517:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 52 unchanged - 1 fixed = 52 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6517 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865114/YARN-6517.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9dd96d91cef 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b5f2c3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15755/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15755/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15755/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix warnings 

[jira] [Commented] (YARN-5894) fixed license warning caused by de.ruedigermoeller:fst:jar:2.24

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985823#comment-15985823
 ] 

Hadoop QA commented on YARN-5894:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-5894 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865189/YARN-5894.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 23444e724a67 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b5f2c3 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15752/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15752/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fixed license warning caused by de.ruedigermoeller:fst:jar:2.24
> ---
>
> Key: YARN-5894
> URL: https://issues.apache.org/jira/browse/YARN-5894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: YARN-5894.00.patch, YARN-5894.01.patch
>
>
> The artifact de.ruedigermoeller:fst:jar:2.24, which ApplicationHistoryService 
> depends on, shows its license as LGPL 2.1 in our license checking.





[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985826#comment-15985826
 ] 

Hadoop QA commented on YARN-5067:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 2 
new + 60 unchanged - 2 fixed = 62 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
57s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-5067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865214/YARN-5067.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 45e4546a56e4 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b5f2c3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15753/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15753/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15753/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15753/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
>

[jira] [Updated] (YARN-5301) NM mount cpu cgroups failed on some systems

2017-04-26 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5301:
-
Attachment: YARN-5301.008.patch

> NM mount cpu cgroups failed on some systems
> ---
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Miklos Szegedi
> Attachments: YARN-5301.000.patch, YARN-5301.001.patch, 
> YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch, 
> YARN-5301.005.patch, YARN-5301.006.patch, YARN-5301.007.patch, 
> YARN-5301.008.patch
>
>
> On Ubuntu with Linux kernel 3.19, NM start failed if auto-mounting of cgroups 
> is enabled. Try the commands:
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu  fail
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu  succ






[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985868#comment-15985868
 ] 

Sunil G commented on YARN-5892:
---

[~eepayne]
Thanks for the work here.

I have a few doubts related to YARN-2113 and this patch together. I remember you 
tested these two patches together.
# Given a user-weight of 0, we can get a user-limit of 0 for certain users. 
Hence the intra-queue preemption module could preempt all containers of a user 
with 0 weight, leaving only its AM containers behind.
# Similar behavior could occur if there are a few users in the cluster whose 
weights are below 1 (and together the sum of the weights is less than 1). These 
edge cases could force more preemption.

Could you please help confirm this.
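
(For the second point, a rough illustration, assuming purely for the sake of 
argument that the effective limit is MULP multiplied by the weight: with a 
queue MULP of 25 and two active users with weights 0.2 and 0.3, the per-user 
limits would come out around 5 and 7.5 percent, together well below the 
queue's guarantee, which is where the extra preemption could kick in.)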

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}






[jira] [Commented] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985871#comment-15985871
 ] 

Hadoop QA commented on YARN-3839:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 324 unchanged - 17 fixed = 324 total (was 341) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
18s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 48s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests 

[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985881#comment-15985881
 ] 

Hadoop QA commented on YARN-2962:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 250 unchanged - 2 fixed = 255 total (was 252) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 25s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-2962 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865198/YARN-2962.011.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle

[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985882#comment-15985882
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 681 unchanged - 1 fixed = 700 total (was 682) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 4 new + 880 unchanged - 0 fixed = 884 total (was 880) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart 

[jira] [Commented] (YARN-6535) Program needs to exit when SLS finishes.

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985951#comment-15985951
 ] 

Hadoop QA commented on YARN-6535:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 1 
new + 6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-tools/hadoop-sls generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
54s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-sls |
|  |  The method name 
org.apache.hadoop.yarn.sls.SLSRunner.AvoidExitAtTheFinish() doesn't start with 
a lower case letter  At SLSRunner.java:start with a lower case letter  At 
SLSRunner.java:[lines 146-147] |
\\
\\
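For context, the new FindBugs warning above is purely a method-naming issue. Below is a minimal, illustrative Java sketch of the kind of lowerCamelCase rename it expects; the class and field names are placeholders, not the actual SLS code:

{code}
// Illustrative only: FindBugs flags Java method names that do not start
// with a lower-case letter. The conventional fix is a lowerCamelCase rename.
public class NamingSketch {
  private static boolean exitAtFinish = true;

  // before: public static void AvoidExitAtTheFinish()
  public static void avoidExitAtTheFinish() {
    exitAtFinish = false;
  }
}
{code}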
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6535 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865240/YARN-6535.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5047748b520d 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28eb2aa |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15757/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15757/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15757/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-sls.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15757/testReport/ |
| modules | C: hadoop-tools

[jira] [Commented] (YARN-6344) Add parameter for rack locality delay in CapacityScheduler

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985956#comment-15985956
 ] 

Hadoop QA commented on YARN-6344:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
48s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 103 unchanged - 2 fixed = 106 total (was 105) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 43s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
 |
|   | hadoop.yarn.server.resourcemanage

[jira] [Commented] (YARN-5301) NM mount cpu cgroups failed on some systems

2017-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985964#comment-15985964
 ] 

Hadoop QA commented on YARN-5301:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 22 unchanged - 4 fixed = 24 total (was 26) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-5301 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865248/YARN-5301.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 77b47ab1ac7b 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28eb2aa |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15756/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15756/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15756/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-pr

[jira] [Commented] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-26 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985981#comment-15985981
 ] 

Manikandan R commented on YARN-3839:


[~jlowe],

The failed JUnit tests are due to timeouts; otherwise everything looks fine. Could 
you please suggest the next steps?

> Quit throwing NMNotYetReadyException
> 
>
> Key: YARN-3839
> URL: https://issues.apache.org/jira/browse/YARN-3839
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
> Attachments: YARN-3839.001.patch, YARN-3839.002.patch, 
> YARN-3839.003.patch, YARN-3839.004.patch, YARN-3839.005.patch, 
> YARN-3839.006.patch
>
>
> Quit throwing NMNotYetReadyException when NM has not yet registered with the 
> RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5301) NM mount cpu cgroups failed on some systems

2017-04-26 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5301:
-
Attachment: YARN-5301.009.patch

Fixed checkstyle, changed all cgroup List references to Set
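
For illustration only (not the patch itself), here is a minimal Java sketch of why a Set of controllers is convenient when cpu and cpuacct have to be co-mounted; the helper name and structure below are assumptions:

{code}
import java.util.LinkedHashSet;
import java.util.Set;

public class CGroupMountSketch {
  // Hypothetical helper: build the argument for a single cgroup mount point.
  // On kernels where cpu and cpuacct share a hierarchy, both controllers have
  // to be mounted together, so they are collected in a Set (no duplicates)
  // before being joined into something like "cpu,cpuacct=/cgroup/cpu".
  static String buildMountArg(Set<String> controllers, String mountPath) {
    Set<String> toMount = new LinkedHashSet<>(controllers);
    if (toMount.contains("cpu")) {
      toMount.add("cpuacct"); // co-mount, mirroring the working command below
    }
    return String.join(",", toMount) + "=" + mountPath;
  }

  public static void main(String[] args) {
    Set<String> controllers = new LinkedHashSet<>();
    controllers.add("cpu");
    System.out.println(buildMountArg(controllers, "/cgroup/cpu")); // cpu,cpuacct=/cgroup/cpu
  }
}
{code}

With a List, the same controller could end up in the mount argument twice; a Set makes the de-duplication explicit.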

> NM mount cpu cgroups failed on some systems
> ---
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: Miklos Szegedi
> Attachments: YARN-5301.000.patch, YARN-5301.001.patch, 
> YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch, 
> YARN-5301.005.patch, YARN-5301.006.patch, YARN-5301.007.patch, 
> YARN-5301.008.patch, YARN-5301.009.patch
>
>
> On Ubuntu with Linux kernel 3.19, the NM fails to start if automatic cgroup 
> mounting is enabled. Try these commands:
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu   fail
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu   succ



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6534) ResourceManager failed due to TimelineClient try to init SSLFactory even https is not enabled

2017-04-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-6534:
---

Assignee: Rohith Sharma K S

> ResourceManager failed due to TimelineClient try to init SSLFactory even 
> https is not enabled
> -
>
> Key: YARN-6534
> URL: https://issues.apache.org/jira/browse/YARN-6534
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Junping Du
>Assignee: Rohith Sharma K S
>Priority: Blocker
>
> In a non-secured cluster, the RM consistently fails to start because 
> TimelineServiceV1Publisher initializes the TimelineClient with an SSLFactory 
> without checking whether https is in use.
> {noformat}
> 2017-04-26 21:09:10,683 FATAL resourcemanager.ResourceManager 
> (ResourceManager.java:main(1457)) - Error starting ResourceManager
> org.apache.hadoop.service.ServiceStateException: 
> java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file 
> or directory)
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:131)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.AbstractSystemMetricsPublisher.serviceInit(AbstractSystemMetricsPublisher.java:59)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher.serviceInit(TimelineServiceV1Publisher.java:67)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:344)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1453)
> Caused by: java.io.FileNotFoundException: /etc/security/clientKeys/all.jks 
> (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.(FileInputStream.java:138)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:168)
> at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.(ReloadingX509TrustManager.java:86)
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:179)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineConnector.getSSLFactory(TimelineConnector.java:176)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineConnector.serviceInit(TimelineConnector.java:106)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> ... 11 more
> {noformat}
> CC [~rohithsharma] and [~gtCarrera9]
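
A minimal sketch of the kind of guard that would avoid this, assuming the https check can be derived from {{yarn.http.policy}}; the class and helper below are illustrative, not the actual TimelineConnector change:

{code}
import org.apache.hadoop.conf.Configuration;

// Illustrative guard only; the real fix in TimelineConnector may differ.
public class TimelineSslGuardSketch {
  // Assumption: HTTPS is in use only when yarn.http.policy is HTTPS_ONLY.
  static boolean httpsEnabled(Configuration conf) {
    return "HTTPS_ONLY".equals(conf.get("yarn.http.policy", "HTTP_ONLY"));
  }

  void initSketch(Configuration conf) {
    if (httpsEnabled(conf)) {
      // Only now would the SSLFactory (and its keystore files) be touched,
      // so an HTTP-only cluster never needs /etc/security/clientKeys/all.jks.
      // sslFactory = getSSLFactory(conf);
    }
  }
}
{code}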



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


