[jira] [Commented] (YARN-4888) Changes in RM AppSchedulingInfo for identifying resource-requests explicitly

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380439#comment-15380439
 ] 

Hadoop QA commented on YARN-4888:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 151 
new + 2056 unchanged - 65 fixed = 2207 total (was 2121) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 7 new + 982 unchanged - 7 fixed = 989 total (was 989) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 12s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 44s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Dead store to priority in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FS

[jira] [Updated] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-2962:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> individually they were all small.
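
One way to keep the number of children under any single znode bounded (a 
hypothetical sketch only, not the approach taken in the attached patches) is to 
bucket application znodes by a suffix of the application id, so that no parent 
znode ever accumulates thousands of children. The class name, path layout, and 
bucket count below are illustrative:

{code}
// Illustrative only: split a flat /rmstore/apps/<appId> layout into
// /rmstore/apps/<bucket>/<appId> so no single znode accumulates
// thousands of children. Names and the bucket count are hypothetical.
public final class ZNodeBucketing {
  private static final int NUM_BUCKETS = 100;

  private ZNodeBucketing() {
  }

  public static String bucketedPath(String parent, String appId) {
    // e.g. application_1468546000000_0042 -> bucket 42 % 100
    int seq = Integer.parseInt(appId.substring(appId.lastIndexOf('_') + 1));
    return parent + "/" + (seq % NUM_BUCKETS) + "/" + appId;
  }

  public static void main(String[] args) {
    System.out.println(
        bucketedPath("/rmstore/apps", "application_1468546000000_0042"));
  }
}
{code}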



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5275) Timeline application page cannot be loaded when no application submitted/running on the cluster after HADOOP-9613

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5275:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Timeline application page cannot be loaded when no application 
> submitted/running on the cluster after HADOOP-9613
> -
>
> Key: YARN-5275
> URL: https://issues.apache.org/jira/browse/YARN-5275
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Priority: Critical
>
> After HADOOP-9613, Timeline Web UI has a problem reported by [~leftnoteasy] 
> and [~sunilg]
> {quote}
> when no application submitted/running on the cluster, applications page 
> cannot be loaded. 
> {quote}
> We should investigate the reason and fix it.






[jira] [Updated] (YARN-5194) Avoid adding yarn-site to all Configuration instances created by the JVM

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5194:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Avoid adding yarn-site to all Configuration instances created by the JVM
> 
>
> Key: YARN-5194
> URL: https://issues.apache.org/jira/browse/YARN-5194
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>
> {code}
> static {
> addDeprecatedKeys();
> Configuration.addDefaultResource(YARN_DEFAULT_CONFIGURATION_FILE);
> Configuration.addDefaultResource(YARN_SITE_CONFIGURATION_FILE);
>   }
> {code}
> This puts the contents of yarn-default and yarn-site into every configuration 
> instance created in the VM after YarnConfiguration has been initialized.
> This should be changed to a local addResource for the specific 
> YarnConfiguration instance, instead of polluting every Configuration instance.
> Incompatible change. Have set the target version to 3.x. 
> The same applies to HdfsConfiguration (hdfs-site.xml), and Configuration 
> (core-site.xml etc).
> core-site may be worth including everywhere, however it would be better to 
> expect users to explicitly add the relevant resources.
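
To make the proposed direction concrete, a minimal sketch (illustrative only, 
not the actual patch, and the class name is hypothetical): load the YARN 
resources on the specific configuration instance instead of registering them as 
process-wide defaults.

{code}
// Illustrative sketch: per-instance resource loading instead of the
// static Configuration.addDefaultResource(...) calls shown above.
import org.apache.hadoop.conf.Configuration;

public class LocalYarnConfiguration extends Configuration {
  public LocalYarnConfiguration(Configuration conf) {
    super(conf);
    // Only this instance sees yarn-default/yarn-site; other Configuration
    // objects created in the JVM are left untouched.
    addResource("yarn-default.xml");
    addResource("yarn-site.xml");
  }
}
{code}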






[jira] [Updated] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should be lower.

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4464:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should be lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, 
> so that property used its default value of 10,000.
> I restarted the RM after changing another configuration and expected the RM 
> to restart immediately.
> Instead, the recovery process was very slow; I waited about 20 minutes 
> before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to lower the default value or add a notice to the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].
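
For anyone hitting the same slow recovery, a hedged illustration of the 
workaround. The property key comes from the description above; the value 1000 
is only an example, not a recommended default.

{code}
// Example only: cap the number of completed applications kept in the
// RM state store so recovery does not have to replay thousands of apps.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LowerStateStoreLimit {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // The chosen value (1000) is illustrative, not an endorsed default.
    conf.setInt(
        "yarn.resourcemanager.state-store.max-completed-applications", 1000);
    System.out.println(conf.getInt(
        "yarn.resourcemanager.state-store.max-completed-applications", -1));
  }
}
{code}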






[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5121:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Comment Edited] (YARN-5261) Lease/Reclaim Extension to Yarn

2016-07-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380427#comment-15380427
 ] 

Subru Krishnan edited comment on YARN-5261 at 7/16/16 2:16 AM:
---

[~y] thanks for the clarifications.

I understand there are slight nuances for your use case, but generally it's good 
to consolidate related efforts, as otherwise we will have an explosion of 
specialized features in YARN (we already see a trend). IIUC, this is what 
[~jlowe] and [~sunilg] are both alluding to.

We do have the ability to reserve cluster resources as part of 
YARN-1051/YARN-2572/YARN-2573. Would that be sufficient, or do you need to be 
able to do it at the node level? If the latter is the case, you could consider 
combining the reservation system with node labels (YARN-4193); the intuition is 
that each node that can be leased has a label (which could be its name).

Another option is to have LeaseManager run outside of YARN _initially_ and use 
the dynamic node resource configuration (YARN-291) to effect the change you 
want.

Thoughts?


was (Author: subru):
@yuuu' thanks for the clarifications.

I understand there are slight nuances for your use case, but generally it's good 
to consolidate related efforts, as otherwise we will have an explosion of 
specialized features in YARN (we already see a trend). IIUC, this is what 
[~jlowe] and [~sunilg] are both alluding to.

We do have the ability to reserve cluster resources as part of 
YARN-1051/YARN-2572/YARN-2573. Would that be sufficient, or do you need to be 
able to do it at the node level? If the latter is the case, you could consider 
combining the reservation system with node labels (YARN-4193); the intuition is 
that each node that can be leased has a label (which could be its name).

Another option is to have LeaseManager run outside of YARN initially and use 
the dynamic node resource configuration (YARN-291) to effect the change you 
want.

Thoughts?

> Lease/Reclaim Extension to Yarn
> ---
>
> Key: YARN-5261
> URL: https://issues.apache.org/jira/browse/YARN-5261
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Yu
> Attachments: YARN-5261-1.patch, YARN-5261-2.patch, YARN-5261-3.patch, 
> Yarn-5261.pdf
>
>
> In some clusters outside of YARN, the machines' resources are not fully 
> utilized, e.g., resource usage is quite low at night.  
> To better utilize these resources while keeping the existing SLA of the 
> cluster, a Lease/Reclaim Extension to YARN is introduced.






[jira] [Commented] (YARN-5261) Lease/Reclaim Extension to Yarn

2016-07-15 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380427#comment-15380427
 ] 

Subru Krishnan commented on YARN-5261:
--

@yuuu' thanks for the clarifications.

I understand there are slight nuances for your use case, but generally it's good 
to consolidate related efforts, as otherwise we will have an explosion of 
specialized features in YARN (we already see a trend). IIUC, this is what 
[~jlowe] and [~sunilg] are both alluding to.

We do have the ability to reserve cluster resources as part of 
YARN-1051/YARN-2572/YARN-2573. Would that be sufficient, or do you need to be 
able to do it at the node level? If the latter is the case, you could consider 
combining the reservation system with node labels (YARN-4193); the intuition is 
that each node that can be leased has a label (which could be its name).

Another option is to have LeaseManager run outside of YARN initially and use 
the dynamic node resource configuration (YARN-291) to effect the change you 
want.

Thoughts?

> Lease/Reclaim Extension to Yarn
> ---
>
> Key: YARN-5261
> URL: https://issues.apache.org/jira/browse/YARN-5261
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Yu
> Attachments: YARN-5261-1.patch, YARN-5261-2.patch, YARN-5261-3.patch, 
> Yarn-5261.pdf
>
>
> In some clusters outside of YARN, the machines' resources are not fully 
> utilized, e.g., resource usage is quite low at night.  
> To better utilize these resources while keeping the existing SLA of the 
> cluster, a Lease/Reclaim Extension to YARN is introduced.






[jira] [Commented] (YARN-4996) Make TestNMReconnect.testCompareRMNodeAfterReconnect() scheduler agnostic, or better yet parameterized

2016-07-15 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380424#comment-15380424
 ] 

Kai Sasaki commented on YARN-4996:
--

[~templedf] Thanks for checking!
[~gtCarrera9] Could you review this when you are available please?

> Make TestNMReconnect.testCompareRMNodeAfterReconnect() scheduler agnostic, or 
> better yet parameterized
> --
>
> Key: YARN-4996
> URL: https://issues.apache.org/jira/browse/YARN-4996
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, test
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-4996.01.patch, YARN-4996.02.patch, 
> YARN-4996.03.patch, YARN-4996.04.patch, YARN-4996.05.patch, 
> YARN-4996.06.patch, YARN-4996.07.patch
>
>
> The test only tests the capacity scheduler.  It should also test the fair 
> scheduler.  At a bare minimum, it should use the default scheduler.
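
A rough sketch of the parameterized approach the summary asks for (JUnit 4 
style; the class name and test body are placeholders, not the actual patch):

{code}
// Illustrative JUnit 4 parameterization over scheduler implementations.
// Only the structure matters here; the real assertions live in
// TestNMReconnect.testCompareRMNodeAfterReconnect().
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler;

@RunWith(Parameterized.class)
public class TestNMReconnectParameterized {

  @Parameters
  public static Collection<Object[]> schedulers() {
    return Arrays.asList(new Object[][] {
        {CapacityScheduler.class}, {FairScheduler.class}});
  }

  private final Class<? extends ResourceScheduler> schedulerClass;

  public TestNMReconnectParameterized(
      Class<? extends ResourceScheduler> schedulerClass) {
    this.schedulerClass = schedulerClass;
  }

  @Test
  public void testCompareRMNodeAfterReconnect() {
    YarnConfiguration conf = new YarnConfiguration();
    conf.setClass(YarnConfiguration.RM_SCHEDULER, schedulerClass,
        ResourceScheduler.class);
    // ... run the existing reconnect assertions against this scheduler ...
  }
}
{code}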






[jira] [Updated] (YARN-4954) TestYarnClient.testReservationAPIs fails on machines with less than 4 GB available memory

2016-07-15 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4954:
--
Priority: Critical  (was: Minor)

These tests are failing very consistently; bumping up the priority.

> TestYarnClient.testReservationAPIs fails on machines with less than 4 GB 
> available memory
> -
>
> Key: YARN-4954
> URL: https://issues.apache.org/jira/browse/YARN-4954
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Critical
> Attachments: YARN-4954.001.patch
>
>
> TestYarnClient.testReservationAPIs sometimes fails with this error:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException:
>  The request cannot be satisfied
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitReservation(ClientRMService.java:1254)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitReservation(ApplicationClientProtocolPBServiceImpl.java:457)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:515)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2422)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2418)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2416)
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException:
>  The request cannot be satisfied
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.IterativePlanner.computeJobAllocation(IterativePlanner.java:151)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.PlanningAlgorithm.allocateUser(PlanningAlgorithm.java:64)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.PlanningAlgorithm.createReservation(PlanningAlgorithm.java:140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.TryManyReservationAgents.createReservation(TryManyReservationAgents.java:55)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.AlignedPlannerWithGreedy.createReservation(AlignedPlannerWithGreedy.java:84)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitReservation(ClientRMService.java:1237)
>   ... 10 more
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testReservationAPIs(TestYarnClient.java:1227)
> {noformat}
> This is caused by genuinely not having enough available memory to complete 
> the reservation (4 * 1024 MB). In my opinion, lowering the required memory 
> (either by reducing the number of containers to 2, or the memory to 512 MB) 
> would make the test more stable. 
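
A hedged sketch of the suggested adjustment. The values below simply mirror 
the suggestion in the description (2 containers of 512 MB); they are not taken 
from the attached patch.

{code}
// Illustrative only: build a reservation request small enough to be
// satisfiable on low-memory test machines (2 x 512 MB instead of 4 x 1024 MB).
import org.apache.hadoop.yarn.api.records.ReservationRequest;
import org.apache.hadoop.yarn.api.records.Resource;

public class SmallReservationExample {
  public static void main(String[] args) {
    Resource capability = Resource.newInstance(512, 1); // 512 MB, 1 vcore
    ReservationRequest request = ReservationRequest.newInstance(capability, 2);
    System.out.println(request);
  }
}
{code}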






[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-07-15 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380411#comment-15380411
 ] 

Kai Sasaki commented on YARN-5148:
--

[~sunilg] I attached an initial patch and a screenshot, though the design is 
still rough. 
Could you review it and give me some feedback?

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: YARN-5148-YARN-3368.01.patch, yarn-conf.png, 
> yarn-tools.png
>
>







[jira] [Updated] (YARN-3876) get_executable() assumes everything is Linux

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-3876:
--
Fix Version/s: (was: 3.0.0-alpha1)

> get_executable() assumes everything is Linux
> 
>
> Key: YARN-3876
> URL: https://issues.apache.org/jira/browse/YARN-3876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: YARN-3876.001.patch, YARN-3876.002.patch, 
> YARN-3876.003.patch
>
>
> get_executable() in container-executor.c is non-portable and is hard-coded to 
> assume Linux's /proc. 






[jira] [Updated] (YARN-4202) TestYarnClient#testReservationAPIs fails intermittently

2016-07-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4202:
--
Fix Version/s: (was: 3.0.0-alpha1)

> TestYarnClient#testReservationAPIs fails intermittently
> ---
>
> Key: YARN-4202
> URL: https://issues.apache.org/jira/browse/YARN-4202
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Mit Desai
>Assignee: nijel
>
> Found this failure while looking at the Pre-run on one of my Jiras.
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException:
>  The planning algorithm could not find a valid allocation for your request
>  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitReservation(ClientRMService.java:1149)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitReservation(ApplicationClientProtocolPBServiceImpl.java:428)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:465)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2230)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2226)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2224)
> Caused by: 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException:
>  The planning algorithm could not find a valid allocation for your request
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.PlanningAlgorithm.allocateUser(PlanningAlgorithm.java:69)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.PlanningAlgorithm.createReservation(PlanningAlgorithm.java:140)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.TryManyReservationAgents.createReservation(TryManyReservationAgents.java:55)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.AlignedPlannerWithGreedy.createReservation(AlignedPlannerWithGreedy.java:84)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitReservation(ClientRMService.java:1132)
>  ... 10 more
> {noformat}
> TestReport Link: 
> https://builds.apache.org/job/PreCommit-YARN-Build/9243/testReport/
> When I ran this on branch-2 on my local box, it succeeded.
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.yarn.client.api.impl.TestYarnClient
> Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.999 sec - 
> in org.apache.hadoop.yarn.client.api.impl.TestYarnClient
> Results :
> Tests run: 21, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 52.029 s
> [INFO] Finished at: 2015-09-23T11:25:04-06:00
> [INFO] Final Memory: 31M/391M
> [INFO] 
> 
> {noformat}
> I haven't checked whether it is a problem in branch-2.7 or not.






[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380403#comment-15380403
 ] 

Weiwei Yang commented on YARN-5309:
---

Hello [~ajisakaa]

I would prefer to have this fixed in 2.7.3; this currently crashes HiveServer2 
in a secure environment. [~tfriedr] what do you think?

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch, YARN-5309.005.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.(YarnConfiguration) line: 96  
>   YARNRunner.(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.(InetSocketAddress, Configuration) line: 82   
>   Cluster.(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).
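
A hedged sketch of the kind of cleanup HADOOP-11368 introduced, framed here as 
a stand-alone wrapper because TimelineClientImpl currently has no destroy path; 
the class and its wiring are hypothetical, not the eventual patch.

{code}
// Illustrative sketch only (not the actual patch): own the SSLFactory so that
// it can be destroyed when the client shuts down, which stops the
// "Truststore reloader thread" started by ReloadingX509TrustManager.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class ClosableSslConnector implements AutoCloseable {
  private final SSLFactory sslFactory;

  public ClosableSslConnector(Configuration conf) throws Exception {
    sslFactory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    sslFactory.init();   // this is where the reloader thread gets started
  }

  @Override
  public void close() throws Exception {
    // The step TimelineClientImpl is missing today: tear the factory down so
    // the reloader thread does not outlive the client.
    sslFactory.destroy();
  }
}
{code}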






[jira] [Updated] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-5309:
--
Target Version/s: 2.7.3

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch, YARN-5309.005.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.(YarnConfiguration) line: 96  
>   YARNRunner.(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.(InetSocketAddress, Configuration) line: 82   
>   Cluster.(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).






[jira] [Updated] (YARN-4888) Changes in RM AppSchedulingInfo for identifying resource-requests explicitly

2016-07-15 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4888:
-
Attachment: YARN-4888-WIP.patch

Thanks [~asuresh] for taking a look at the v0 patch and updating it to add a 
{{SchedulerKey}} that defines a composite key for scheduler requests. This makes 
the patch bigger (:)) but also cleaner, as we can now address YARN-314. 
Currently the composite key consists of {{Priority}} and {{allocationRequestId}} 
(which will default to _0_ for backward compatibility). This change also enables 
future extensions of the {{SchedulerKey}}.
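
For readers following along, a hypothetical sketch of what such a composite key 
could look like. The class shape and field names are illustrative only; the 
real definition is in the attached WIP patch.

{code}
// Illustrative composite key over Priority and allocationRequestId.
// Not the actual patch; shown only to make the idea concrete.
import java.util.Objects;

import org.apache.hadoop.yarn.api.records.Priority;

public final class SchedulerKeyExample {
  private final Priority priority;
  private final long allocationRequestId; // defaults to 0 for compatibility

  public SchedulerKeyExample(Priority priority, long allocationRequestId) {
    this.priority = priority;
    this.allocationRequestId = allocationRequestId;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SchedulerKeyExample)) {
      return false;
    }
    SchedulerKeyExample other = (SchedulerKeyExample) o;
    return allocationRequestId == other.allocationRequestId
        && Objects.equals(priority, other.priority);
  }

  @Override
  public int hashCode() {
    return Objects.hash(priority, allocationRequestId);
  }
}
{code}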

> Changes in RM AppSchedulingInfo for identifying resource-requests explicitly
> 
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.






[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380326#comment-15380326
 ] 

Akira Ajisaka commented on YARN-5309:
-

Hi [~tfriedr] and [~cheersyang], do you think this is a blocker for 2.7.3? If 
you think so, please set the target version to 2.7.3 so that we don't miss this 
issue in the release process. Thanks in advance.

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch, YARN-5309.005.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.(YarnConfiguration) line: 96  
>   YARNRunner.(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.(InetSocketAddress, Configuration) line: 82   
>   Cluster.(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).






[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380318#comment-15380318
 ] 

Ray Chiang commented on YARN-5047:
--

Will do.  Thanks [~ajisakaa].

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, 
> YARN-5047.006.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.
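
As a rough illustration of the kind of refactoring the summary describes (names 
here are hypothetical and the actual split is in the attached patches), the 
shared flow could move up into the abstract scheduler while each scheduler 
keeps only its own allocation step:

{code}
// Illustrative only: hoist the common node-update flow into the abstract
// scheduler and let each scheduler override only its scheduling step.
// Class and method names are hypothetical.
public abstract class AbstractSchedulerSketch {

  protected void nodeUpdate(RMNodeSketch node) {
    // Common bookkeeping previously duplicated in FairScheduler and
    // CapacityScheduler: pull launched/completed containers, update
    // node resource utilization, etc.
    processContainerUpdates(node);
    // Scheduler-specific allocation remains in the subclasses.
    attemptScheduling(node);
  }

  protected void processContainerUpdates(RMNodeSketch node) {
    // shared logic
  }

  protected abstract void attemptScheduling(RMNodeSketch node);

  /** Stand-in for org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode. */
  public static class RMNodeSketch {
  }
}
{code}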






[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380288#comment-15380288
 ] 

Akira Ajisaka commented on YARN-5047:
-

mvn javadoc:javadoc fails with an error.
{noformat}
[ERROR] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java:802:
 error: unknown tag: returns
[ERROR] * @returns true if the scheduler is cleared to call assignContainer().
[ERROR] ^
{noformat}
Would you fix the error caused by the unknown tag {{@returns}}?
{code}
  /**
   * Method determine whether assignContainers can be called.
   * @returns true if the scheduler is cleared to call assignContainer().
   */
  public boolean isReadyToAssignContainers() {
// Determine if work-preserving restart recovery time has not yet passed
return (!rmContext.isWorkPreservingRecoveryEnabled()
|| rmContext.isSchedulerReadyForAllocatingContainers());
  }
{code}
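
The standard Javadoc tag is {{@return}}, so the fix is presumably just the 
following (shown here as a suggestion, not the committed change):

{code}
  /**
   * Determine whether assignContainers can be called.
   * @return true if the scheduler is cleared to call assignContainer().
   */
  public boolean isReadyToAssignContainers() {
    // Determine if work-preserving restart recovery time has not yet passed
    return (!rmContext.isWorkPreservingRecoveryEnabled()
        || rmContext.isSchedulerReadyForAllocatingContainers());
  }
{code}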


> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, 
> YARN-5047.006.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.






[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380282#comment-15380282
 ] 

Ray Chiang commented on YARN-5047:
--

RE: checkstyle

Same issue with protected variable

RE: Javadoc

Looks like JDK issues, covered by HADOOP-13369


> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, 
> YARN-5047.006.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.






[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380264#comment-15380264
 ] 

Hadoop QA commented on YARN-5382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
8s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 3s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-2.7 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1620 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 39s 
{color} | {color:red} The patch has 74 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 23s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 34s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 116m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.Test

[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380238#comment-15380238
 ] 

Hadoop QA commented on YARN-5047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 10s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 36s 
{color} | {color:red} root: The patch generated 1 new + 942 unchanged - 5 fixed 
= 943 total (was 947) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 32m 49s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 7s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818236/YARN-5047.006.patch |
| JIRA Issue | YARN-5047 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 81bbb1f49c6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4421620 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12344/artifact/patchprocess/diff-checkstyle-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12344/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12344/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-y

[jira] [Commented] (YARN-5369) Improve Yarn logs command to get container logs based on Node Id

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380233#comment-15380233
 ] 

Hadoop QA commented on YARN-5369:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 4 
new + 88 unchanged - 1 fixed = 92 total (was 89) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 8s {color} | 
{color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818245/YARN-5369.2.patch |
| JIRA Issue | YARN-5369 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 752928a63c22 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f5f1c81 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12345/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12345/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit test logs |  
https://builds.apache.org/j

[jira] [Commented] (YARN-5272) Handle queue names consistently in FairScheduler

2016-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380190#comment-15380190
 ] 

Hudson commented on YARN-5272:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10110 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10110/])
YARN-5272. Handle queue names consistently in FairScheduler. (Wilfred (rchiang: 
rev f5f1c81e7dcae0272e71ef4e6bedfc00b8c677d6)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestQueueManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestAllocationFileLoaderService.java


> Handle queue names consistently in FairScheduler
> 
>
> Key: YARN-5272
> URL: https://issues.apache.org/jira/browse/YARN-5272
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Fix For: 2.9.0
>
> Attachments: YARN-5272.1.patch, YARN-5272.3.patch, YARN-5272.4.patch
>
>
> The fix used in YARN-3241 uses the JDK trim() method to remove leading and 
> trailing spaces. QueueMetrics uses a Guava-based trim when it splits the 
> queues.
> The Guava-based trim uses the Unicode definition of whitespace, which is 
> different from the Java trim, as can be seen 
> [here|https://docs.google.com/a/cloudera.com/spreadsheets/d/1kq4ECwPjHX9B8QUCTPclgsDCXYaj7T-FlT4tB5q3ahk/pub]
> A queue name with a non-breaking white space will thus still cause the same 
> "Metrics source XXX already exists!" MetricsException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5369) Improve Yarn logs command to get container logs based on Node Id

2016-07-15 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5369:

Attachment: YARN-5369.2.patch

> Improve Yarn logs command to get container logs based on Node Id
> 
>
> Key: YARN-5369
> URL: https://issues.apache.org/jira/browse/YARN-5369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5369.1.patch, YARN-5369.2.patch
>
>
> It would be helpful if we could have yarn logs --applicationId appId 
> --nodeAddress ${nodeId} to get all the container logs that ran on a specific NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5181) ClusterNodeTracker: add method to get list of nodes matching a specific resourceName

2016-07-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380147#comment-15380147
 ] 

Arun Suresh commented on YARN-5181:
---

+1, checking this in shortly.

> ClusterNodeTracker: add method to get list of nodes matching a specific 
> resourceName
> 
>
> Key: YARN-5181
> URL: https://issues.apache.org/jira/browse/YARN-5181
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5181-1.patch, yarn-5181-2.patch, yarn-5181-3.patch
>
>
> ClusterNodeTracker should have a method to return the list of nodes matching 
> a particular resourceName. This lets us identify which nodes a particular 
> ResourceRequest is interested in, which in turn is useful for YARN-5139 
> (global scheduler) and YARN-4752 (FairScheduler preemption overhaul).
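As a rough illustration of the requested lookup (all names below are made up;
the real ClusterNodeTracker and the eventual method signature may differ), a
resourceName can be the ANY wildcard, a rack name, or a host name, and the
method simply filters the tracked nodes accordingly:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for the scheduler's node tracker; not the real class.
public class SimpleNodeTracker {
  private static final String ANY = "*"; // same convention as ResourceRequest.ANY

  // hostName -> rackName for every tracked node
  private final Map<String, String> nodeToRack = new HashMap<>();

  public void addNode(String hostName, String rackName) {
    nodeToRack.put(hostName, rackName);
  }

  /** Return the hostnames matching a resourceName (ANY, a rack, or a host). */
  public List<String> getNodesByResourceName(String resourceName) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, String> e : nodeToRack.entrySet()) {
      if (ANY.equals(resourceName)
          || resourceName.equals(e.getValue())   // rack-level request
          || resourceName.equals(e.getKey())) {  // node-level request
        result.add(e.getKey());
      }
    }
    return result;
  }
}
{code}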



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5047:
-
Attachment: YARN-5047.006.patch

- Changed the return type of AbstractYarnScheduler#getSchedulerNode() to a 
templated (generic) value. Updated the unit test to match.
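A small sketch of what the templated return type buys (class names below are
illustrative, not the actual scheduler hierarchy): callers of a concrete
scheduler get the concrete node type back without an explicit cast.

{code}
// Illustrative only; not the actual AbstractYarnScheduler code.
class SchedulerNode { }
class FSSchedulerNodeLike extends SchedulerNode { }

abstract class AbstractSchedulerLike<N extends SchedulerNode> {
  // Returning N (not the raw SchedulerNode) is what the "templated value"
  // return type provides.
  abstract N getSchedulerNode(String nodeId);
}

class FairSchedulerLike extends AbstractSchedulerLike<FSSchedulerNodeLike> {
  @Override
  FSSchedulerNodeLike getSchedulerNode(String nodeId) {
    return new FSSchedulerNodeLike(); // real lookup elided
  }
}

public class GenericReturnDemo {
  public static void main(String[] args) {
    FSSchedulerNodeLike node = new FairSchedulerLike().getSchedulerNode("n1");
    System.out.println(node.getClass().getSimpleName()); // FSSchedulerNodeLike
  }
}
{code}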


> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch, 
> YARN-5047.006.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.
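A rough sketch of the refactoring pattern proposed here (names are
illustrative, not the patch itself): the shared node-update bookkeeping moves
into the abstract parent, and each scheduler supplies only the step that
actually differs.

{code}
// Illustrative template-method sketch; not the actual scheduler code.
abstract class AbstractSchedulerSketch {
  /** Common node-update flow shared by all schedulers. */
  final void nodeUpdate(String nodeId) {
    refreshNodeStatus(nodeId);   // 1. shared bookkeeping
    attemptScheduling(nodeId);   // 2. scheduler-specific step
  }

  private void refreshNodeStatus(String nodeId) {
    // logic formerly duplicated in FairScheduler#nodeUpdate and
    // CapacityScheduler#nodeUpdate would live here
    System.out.println("refreshed " + nodeId);
  }

  /** Each concrete scheduler decides how to allocate on this node. */
  abstract void attemptScheduling(String nodeId);
}

public class NodeUpdateRefactorDemo {
  public static void main(String[] args) {
    AbstractSchedulerSketch fairLike = new AbstractSchedulerSketch() {
      @Override
      void attemptScheduling(String nodeId) {
        System.out.println("fair-share assignment on " + nodeId);
      }
    };
    fairLike.nodeUpdate("node-1");
  }
}
{code}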



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380047#comment-15380047
 ] 

Vrushali C edited comment on YARN-5382 at 7/15/16 8:28 PM:
---

Thanks [~jlowe]! Uploading patch v2 which uses RMAuditLogger.

But with this, I now see two identical log messages in the RM log, since the 
forceKillApplication function is entered twice.

I was wondering if the messages could be made slightly different, but that 
would mean adding something to RMAuditLogger itself to log a different style 
of message.

From my pseudo-distributed setup:
{code}

[hadoop-2.7.4-SNAPSHOT (branch-2.7)]$ grep -rni ClientRMService 
logs/yarn-vchannapattan-resourcemanager-cchannapattan.log | grep "Kill App" |  
grep  application_1468608647317_0001

logs/yarn-vchannapattan-resourcemanager-channapattan.log:348:2016-07-15 
11:51:10,773 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Kill Application Request 
 TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001

logs/yarn-vchannapattan-resourcemanager-tw-mbp13-channapattan.log:387:2016-07-15
 11:51:10,987 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan  IP=127.0.0.1OPERATION=Kill Application Request  
TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001

[ hadoop-2.7.4-SNAPSHOT (branch-2.7)]$
{code}
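One possible way to make the two entries distinguishable, sketched with a
made-up helper rather than the real RMAuditLogger API: tag the TARGET field
with the call site, which is roughly what the earlier
TARGET=ClientRMService-11 variant in the previous revision below was doing.

{code}
// Hypothetical sketch only; this is not RMAuditLogger's actual API.
final class AuditLineSketch {
  private AuditLineSketch() { }

  static String killRequestLine(String user, String ip, String callSite,
      String appId) {
    // Appending the call site to TARGET distinguishes the two otherwise
    // identical entries written when forceKillApplication is entered twice.
    return "USER=" + user
        + "\tIP=" + ip
        + "\tOPERATION=Kill Application Request"
        + "\tTARGET=ClientRMService-" + callSite
        + "\tRESULT=SUCCESS"
        + "\tAPPID=" + appId;
  }

  public static void main(String[] args) {
    System.out.println(killRequestLine("vchannapattan", "127.0.0.1", "rpc",
        "application_1468608647317_0001"));
  }
}
{code}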




was (Author: vrushalic):
Thanks [~jlowe]! Uploading patch v2 which uses RMAuditLogger.

But with this, I now see two identical log messages in the RM log, since the 
forceKillApplication function is entered twice.

I was wondering if the messages could be made slightly different, but that 
would mean adding something to RMAuditLogger itself to log a different style 
of message.

From my pseudo-distributed setup:
{code}

[hadoop-2.7.4-SNAPSHOT (branch-2.7)]$ grep -rni ClientRMService 
logs/yarn-vchannapattan-resourcemanager-cchannapattan.log | grep "Kill App" |  
grep  application_1468608647317_0001

logs/yarn-vchannapattan-resourcemanager-channapattan.log:348:2016-07-15 
11:51:10,773 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Kill Application Request 
 TARGET=ClientRMService-11   RESULT=SUCCESS  
APPID=application_1468608647317_0001

logs/yarn-vchannapattan-resourcemanager-tw-mbp13-channapattan.log:387:2016-07-15
 11:51:10,987 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan  IP=127.0.0.1OPERATION=Kill Application Request  
TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001

[ hadoop-2.7.4-SNAPSHOT (branch-2.7)]$
{code}



> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380047#comment-15380047
 ] 

Vrushali C edited comment on YARN-5382 at 7/15/16 8:27 PM:
---

Thanks [~jlowe]! Uploading patch v2 which uses RMAuditLogger.

But with this, I now see two identical log messages in the RM log, since the 
forceKillApplication function is entered twice.

I was wondering if the messages could be made slightly different, but that 
would mean adding something to RMAuditLogger itself to log a different style 
of message.

From my pseudo-distributed setup:
{code}

[hadoop-2.7.4-SNAPSHOT (branch-2.7)]$ grep -rni ClientRMService 
logs/yarn-vchannapattan-resourcemanager-cchannapattan.log | grep "Kill App" |  
grep  application_1468608647317_0001

logs/yarn-vchannapattan-resourcemanager-channapattan.log:348:2016-07-15 
11:51:10,773 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Kill Application Request 
 TARGET=ClientRMService-11   RESULT=SUCCESS  
APPID=application_1468608647317_0001

logs/yarn-vchannapattan-resourcemanager-tw-mbp13-channapattan.log:387:2016-07-15
 11:51:10,987 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan  IP=127.0.0.1OPERATION=Kill Application Request  
TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001

[ hadoop-2.7.4-SNAPSHOT (branch-2.7)]$
{code}




was (Author: vrushalic):

Thanks [~jlowe]! Uploading patch v2 which uses RMAuditLogger.

But with this, I now see two identical log messages in the RM log, since the 
forceKillApplication function is entered twice.

I was wondering if the messages could be made slightly different, but that 
would mean adding something to RMAuditLogger itself to log a different style 
of message.

From my pseudo-distributed setup:
{code}
[hadoop-2.7.4-SNAPSHOT (branch-2.7)]$ grep -rni ClientRMService 
logs/yarn-vchannapattan-resourcemanager-vchannapattan.log | grep 
application_1468608647317_0001
logs/yarn-vchannapattan-resourcemanager-channapattan.log:298:2016-07-15 
11:50:58,702 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Submit Application Request   
 TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001
logs/yarn-vchannapattan-resourcemanager-vchannapattan.log:348:2016-07-15 
11:51:10,773 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan  IP=127.0.0.1OPERATION=Kill Application Request  
TARGET=ClientRMService-11   RESULT=SUCCESS  
APPID=application_1468608647317_0001
logs/yarn-vchannapattan-resourcemanager-channapattan.log:387:2016-07-15 
11:51:10,987 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Kill Application Request 
 TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001
[tw-mbp13-channapattan hadoop-2.7.4-SNAPSHOT (branch-2.7)]$
{code}


> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5382:
-
Attachment: YARN-5382-branch-2.7.02.patch


Thanks [~jlowe]! Uploading patch v2 which uses RMAuditLogger.

But with this, I now see two identical log messages in the RM log, since the 
forceKillApplication function is entered twice.

I was wondering if the messages could be made slightly different, but that 
would mean adding something to RMAuditLogger itself to log a different style 
of message.

From my pseudo-distributed setup:
{code}
[hadoop-2.7.4-SNAPSHOT (branch-2.7)]$ grep -rni ClientRMService 
logs/yarn-vchannapattan-resourcemanager-vchannapattan.log | grep 
application_1468608647317_0001
logs/yarn-vchannapattan-resourcemanager-channapattan.log:298:2016-07-15 
11:50:58,702 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Submit Application Request   
 TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001
logs/yarn-vchannapattan-resourcemanager-vchannapattan.log:348:2016-07-15 
11:51:10,773 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan  IP=127.0.0.1OPERATION=Kill Application Request  
TARGET=ClientRMService-11   RESULT=SUCCESS  
APPID=application_1468608647317_0001
logs/yarn-vchannapattan-resourcemanager-channapattan.log:387:2016-07-15 
11:51:10,987 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: 
USER=vchannapattan   IP=127.0.0.1OPERATION=Kill Application Request 
 TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1468608647317_0001
[tw-mbp13-channapattan hadoop-2.7.4-SNAPSHOT (branch-2.7)]$
{code}


> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5181) ClusterNodeTracker: add method to get list of nodes matching a specific resourceName

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380034#comment-15380034
 ] 

Hadoop QA commented on YARN-5181:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 15s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818220/yarn-5181-3.patch |
| JIRA Issue | YARN-5181 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2b52a0571015 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c48e9d6 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12342/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12342/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ClusterNodeTracker: add method to get list of nodes matching a specific 
> resourceName
> 
>
> Key: YARN-5181
> URL: https://issues.apache.org/jira/browse/YARN-5181
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> At

[jira] [Updated] (YARN-5181) ClusterNodeTracker: add method to get list of nodes matching a specific resourceName

2016-07-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5181:
---
Attachment: yarn-5181-3.patch

Patch to fix javac, javadoc and checkstyle warnings. 

> ClusterNodeTracker: add method to get list of nodes matching a specific 
> resourceName
> 
>
> Key: YARN-5181
> URL: https://issues.apache.org/jira/browse/YARN-5181
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5181-1.patch, yarn-5181-2.patch, yarn-5181-3.patch
>
>
> ClusterNodeTracker should have a method to return the list of nodes matching 
> a particular resourceName. This lets us identify which nodes a particular 
> ResourceRequest is interested in, which in turn is useful for YARN-5139 
> (global scheduler) and YARN-4752 (FairScheduler preemption overhaul).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-07-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379927#comment-15379927
 ] 

Karthik Kambatla commented on YARN-5388:


Related but different question: [~sidharta-s], [~vinodkv] - what is the 
long-term plan for DCE, given that LCE now supports Docker? Should we consider 
deprecating it in 2.8?

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
>
> Because the {{DockerContainerExecuter}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.
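The shape of the fix, in a hypothetical sketch (these classes and the
expandWildcards helper are invented for illustration and are not the real
ContainerExecutor API): once the wildcard expansion lives in a shared helper,
an executor that overrides the env-writing step can reuse it instead of
silently dropping it.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch; not the real ContainerExecutor/DockerContainerExecutor API.
abstract class ExecutorSketch {
  /** Shared wildcard handling that every writeLaunchEnv override should reuse. */
  protected List<String> expandWildcards(List<String> classpathEntries) {
    List<String> expanded = new ArrayList<>();
    for (String entry : classpathEntries) {
      if (entry.endsWith("/*")) {
        // The real logic (YARN-4958/YARN-5373) lists the jars under the
        // directory; this placeholder only marks where that happens.
        expanded.add(entry.substring(0, entry.length() - 1) + "some.jar");
      } else {
        expanded.add(entry);
      }
    }
    return expanded;
  }

  abstract void writeLaunchEnv(Map<String, String> env, List<String> classpath);
}

public class DockerExecutorSketch extends ExecutorSketch {
  @Override
  void writeLaunchEnv(Map<String, String> env, List<String> classpath) {
    // Without the shared expansion, -libjars wildcards are never resolved.
    env.put("CLASSPATH", String.join(":", expandWildcards(classpath)));
  }

  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    new DockerExecutorSketch().writeLaunchEnv(env,
        Arrays.asList("/app/lib/*", "/app/conf"));
    System.out.println(env.get("CLASSPATH"));
  }
}
{code}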



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379922#comment-15379922
 ] 

Karthik Kambatla commented on YARN-5047:


There is one pending comment that needs addressing. I can +1 it once that is 
fixed, and Ray should be able to commit. I should also fix whatever is wrong 
with my permissions. 

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379920#comment-15379920
 ] 

Wangda Tan commented on YARN-5047:
--

Patch LGTM, +1. Thanks [~rchiang], and thanks [~kasha] for the reviews.

[~kasha], would you like to commit this patch yourself, or should I?

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5272) Handle queue names consistently in FairScheduler

2016-07-15 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379885#comment-15379885
 ] 

Ray Chiang commented on YARN-5272:
--

Filing a new JIRA for the whitespace fixes is fine by me.

The version 4 patch looks good to me. I'll get this committed soon, barring any 
last-minute objections. +1

> Handle queue names consistently in FairScheduler
> 
>
> Key: YARN-5272
> URL: https://issues.apache.org/jira/browse/YARN-5272
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5272.1.patch, YARN-5272.3.patch, YARN-5272.4.patch
>
>
> The fix used in YARN-3241 uses the JDK trim() method to remove leading and 
> trailing spaces. QueueMetrics uses a Guava-based trim when it splits the 
> queues.
> The Guava-based trim uses the Unicode definition of whitespace, which is 
> different from the Java trim, as can be seen 
> [here|https://docs.google.com/a/cloudera.com/spreadsheets/d/1kq4ECwPjHX9B8QUCTPclgsDCXYaj7T-FlT4tB5q3ahk/pub]
> A queue name with a non-breaking white space will thus still cause the same 
> "Metrics source XXX already exists!" MetricsException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379876#comment-15379876
 ] 

Wangda Tan commented on YARN-5380:
--

Thanks [~vrushalic] for fixing this, and thanks 
[~kasha]/[~naganarasimha...@apache.org] for the reviews!

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 
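For reference, the accessor swap in a tiny usage sketch (assuming
hadoop-yarn-api is on the classpath; the values are arbitrary):

{code}
import org.apache.hadoop.yarn.api.records.Resource;

public class MemoryAccessorDemo {
  public static void main(String[] args) {
    Resource resource = Resource.newInstance(8192, 4);

    // Deprecated accessor: returns an int, limited to the int range.
    int memoryAsInt = resource.getMemory();

    // Preferred accessor: returns a long.
    long memoryAsLong = resource.getMemorySize();

    System.out.println(memoryAsInt + " " + memoryAsLong);
  }
}
{code}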



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-4888) Changes in RM AppSchedulingInfo for identifying resource-requests explicitly

2016-07-15 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh resolved YARN-4888.
---
Resolution: Resolved

Temporarily resolving and then re-opening to move this out of the 'in-progress' status.

> Changes in RM AppSchedulingInfo for identifying resource-requests explicitly
> 
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-v0.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-4888) Changes in RM AppSchedulingInfo for identifying resource-requests explicitly

2016-07-15 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reopened YARN-4888:
---

Reopening.

> Changes in RM AppSchedulingInfo for identifying resource-requests explicitly
> 
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-v0.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5361) Obtaining logs for completed container says 'file belongs to a running container ' at the end

2016-07-15 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379842#comment-15379842
 ] 

Junping Du commented on YARN-5361:
--

The commit message is wrong for this commit; it points to YARN-5339. For RMs of 
other branches: these two commits actually belong to YARN-5361:
trunk commit: 7e5355c14e55fd6540f7f070df4b78fa94a81618
branch-2 commit: e3bc4faa96752d39f0864678ed0b68c9f5c95d1c

> Obtaining logs for completed container says 'file belongs to a running 
> container ' at the end
> -
>
> Key: YARN-5361
> URL: https://issues.apache.org/jira/browse/YARN-5361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-5361.1.patch, YARN-5361.2.patch, YARN-5361.3.patch
>
>
> Obtaining logs via the yarn CLI for a completed container of a still-running 
> application says "This log file belongs to a running container 
> (container_e32_1468319707096_0001_01_04) and so may not be complete", 
> which is not correct.
> {code}
> LogType:stdout
> Log Upload Time:Tue Jul 12 10:38:14 + 2016
> Log Contents:
> End of LogType:stdout. This log file belongs to a running container 
> (container_e32_1468319707096_0001_01_04) and so may not be complete.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1126) Add validation of users input nodes-states options to nodes CLI

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379831#comment-15379831
 ] 

Hadoop QA commented on YARN-1126:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The 
patch generated 10 new + 133 unchanged - 9 fixed = 143 total (was 142) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 14s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
|   | hadoop.yarn.client.cli.TestLogsCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818204/YARN-1126-004.patch |
| JIRA Issue | YARN-1126 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2344f0891bb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a72cb38 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12341/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YAR

[jira] [Commented] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379816#comment-15379816
 ] 

Hudson commented on YARN-5383:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10106 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10106/])
YARN-5383. Fix findbugs and checkstyle issues in ContainerExecutor. 
(varunsaxena: rev a72cb3825a11830be9ad35ae7ddbf42a3d2892b0)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java


> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 2.9.0
>
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}
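For reference, the keySet-versus-entrySet pattern the WMI warning describes,
in a self-contained snippet (not the actual ContainerExecutor code):

{code}
import java.util.HashMap;
import java.util.Map;

public class MapIterationDemo {
  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    env.put("JAVA_HOME", "/usr/lib/jvm/java-8");
    env.put("HADOOP_CONF_DIR", "/etc/hadoop/conf");

    // Pattern flagged by WMI_WRONG_MAP_ITERATOR: every iteration pays for an
    // extra Map.get(key) lookup.
    for (String key : env.keySet()) {
      System.out.println(key + "=" + env.get(key));
    }

    // Equivalent entrySet form: key and value come from the same entry, with
    // no additional lookup.
    for (Map.Entry<String, String> entry : env.entrySet()) {
      System.out.println(entry.getKey() + "=" + entry.getValue());
    }
  }
}
{code}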



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379815#comment-15379815
 ] 

Varun Saxena commented on YARN-5383:


Committed to trunk, branch-2.
Thanks [~vrushalic] for your contribution and [~ajisakaa] for review.

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 2.9.0
>
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-15 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5383:
---
Affects Version/s: (was: 3.0.0-alpha1)
   2.9.0

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 2.9.0
>
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5369) Improve Yarn logs command to get container logs based on Node Id

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379763#comment-15379763
 ] 

Hadoop QA commented on YARN-5369:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 4 
new + 88 unchanged - 1 fixed = 92 total (was 89) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 27s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
|   | hadoop.yarn.client.cli.TestLogsCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818190/YARN-5369.1.patch |
| JIRA Issue | YARN-5369 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8d837c2a4391 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7e5355c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12340/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12340/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit test logs |  
https://builds.apache.org/j

[jira] [Updated] (YARN-1126) Add validation of users input nodes-states options to nodes CLI

2016-07-15 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1126:
--
Attachment: YARN-1126-004.patch

trigger the jenkins.

> Add validation of users input nodes-states options to nodes CLI
> ---
>
> Key: YARN-1126
> URL: https://issues.apache.org/jira/browse/YARN-1126
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1126-002.patch, YARN-1126-003.patch, 
> YARN-1126-004.patch, YARN-905-addendum.patch
>
>
> Follow the discussion in YARN-905.
> (1) case-insensitive checks for "all".
> (2) validation of users input, exit with non-zero code and print all valid 
> states when user gives an invalid state.
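For illustration, a minimal sketch of that validation against the standard org.apache.hadoop.yarn.api.records.NodeState enum; the class name and messages below are hypothetical, not the actual patch:

{code}
import java.util.Arrays;
import java.util.EnumSet;
import org.apache.hadoop.yarn.api.records.NodeState;

public class NodeStatesOptionSketch {
  // Parse a comma-separated --states value: "all" (case-insensitive) means
  // every state; an unknown state prints the valid states and exits non-zero.
  static EnumSet<NodeState> parseStates(String statesOption) {
    if ("all".equalsIgnoreCase(statesOption.trim())) {
      return EnumSet.allOf(NodeState.class);
    }
    EnumSet<NodeState> states = EnumSet.noneOf(NodeState.class);
    for (String s : statesOption.split(",")) {
      try {
        states.add(NodeState.valueOf(s.trim().toUpperCase()));
      } catch (IllegalArgumentException e) {
        System.err.println("Invalid node state: " + s.trim()
            + ". Valid states are: " + Arrays.toString(NodeState.values()));
        System.exit(-1);   // exit with a non-zero code on invalid input
      }
    }
    return states;
  }
}
{code}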



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1126) Add validation of users input nodes-states options to nodes CLI

2016-07-15 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1126:
--
Attachment: (was: YARN-1126-004.patch)

> Add validation of users input nodes-states options to nodes CLI
> ---
>
> Key: YARN-1126
> URL: https://issues.apache.org/jira/browse/YARN-1126
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1126-002.patch, YARN-1126-003.patch, 
> YARN-1126-004.patch, YARN-905-addendum.patch
>
>
> Follow the discussion in YARN-905.
> (1) case-insensitive checks for "all".
> (2) validation of users input, exit with non-zero code and print all valid 
> states when user gives an invalid state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5369) Improve Yarn logs command to get container logs based on Node Id

2016-07-15 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5369:

Attachment: YARN-5369.1.patch

> Improve Yarn logs command to get container logs based on Node Id
> 
>
> Key: YARN-5369
> URL: https://issues.apache.org/jira/browse/YARN-5369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5369.1.patch
>
>
> It would be helpful if we could have yarn logs --applicationId appId --nodeAddress 
> ${nodeId} to get the logs of all the containers which ran on the specific NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-07-15 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5388:
--

 Summary: MAPREDUCE-6719 requires changes to DockerContainerExecutor
 Key: YARN-5388
 URL: https://issues.apache.org/jira/browse/YARN-5388
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Daniel Templeton
Priority: Critical
 Fix For: 2.9.0


Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
method, it must also have the wildcard processing logic from 
YARN-4958/YARN-5373 added to it. Without it, the use of -libjars will fail 
unless wildcarding is disabled.
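For reference, a rough sketch of the kind of wildcard expansion {{writeLaunchEnv()}} has to apply, assuming the YARN-4958/YARN-5373 convention of expanding a classpath entry that ends in "*" into the jars of its directory; the class and method names here are illustrative only, not the actual implementation:

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class WildcardClasspathSketch {
  // Expand a classpath entry ending in "*" into the jar files of its
  // directory; entries without a wildcard are passed through unchanged.
  static List<String> expandWildcardEntry(String entry) {
    List<String> expanded = new ArrayList<>();
    if (!entry.endsWith("*")) {
      expanded.add(entry);
      return expanded;
    }
    File dir = new File(entry.substring(0, entry.length() - 1));
    File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
    if (jars == null) {       // directory missing or unreadable
      expanded.add(entry);    // fall back to the raw entry
      return expanded;
    }
    for (File jar : jars) {
      expanded.add(jar.getPath());
    }
    return expanded;
  }
}
{code}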



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5322) [YARN-3368] Add a node heat chart map

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5322.
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: YARN-3368

Thanks [~leftnoteasy] for the contribution. I have verified and reviewed along 
with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a).

> [YARN-3368] Add a node heat chart map
> -
>
> Key: YARN-5322
> URL: https://issues.apache.org/jira/browse/YARN-5322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: sample-1.png
>
>
> With this we can more easily figure out hotspots in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379671#comment-15379671
 ] 

Sunil G commented on YARN-5342:
---

Thanks [~Naganarasimha Garla] for the insightful thoughts.

Looking at one aspect, *“improve the allocation for non-exclusive label when 
requests are from an application of no_label”*, we can help each such app 
proceed with its allocation on a non-exclusive label without waiting for all 
node heartbeats.
For that, I think we only need to look into that very partition (the partition 
of the node whose heartbeat is currently being processed for an app) and see 
whether some of its resource can be used for this no_label app. Yes, I agree 
with your top-level view, and it is good to keep the other non-exclusive 
partitions in mind as well. Since we already have a node with some free space 
in the current heartbeat, if we can place a no_label container there within 
limits, I think we are solving the problem step by step.
I also very much agree with the comment about the chances of preemption 
kicking in. A fair balance has to be struck between the speed of allocations 
for no_label apps on a label and larger imbalances over the queue’s capacity 
that could cause preemption to kick in.

So the checks I have mentioned can be done w.r.t. an app or its queue, so that 
we try to solve the problem app by app. A more general, higher-level solution 
would likely require a lot of refactoring, so I have suggested a simpler 
approach here. Thoughts?
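To make the condition concrete, a rough sketch with hypothetical names (not the actual CapacityScheduler code): missed-opportunity is reset only when the allocation came from the default partition or the app still has pending resource on the requested non-exclusive partition.

{code}
public class MissedOpportunitySketch {
  private long missedOpportunity;

  // Hypothetical hook called after a successful container allocation.
  void onContainerAllocated(String allocatedPartition,
      long pendingOnRequestedPartition) {
    boolean fromDefaultPartition = "".equals(allocatedPartition); // NO_LABEL
    boolean stillPending = pendingOnRequestedPartition > 0;
    if (fromDefaultPartition || stillPending) {
      // Allow the next non-exclusive attempt sooner, instead of waiting for
      // another full round of node heartbeats across the cluster.
      missedOpportunity = 0;
    }
    // Otherwise keep the counter, so we do not keep retrying a partition
    // for which the app has nothing left to ask.
  }
}
{code}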

> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5342.1.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible when the missed-opportunity >= #cluster-nodes. And 
> missed-opportunity will be reset when a container is allocated to any node.
> This will slow down the frequency of container allocation on non-exclusive 
> node partition: *When a non-exclusive partition=x has idle resource, we can 
> only allocate one container for this app in every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get allocation from 
> the default partition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5320) [YARN-3368] Add resource usage by applications and queues to cluster overview page.

2016-07-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379662#comment-15379662
 ] 

Sunil G commented on YARN-5320:
---

Hi [~cheersyang]
I think it's a good idea to have, and we can track it with an improvement jira. 
This one will focus on basic functionality. I will raise another jira to track 
the same; please feel free to share your thoughts.

> [YARN-3368] Add resource usage by applications and queues to cluster overview 
> page.
> ---
>
> Key: YARN-5320
> URL: https://issues.apache.org/jira/browse/YARN-5320
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> With this, we can get an understanding of which application / queue is 
> consuming the most resources in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5320) [YARN-3368] Add resource usage by applications and queues to cluster overview page.

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5320.
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: YARN-3368

Thanks [~leftnoteasy] for the contribution. I have verified and reviewed along 
with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a).

> [YARN-3368] Add resource usage by applications and queues to cluster overview 
> page.
> ---
>
> Key: YARN-5320
> URL: https://issues.apache.org/jira/browse/YARN-5320
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> With this, we can get an understanding of which application / queue is 
> consuming the most resources in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-679) add an entry point that can start any Yarn service

2016-07-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-679:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s {color} 
| {color:red} YARN-679 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-679 |
| GITHUB PR | https://github.com/apache/hadoop/pull/68 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11299/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.

)

> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-679-001.patch, YARN-679-002.patch, 
> YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, 
> YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, 
> YARN-679-008.patch, org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf
>
>  Time Spent: 72h
>  Remaining Estimate: 0h
>
> There's no need to write separate .main classes for every Yarn service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped, with an interrupt handler to trigger a clean shutdown on a control-c 
> interrupt.
> Provide one that takes any classname, and a list of config files/options
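A minimal sketch of such an entry point, assuming only the public org.apache.hadoop.service.Service lifecycle; the class name, argument layout and the blocking waitForServiceToStop(0) call are assumptions for illustration, not the actual patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.service.Service;

public class GenericServiceMain {
  // args[0] = service class name, remaining args = configuration resources
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    for (int i = 1; i < args.length; i++) {
      conf.addResource(new Path(args[i]));
    }
    final Service service = (Service) Class.forName(args[0])
        .getDeclaredConstructor().newInstance();
    // Clean shutdown on control-C / SIGTERM.
    Runtime.getRuntime().addShutdownHook(new Thread(service::stop));
    service.init(conf);
    service.start();
    // Block until the service stops (0 is assumed to mean "no timeout").
    service.waitForServiceToStop(0);
  }
}
{code}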



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-679) add an entry point that can start any Yarn service

2016-07-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-679:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} 
| {color:red} YARN-679 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-679 |
| GITHUB PR | https://github.com/apache/hadoop/pull/68 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11853/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.

)

> add an entry point that can start any Yarn service
> --
>
> Key: YARN-679
> URL: https://issues.apache.org/jira/browse/YARN-679
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-679-001.patch, YARN-679-002.patch, 
> YARN-679-002.patch, YARN-679-003.patch, YARN-679-004.patch, 
> YARN-679-005.patch, YARN-679-006.patch, YARN-679-007.patch, 
> YARN-679-008.patch, org.apache.hadoop.servic...mon 3.0.0-SNAPSHOT API).pdf
>
>  Time Spent: 72h
>  Remaining Estimate: 0h
>
> There's no need to write separate .main classes for every Yarn service, given 
> that the startup mechanism should be identical: create, init, start, wait for 
> stopped, with an interrupt handler to trigger a clean shutdown on a control-c 
> interrupt.
> Provide one that takes any classname, and a list of config files/options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5339) passing file to -out for YARN log CLI doesnt give warning or error code

2016-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379652#comment-15379652
 ] 

Hudson commented on YARN-5339:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10105 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10105/])
YARN-5339. Passing file to -out for YARN log CLI doesnt give warning or 
(junping_du: rev 7e5355c14e55fd6540f7f070df4b78fa94a81618)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRequest.java


> passing file to -out for YARN log CLI doesnt give warning or error code
> ---
>
> Key: YARN-5339
> URL: https://issues.apache.org/jira/browse/YARN-5339
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5339.1.patch, YARN-5339.2.patch
>
>
> passing file to -out for YARN log CLI doesnt give warning or error code
> {code}
> yarn  logs -applicationId application_1467117709224_0003 -out 
> /grid/0/hadoopqe/artifacts/file.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5348) [YARN-3368] Node details page improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5348.
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: YARN-3368

Thanks [~Sreenath] for the contribution. I have verified and reviewed along 
with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a).

> [YARN-3368] Node details page improvements
> --
>
> Key: YARN-5348
> URL: https://issues.apache.org/jira/browse/YARN-5348
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> - Improve the component styling
> - Correct padding in Node Information table



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5347) [YARN-3368] Applications page improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5347.
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: YARN-3368

Thanks [~Sreenath] for the contribution. I have verified and reviewed along 
with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a).

> [YARN-3368] Applications page improvements
> --
>
> Key: YARN-5347
> URL: https://issues.apache.org/jira/browse/YARN-5347
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> Applications page:
> - Add a "Long running service" sub-page
> Application details page:
> - Improve the layout
> -- Correct the component borders - Remove double border & the extra space
> -- Layout "Application Basic Information" vertically
> - List attempts under the application as a subpage
> - Hide the diagnostics panel when diagnostics data is not available



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379607#comment-15379607
 ] 

Weiwei Yang commented on YARN-5309:
---

Hello  [~vvasudev] 

The v5 patch resolves the UT failure by adding org.bouncycastle to pom.xml, 
just like what was added in HADOOP-11230. Please let me know if it looks good 
to you. Thanks.

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch, YARN-5309.005.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.<init>(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.<init>(YarnConfiguration) line: 96  
>   YARNRunner.<init>(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.<init>(InetSocketAddress, Configuration) line: 82   
>   Cluster.<init>(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.<init>(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).
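As with HADOOP-11368, the general shape of a fix is to keep a handle on the SSLFactory and destroy it when the client is torn down, which stops the reloader thread. A standalone sketch of just that lifecycle, assuming only the public SSLFactory API; it is not the actual TimelineClientImpl patch:

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class TruststoreReloaderLeakSketch {
  public static void main(String[] args)
      throws IOException, GeneralSecurityException {
    Configuration conf = new Configuration();
    SSLFactory factory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    // init() is what (indirectly) starts the "Truststore reloader thread"
    // when a reloading truststore is configured.
    factory.init();
    try {
      // ... build SSL connections with the factory ...
    } finally {
      // Without this call the daemon reloader thread keeps running, which is
      // the leak seen when a new TimelineClientImpl is created per job.
      factory.destroy();
    }
  }
}
{code}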



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5345) [YARN-3368] Cluster overview page improvements

2016-07-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379596#comment-15379596
 ] 

Sunil G commented on YARN-5345:
---

Thanks [~Sreenath] for the contribution. I have verified and reviewed along 
with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a). 

> [YARN-3368] Cluster overview page improvements
> --
>
> Key: YARN-5345
> URL: https://issues.apache.org/jira/browse/YARN-5345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> - Improve the border/font/color etc in existing donut charts
> -- Solid lines and colors might give a better look
> -- Ensure the text is confined to the empty space in the donut
> -- Use color codes that convey the meaning of statuses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5345) [YARN-3368] Cluster overview page improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5345:
--
Release Note:   (was: Thanks [~Sreenath] for the contribution. I have 
verified and reviewed along with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a). )

> [YARN-3368] Cluster overview page improvements
> --
>
> Key: YARN-5345
> URL: https://issues.apache.org/jira/browse/YARN-5345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> - Improve the border/font/color etc in existing donut charts
> -- Solid lines and colors might give a better look
> -- Ensure the text is confined to the empty space in the donut
> -- Use color codes that convey the meaning of statuses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379604#comment-15379604
 ] 

Hadoop QA commented on YARN-5309:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 41 unchanged - 2 fixed = 41 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818180/YARN-5309.005.patch |
| JIRA Issue | YARN-5309 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 8e578b894a34 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b5ee7db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12339/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12339/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinese

[jira] [Resolved] (YARN-5346) [YARN-3368] Queues page improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5346.
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: YARN-3368

Thanks [~Sreenath] for the contribution. I have verified and reviewed along 
with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a). 

> [YARN-3368] Queues page improvements
> 
>
> Key: YARN-5346
> URL: https://issues.apache.org/jira/browse/YARN-5346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> Queues page:
> - Reorder contents in the existing Queues page, and improve UI components
> - On clicking a queue, the user must be taken to the respective queue's 
> details page.
> - Display queue details on mouseover
> - The bar and doughnut charts don't update on queue change; that needs to 
> be fixed
> Queue details page:
> - Add a sub-page for all applications running under the queue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5345) [YARN-3368] Cluster overview page improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5345.
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Thanks [~Sreenath] for the contribution. I have verified 
and reviewed along with YARN-5321. Committed with YARN-5321. 
(1b56d537f5d5cee957dc66a04b4116534bb72f3a). 
Target Version/s: YARN-3368

> [YARN-3368] Cluster overview page improvements
> --
>
> Key: YARN-5345
> URL: https://issues.apache.org/jira/browse/YARN-5345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> - Improve the border/font/color etc in existing donut charts
> -- Solid lines and colors might give a better look
> -- Ensure the text is confined to the empty space in the donut
> -- Use color codes that convey the meaning of statuses



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5344) [YARN-3368] Generic UI improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5344:
--
Fix Version/s: YARN-3368

> [YARN-3368] Generic UI improvements
> ---
>
> Key: YARN-5344
> URL: https://issues.apache.org/jira/browse/YARN-5344
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Fix For: YARN-3368
>
>
> - Add breadcrumps in all pages
> - Define a vertical space (to the left) for displaying sub-pages



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5344) [YARN-3368] Generic UI improvements

2016-07-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5344.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Committed with YARN-5321. (1b56d537f5d5cee957dc66a04b4116534bb72f3a). Thanks 
[~Sreenath]

> [YARN-3368] Generic UI improvements
> ---
>
> Key: YARN-5344
> URL: https://issues.apache.org/jira/browse/YARN-5344
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>
> - Add breadcrumps in all pages
> - Define a vertical space (to the left) for displaying sub-pages



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379570#comment-15379570
 ] 

Weiwei Yang commented on YARN-5309:
---

My bad, my local UT passed in both the positive and negative cases, so I 
overlooked this result from Jenkins. I have just added the dependency to the 
pom and triggered a new Jenkins run. Thanks a lot [~vvasudev]!

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch, YARN-5309.005.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.<init>(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.<init>(YarnConfiguration) line: 96  
>   YARNRunner.<init>(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.<init>(InetSocketAddress, Configuration) line: 82   
>   Cluster.<init>(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.<init>(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-5309:
--
Attachment: YARN-5309.005.patch

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch, YARN-5309.005.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.<init>(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.<init>(YarnConfiguration) line: 96  
>   YARNRunner.<init>(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.<init>(InetSocketAddress, Configuration) line: 82   
>   Cluster.<init>(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.<init>(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379562#comment-15379562
 ] 

Sunil G commented on YARN-5321:
---

Thanks [~Sreenath]. Committing the same. I will raise another ticket if any 
alignment issues come up while resizing.

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, users can understand the distribution of resources allocated to 
> this application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379561#comment-15379561
 ] 

Jason Lowe commented on YARN-5382:
--

Thanks for the patch, [~vrushalic]!

I don't think we should do a vanilla log here.  It won't appear in the audit 
log stream, and that's where users are going to look if they have configured 
the audit log to a separate file (as we have).  We should simply audit log a 
success here, as for all practical purposes it is a success.  We verified the 
app ID is valid, active, and the user has permissions to do so.  At this point 
the only race is whether the app completes before the kill arrives, but we 
already log success when the kill arrives after the app completes, so why not 
here as well?
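For illustration, roughly what that success entry could look like once the request has been validated; the RMAuditLogger overload and the operation string below are assumed from memory rather than taken from the patch:

{code}
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger;

public class KillAuditSketch {
  // Audit-log the kill of a still-active application as a success, so the
  // entry lands in the audit log stream rather than the generic RM log.
  static void auditKill(String user, ApplicationId appId) {
    RMAuditLogger.logSuccess(user, "Kill Application Request",
        "ClientRMService", appId);
  }
}
{code}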

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5272) Handle queue names consistently in FairScheduler

2016-07-15 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379209#comment-15379209
 ] 

Wilfred Spiegelenburg commented on YARN-5272:
-

Sorry for the late reply. I ran into some DNS issues that took clusters down 
and have not paid attention to this one.

I would like to move the abstraction of the trim to a new jira. There is more 
trimming and splitting in the FairScheduler code that needs to be standardised 
to use utility methods.
The two points where we currently use it are split over two files, and we 
probably want to introduce a new class, FairSchedulerUtilities, that holds all 
these methods. Would you like me to log a new jira for it and make a start with 
the trim and split changes?
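To illustrate the difference being discussed, a small sketch comparing the JDK trim with the Guava whitespace trim on a queue name containing a non-breaking space (the proposed FairSchedulerUtilities does not exist yet, so this is plain Java plus Guava):

{code}
import com.google.common.base.CharMatcher;

public class QueueNameTrimSketch {
  public static void main(String[] args) {
    // "root.q" followed by a non-breaking space (U+00A0)
    String queueName = "root.q\u00A0";

    // String.trim() only strips code points <= U+0020, so the NBSP survives.
    String jdkTrimmed = queueName.trim();

    // Guava trims by the Unicode white-space definition, so the NBSP is
    // removed (older Guava exposes this as the CharMatcher.WHITESPACE field).
    String guavaTrimmed = CharMatcher.whitespace().trimFrom(queueName);

    // The lengths differ, which is how the same queue can be registered twice
    // and trigger "Metrics source XXX already exists!".
    System.out.println(jdkTrimmed.length() + " vs " + guavaTrimmed.length());
  }
}
{code}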

> Handle queue names consistently in FairScheduler
> 
>
> Key: YARN-5272
> URL: https://issues.apache.org/jira/browse/YARN-5272
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5272.1.patch, YARN-5272.3.patch, YARN-5272.4.patch
>
>
> The fix used in YARN-3241 uses the JDK trim() method to remove leading and 
> trailing spaces. The QueueMetrics uses a guava based trim when it splits the 
> queues.
> The guava based trim uses the unicode definition of a white space which is 
> different than the java trim as can be seen 
> [here|https://docs.google.com/a/cloudera.com/spreadsheets/d/1kq4ECwPjHX9B8QUCTPclgsDCXYaj7T-FlT4tB5q3ahk/pub]
> A queue name with a non-breaking white space will thus still cause the same 
> "Metrics source XXX already exists!" MetricsException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5272) Handle queue names consistently in FairScheduler

2016-07-15 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-5272:

Description: 
The fix used in YARN-3241 uses the JDK trim() method to remove leading and 
trailing spaces. The QueueMetrics uses a guava based trim when it splits the 
queues.

The guava based trim uses the unicode definition of a white space which is 
different than the java trim as can be seen 
[here|https://docs.google.com/a/cloudera.com/spreadsheets/d/1kq4ECwPjHX9B8QUCTPclgsDCXYaj7T-FlT4tB5q3ahk/pub]

A queue name with a non-breaking white space will thus still cause the same 
"Metrics source XXX already exists!" MetricsException.

  was:
The fix used in YARN-3214 uses a the JDK trim() method to remove leading and 
trailing spaces. The QueueMetrics uses a guava based trim when it splits the 
queues.

The guava based trim uses the unicode definition of a white space which is 
different than the java trim as can be seen 
[here|https://docs.google.com/a/cloudera.com/spreadsheets/d/1kq4ECwPjHX9B8QUCTPclgsDCXYaj7T-FlT4tB5q3ahk/pub]

A queue name with a non-breaking white space will thus still cause the same 
"Metrics source XXX already exists!" MetricsException.


> Handle queue names consistently in FairScheduler
> 
>
> Key: YARN-5272
> URL: https://issues.apache.org/jira/browse/YARN-5272
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5272.1.patch, YARN-5272.3.patch, YARN-5272.4.patch
>
>
> The fix used in YARN-3241 uses the JDK trim() method to remove leading and 
> trailing spaces. The QueueMetrics uses a guava based trim when it splits the 
> queues.
> The guava based trim uses the unicode definition of a white space which is 
> different than the java trim as can be seen 
> [here|https://docs.google.com/a/cloudera.com/spreadsheets/d/1kq4ECwPjHX9B8QUCTPclgsDCXYaj7T-FlT4tB5q3ahk/pub]
> A queue name with a non-breaking white space will thus still cause the same 
> "Metrics source XXX already exists!" MetricsException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-15 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379158#comment-15379158
 ] 

Varun Vasudev commented on YARN-5309:
-

Thanks for the patch [~cheersyang]! Can you investigate the unit test failure? 
It looks like the failed unit test is the one you added as part of your patch. 
I suspect you have to update the pom to add org.bouncycastle.

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
>Priority: Blocker
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch, 
> YARN-5309.003.patch, YARN-5309.004.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.<init>(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.<init>(YarnConfiguration) line: 96  
>   YARNRunner.<init>(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.<init>(InetSocketAddress, Configuration) line: 82   
>   Cluster.<init>(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.<init>(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-07-15 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379131#comment-15379131
 ] 

Andras Bokor commented on YARN-5159:


Thanks [~ajisakaa]!

> Wrong Javadoc tag in MiniYarnCluster
> 
>
> Key: YARN-5159
> URL: https://issues.apache.org/jira/browse/YARN-5159
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.0
>
> Attachments: YARN-5159.01.patch, YARN-5159.02.patch, 
> YARN-5159.03.patch
>
>
> {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is wrong. Should 
> be changed to 
>  {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
> Edit:
> I noted that, due to Java 8 javadoc restrictions, the javadoc:test-javadoc goal 
> fails on the hadoop-yarn-server-tests project.
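
For reference, a small illustrative example (not the actual MiniYarnCluster source) of the corrected inline tag; the broken form omitted the value keyword and used a dot instead of '#' between the class and the constant:

{code}
// Illustrative example only, not the MiniYarnCluster javadoc itself.
// Broken form (missing the "value" keyword, '.' instead of '#'):
//   {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
// Unknown inline tags like this are rejected by the stricter Java 8 doclint,
// which appears to be why the javadoc:test-javadoc goal fails.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

/**
 * Corrected form: the constant's value is inlined by javadoc via
 * {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}.
 */
public class JavadocValueTagExample {
}
{code}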



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-15 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379130#comment-15379130
 ] 

Sreenath Somarajapuram commented on YARN-5321:
--

+1 LGTM
Other alignment issues can be dealt with in another ticket.

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, user can understand distribution of resources allocated to this 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379128#comment-15379128
 ] 

Hadoop QA commented on YARN-5382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 58s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
13s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-2.7 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1271 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 36s 
{color} | {color:red} The patch 70 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 10s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 42s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 129m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.Tes

[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379096#comment-15379096
 ] 

Hudson commented on YARN-5380:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10104 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10104/])
YARN-5380. NMTimelinePublisher should use getMemorySize instead of 
(naganarasimha_gr: rev b5ee7dbd8dde756bc556f823327328f511048021)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/timelineservice/NMTimelinePublisher.java


> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 
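
As a small illustration (a hypothetical helper, not the NMTimelinePublisher change itself), the replacement is a one-call swap; getMemorySize() also widens the return type to long:

{code}
import org.apache.hadoop.yarn.api.records.Resource;

public final class MemoryAccessorExample {

  private MemoryAccessorExample() {
  }

  // Preferred accessor: long-valued and not deprecated.
  public static long allocatedMemoryMB(Resource resource) {
    return resource.getMemorySize();  // instead of the deprecated resource.getMemory()
  }
}
{code}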



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5387) FairScheduler: add the ability to specify a parent queue to all placement rules

2016-07-15 Thread Wilfred Spiegelenburg (JIRA)
Wilfred Spiegelenburg created YARN-5387:
---

 Summary: FairScheduler: add the ability to specify a parent queue 
to all placement rules
 Key: YARN-5387
 URL: https://issues.apache.org/jira/browse/YARN-5387
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: fairscheduler
Reporter: Wilfred Spiegelenburg
Assignee: Wilfred Spiegelenburg


In the current placement policy, all rules generate a queue name under the 
root. The only exception is the nestedUserQueue rule, which allows a queue to 
be created under a parent queue defined by a second rule.

Instead of creating new nested variants (nested groups, secondary groups or 
other nested queues) for every new rule we think of, we should generalise this 
by allowing a parent attribute to be specified in each rule, like the create flag.

The optional parent attribute for a rule should allow the following values:
- empty (which is the same as not specifying the attribute)
- a rule
- a fixed value (with or without the root prefix)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-15 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379008#comment-15379008
 ] 

Naganarasimha G R commented on YARN-5380:
-

Seems to be a simple fix; I will commit the patch shortly!

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-15 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378954#comment-15378954
 ] 

Naganarasimha G R edited comment on YARN-5342 at 7/15/16 8:13 AM:
--

Hi [~sunilg] & [~wangda],
   Few thoughts on the approach mentioned by both of you 
{quote}
if (Resources.greaterThan(rc,
application.getCSContext().getClusterResource(),
application.getCSContext().getClusterResourceUsage()
.getPending(node.getPartition()), Resources.none()) || StringUtils
.equals(node.getPartition(), RMNodeLabelsManager.NO_LABEL)) {
  application.resetMissedNonPartitionedRequestSchedulingOpportunity(
  priority);
}
{quote}
The issue I see is that we are resetting the counter at the application level 
but evaluating the pending resources of the current node's partition. What if 
there are multiple non-exclusive partitions?
Similar concerns apply to the approaches captured by Sunil too.
I believe the idea here is to delay the allocation in non-exclusive mode for some 
time so that preemption doesn't kick in, so would it be a good idea to depend on 
the partition of the node?



was (Author: naganarasimha):
Hi [~sunilg] & [~wangda],
   Few thoughts on the approach mentioned by both of you 
{quote}
if (Resources.greaterThan(rc,
application.getCSContext().getClusterResource(),
application.getCSContext().getClusterResourceUsage()
.getPending(node.getPartition()), Resources.none()) || StringUtils
.equals(node.getPartition(), RMNodeLabelsManager.NO_LABEL)) {
  application.resetMissedNonPartitionedRequestSchedulingOpportunity(
  priority);
}
{quote}
Issue i see is we are trying to reverting the counter at application level but 
trying to evaluate the pending resources of current node partition. What if 
multiple non exclusive partitions are there?
Similar things apply for the approaches captured by Sunil too.
I beleive idea here is to delay the allocation in non exclusive mode till some 
time so that preemption doesnt kick in. so would it be a good idea to depend on 
partition of the node ?


> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5342.1.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible when the missed-opportunity >= #cluster-nodes, and 
> missed-opportunity is reset when a container is allocated to any node.
> This slows down the frequency of container allocation on a non-exclusive 
> node partition: *when a non-exclusive partition=x has idle resources, we can 
> only allocate one container for this app every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get an allocation from 
> the default partition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378956#comment-15378956
 ] 

Wangda Tan commented on YARN-5047:
--

[~kasha], please give me one more day to look at this; feel free to commit it 
if I cannot review it by tomorrow (Friday).

Thanks,

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-15 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378954#comment-15378954
 ] 

Naganarasimha G R commented on YARN-5342:
-

Hi [~sunilg] & [~wangda],
   Few thoughts on the approach mentioned by both of you 
{quote}
if (Resources.greaterThan(rc,
application.getCSContext().getClusterResource(),
application.getCSContext().getClusterResourceUsage()
.getPending(node.getPartition()), Resources.none()) || StringUtils
.equals(node.getPartition(), RMNodeLabelsManager.NO_LABEL)) {
  application.resetMissedNonPartitionedRequestSchedulingOpportunity(
  priority);
}
{quote}
The issue I see is that we are resetting the counter at the application level 
but evaluating the pending resources of the current node's partition. What if 
there are multiple non-exclusive partitions?
Similar concerns apply to the approaches captured by Sunil too.
I believe the idea here is to delay the allocation in non-exclusive mode for some 
time so that preemption doesn't kick in, so would it be a good idea to depend on 
the partition of the node?


> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5342.1.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible when the missed-opportunity >= #cluster-nodes, and 
> missed-opportunity is reset when a container is allocated to any node.
> This slows down the frequency of container allocation on a non-exclusive 
> node partition: *when a non-exclusive partition=x has idle resources, we can 
> only allocate one container for this app every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get an allocation from 
> the default partition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378949#comment-15378949
 ] 

Naganarasimha G R commented on YARN-5382:
-

Thanks for the info; I missed the mail about 2.7.3 from Vinod! I have updated it back.

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5382:
-
Attachment: YARN-5382-branch-2.7.01.patch

Uploading patch v1.

[~jlowe]: Is this along the lines of what you had in mind? 

I used a simpler LOG.info and did not use the RMAuditLogger.logSuccess call, 
since I figured we don't know at that point whether it is a SUCCESS or not. 
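
A rough sketch of the kind of log line being described, using a hypothetical helper rather than the attached patch; the point is to record the kill request without calling RMAuditLogger.logSuccess, since the outcome is not yet known:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public final class KillRequestLogging {

  private static final Log LOG = LogFactory.getLog(KillRequestLogging.class);

  private KillRequestLogging() {
  }

  // Called when a kill is requested for an application that is still active.
  // A plain info line is used instead of RMAuditLogger.logSuccess because the
  // kill has only been requested at this point and may still fail.
  public static void logKillRequest(String user, ApplicationId appId) {
    LOG.info("Kill request received from user " + user
        + " for application " + appId);
  }
}
{code}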

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-07-15 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-5382:

Target Version/s: 2.7.4  (was: 2.7.3)

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org