[jira] [Commented] (YARN-4969) Fix more loggings in CapacityScheduler

2024-01-04 Thread Shilun Fan (Jira)


[ https://issues.apache.org/jira/browse/YARN-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802750#comment-17802750 ]

Shilun Fan commented on YARN-4969:
--

Bulk update: moved all 3.4.0 non-blocker issues; please move back if this is a 
blocker. Retargeting to 3.5.0.

> Fix more loggings in CapacityScheduler
> --
>
> Key: YARN-4969
> URL: https://issues.apache.org/jira/browse/YARN-4969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
>  Labels: oct16-easy
> Attachments: YARN-4969.1.patch
>
>
> YARN-3966 previously cleaned up logging in the Capacity Scheduler; however, 
> there are still some log messages we need to improve:
> Container allocation / completion / reservation / un-reservation messages at 
> every level of the hierarchy (app/leaf/parent queue) should be printed at 
> INFO level. I'm debugging an issue where the root queue's resource usage can 
> go negative; it is very hard to reproduce, so we cannot enable debug logging 
> from RM start, as a log of that size cannot fit on a single disk.
> The existing CS prints an INFO message when a container cannot be allocated, 
> e.g. on re-reservation / node heartbeat, etc.; we should avoid printing such 
> messages at INFO level.
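
As a rough illustration of the policy described above, here is a minimal 
sketch assuming commons-logging (the facade Hadoop used at the time); the 
class, methods, and messages are hypothetical, not taken from 
YARN-4969.1.patch:

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class SchedulerLoggingSketch {
  private static final Log LOG =
      LogFactory.getLog(SchedulerLoggingSketch.class);

  // Lifecycle events (allocate/complete/reserve/unreserve) are rare
  // relative to heartbeats, so INFO is affordable and keeps resource
  // accounting debuggable from production logs.
  void onContainerAllocated(String containerId, String queuePath) {
    LOG.info("assignedContainer container=" + containerId
        + " queue=" + queuePath);
  }

  // "Cannot allocate" outcomes (e.g. re-reservation) fire on nearly every
  // node heartbeat, so they are guarded and demoted to DEBUG.
  void onReReservation(String containerId, String nodeId) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("re-reserved container=" + containerId
          + " on node=" + nodeId);
    }
  }
}
{code}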






[jira] [Commented] (YARN-4969) Fix more loggings in CapacityScheduler

2016-10-27 Thread Jonathan Hung (JIRA)

[ https://issues.apache.org/jira/browse/YARN-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613518#comment-15613518 ]

Jonathan Hung commented on YARN-4969:
-

This patch looks OK to me. One very minor nit: in AbstractContainerAllocator 
you can combine the adjacent string literals in {noformat}"Reserved container " 
+ " application="{noformat}.
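
To spell out the nit, a hedged sketch (LOG and appId stand in for whatever the 
real statement in AbstractContainerAllocator uses):

{code:java}
// Before: two adjacent string literals, which also leaves a doubled space
// in the rendered message.
LOG.info("Reserved container " + " application=" + appId);
// After: a single literal; javac folds adjacent string constants anyway,
// so this is purely a readability (and spacing) fix.
LOG.info("Reserved container application=" + appId);
{code}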

> Fix more loggings in CapacityScheduler
> --
>
> Key: YARN-4969
> URL: https://issues.apache.org/jira/browse/YARN-4969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-easy
> Attachments: YARN-4969.1.patch
>
>
> YARN-3966 previously cleaned up logging in the Capacity Scheduler; however, 
> there are still some log messages we need to improve:
> Container allocation / completion / reservation / un-reservation messages at 
> every level of the hierarchy (app/leaf/parent queue) should be printed at 
> INFO level. I'm debugging an issue where the root queue's resource usage can 
> go negative; it is very hard to reproduce, so we cannot enable debug logging 
> from RM start, as a log of that size cannot fit on a single disk.
> The existing CS prints an INFO message when a container cannot be allocated, 
> e.g. on re-reservation / node heartbeat, etc.; we should avoid printing such 
> messages at INFO level.






[jira] [Commented] (YARN-4969) Fix more loggings in CapacityScheduler

2016-05-11 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/YARN-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15281068#comment-15281068 ]

Hadoop QA commented on YARN-4969:
-

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 8m 20s | trunk passed |
| +1 | compile | 0m 37s | trunk passed with JDK v1.8.0_91 |
| +1 | compile | 0m 35s | trunk passed with JDK v1.7.0_95 |
| +1 | checkstyle | 0m 29s | trunk passed |
| +1 | mvnsite | 0m 43s | trunk passed |
| +1 | mvneclipse | 0m 18s | trunk passed |
| +1 | findbugs | 1m 21s | trunk passed |
| +1 | javadoc | 0m 29s | trunk passed with JDK v1.8.0_91 |
| +1 | javadoc | 0m 33s | trunk passed with JDK v1.7.0_95 |
| +1 | mvninstall | 0m 38s | the patch passed |
| +1 | compile | 0m 35s | the patch passed with JDK v1.8.0_91 |
| +1 | javac | 0m 35s | the patch passed |
| +1 | compile | 0m 34s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 0m 34s | the patch passed |
| -1 | checkstyle | 0m 27s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: patch generated 1 new + 212 unchanged - 0 fixed = 213 total (was 212) |
| +1 | mvnsite | 0m 40s | the patch passed |
| +1 | mvneclipse | 0m 15s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 1m 35s | the patch passed |
| +1 | javadoc | 0m 27s | the patch passed with JDK v1.8.0_91 |
| +1 | javadoc | 0m 30s | the patch passed with JDK v1.7.0_95 |
| -1 | unit | 35m 23s | hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_91. |
| -1 | unit | 35m 57s | hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_95. |
| +1 | asflicense | 0m 21s | Patch does not generate ASF License warnings. |
| | | 92m 11s | |

|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestChildQueueOrder |
| | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| JDK v1.7.0_95 Failed junit tests | 

[jira] [Commented] (YARN-4969) Fix more loggings in CapacityScheduler

2016-05-02 Thread Wangda Tan (JIRA)

[ https://issues.apache.org/jira/browse/YARN-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267765#comment-15267765 ]

Wangda Tan commented on YARN-4969:
--

[~vvasudev],

I would prefer not to do that, since the scheduler currently depends on other 
components, for example ApplicationMasterService. If we moved the scheduler 
logs to a separate file, it would be inconvenient to view logs in context: for 
example, to see when an application was submitted to the RM versus when it was 
submitted to the scheduler, we would need to grep two log files.

Thoughts?

> Fix more loggings in CapacityScheduler
> --
>
> Key: YARN-4969
> URL: https://issues.apache.org/jira/browse/YARN-4969
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4969.1.patch
>
>
> YARN-3966 previously cleaned up logging in the Capacity Scheduler; however, 
> there are still some log messages we need to improve:
> Container allocation / completion / reservation / un-reservation messages at 
> every level of the hierarchy (app/leaf/parent queue) should be printed at 
> INFO level. I'm debugging an issue where the root queue's resource usage can 
> go negative; it is very hard to reproduce, so we cannot enable debug logging 
> from RM start, as a log of that size cannot fit on a single disk.
> The existing CS prints an INFO message when a container cannot be allocated, 
> e.g. on re-reservation / node heartbeat, etc.; we should avoid printing such 
> messages at INFO level.






[jira] [Commented] (YARN-4969) Fix more loggings in CapacityScheduler

2016-05-01 Thread Varun Vasudev (JIRA)

[ https://issues.apache.org/jira/browse/YARN-4969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265656#comment-15265656 ]

Varun Vasudev commented on YARN-4969:
-

[~leftnoteasy] - do you think it makes sense to just move all the capacity 
scheduler logging into its own file?
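
For reference, a minimal sketch of what that could look like in the RM's log4j 
configuration; the appender name, file path, and rotation settings below are 
made up for illustration:

{code}
# Hypothetical log4j.properties fragment: route capacity-scheduler logging
# to its own rolling file instead of the main RM log.
log4j.appender.CS=org.apache.log4j.RollingFileAppender
log4j.appender.CS.File=${hadoop.log.dir}/capacity-scheduler.log
log4j.appender.CS.MaxFileSize=256MB
log4j.appender.CS.MaxBackupIndex=20
log4j.appender.CS.layout=org.apache.log4j.PatternLayout
log4j.appender.CS.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n

# Send the scheduler package to the dedicated appender, and disable
# additivity so the messages stop flowing to the root logger's file too.
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity=INFO,CS
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity=false
{code}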

> Fix more loggings in CapacityScheduler
> --
>
> Key: YARN-4969
> URL: https://issues.apache.org/jira/browse/YARN-4969
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4969.1.patch
>
>
> YARN-3966 previously cleaned up logging in the Capacity Scheduler; however, 
> there are still some log messages we need to improve:
> Container allocation / completion / reservation / un-reservation messages at 
> every level of the hierarchy (app/leaf/parent queue) should be printed at 
> INFO level. I'm debugging an issue where the root queue's resource usage can 
> go negative; it is very hard to reproduce, so we cannot enable debug logging 
> from RM start, as a log of that size cannot fit on a single disk.
> The existing CS prints an INFO message when a container cannot be allocated, 
> e.g. on re-reservation / node heartbeat, etc.; we should avoid printing such 
> messages at INFO level.


