[jira] [Commented] (YARN-3860) rmadmin -transitionToActive should check the state of non-target node

2015-06-28 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605207#comment-14605207
 ] 

Masatake Iwasaki commented on YARN-3860:


Thanks, [~zxu] and [~djp]!

> rmadmin -transitionToActive should check the state of non-target node
> -
>
> Key: YARN-3860
> URL: https://issues.apache.org/jira/browse/YARN-3860
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: YARN-3860.001.patch, YARN-3860.002.patch, 
> YARN-3860.003.patch
>
>
> Users can make both ResourceManagers active with {{rmadmin -transitionToActive}} 
> even if the {{\--forceactive}} option is not given. HDFS's {{haadmin 
> -transitionToActive}} checks whether non-target nodes are already 
> active, but {{rmadmin -transitionToActive}} does not.
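The intended check can be sketched as follows. This is a hypothetical illustration, not the committed patch: the real change goes through the HA admin protocol in RMAdminCLI, and the {{StateProbe}} interface here is invented for the sketch.

```java
import java.util.List;

// Sketch of the pre-transition check. All names here are assumptions
// made for illustration; the actual implementation queries each RM's
// state via HAServiceProtocol inside RMAdminCLI.
class TransitionGuard {
    interface StateProbe {
        String stateOf(String rmId); // e.g. "ACTIVE" or "STANDBY"
    }

    static boolean canTransitionToActive(String target, List<String> allRmIds,
                                         StateProbe probe, boolean forceActive) {
        if (forceActive) {
            return true; // --forceactive skips the safety check
        }
        for (String id : allRmIds) {
            // Refuse if any non-target RM is already active.
            if (!id.equals(target) && "ACTIVE".equals(probe.stateOf(id))) {
                return false;
            }
        }
        return true;
    }
}
```

With two configured RMs the probe is consulted once for the non-target node, which is why the test discussion below talks about times(1) for "rm1, rm2" and times(2) for "rm1, rm2, rm3".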



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2015-06-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-3861:

Attachment: RM UI in IE-Without Patch.png.png
RM UI in IE-With Patch.png
RM UI in Chrome-Without Patch.png
RM UI in Chrome-With Patch.png

> Add fav icon to YARN & MR daemons web UI
> 
>
> Key: YARN-3861
> URL: https://issues.apache.org/jira/browse/YARN-3861
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: RM UI in Chrome-With Patch.png, RM UI in Chrome-Without 
> Patch.png, RM UI in IE-With Patch.png, RM UI in IE-Without Patch.png.png, 
> YARN-3861.patch, hadoop-fav.png
>
>
> Add fav icon image to all YARN & MR daemons web UI.





[jira] [Commented] (YARN-2004) Priority scheduling support in Capacity scheduler

2015-06-28 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605153#comment-14605153
 ] 

Sunil G commented on YARN-2004:
---

Ah. About SchedulerApplicationAttempt, we still need a null check for the other 
schedulers. I'll update the patch with it.

> Priority scheduling support in Capacity scheduler
> -
>
> Key: YARN-2004
> URL: https://issues.apache.org/jira/browse/YARN-2004
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-2004.patch, 0002-YARN-2004.patch, 
> 0003-YARN-2004.patch, 0004-YARN-2004.patch, 0005-YARN-2004.patch, 
> 0006-YARN-2004.patch, 0007-YARN-2004.patch, 0008-YARN-2004.patch
>
>
> Based on the priority of the application, the Capacity Scheduler should be able 
> to give preference to higher-priority applications while scheduling.
> The comparator applicationComparator can be changed as below:
> 
> 1. Check the application priority. If priorities are available, order by 
> them, highest first.
> 2. Otherwise continue with the existing logic: App ID comparison, 
> then timestamp comparison.
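The two steps above could be sketched as a comparator like this. This is an illustrative sketch only; the {{App}} class and its fields are assumptions, not the Capacity Scheduler's actual types.

```java
import java.util.Comparator;

// Hypothetical application descriptor; the real patch operates on the
// Capacity Scheduler's internal application classes.
class App {
    Integer priority;   // may be null when no priority is set
    long appId;
    long submitTime;

    App(Integer priority, long appId, long submitTime) {
        this.priority = priority;
        this.appId = appId;
        this.submitTime = submitTime;
    }
}

class AppPriorityComparator implements Comparator<App> {
    @Override
    public int compare(App a, App b) {
        // Step 1: when both priorities are available and differ,
        // the higher priority sorts first.
        if (a.priority != null && b.priority != null
                && !a.priority.equals(b.priority)) {
            return b.priority - a.priority;
        }
        // Step 2: fall back to the existing ordering:
        // app id first, then submit timestamp.
        if (a.appId != b.appId) {
            return Long.compare(a.appId, b.appId);
        }
        return Long.compare(a.submitTime, b.submitTime);
    }
}
```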





[jira] [Updated] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2015-06-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-3861:

Attachment: hadoop-fav.png
YARN-3861.patch

> Add fav icon to YARN & MR daemons web UI
> 
>
> Key: YARN-3861
> URL: https://issues.apache.org/jira/browse/YARN-3861
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: YARN-3861.patch, hadoop-fav.png
>
>
> Add fav icon image to all YARN & MR daemons web UI.





[jira] [Commented] (YARN-3445) Cache runningApps in RMNode for getting running apps on given NodeId

2015-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605139#comment-14605139
 ] 

Hadoop QA commented on YARN-3445:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 14s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in 
hadoop-sls. |
| {color:green}+1{color} | yarn tests |  50m 48s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  92m  8s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742445/YARN-3445-v3.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / aad6a7d |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8371/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8371/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8371/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8371/console |


This message was automatically generated.

> Cache runningApps in RMNode for getting running apps on given NodeId
> 
>
> Key: YARN-3445
> URL: https://issues.apache.org/jira/browse/YARN-3445
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-3445-v2.patch, YARN-3445-v3.1.patch, 
> YARN-3445-v3.patch, YARN-3445.patch
>
>
> Per the discussion in YARN-3334, we need to filter out unnecessary collector 
> info from the RM in the heartbeat response. Our proposal is to add a cache of 
> runningApps in RMNode, so the RM sends back only the collectors for locally 
> running apps. This is also needed for YARN-914 (graceful decommission): if an 
> NM in the decommissioning stage has no running apps, it gets decommissioned 
> immediately.
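A minimal sketch of the proposed cache, with invented names (the real change would live in RMNodeImpl and the heartbeat-handling path, which this sketch does not reproduce):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical shape of the runningApps cache on an RM-side node object.
class RMNodeSketch {
    // Applications with at least one container running on this node.
    private final Set<String> runningApps = ConcurrentHashMap.newKeySet();

    void containerStarted(String appId) {
        runningApps.add(appId);
    }

    void appFinished(String appId) {
        runningApps.remove(appId);
    }

    // The RM can now answer "which apps run on this node" locally and
    // send back only the collectors for those apps in the heartbeat.
    boolean isRunning(String appId) {
        return runningApps.contains(appId);
    }

    // Needed by graceful decommission (YARN-914): a draining node with
    // no running apps can be decommissioned right away.
    boolean canDecommissionImmediately() {
        return runningApps.isEmpty();
    }
}
```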





[jira] [Created] (YARN-3861) Add fav icon to YARN & MR daemons web UI

2015-06-28 Thread Devaraj K (JIRA)
Devaraj K created YARN-3861:
---

 Summary: Add fav icon to YARN & MR daemons web UI
 Key: YARN-3861
 URL: https://issues.apache.org/jira/browse/YARN-3861
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp
Reporter: Devaraj K
Assignee: Devaraj K


Add fav icon image to all YARN & MR daemons web UI.





[jira] [Commented] (YARN-3860) rmadmin -transitionToActive should check the state of non-target node

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605131#comment-14605131
 ] 

Hudson commented on YARN-3860:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8082 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8082/])
YARN-3860. rmadmin -transitionToActive should check the state of non-target 
node. (Contributed by Masatake Iwasaki) (junping_du: rev 
a95d39f9d08b3b215a1b33e77e9ab8a2ee59b3a9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
* hadoop-yarn-project/CHANGES.txt


> rmadmin -transitionToActive should check the state of non-target node
> -
>
> Key: YARN-3860
> URL: https://issues.apache.org/jira/browse/YARN-3860
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: YARN-3860.001.patch, YARN-3860.002.patch, 
> YARN-3860.003.patch
>
>
> Users can make both ResourceManagers active with {{rmadmin -transitionToActive}} 
> even if the {{\--forceactive}} option is not given. HDFS's {{haadmin 
> -transitionToActive}} checks whether non-target nodes are already 
> active, but {{rmadmin -transitionToActive}} does not.





[jira] [Commented] (YARN-3860) rmadmin -transitionToActive should check the state of non-target node

2015-06-28 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605108#comment-14605108
 ] 

zhihai xu commented on YARN-3860:
-

Yes, it makes sense. Although they are equivalent, it is easier to change from 
times(1) to times(2) if we have "rm1, rm2, rm3" in the config settings. 
+1 (non-binding) for the latest patch.


> rmadmin -transitionToActive should check the state of non-target node
> -
>
> Key: YARN-3860
> URL: https://issues.apache.org/jira/browse/YARN-3860
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: YARN-3860.001.patch, YARN-3860.002.patch, 
> YARN-3860.003.patch
>
>
> Users can make both ResourceManagers active with {{rmadmin -transitionToActive}} 
> even if the {{\--forceactive}} option is not given. HDFS's {{haadmin 
> -transitionToActive}} checks whether non-target nodes are already 
> active, but {{rmadmin -transitionToActive}} does not.





[jira] [Commented] (YARN-3838) Rest API failing when ip configured in RM address in secure https mode

2015-06-28 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605086#comment-14605086
 ] 

Bibin A Chundatt commented on YARN-3838:


[~jianhe] Could you please review the patch if possible?

> Rest API failing when ip configured in RM address in secure https mode
> --
>
> Key: YARN-3838
> URL: https://issues.apache.org/jira/browse/YARN-3838
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: 0001-HADOOP-12096.patch, 0001-YARN-3810.patch, 
> 0001-YARN-3838.patch, 0002-YARN-3810.patch, 0002-YARN-3838.patch
>
>
> Steps to reproduce
> ===
> 1. Configure hadoop.http.authentication.kerberos.principal as below:
> {code:xml}
> <property>
>   <name>hadoop.http.authentication.kerberos.principal</name>
>   <value>HTTP/_h...@hadoop.com</value>
> </property>
> {code}
> 2. Also configure an IP in the RM web address.
> 3. Start up the RM.
> Call the RM REST API: {{curl -i -k --insecure --negotiate -u : 
> https://<RM-IP>/ws/v1/cluster/info}}
> *Actual*
> The REST API fails:
> {code}
> 2015-06-16 19:03:49,845 DEBUG 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> Authentication exception: GSSException: No valid credentials provided 
> (Mechanism level: Failed to find any Kerberos credentails)
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos credentails)
>   at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:399)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
> {code}





[jira] [Updated] (YARN-3445) Cache runningApps in RMNode for getting running apps on given NodeId

2015-06-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-3445:
-
Attachment: YARN-3445-v3.1.patch

Rebased the patch against trunk.

> Cache runningApps in RMNode for getting running apps on given NodeId
> 
>
> Key: YARN-3445
> URL: https://issues.apache.org/jira/browse/YARN-3445
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-3445-v2.patch, YARN-3445-v3.1.patch, 
> YARN-3445-v3.patch, YARN-3445.patch
>
>
> Per the discussion in YARN-3334, we need to filter out unnecessary collector 
> info from the RM in the heartbeat response. Our proposal is to add a cache of 
> runningApps in RMNode, so the RM sends back only the collectors for locally 
> running apps. This is also needed for YARN-914 (graceful decommission): if an 
> NM in the decommissioning stage has no running apps, it gets decommissioned 
> immediately.





[jira] [Updated] (YARN-3846) RM Web UI queue filter is not working

2015-06-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-3846:

Summary: RM Web UI queue filter is not working  (was: RM Web UI queue 
fileter not working)

> RM Web UI queue filter is not working
> -
>
> Key: YARN-3846
> URL: https://issues.apache.org/jira/browse/YARN-3846
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.0
>Reporter: Mohammad Shahid Khan
>Assignee: Mohammad Shahid Khan
> Attachments: scheduler queue issue.png, scheduler queue positive 
> behavior.png
>
>
> Clicking on the root queue shows all applications, 
> but clicking on a leaf queue does not filter the applications down to 
> the clicked queue.
> The regular expression seems to be wrong: 
> {code}
> q = '^' + q.substr(q.lastIndexOf(':') + 2) + '$';",
> {code}
> For example:
> 1. Suppose the queue name is b. 
> Then the above expression takes the substring at index 1, because 
> q.lastIndexOf(':') = -1 and 
> -1 + 2 = 1, 
> which is wrong; it should look at index 0.
> 2. If the queue name is ab.x, 
> it is parsed to .x, 
> but it should be x.
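The off-by-one can be reproduced outside the UI. The filter itself is JavaScript, but String.lastIndexOf and substring arithmetic behave the same way in Java, so a Java sketch shows both the buggy index math and one possible fix. Anchoring on the last '.' of the queue path is an assumption for illustration, not the committed patch.

```java
// Reproduces the index arithmetic from the web UI queue filter.
class QueueFilterSketch {
    // Buggy shape: assumes the queue string always contains ':',
    // so lastIndexOf(':') == -1 silently becomes index 1.
    static String buggy(String q) {
        return q.substring(q.lastIndexOf(':') + 2);
    }

    // One possible fix (an assumption, not the actual patch):
    // anchor on the last '.' separator of the queue path, and fall
    // back to the whole string when there is no separator.
    static String fixed(String q) {
        int i = q.lastIndexOf('.');
        return i < 0 ? q : q.substring(i + 1);
    }
}
```

For a queue named "b", the buggy version returns the empty string, so the filter matches nothing; the fixed version keeps "b" and extracts "x" from "ab.x".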





[jira] [Updated] (YARN-3860) rmadmin -transitionToActive should check the state of non-target node

2015-06-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-3860:
-
Target Version/s: 2.8.0
Priority: Major  (was: Minor)
 Component/s: resourcemanager

> rmadmin -transitionToActive should check the state of non-target node
> -
>
> Key: YARN-3860
> URL: https://issues.apache.org/jira/browse/YARN-3860
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: YARN-3860.001.patch, YARN-3860.002.patch, 
> YARN-3860.003.patch
>
>
> Users can make both ResourceManagers active with {{rmadmin -transitionToActive}} 
> even if the {{\--forceactive}} option is not given. HDFS's {{haadmin 
> -transitionToActive}} checks whether non-target nodes are already 
> active, but {{rmadmin -transitionToActive}} does not.





[jira] [Commented] (YARN-3860) rmadmin -transitionToActive should check the state of non-target node

2015-06-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605073#comment-14605073
 ] 

Junping Du commented on YARN-3860:
--

Thanks [~iwasakims] for the nice catch and the patch! The latest patch LGTM. 
[~zxu], thanks for the review here. I think times(1) is quite useful here, as the 
call count indicates how many other nodes the loop takes into consideration. If 
we had "rm1, rm2, rm3" in the config settings, we would expect times(2). If you 
agree, I will go ahead and commit this patch soon.

> rmadmin -transitionToActive should check the state of non-target node
> -
>
> Key: YARN-3860
> URL: https://issues.apache.org/jira/browse/YARN-3860
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: YARN-3860.001.patch, YARN-3860.002.patch, 
> YARN-3860.003.patch
>
>
> Users can make both ResourceManagers active with {{rmadmin -transitionToActive}} 
> even if the {{\--forceactive}} option is not given. HDFS's {{haadmin 
> -transitionToActive}} checks whether non-target nodes are already 
> active, but {{rmadmin -transitionToActive}} does not.





[jira] [Commented] (YARN-3768) Index out of range exception with environment variables without values

2015-06-28 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605072#comment-14605072
 ] 

zhihai xu commented on YARN-3768:
-

Hi [~jira.shegalov], thanks for the new patch. I noticed the patch will accept 
"a=b=c" instead of discarding it. If the input is "a=b=c", it saves the env 
variable "a" with the value "b". Is that correct? I also noticed the patch 
discards env variables with empty string values. I am OK with that, but I just 
want to make sure we don't support env variables with empty string values.

> Index out of range exception with environment variables without values
> --
>
> Key: YARN-3768
> URL: https://issues.apache.org/jira/browse/YARN-3768
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.5.0
>Reporter: Joe Ferner
>Assignee: zhihai xu
> Attachments: YARN-3768.000.patch, YARN-3768.001.patch, 
> YARN-3768.002.patch
>
>
> Looking at line 80 of org.apache.hadoop.yarn.util.Apps, an index-out-of-range 
> exception occurs if an environment variable is encountered without a value.
> I believe this occurs because Java does not return trailing empty strings from 
> the split method. Similar to this: 
> http://stackoverflow.com/questions/14602062/java-string-split-removed-empty-values
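The split behavior is easy to demonstrate. This is an illustrative sketch, not the actual patch; the method names are invented.

```java
// String.split drops trailing empty strings by default, so
// "FOO=".split("=") has length 1 and indexing parts[1] throws
// ArrayIndexOutOfBoundsException.
class EnvParseSketch {
    // Naive parsing that crashes on "FOO=".
    static String naiveValueOf(String entry) {
        return entry.split("=")[1];
    }

    // A safer alternative: split on the first '=' only, so
    // "FOO" -> "", "FOO=" -> "", and "a=b=c" -> "b=c".
    static String valueOf(String entry) {
        int eq = entry.indexOf('=');
        return eq < 0 ? "" : entry.substring(eq + 1);
    }
}
```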





[jira] [Commented] (YARN-3857) Memory leak in ResourceManager with SIMPLE mode

2015-06-28 Thread mujunchao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605006#comment-14605006
 ] 

mujunchao commented on YARN-3857:
-

Ok, got it. Thanks, Devaraj, for reviewing.

> Memory leak in ResourceManager with SIMPLE mode
> ---
>
> Key: YARN-3857
> URL: https://issues.apache.org/jira/browse/YARN-3857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: mujunchao
>Assignee: mujunchao
>Priority: Critical
> Attachments: hadoop-yarn-server-resourcemanager.patch
>
>
>  We register the ClientTokenMasterKey so that a client does not hold an invalid 
> ClientToken after the RM restarts. In SIMPLE mode we register the 
> Pair as well, but we never remove it from the HashMap, because unregistering 
> only runs in secure mode, so the memory leak appears. 
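The leak pattern can be sketched as follows, with invented names; this shows only the shape of the bug and the fix, not the actual ResourceManager code.

```java
import java.util.HashMap;
import java.util.Map;

// Entries are added on registration in both SIMPLE and secure mode,
// but the unregister path only ran in secure mode, so SIMPLE-mode
// entries accumulated forever.
class ClientTokenRegistry {
    private final Map<String, byte[]> masterKeys = new HashMap<>();
    private final boolean securityEnabled;

    ClientTokenRegistry(boolean securityEnabled) {
        this.securityEnabled = securityEnabled;
    }

    void registerApplication(String attemptId) {
        masterKeys.put(attemptId, new byte[0]);
    }

    // Buggy shape: cleans up only when security is enabled.
    void unregisterApplicationBuggy(String attemptId) {
        if (securityEnabled) {
            masterKeys.remove(attemptId);
        }
    }

    // Fixed shape: always remove the entry.
    void unregisterApplication(String attemptId) {
        masterKeys.remove(attemptId);
    }

    int size() {
        return masterKeys.size();
    }
}
```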





[jira] [Commented] (YARN-3857) Memory leak in ResourceManager with SIMPLE mode

2015-06-28 Thread mujunchao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14605005#comment-14605005
 ] 

mujunchao commented on YARN-3857:
-

Thanks for your review; I will add the test case.

> Memory leak in ResourceManager with SIMPLE mode
> ---
>
> Key: YARN-3857
> URL: https://issues.apache.org/jira/browse/YARN-3857
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: mujunchao
>Assignee: mujunchao
>Priority: Critical
> Attachments: hadoop-yarn-server-resourcemanager.patch
>
>
>  We register the ClientTokenMasterKey so that a client does not hold an invalid 
> ClientToken after the RM restarts. In SIMPLE mode we register the 
> Pair as well, but we never remove it from the HashMap, because unregistering 
> only runs in secure mode, so the memory leak appears. 





[jira] [Commented] (YARN-3051) [Storage abstraction] Create backing storage read interface for ATS readers

2015-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604932#comment-14604932
 ] 

Hadoop QA commented on YARN-3051:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 56s | Findbugs (version ) appears to 
be broken on YARN-2928. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  3 
new checkstyle issues (total was 234, now 236). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 56s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in 
hadoop-yarn-api. |
| {color:green}+1{color} | yarn tests |   1m 57s | Tests passed in 
hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   1m 18s | Tests passed in 
hadoop-yarn-server-timelineservice. |
| | |  46m 29s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-timelineservice |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742423/YARN-3051-YARN-2928.05.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | YARN-2928 / 84f37f1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/8370/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8370/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-timelineservice.html
 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8370/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8370/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-timelineservice test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8370/artifact/patchprocess/testrun_hadoop-yarn-server-timelineservice.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8370/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8370/console |



> [Storage abstraction] Create backing storage read interface for ATS readers
> ---
>
> Key: YARN-3051
> URL: https://issues.apache.org/jira/browse/YARN-3051
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
> Attachments: YARN-3051-YARN-2928.003.patch, 
> YARN-3051-YARN-2928.03.patch, YARN-3051-YARN-2928.04.patch, 
> YARN-3051-YARN-2928.05.patch, YARN-3051.Reader_API.patch, 
> YARN-3051.Reader_API_1.patch, YARN-3051.Reader_API_2.patch, 
> YARN-3051.Reader_API_3.patch, YARN-3051.Reader_API_4.patch, 
> YARN-3051.wip.02.YARN-2928.patch, YARN-3051.wip.patch, YARN-3051_temp.patch
>
>
> Per design in YARN-2928, create backing storage read interface that can be 
> implemented by multiple backing storage implementations.





[jira] [Updated] (YARN-3500) Optimize ResourceManager Web loading speed

2015-06-28 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3500:
---
Assignee: (was: Varun Saxena)

> Optimize ResourceManager Web loading speed
> --
>
> Key: YARN-3500
> URL: https://issues.apache.org/jira/browse/YARN-3500
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Peter Shi
>
> After running 10k jobs, the ResourceManager web UI becomes slow to load. Since 
> the server side sends information for all 10k jobs in one response, parsing 
> and rendering the page takes a long time. The current paging logic runs in the 
> browser. This issue moves the paging logic to the server side so that loading 
> is fast.
> Loading 10k jobs takes 55 s; loading 2k takes 7 s.





[jira] [Assigned] (YARN-3500) Optimize ResourceManager Web loading speed

2015-06-28 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-3500:
--

Assignee: Varun Saxena

> Optimize ResourceManager Web loading speed
> --
>
> Key: YARN-3500
> URL: https://issues.apache.org/jira/browse/YARN-3500
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Peter Shi
>Assignee: Varun Saxena
>
> After running 10k jobs, the ResourceManager web UI becomes slow to load. Since 
> the server side sends information for all 10k jobs in one response, parsing 
> and rendering the page takes a long time. The current paging logic runs in the 
> browser. This issue moves the paging logic to the server side so that loading 
> is fast.
> Loading 10k jobs takes 55 s; loading 2k takes 7 s.





[jira] [Commented] (YARN-3047) [Data Serving] Set up ATS reader with basic request serving structure and lifecycle

2015-06-28 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604914#comment-14604914
 ] 

Varun Saxena commented on YARN-3047:


[~sjlee0], [~zjshen], kindly review and let me know if you have any comments. I 
will have to rebase the other patches once this one goes in, so the sooner the 
better. :)

> [Data Serving] Set up ATS reader with basic request serving structure and 
> lifecycle
> ---
>
> Key: YARN-3047
> URL: https://issues.apache.org/jira/browse/YARN-3047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: BB2015-05-TBR
> Attachments: Timeline_Reader(draft).pdf, 
> YARN-3047-YARN-2928.08.patch, YARN-3047-YARN-2928.09.patch, 
> YARN-3047.001.patch, YARN-3047.003.patch, YARN-3047.005.patch, 
> YARN-3047.006.patch, YARN-3047.007.patch, YARN-3047.02.patch, 
> YARN-3047.04.patch
>
>
> Per the design in YARN-2938, set up the ATS reader as a service and implement 
> its basic structure, including lifecycle management, request serving, and so 
> on.





[jira] [Updated] (YARN-3051) [Storage abstraction] Create backing storage read interface for ATS readers

2015-06-28 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3051:
---
Attachment: YARN-3051-YARN-2928.05.patch

> [Storage abstraction] Create backing storage read interface for ATS readers
> ---
>
> Key: YARN-3051
> URL: https://issues.apache.org/jira/browse/YARN-3051
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
> Attachments: YARN-3051-YARN-2928.003.patch, 
> YARN-3051-YARN-2928.03.patch, YARN-3051-YARN-2928.04.patch, 
> YARN-3051-YARN-2928.05.patch, YARN-3051.Reader_API.patch, 
> YARN-3051.Reader_API_1.patch, YARN-3051.Reader_API_2.patch, 
> YARN-3051.Reader_API_3.patch, YARN-3051.Reader_API_4.patch, 
> YARN-3051.wip.02.YARN-2928.patch, YARN-3051.wip.patch, YARN-3051_temp.patch
>
>
> Per design in YARN-2928, create backing storage read interface that can be 
> implemented by multiple backing storage implementations.





[jira] [Updated] (YARN-3846) RM Web UI queue fileter not working

2015-06-28 Thread Mohammad Shahid Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Shahid Khan updated YARN-3846:
---
Attachment: scheduler queue issue.png
scheduler queue positive behavior.png

> RM Web UI queue fileter not working
> ---
>
> Key: YARN-3846
> URL: https://issues.apache.org/jira/browse/YARN-3846
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.0
>Reporter: Mohammad Shahid Khan
>Assignee: Mohammad Shahid Khan
> Attachments: scheduler queue issue.png, scheduler queue positive 
> behavior.png
>
>
> Click on root queue will show the complete applications
> But click on the leaf queue is not filtering the application related to the 
> the clicked queue.
> The regular expression seems to be wrong 
> {code}
> q = '^' + q.substr(q.lastIndexOf(':') + 2) + '$';",
> {code}
> For example:
> 1. Suppose the queue name is b.
> Then the above expression will substr at index 1, because
> q.lastIndexOf(':') = -1 and
> -1 + 2 = 1,
> which is wrong; it should look at index 0.
> 2. If the queue name is ab.x,
> then it will be parsed to .x,
> but it should be x.
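The index arithmetic described above can be reproduced with plain string operations. Below is a minimal sketch in Java (the actual web UI code is JavaScript; {{buildPattern}} mirrors the reported expression and {{buildPatternFixed}} is one possible correction — both names are hypothetical, not YARN code):

```java
public class QueueFilterRegex {
    // Mirrors the reported expression: substring from lastIndexOf(':') + 2.
    // For a queue with no ':' in it, lastIndexOf returns -1, so the start
    // index becomes 1 and the first character of the name is lost.
    static String buildPattern(String q) {
        return "^" + q.substring(q.lastIndexOf(':') + 2) + "$";
    }

    // One possible correction: take everything after the last '.' (the
    // queue-name separator); lastIndexOf returns -1 when absent, so +1
    // yields index 0 and the whole name is kept.
    static String buildPatternFixed(String q) {
        return "^" + q.substring(q.lastIndexOf('.') + 1) + "$";
    }

    public static void main(String[] args) {
        // Queue "b": the reported logic drops the name entirely.
        System.out.println(buildPattern("b"));         // "^$"  (name lost)
        System.out.println(buildPatternFixed("b"));    // "^b$"
        System.out.println(buildPatternFixed("ab.x")); // "^x$"
    }
}
```

(JavaScript's {{substr(start)}} with a positive start behaves like Java's {{substring(start)}}, so the off-by-one applies identically in both languages.)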





[jira] [Updated] (YARN-3840) Resource Manager web ui issue when sorting application by id (with application having id > 9999)

2015-06-28 Thread Mohammad Shahid Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Shahid Khan updated YARN-3840:
---
Labels: PatchAvailable  (was: )

> Resource Manager web ui issue when sorting application by id (with 
> application having id > 9999)
> 
>
> Key: YARN-3840
> URL: https://issues.apache.org/jira/browse/YARN-3840
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
> Environment: Centos 6.6
> Java 1.7
>Reporter: LINTE
>  Labels: PatchAvailable
> Attachments: RMApps.png, YARN-3840.patch
>
>
> On the WEBUI, the global main view page : 
> http://resourcemanager:8088/cluster/apps doesn't display applications over 
> 9999.
> With command line it works (# yarn application -list).
> Regards,
> Alexandre





[jira] [Updated] (YARN-3840) Resource Manager web ui issue when sorting application by id (with application having id > 9999)

2015-06-28 Thread Mohammad Shahid Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Shahid Khan updated YARN-3840:
---
Attachment: YARN-3840.patch

Please review the attached patch


> Resource Manager web ui issue when sorting application by id (with 
> application having id > 9999)
> 
>
> Key: YARN-3840
> URL: https://issues.apache.org/jira/browse/YARN-3840
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
> Environment: Centos 6.6
> Java 1.7
>Reporter: LINTE
> Attachments: RMApps.png, YARN-3840.patch
>
>
> On the WEBUI, the global main view page : 
> http://resourcemanager:8088/cluster/apps doesn't display applications over 
> 9999.
> With command line it works (# yarn application -list).
> Regards,
> Alexandre





[jira] [Commented] (YARN-2003) Support to process Job priority from Submission Context in AppAttemptAddedSchedulerEvent [RM side]

2015-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604757#comment-14604757
 ] 

Hadoop QA commented on YARN-2003:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 34s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   7m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  3s | The applied patch generated  
10 new checkstyle issues (total was 375, now 382). |
| {color:red}-1{color} | whitespace |   0m  6s | The patch has 18  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 14s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in 
hadoop-sls. |
| {color:green}+1{color} | yarn tests |  50m 55s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  91m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742395/0014-YARN-2003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / b543d1a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/8369/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/8369/artifact/patchprocess/whitespace.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8369/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8369/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8369/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8369/console |


This message was automatically generated.

> Support to process Job priority from Submission Context in 
> AppAttemptAddedSchedulerEvent [RM side]
> --
>
> Key: YARN-2003
> URL: https://issues.apache.org/jira/browse/YARN-2003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>  Labels: BB2015-05-TBR
> Attachments: 0001-YARN-2003.patch, 00010-YARN-2003.patch, 
> 0002-YARN-2003.patch, 0003-YARN-2003.patch, 0004-YARN-2003.patch, 
> 0005-YARN-2003.patch, 0006-YARN-2003.patch, 0007-YARN-2003.patch, 
> 0008-YARN-2003.patch, 0009-YARN-2003.patch, 0011-YARN-2003.patch, 
> 0012-YARN-2003.patch, 0013-YARN-2003.patch, 0014-YARN-2003.patch
>
>
> AppAttemptAddedSchedulerEvent should be able to receive the Job Priority from 
> Submission Context and store.
> Later this can be used by Scheduler.





[jira] [Commented] (YARN-3859) LeafQueue doesn't print user properly for application add

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604738#comment-14604738
 ] 

Hudson commented on YARN-3859:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2188 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2188/])
YARN-3859. LeafQueue doesn't print user properly for application add. (devaraj: 
rev b543d1a390a67e5e92fea67d3a2635058c29e9da)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt


> LeafQueue doesn't print user properly for application add
> -
>
> Key: YARN-3859
> URL: https://issues.apache.org/jira/browse/YARN-3859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.7.0
>Reporter: Devaraj K
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-3859.01.patch
>
>
> {code:xml}
> 2015-06-28 04:36:22,721 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Application added - appId: application_1435446241489_0003 user: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@e8fb7a8,
>  leaf-queue: default #user-pending-applications: 2 #user-active-applications: 
> 1 #queue-pending-applications: 2 #queue-active-applications: 1
> {code}
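The {{LeafQueue$User@e8fb7a8}} in the log line above is the default {{Object.toString()}} of the scheduler's internal User object; the fix is to log the user name string instead of the object itself. A minimal illustration with a hypothetical stand-in class (not the actual YARN code):

```java
public class UserLogDemo {
    // Simplified stand-in for LeafQueue's internal User bookkeeping class.
    static class User {
        final String name;
        User(String name) { this.name = name; }
        // No toString() override, so string concatenation falls back to
        // Object.toString(), producing output like "UserLogDemo$User@e8fb7a8".
    }

    public static void main(String[] args) {
        User user = new User("hadoop");
        // Broken: concatenating the object prints class@hashcode.
        String broken = "Application added - user: " + user;
        // Fixed: concatenate the user name field instead.
        String fixed  = "Application added - user: " + user.name;
        System.out.println(broken);
        System.out.println(fixed);
    }
}
```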





[jira] [Commented] (YARN-3859) LeafQueue doesn't print user properly for application add

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604722#comment-14604722
 ] 

Hudson commented on YARN-3859:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #240 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/240/])
YARN-3859. LeafQueue doesn't print user properly for application add. (devaraj: 
rev b543d1a390a67e5e92fea67d3a2635058c29e9da)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java


> LeafQueue doesn't print user properly for application add
> -
>
> Key: YARN-3859
> URL: https://issues.apache.org/jira/browse/YARN-3859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.7.0
>Reporter: Devaraj K
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-3859.01.patch
>
>
> {code:xml}
> 2015-06-28 04:36:22,721 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Application added - appId: application_1435446241489_0003 user: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@e8fb7a8,
>  leaf-queue: default #user-pending-applications: 2 #user-active-applications: 
> 1 #queue-pending-applications: 2 #queue-active-applications: 1
> {code}





[jira] [Updated] (YARN-2004) Priority scheduling support in Capacity scheduler

2015-06-28 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2004:
--
Attachment: 0008-YARN-2004.patch

Thank you [~eepayne] for sharing the comments.

bq. Can getApplicationPriority return null?
In CS addApplicationAttempt(), immediately after creating FicaSchedulerApp, I 
am setting the priority.
{code}
application.setCurrentAppAttempt(attempt);
attempt.setApplicationPriority(application.getPriority());
{code}

Also, from YARN-2003, we set the application priority in all cases while 
processing submitApplication. If the user didn't set a priority, the 
application will get the default priority from the queue. If no default 
priority is configured on the queue, then 0 (ZERO) is taken as the default 
priority.
Please let me know if this is fine.
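The fallback chain described in the comment above can be sketched as follows (hypothetical names, not the actual RM code):

```java
public class PriorityDefaults {
    // Hard default used when neither the user nor the queue supplies one.
    static final int DEFAULT_PRIORITY = 0;

    // Resolution order: submitted priority -> queue default -> 0.
    static int resolvePriority(Integer submitted, Integer queueDefault) {
        if (submitted != null) {
            return submitted;        // user-supplied priority wins
        }
        if (queueDefault != null) {
            return queueDefault;     // otherwise the queue's configured default
        }
        return DEFAULT_PRIORITY;     // otherwise 0, as described above
    }

    public static void main(String[] args) {
        System.out.println(resolvePriority(null, null)); // 0
        System.out.println(resolvePriority(null, 3));    // 3
        System.out.println(resolvePriority(7, 3));       // 7
    }
}
```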

> Priority scheduling support in Capacity scheduler
> -
>
> Key: YARN-2004
> URL: https://issues.apache.org/jira/browse/YARN-2004
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-2004.patch, 0002-YARN-2004.patch, 
> 0003-YARN-2004.patch, 0004-YARN-2004.patch, 0005-YARN-2004.patch, 
> 0006-YARN-2004.patch, 0007-YARN-2004.patch, 0008-YARN-2004.patch
>
>
> Based on the priority of the application, Capacity Scheduler should be able 
> to give preference to application while doing scheduling.
> Comparator applicationComparator can be changed as below:
> 
> 1. Check for application priority. If a priority is available, then return 
> the highest-priority job.
> 2. Otherwise continue with the existing logic, such as app ID comparison and 
> then timestamp comparison.
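The two-level ordering in the description above can be sketched with a composed comparator. This is a simplified illustration with a hypothetical {{AppInfo}} type, not the actual CapacityScheduler classes:

```java
import java.util.Comparator;

public class PriorityOrdering {
    // Minimal stand-in for the scheduler's per-application bookkeeping.
    static class AppInfo {
        final int priority; // higher value = higher priority
        final int appId;    // smaller id = submitted earlier
        AppInfo(int priority, int appId) {
            this.priority = priority;
            this.appId = appId;
        }
    }

    // 1. Higher priority first (negate so larger priorities sort earlier);
    // 2. fall back to ascending application id as the tie-breaker.
    static final Comparator<AppInfo> APPLICATION_COMPARATOR =
        Comparator.<AppInfo>comparingInt(a -> -a.priority)
                  .thenComparingInt(a -> a.appId);

    public static void main(String[] args) {
        AppInfo low = new AppInfo(0, 1);
        AppInfo high = new AppInfo(5, 2);
        // The high-priority app sorts ahead despite its larger app id.
        System.out.println(APPLICATION_COMPARATOR.compare(high, low) < 0); // true
    }
}
```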





[jira] [Updated] (YARN-2003) Support to process Job priority from Submission Context in AppAttemptAddedSchedulerEvent [RM side]

2015-06-28 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2003:
--
Attachment: 0014-YARN-2003.patch

Corrected one compilation issue.

> Support to process Job priority from Submission Context in 
> AppAttemptAddedSchedulerEvent [RM side]
> --
>
> Key: YARN-2003
> URL: https://issues.apache.org/jira/browse/YARN-2003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>  Labels: BB2015-05-TBR
> Attachments: 0001-YARN-2003.patch, 00010-YARN-2003.patch, 
> 0002-YARN-2003.patch, 0003-YARN-2003.patch, 0004-YARN-2003.patch, 
> 0005-YARN-2003.patch, 0006-YARN-2003.patch, 0007-YARN-2003.patch, 
> 0008-YARN-2003.patch, 0009-YARN-2003.patch, 0011-YARN-2003.patch, 
> 0012-YARN-2003.patch, 0013-YARN-2003.patch, 0014-YARN-2003.patch
>
>
> AppAttemptAddedSchedulerEvent should be able to receive the Job Priority from 
> Submission Context and store.
> Later this can be used by Scheduler.





[jira] [Commented] (YARN-3859) LeafQueue doesn't print user properly for application add

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604700#comment-14604700
 ] 

Hudson commented on YARN-3859:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2170 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2170/])
YARN-3859. LeafQueue doesn't print user properly for application add. (devaraj: 
rev b543d1a390a67e5e92fea67d3a2635058c29e9da)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java


> LeafQueue doesn't print user properly for application add
> -
>
> Key: YARN-3859
> URL: https://issues.apache.org/jira/browse/YARN-3859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.7.0
>Reporter: Devaraj K
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-3859.01.patch
>
>
> {code:xml}
> 2015-06-28 04:36:22,721 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Application added - appId: application_1435446241489_0003 user: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@e8fb7a8,
>  leaf-queue: default #user-pending-applications: 2 #user-active-applications: 
> 1 #queue-pending-applications: 2 #queue-active-applications: 1
> {code}





[jira] [Commented] (YARN-3859) LeafQueue doesn't print user properly for application add

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604696#comment-14604696
 ] 

Hudson commented on YARN-3859:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #231 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/231/])
YARN-3859. LeafQueue doesn't print user properly for application add. (devaraj: 
rev b543d1a390a67e5e92fea67d3a2635058c29e9da)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt


> LeafQueue doesn't print user properly for application add
> -
>
> Key: YARN-3859
> URL: https://issues.apache.org/jira/browse/YARN-3859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.7.0
>Reporter: Devaraj K
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-3859.01.patch
>
>
> {code:xml}
> 2015-06-28 04:36:22,721 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Application added - appId: application_1435446241489_0003 user: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@e8fb7a8,
>  leaf-queue: default #user-pending-applications: 2 #user-active-applications: 
> 1 #queue-pending-applications: 2 #queue-active-applications: 1
> {code}





[jira] [Commented] (YARN-2003) Support to process Job priority from Submission Context in AppAttemptAddedSchedulerEvent [RM side]

2015-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604689#comment-14604689
 ] 

Hadoop QA commented on YARN-2003:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  9s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:red}-1{color} | javac |   8m  1s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742391/0013-YARN-2003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / b543d1a |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8368/console |



> Support to process Job priority from Submission Context in 
> AppAttemptAddedSchedulerEvent [RM side]
> --
>
> Key: YARN-2003
> URL: https://issues.apache.org/jira/browse/YARN-2003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>  Labels: BB2015-05-TBR
> Attachments: 0001-YARN-2003.patch, 00010-YARN-2003.patch, 
> 0002-YARN-2003.patch, 0003-YARN-2003.patch, 0004-YARN-2003.patch, 
> 0005-YARN-2003.patch, 0006-YARN-2003.patch, 0007-YARN-2003.patch, 
> 0008-YARN-2003.patch, 0009-YARN-2003.patch, 0011-YARN-2003.patch, 
> 0012-YARN-2003.patch, 0013-YARN-2003.patch
>
>
> AppAttemptAddedSchedulerEvent should be able to receive the Job Priority from 
> Submission Context and store.
> Later this can be used by Scheduler.





[jira] [Updated] (YARN-2003) Support to process Job priority from Submission Context in AppAttemptAddedSchedulerEvent [RM side]

2015-06-28 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2003:
--
Attachment: 0013-YARN-2003.patch

> Support to process Job priority from Submission Context in 
> AppAttemptAddedSchedulerEvent [RM side]
> --
>
> Key: YARN-2003
> URL: https://issues.apache.org/jira/browse/YARN-2003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>  Labels: BB2015-05-TBR
> Attachments: 0001-YARN-2003.patch, 00010-YARN-2003.patch, 
> 0002-YARN-2003.patch, 0003-YARN-2003.patch, 0004-YARN-2003.patch, 
> 0005-YARN-2003.patch, 0006-YARN-2003.patch, 0007-YARN-2003.patch, 
> 0008-YARN-2003.patch, 0009-YARN-2003.patch, 0011-YARN-2003.patch, 
> 0012-YARN-2003.patch, 0013-YARN-2003.patch
>
>
> AppAttemptAddedSchedulerEvent should be able to receive the Job Priority from 
> Submission Context and store.
> Later this can be used by Scheduler.





[jira] [Updated] (YARN-2003) Support to process Job priority from Submission Context in AppAttemptAddedSchedulerEvent [RM side]

2015-06-28 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2003:
--
Attachment: (was: 0013-YARN-2003.patch)

> Support to process Job priority from Submission Context in 
> AppAttemptAddedSchedulerEvent [RM side]
> --
>
> Key: YARN-2003
> URL: https://issues.apache.org/jira/browse/YARN-2003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>  Labels: BB2015-05-TBR
> Attachments: 0001-YARN-2003.patch, 00010-YARN-2003.patch, 
> 0002-YARN-2003.patch, 0003-YARN-2003.patch, 0004-YARN-2003.patch, 
> 0005-YARN-2003.patch, 0006-YARN-2003.patch, 0007-YARN-2003.patch, 
> 0008-YARN-2003.patch, 0009-YARN-2003.patch, 0011-YARN-2003.patch, 
> 0012-YARN-2003.patch
>
>
> AppAttemptAddedSchedulerEvent should be able to receive the Job Priority from 
> Submission Context and store.
> Later this can be used by Scheduler.





[jira] [Updated] (YARN-2003) Support to process Job priority from Submission Context in AppAttemptAddedSchedulerEvent [RM side]

2015-06-28 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2003:
--
Attachment: 0013-YARN-2003.patch

Thank you [~leftnoteasy] for the comments. Uploading a patch addressing the 
same.

> Support to process Job priority from Submission Context in 
> AppAttemptAddedSchedulerEvent [RM side]
> --
>
> Key: YARN-2003
> URL: https://issues.apache.org/jira/browse/YARN-2003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>  Labels: BB2015-05-TBR
> Attachments: 0001-YARN-2003.patch, 00010-YARN-2003.patch, 
> 0002-YARN-2003.patch, 0003-YARN-2003.patch, 0004-YARN-2003.patch, 
> 0005-YARN-2003.patch, 0006-YARN-2003.patch, 0007-YARN-2003.patch, 
> 0008-YARN-2003.patch, 0009-YARN-2003.patch, 0011-YARN-2003.patch, 
> 0012-YARN-2003.patch, 0013-YARN-2003.patch
>
>
> AppAttemptAddedSchedulerEvent should be able to receive the Job Priority from 
> Submission Context and store.
> Later this can be used by Scheduler.





[jira] [Commented] (YARN-3859) LeafQueue doesn't print user properly for application add

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604630#comment-14604630
 ] 

Hudson commented on YARN-3859:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #972 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/972/])
YARN-3859. LeafQueue doesn't print user properly for application add. (devaraj: 
rev b543d1a390a67e5e92fea67d3a2635058c29e9da)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt


> LeafQueue doesn't print user properly for application add
> -
>
> Key: YARN-3859
> URL: https://issues.apache.org/jira/browse/YARN-3859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.7.0
>Reporter: Devaraj K
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-3859.01.patch
>
>
> {code:xml}
> 2015-06-28 04:36:22,721 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Application added - appId: application_1435446241489_0003 user: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@e8fb7a8,
>  leaf-queue: default #user-pending-applications: 2 #user-active-applications: 
> 1 #queue-pending-applications: 2 #queue-active-applications: 1
> {code}





[jira] [Commented] (YARN-3859) LeafQueue doesn't print user properly for application add

2015-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14604628#comment-14604628
 ] 

Hudson commented on YARN-3859:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #242 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/242/])
YARN-3859. LeafQueue doesn't print user properly for application add. (devaraj: 
rev b543d1a390a67e5e92fea67d3a2635058c29e9da)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java


> LeafQueue doesn't print user properly for application add
> -
>
> Key: YARN-3859
> URL: https://issues.apache.org/jira/browse/YARN-3859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.7.0
>Reporter: Devaraj K
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-3859.01.patch
>
>
> {code:xml}
> 2015-06-28 04:36:22,721 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Application added - appId: application_1435446241489_0003 user: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@e8fb7a8,
>  leaf-queue: default #user-pending-applications: 2 #user-active-applications: 
> 1 #queue-pending-applications: 2 #queue-active-applications: 1
> {code}


