[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515597#comment-15515597
 ] 

Jian He commented on YARN-5610:
---

bq. The reason these attributes are at the Application level as well is because 
there will be simple applications which will not have any components.
OK, sounds good to me.

Any update on the 2nd set of comments?

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API.






[jira] [Commented] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515549#comment-15515549
 ] 

Varun Saxena commented on YARN-5539:


+1
Will commit it shortly.

> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-5539.patch
>
>
> AM fails with the following exception
> {code}
> FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:247)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:345)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishApplicationAttemptEvent(ApplicationMaster.java:1166)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:567)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:170)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
>   at 
> org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory.getHttpURLConnection(TimelineClientImpl.java:472)
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler._inv

[jira] [Updated] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3142:
---
Attachment: YARN-3142.03.patch

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch, YARN-3142.02.patch, 
> YARN-3142.03.patch
>
>







[jira] [Comment Edited] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-09-22 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515481#comment-15515481
 ] 

Rajesh Balamohan edited comment on YARN-5551 at 9/23/16 5:43 AM:
-

Attaching the .2 version, which takes "anonymous" pages into account.


was (Author: rajesh.balamohan):
Attaching .2 version which takes into account "anonymous".  

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch, 
> YARN-5551.branch-2.002.patch
>
>
> Currently, deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from the getSmapBasedRssMemorySize() 
> computation.
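
As a minimal sketch of the proposed exclusion (names here are illustrative, not 
the actual ProcfsBasedProcessTree code), the parser only needs to recognize the 
" (deleted)" suffix on a mapping's header line and skip that mapping when 
summing:

{code}
// Illustrative only: a mapping header line from /proc/<pid>/smaps ends
// with " (deleted)" when the backing file has been unlinked; such
// mappings would be skipped when accumulating smaps-based RSS.
private static boolean isDeletedMapping(String mappingHeaderLine) {
  return mappingHeaderLine.trim().endsWith("(deleted)");
}

// ...inside the accumulation loop (sketch):
// if (isDeletedMapping(header)) { continue; }  // leave it out of the total
{code}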






[jira] [Updated] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-09-22 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated YARN-5551:
---
Attachment: YARN-5551.branch-2.002.patch

Attaching the .2 version, which takes "anonymous" into account.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch, 
> YARN-5551.branch-2.002.patch
>
>
> Currently, deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from the getSmapBasedRssMemorySize() 
> computation.






[jira] [Updated] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5662:
--
Attachment: YARN-5662.2.patch

> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5662.1.patch, YARN-5662.2.patch
>
>
> Currently, if the vmem/pmem check is not enabled, ContainerMonitor does not 
> run.  In certain cases, ContainerMonitor still needs to run to monitor things 
> like container-metrics. 
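
For illustration, the option could be a boolean NodeManager property along the 
lines below; the key name is an assumption, and the real one is whatever the 
attached patch introduces:

{code}
yarn.nodemanager.container-monitor.enabled=true
{code}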






[jira] [Commented] (YARN-5400) Light cleanup in ZKRMStateStore

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515405#comment-15515405
 ] 

Hadoop QA commented on YARN-5400:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 52 unchanged - 10 fixed = 52 total (was 62) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 935 unchanged - 6 fixed = 935 total (was 941) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 39s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829980/YARN-5400.002.patch |
| JIRA Issue | YARN-5400 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ccb367e53a6f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d85d9b2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13196/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13196/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Light cleanup in ZKRMStateStore
> ---
>
> Key: YARN-5400
>   

[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should be lower.

2016-09-22 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515334#comment-15515334
 ] 

Naganarasimha G R commented on YARN-4464:
-

Thanks [~templedf],
+1, Latest patch LGTM.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should be lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature:
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is far too large. We need to lower it or add a notice on 
> the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].
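
For instance, a cluster in this situation could cap the number of completed 
applications kept in the state store explicitly; 1000 below is only an 
illustrative value, not a recommendation from this issue:

{code}
yarn.resourcemanager.state-store.max-completed-applications=1000
{code}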






[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515328#comment-15515328
 ] 

Daniel Templeton commented on YARN-5388:


[~sidharta-s], [~vvasudev], any chance you could give me a review?

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.branch-2.001.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.
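
The gist of the wildcard handling that has to be carried over looks roughly 
like the standalone sketch below; the method name and shape are assumptions 
based on the YARN-4958/YARN-5373 description, not the actual patch:

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class WildcardSketch {
  // Expand a classpath entry ending in "/*" into the jars it matches;
  // any other entry is returned unchanged.
  static List<String> expandWildcard(String entry) {
    if (!entry.endsWith("/*")) {
      return Collections.singletonList(entry);
    }
    File dir = new File(entry.substring(0, entry.length() - 2));
    File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
    List<String> expanded = new ArrayList<>();
    if (jars != null) {
      for (File jar : jars) {
        expanded.add(jar.getPath());
      }
    }
    return expanded;
  }
}
{code}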






[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515276#comment-15515276
 ] 

Hadoop QA commented on YARN-4907:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 39s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829967/YARN-4907.001.patch |
| JIRA Issue | YARN-4907 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6e2bc1e4eda5 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d85d9b2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13195/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13195/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.
> 2. Some {{waitForState}} don't have a timeout, so they can wait forever.
> 3. Some {{waitForState}} use {{LOG.info}} and others use {{System.out.println}} 
> to print messages.

[jira] [Updated] (YARN-5400) Light cleanup in ZKRMStateStore

2016-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5400:
---
Attachment: YARN-5400.002.patch

Here's a patch to resolve the javadoc issues and a couple extra.

> Light cleanup in ZKRMStateStore
> ---
>
> Key: YARN-5400
> URL: https://issues.apache.org/jira/browse/YARN-5400
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-5400.001.patch, YARN-5400.002.patch
>
>
> {{ZKRMStateStore}} contains a plethora of whitespace issues as well as some 
> icky bits, like unused variables.  This JIRA is to clean that up.  It should 
> have no functional impact.






[jira] [Commented] (YARN-4767) Network issues can cause persistent RM UI outage

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515210#comment-15515210
 ] 

Daniel Templeton commented on YARN-4767:


The javadoc complaints are all about the fact that {{_}} is used as a variable 
name.  I'd really love to fix that, but it's way out of scope for this patch.

> Network issues can cause persistent RM UI outage
> 
>
> Key: YARN-4767
> URL: https://issues.apache.org/jira/browse/YARN-4767
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4767.001.patch, YARN-4767.002.patch, 
> YARN-4767.003.patch, YARN-4767.004.patch, YARN-4767.005.patch, 
> YARN-4767.006.patch, YARN-4767.007.patch, YARN-4767.008.patch, 
> YARN-4767.009.patch, YARN-4767.010.patch
>
>
> If a network issue causes an AM web app to resolve the RM proxy's address to 
> something other than what's listed in the allowed proxies list, the 
> AmIpFilter will 302 redirect the RM proxy's request back to the RM proxy.  
> The RM proxy will then consume all available handler threads connecting to 
> itself over and over, resulting in an outage of the web UI.
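
For context, the pass-through decision at the heart of the filter is roughly 
the sketch below (names assumed, not the actual AmIpFilter source); the loop 
occurs when the address check wrongly fails for the proxy itself, so the 
proxy's own request falls into the redirect branch:

{code}
// Sketch of the filter's decision:
if (getProxyAddresses().contains(httpReq.getRemoteAddr())) {
  chain.doFilter(req, resp);            // from the RM proxy: pass through
} else {
  httpResp.sendRedirect(redirectUrl);   // 302 back to the proxy; if the
                                        // check misfires for the proxy,
                                        // this loops until handlers run out
}
{code}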






[jira] [Commented] (YARN-4743) ResourceManager crash because TimSort

2016-09-22 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515206#comment-15515206
 ] 

stefanlee commented on YARN-4743:
-

Thanks, my Hadoop version is 2.4.0. I just found that the continuousScheduling 
thread's Collections.sort call has been removed in hadoop-3.0.0; I will review 
the code carefully.

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.4
>Reporter: Zephyr Guo
>Assignee: Yufei Gu
> Attachments: YARN-4743-cdh5.4.7.patch
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this issue was found in 2.6.0-cdh5.4.7.
> I think the cause is that we modify {{Resource}} while we are sorting 
> {{runnableApps}}.
> {code:title=FSLeafQueue.java}
> Comparator comparator = policy.getComparator();
> writeLock.lock();
> try {
>   Collections.sort(runnableApps, comparator);
> } finally {
>   writeLock.unlock();
> }
> readLock.lock();
> {code}
> {code:title=FairShareComparator}
> public int compare(Schedulable s1, Schedulable s2) {
> ..
>   s1.getResourceUsage(), minShare1);
>   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s2.getResourceUsage(), minShare2);
>   minShareRatio1 = (double) s1.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
> ONE).getMemory();
>   minShareRatio2 = (double) s2.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
> ONE).getMemory();
> ..
> {code}
> {{getResourceUsage}} returns the current Resource, which is unstable: it can 
> change while the sort is in progress. 
> {code:title=FSAppAttempt.java}
> @Override
>   public Resource getResourceUsage() {
> // Here the getPreemptedResources() always return zero, except in
> // a preemption round
> return Resources.subtract(getCurrentConsumption(), 
> getPreemptedResources());
>   }
> {code}
> {code:title=SchedulerApplicationAttempt}
>  public Resource getCurrentConsumption() {
> return currentConsumption;
>   }
> // This method may modify current Resource.
> public synchronized void recoverContainer(RMContainer rmContainer) {
> ..
> Resources.addTo(currentConsumption, rmContainer.getContainer()
>   .getResource());
> ..
>   }
> {code}
> I suggest using a stable Resource snapshot in the comparator.
> Is there something wrong with my reasoning?
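
One way to get a stable ordering, sketched under the assumption that comparing 
frozen usage snapshots is acceptable ({{compareFrozen}} is an assumed helper; 
this is an illustration, not the committed fix):

{code}
// Freeze each schedulable's usage once, then sort on the frozen values
// so the ordering keys cannot change while TimSort runs.
final Map<Schedulable, Resource> frozen =
    new IdentityHashMap<Schedulable, Resource>();
for (Schedulable app : runnableApps) {
  frozen.put(app, Resources.clone(app.getResourceUsage()));
}
Collections.sort(runnableApps, new Comparator<Schedulable>() {
  @Override
  public int compare(Schedulable s1, Schedulable s2) {
    // compareFrozen: the policy comparator's logic applied to the
    // frozen resources instead of live getResourceUsage() values
    return compareFrozen(frozen.get(s1), frozen.get(s2));
  }
});
{code}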






[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515059#comment-15515059
 ] 

Hudson commented on YARN-3692:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10476 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10476/])
YARN-3692. Allow REST API to set a user generated message when killing 
(naganarasimha_gr: rev d0372dc613136910160e9d42bd5eaa0d4bde2356)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/KillApplicationRequestPBImpl.java


> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.1.patch, 
> 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}
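
Under the proposal above, killing an application with a message could look like 
the client sketch below; rm-host, the application id, and the diagnosticMessage 
field name are all placeholders taken from this description, not a confirmed 
API:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KillWithMessage {
  public static void main(String[] args) throws Exception {
    // PUT the KILLED state to the RM's application state endpoint.
    URL url = new URL(
        "http://rm-host:8088/ws/v1/cluster/apps/application_1474500000000_0001/state");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    // Field name follows the proposal above; the committed API may differ.
    String body = "{\"state\":\"KILLED\","
        + "\"diagnosticMessage\":\"some message added by admin/workflow\"}";
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes("UTF-8"));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}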






[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-09-22 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515024#comment-15515024
 ] 

Weiwei Yang commented on YARN-5145:
---

Hello [~leftnoteasy]

If all configurations are set in yarn-site.xml, it is possible to retrieve them 
via http://RM:8088/conf. I am currently working on a JIRA to improve the way of 
getting a configuration property via a REST call, which would help in this 
case; see HADOOP-13628 for details. For example, to load a property, the UI 
code would simply call http://RM:8088/conf?name=yarn.property.name, which will 
return a JSON-format response (when the Accept header is set to JSON).
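
A sketch of what such a lookup could look like from the UI side; the ?name= 
filter is the improvement proposed in HADOOP-13628 (not available yet), and 
rm-host and yarn.property.name are placeholders:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FetchConfProperty {
  public static void main(String[] args) throws Exception {
    // Ask the RM's /conf endpoint for a single property as JSON.
    URL url = new URL("http://rm-host:8088/conf?name=yarn.property.name");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}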

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: YARN-5145-YARN-3368.01.patch
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.






[jira] [Comment Edited] (YARN-4907) Make all MockRM#waitForState consistent.

2016-09-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515008#comment-15515008
 ] 

Yufei Gu edited comment on YARN-4907 at 9/23/16 1:14 AM:
-

Some {{waitForState}} return a boolean since the callers need to assert on 
different values. Some {{waitForState}} don't, since the callers expect them to 
assert that the state is reached before the timeout. So it makes sense to have 
both. I changed all {{System.out.println}} to {{LOG.info}} and made sure every 
wait function has a timeout. 


was (Author: yufeigu):
Some {{waitForState}} return a boolean since the callers need to assert on 
different values. Some {{waitForState}} don't, since the callers expect them to 
assert that the state is reached before the timeout. So it makes sense to have 
both. I changed all {{System.out.println}} to {{LOG.info}} and made sure every 
wait has a timeout. 

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.
> 2. Some {{waitForState}} don't have a timeout, so they can wait forever.
> 3. Some {{waitForState}} use {{LOG.info}} and others use {{System.out.println}} 
> to print messages.
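
For the timeout-bounded variant, the shape is roughly the sketch below 
(illustrative only, not MockRM's actual code): poll with a hard deadline, log 
via LOG.info, and return a boolean so callers can assert either outcome:

{code}
// Poll until the app reaches the expected state or the timeout elapses.
boolean waitForState(RMApp app, RMAppState expected, long timeoutMs)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (app.getState() != expected
      && System.currentTimeMillis() < deadline) {
    LOG.info("App state is " + app.getState() + "; waiting for " + expected);
    Thread.sleep(100);
  }
  return app.getState() == expected;
}
{code}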






[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-09-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515008#comment-15515008
 ] 

Yufei Gu commented on YARN-4907:


Some {{waitForState}} return a boolean since the callers need to assert on 
different values. Some {{waitForState}} don't, since the callers expect them to 
assert that the state is reached before the timeout. So it makes sense to have 
both. I changed all {{System.out.println}} to {{LOG.info}} and made sure every 
wait has a timeout. 

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.
> 2. Some {{waitForState}} don't have a timeout, so they can wait forever.
> 3. Some {{waitForState}} use {{LOG.info}} and others use {{System.out.println}} 
> to print messages.






[jira] [Updated] (YARN-4907) Make all MockRM#waitForState consistent.

2016-09-22 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4907:
---
Attachment: YARN-4907.001.patch

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.
> 2. Some {{waitForState}} don't have a timeout, so they can wait forever.
> 3. Some {{waitForState}} use {{LOG.info}} and others use {{System.out.println}} 
> to print messages.






[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-22 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515000#comment-15515000
 ] 

Naganarasimha G R commented on YARN-3692:
-

[~rohithsharma],
The patch doesn't seem to apply on branch-2:
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-yarn-server-resourcemanager: Compilation failure: Compilation 
failure:
[ERROR] 
/opt/git/commit/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java:[1162,23]
 local variable diagnostic is accessed from within inner class; needs to be 
declared final
[ERROR] 
/opt/git/commit/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java:[1163,40]
 local variable diagnostic is accessed from within inner class; needs to be 
declared final
{code}

Also, shall I commit this for 2.8?

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.1.patch, 
> 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}






[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Xabriel J Collazo Mojica (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514903#comment-15514903
 ] 

Xabriel J Collazo Mojica commented on YARN-3877:


Folks, I see that the {{InterruptedException}} is being wrapped by a 
{{YarnException}}. This makes it cumbersome for the caller to interpret whether 
the Thread was interrupted. Can't we just rethrow the original 
{{InterruptedException}}?

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch, YARN-3877.04.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its sleep() calls is logged and 
> swallowed.
> This makes it hard to interrupt the thread during shutdown. Really, it should 
> throw some form of exception and let the caller deal with it.
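
The alternative being discussed would restore the interrupt flag and surface 
the exception instead of swallowing it; a minimal sketch (the interval name is 
assumed, and this is not the current YarnClientImpl code):

{code}
try {
  Thread.sleep(submitPollIntervalMillis);
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();  // preserve the interrupt status
  throw e;                             // let the caller handle shutdown
}
{code}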






[jira] [Commented] (YARN-4973) YarnWebParams next.fresh.interval should be next.refresh.interval

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514885#comment-15514885
 ] 

Hudson commented on YARN-4973:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10475 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10475/])
YARN-4973. YarnWebParams next.fresh.interval should be (rkanter: rev 
5ffd4b7c1e1f3168483c708c7ed307a565389eb2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/YarnWebParams.java


> YarnWebParams next.fresh.interval should be next.refresh.interval
> -
>
> Key: YARN-4973
> URL: https://issues.apache.org/jira/browse/YARN-4973
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4973.001.patch
>
>







[jira] [Updated] (YARN-5324) Stateless Federation router policies implementation

2016-09-22 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5324:
-
Summary: Stateless Federation router policies implementation  (was: 
Stateless router policies implementation)

> Stateless Federation router policies implementation
> ---
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: YARN-2915
>
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324-YARN-2915.14.patch, 
> YARN-5324-YARN-2915.15.patch, YARN-5324-YARN-2915.16.patch, 
> YARN-5324.01.patch, YARN-5324.02.patch, YARN-5324.03.patch, 
> YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).
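
As a generic illustration of such a stateless choice (not the YARN-5324 policy 
API), a weighted random pick carries nothing between calls:

{code}
import java.util.Random;

public class WeightedRandomSketch {
  // Pick an index with probability proportional to its weight; no state
  // is kept across invocations, which is what makes the policy "stateless".
  static int pick(double[] weights, Random rnd) {
    double total = 0;
    for (double w : weights) {
      total += w;
    }
    double r = rnd.nextDouble() * total;
    for (int i = 0; i < weights.length; i++) {
      r -= weights[i];
      if (r <= 0) {
        return i;
      }
    }
    return weights.length - 1; // guard against floating-point rounding
  }
}
{code}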






[jira] [Commented] (YARN-4973) YarnWebParams next.fresh.interval should be next.refresh.interval

2016-09-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514824#comment-15514824
 ] 

Robert Kanter commented on YARN-4973:
-

+1

> YarnWebParams next.fresh.interval should be next.refresh.interval
> -
>
> Key: YARN-4973
> URL: https://issues.apache.org/jira/browse/YARN-4973
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-4973.001.patch
>
>







[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514795#comment-15514795
 ] 

Hadoop QA commented on YARN-5659:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829946/YARN-5659.04.patch |
| JIRA Issue | YARN-5659 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ca76e6778e93 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4fc632a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13194/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13194/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it e.g.
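
For reference, the "standard ctors" approach could be as simple as the sketch 
below, built on java.net.URI's multi-argument constructor; this is an 
assumption about the intended fix, not the patch itself:

{code}
// url is an org.apache.hadoop.yarn.api.records.URL
java.net.URI uri = new java.net.URI(url.getScheme(), url.getUserInfo(),
    url.getHost(), url.getPort(), url.getFile(), null, null);
return new org.apache.hadoop.fs.Path(uri);
{code}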

[jira] [Commented] (YARN-5400) Light cleanup in ZKRMStateStore

2016-09-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514765#comment-15514765
 ] 

Robert Kanter commented on YARN-5400:
-

Test failure appears to be YARN-5057.  [~templedf], can you look at the Javadoc?

> Light cleanup in ZKRMStateStore
> ---
>
> Key: YARN-5400
> URL: https://issues.apache.org/jira/browse/YARN-5400
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-5400.001.patch
>
>
> {{ZKRMStateStore}} contains a plethora of whitespace issues as well as some 
> icky bits, like unused variables.  This JIRA is to clean that up.  It should 
> have no functional impact.






[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514760#comment-15514760
 ] 

Hadoop QA commented on YARN-5324:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
40s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829947/YARN-5324-YARN-2915.16.patch
 |
| JIRA Issue | YARN-5324 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c81c4d4dad5 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 9abc7da |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13193/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13193/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13193/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN

[jira] [Commented] (YARN-5400) Light cleanup in ZKRMStateStore

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514757#comment-15514757
 ] 

Hadoop QA commented on YARN-5400:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 54 unchanged - 8 fixed = 54 total (was 62) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 935 unchanged - 6 fixed = 937 total (was 941) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 29s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818689/YARN-5400.001.patch |
| JIRA Issue | YARN-5400 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cb7231fd926b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 40acace |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13192/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13192/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/131

[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-22 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514704#comment-15514704
 ] 

Carlo Curino commented on YARN-5324:


[~subru] thanks for reviewing. I addressed the nits.

> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324-YARN-2915.14.patch, 
> YARN-5324-YARN-2915.15.patch, YARN-5324-YARN-2915.16.patch, 
> YARN-5324.01.patch, YARN-5324.02.patch, YARN-5324.03.patch, 
> YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5324) Stateless router policies implementation

2016-09-22 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5324:
---
Attachment: YARN-5324-YARN-2915.16.patch

> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324-YARN-2915.14.patch, 
> YARN-5324-YARN-2915.15.patch, YARN-5324-YARN-2915.16.patch, 
> YARN-5324.01.patch, YARN-5324.02.patch, YARN-5324.03.patch, 
> YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Attachment: YARN-5659.04.patch

The patch without a spurious change.

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 
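
A minimal sketch of the standard-constructor approach (a hypothetical helper, 
not the actual patch; it only assumes the java.net.URI multi-argument 
constructor and org.apache.hadoop.fs.Path):

{code}
// Sketch only: build a Path via the URI constructor instead of string
// concatenation. Note that null, not "", must be passed for an absent
// scheme or host.
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.fs.Path;

public class YarnUrlSketch {
  public static Path pathFromParts(String scheme, String host, int port,
      String file) throws URISyntaxException {
    String s = (scheme == null || scheme.isEmpty()) ? null : scheme;
    // URI(scheme, userInfo, host, port, path, query, fragment) handles the
    // escaping and normalization that manual string building gets wrong.
    URI uri = new URI(s, null, host, port, file, null, null);
    return new Path(uri);
  }

  public static void main(String[] args) throws URISyntaxException {
    System.out.println(pathFromParts(null, null, -1, "/tmp/foo"));     // /tmp/foo
    System.out.println(pathFromParts("hdfs", "nn", 8020, "/tmp/foo")); // hdfs://nn:8020/tmp/foo
  }
}
{code}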



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-22 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514660#comment-15514660
 ] 

Sergey Shelukhin commented on YARN-5659:


Apparently editing patches directly is not a good idea...

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5324) Stateless router policies implementation

2016-09-22 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514674#comment-15514674
 ] 

Subru Krishnan commented on YARN-5324:
--

+1 on the latest patch, thanks [~curino] for addressing all my comments.

I have a couple of minor nits (a generic slf4j sketch follows the list):
  * We should use slf4j for logging in *WeightedPolicyInfo*.
  * {{bb.duplicate}} is redundant in *WeightedPolicyInfo*.
  * I think we can move {{testNoSubClusters}} to 
*BaseFederationRouterPoliciesTest*.
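
For reference, a minimal slf4j pattern of the kind suggested above (class and 
method names are illustrative, not the actual *WeightedPolicyInfo* code):

{code}
// Generic slf4j usage sketch; WeightedPolicyInfoExample is a hypothetical
// stand-in for the class in question.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WeightedPolicyInfoExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(WeightedPolicyInfoExample.class);

  void parsePolicy(byte[] data) {
    // Parameterized messages avoid string concatenation when the log level
    // is disabled.
    LOG.debug("Parsing policy info of {} bytes", data.length);
  }
}
{code}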

> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324-YARN-2915.14.patch, 
> YARN-5324-YARN-2915.15.patch, YARN-5324.01.patch, YARN-5324.02.patch, 
> YARN-5324.03.patch, YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514649#comment-15514649
 ] 

Hadoop QA commented on YARN-3142:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 4 unchanged - 1 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 2s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829932/YARN-3142.02.patch |
| JIRA Issue | YARN-3142 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b478272df545 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 40acace |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13191/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13191/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13191/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Improve locks in AppSchedulingInfo
> -

[jira] [Commented] (YARN-5400) Light cleanup in ZKRMStateStore

2016-09-22 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514632#comment-15514632
 ] 

Robert Kanter commented on YARN-5400:
-

+1 pending Jenkins

> Light cleanup in ZKRMStateStore
> ---
>
> Key: YARN-5400
> URL: https://issues.apache.org/jira/browse/YARN-5400
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-5400.001.patch
>
>
> {{ZKRMStateStore}} contains a plethora of whitespace issues as well as some 
> icky bits, like unused variables. This JIRA is to clean that up. It should 
> have no functional impact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4493) move queue can make app don't belong to any queue

2016-09-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514602#comment-15514602
 ] 

Yufei Gu commented on YARN-4493:


I will close this as invalid. Feel free to reopen it if you have any concerns.

> move queue can make app don't belong to any queue
> -
>
> Key: YARN-4493
> URL: https://issues.apache.org/jira/browse/YARN-4493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.4.0, 2.6.0, 2.7.1
>Reporter: jiangyu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-4493.001.patch, yarn-4493.patch.1
>
>
> When moving a running application to a different queue, the current 
> implementation doesn't check whether the app can run in the new queue before 
> removing it from the current queue. So if the destination queue is full, the 
> app will throw an exception and no longer belong to any queue.
> After that, the queue becomes orphaned and cannot schedule any resources. If 
> you kill the app, the removeApp method in FSLeafQueue will throw an 
> IllegalStateException of "Given app to remove app does not exist in queue ..."
> So I think we should check whether the destination queue can run the app 
> before removing it from the current queue.
> The patch is from our revision.
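
A minimal sketch of the proposed check-before-move ordering ({{Queue}} here is 
a hypothetical interface, not the FairScheduler's FSLeafQueue API):

{code}
// Hypothetical check-before-move sketch: validate the destination first, so
// a rejected move leaves the app attached to its source queue instead of
// leaving it with no queue at all.
import java.util.List;

class QueueMoveSketch {
  interface Queue {
    boolean canRunApp(String appId);   // e.g. maxRunningApps / resource caps
    List<String> runningApps();
  }

  static void moveApp(String appId, Queue src, Queue dst) {
    if (!dst.canRunApp(appId)) {
      // Reject up front; the app still belongs to src.
      throw new IllegalStateException("Destination cannot accept " + appId);
    }
    src.runningApps().remove(appId);
    dst.runningApps().add(appId);
  }
}
{code}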



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5104) TestNMProxy.testNMProxyRPCRetry failed

2016-09-22 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu resolved YARN-5104.

Resolution: Duplicate

> TestNMProxy.testNMProxyRPCRetry failed
> --
>
> Key: YARN-5104
> URL: https://issues.apache.org/jira/browse/YARN-5104
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: test
> Attachments: YARN-5104.001.patch
>
>
> TestNMProxy.testNMProxyRPCRetry throws an exception that is expected to be 
> caught and handled. 
> YARN-4916 did handle this exception:
> {code}
> java.net.BindException: Problem binding to [xxx] java.net.BindException: 
> Can't assign requested address; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> {code}
> But it can also throw 
> {code}
> java.io.IOException: Failed on local exception: java.net.SocketException: 
> Invalid argument; Host Details : local host is: "xxx"; destination host is: 
> "1234":0; "
> {code}
> and this failed YARN-4916 with the following message.
> {code}
> Error Message:
> null
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.TestNMProxy.testNMProxyRPCRetry(TestNMProxy.java:192)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5104) TestNMProxy.testNMProxyRPCRetry failed

2016-09-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514589#comment-15514589
 ] 

Yufei Gu commented on YARN-5104:


This has been fixed by HADOOP-11212. With HADOOP-11212, the SocketException 
will no longer be wrapped in an IOException and is thrown out as-is with an 
additional message. I will close this one soon.

> TestNMProxy.testNMProxyRPCRetry failed
> --
>
> Key: YARN-5104
> URL: https://issues.apache.org/jira/browse/YARN-5104
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: test
> Attachments: YARN-5104.001.patch
>
>
> TestNMProxy.testNMProxyRPCRetry throws an exception that is expected to be 
> caught and handled. 
> YARN-4916 did handle this exception:
> {code}
> java.net.BindException: Problem binding to [xxx] java.net.BindException: 
> Can't assign requested address; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> {code}
> But it can also throw 
> {code}
> java.io.IOException: Failed on local exception: java.net.SocketException: 
> Invalid argument; Host Details : local host is: "xxx"; destination host is: 
> "1234":0; "
> {code}
> and this failed YARN-4916 with the following message.
> {code}
> Error Message:
> null
> Stack Trace:
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.TestNMProxy.testNMProxyRPCRetry(TestNMProxy.java:192)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514558#comment-15514558
 ] 

Hadoop QA commented on YARN-5659:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 28s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 29s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829926/YARN-5659.04.patch |
| JIRA Issue | YARN-5659 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a5bd581463df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 40acace |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/13190/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/13190/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/13190/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13190/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13190/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13190/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ap

[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-22 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514480#comment-15514480
 ] 

Eric Payne commented on YARN-2009:
--

[~leftnoteasy], Thanks. I missed that.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514449#comment-15514449
 ] 

Varun Saxena commented on YARN-3142:


Updated the patch after addressing the review comments and checkstyle issues.

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch, YARN-3142.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3142:
---
Attachment: YARN-3142.02.patch

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch, YARN-3142.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514396#comment-15514396
 ] 

Varun Saxena edited comment on YARN-3142 at 9/22/16 8:30 PM:
-

Thanks [~leftnoteasy], [~sunilg] for reviews. 

bq. Lock is not required
Right. Will fix.

bq. You may need to synchronize placesBlacklistedByApp and call 
placesBlacklistedByApp.addAll(appInfo.getBlackList()) to make it consistent to 
other blacklist-related operations.
But we won't be using this until we set the current attempt in the scheduler, 
which happens after this. Also, this is a reference assignment, which is 
atomic. Thoughts?

bq. (Imaging someone replace the request in another thread before returning)
You mean remove the request from the resource request map? Well, the part 
about fetching from the resource request map, i.e. the call to 
getResourceRequest, is within locks. After that we just access the 
ResourceRequest instance. And the capability which we are returning here won't 
be changed even by another thread. However, this is not an immutable field, so 
we can probably guard it with a read lock just to be safe. Thoughts?


was (Author: varun_saxena):
Thanks [~leftnoteasy], [~sunilg] for reviews. 

bq. Lock is not required
Right. Will fix.

bq. You may need to synchronize placesBlacklistedByApp and call 
placesBlacklistedByApp.addAll(appInfo.getBlackList()) to make it consistent to 
other blacklist-related operations.
But we won't be using this until we set the current attempt in the scheduler, 
which happens after this. Also, this is a reference assignment, which is 
atomic. Thoughts?

bq. (Imaging someone replace the request in another thread before returning)
You mean remove the request from the resource request map. Well, the part 
about fetching from the resource request map, i.e. the call to 
getResourceRequest, is within locks. After that we just access the 
ResourceRequest instance. And the capability which we are returning here won't 
be changed even by another thread. However, this is not an immutable field, so 
we can probably guard it with a read lock just to be safe. Thoughts?

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514415#comment-15514415
 ] 

Wangda Tan commented on YARN-3142:
--

[~varun_saxena], 

bq. But we wont be using this till we set current attempt in scheduler which is 
after this. Also, this is a reference assignment which is atomic. Thoughts ?

Yeah, it should not be required.

bq. You mean remove the request from the resource request map ? Well the part 
about fetching from resource request map i.e. call to getResourceRequest is 
within locks. After that we just access ResourceRequest instance. And 
capability which we are returning here wont be changed even by another thread. 
However this is is not immutable field so we can probably guard it with a read 
lock just to be safe. Thoughts ?

Agreed, we can add a read lock here just to be safe.
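
A minimal illustration of that pattern (field and class names are 
hypothetical, not the actual AppSchedulingInfo code):

{code}
// Illustrative read-lock pattern for returning a field of a mutable object.
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

class ReadLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private ResourceRequest request;  // mutated only under the write lock

  Resource getCapabilitySafely() {
    lock.readLock().lock();
    try {
      // The read lock ensures we never observe a half-updated request.
      return request == null ? null : request.getCapability();
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}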

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514414#comment-15514414
 ] 

Hadoop QA commented on YARN-4464:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 38s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829912/YARN-4464.006.patch |
| JIRA Issue | YARN-4464 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f0ae1da3118f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 40acace |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13189/testReport/ |
| modules | C: hadoop-yarn-project/ha

[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514405#comment-15514405
 ] 

Wangda Tan commented on YARN-2009:
--

Hi [~eepayne],

bq. If Queue1 has 100 resources, and if user1 starts app1 at priority 1 that 
consumes the whole queue, won't user1's user-limit-resource be 100? Then, if 
user1 starts another app (app2) at priority 2, won't the above algorithm skip 
over app2 because user1 has already achieved its user-limit-resource?

Regarding your concern, I previously mentioned this in the pseudo code:

{code}
// initialize all values to 0
Map user-to-allocated;
{code}

Initially user-to-allocated is set to 0, and when we loop over the 
applications we visit app2 first, so app2's ideal-allocation becomes 100 and 
app1's ideal-allocation is updated to 0 (assuming app2 has >= 100 pending 
resources).
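
A rough sketch of that loop under the stated assumptions (names are 
illustrative, not the actual ProportionalCapacityPreemptionPolicy code):

{code}
// Hedged sketch: apps are visited in descending priority order, and each
// app's ideal allocation is capped by what its user still has left under
// the user limit.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class IdealAllocationSketch {
  static final class App {
    final String id, user; final long pending;
    App(String id, String user, long pending) {
      this.id = id; this.user = user; this.pending = pending;
    }
  }

  static Map<String, Long> computeIdeal(List<App> appsByDescPriority,
      long userLimit) {
    Map<String, Long> userToAllocated = new HashMap<>(); // all values start at 0
    Map<String, Long> ideal = new HashMap<>();
    for (App app : appsByDescPriority) {
      long used = userToAllocated.getOrDefault(app.user, 0L);
      long grant = Math.min(app.pending, Math.max(0L, userLimit - used));
      ideal.put(app.id, grant);
      userToAllocated.put(app.user, used + grant);
    }
    // With userLimit = 100 and app2 (higher priority) visited first,
    // app2's ideal allocation becomes 100 and app1's becomes 0.
    return ideal;
  }
}
{code}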

Please let me know if I missed anything.

Thanks,

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514396#comment-15514396
 ] 

Varun Saxena commented on YARN-3142:


Thanks [~leftnoteasy], [~sunilg] for reviews. 

bq. Lock is not required
Right. Will fix.

bq. You may need to synchronize placesBlacklistedByApp and call 
placesBlacklistedByApp.addAll(appInfo.getBlackList()) to make it consistent to 
other blacklist-related operations.
But we won't be using this until we set the current attempt in the scheduler, 
which happens after this. Also, this is a reference assignment, which is 
atomic. Thoughts?

bq. (Imaging someone replace the request in another thread before returning)
You mean remove the request from the resource request map. Well, the part 
about fetching from the resource request map, i.e. the call to 
getResourceRequest, is within locks. After that we just access the 
ResourceRequest instance. And the capability which we are returning here won't 
be changed even by another thread. However, this is not an immutable field, so 
we can probably guard it with a read lock just to be safe. Thoughts?

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-22 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514393#comment-15514393
 ] 

Eric Payne commented on YARN-2009:
--

bq. So we could take a max(guaranteed, used). Will this be fine?
I don't think so. If {{tq.getActuallyToBePreempted}} is non-zero, it represents 
the amount that will be preempted from what {{tq}} is currently using, not from 
{{tq}}'s guaranteed resources. The purpose of this line of code is to set 
{{tq}}'s unallocated resources. But even if {{tq}} is below its guarantee, the 
amount of resources that intra-queue preemption should consider when balancing 
is not the queue's guarantee, it's what the queue is already using. If {{tq}} 
is below its guarantee, inter-queue preemption should be handling that.

bq. app1 of user1 used entire queue. app2 of user2 asks more resource
The use case I'm referencing for this code does not involve 2 different users; 
it involves the same user submitting jobs of different priorities. If {{user1}} 
submits a low-priority job that consumes the whole queue, {{user1}}'s headroom 
will be 0. Then, when {{user1}} submits a second app at a higher priority, this 
code will cause the second app to starve because {{user1}} has already used up 
its allocation.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-22 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Attachment: YARN-5659.04.patch

whitespace fixes...

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514370#comment-15514370
 ] 

Hadoop QA commented on YARN-4329:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 938 unchanged - 3 fixed = 938 total (was 941) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 6s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829910/YARN-4329.001.patch |
| JIRA Issue | YARN-4329 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a042db26afd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 40acace |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13188/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13188/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>

[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514302#comment-15514302
 ] 

Daniel Templeton commented on YARN-5659:


The checkstyle and whitespace issues (both are the same) should be cleaned up.

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514243#comment-15514243
 ] 

Varun Saxena edited comment on YARN-5585 at 9/22/16 7:25 PM:
-

Summarizing the solution we decided upon in the call.

* We will now return entities from the entity table in lexicographic order of 
entity IDs.
* To achieve a different sort order, we will provide a mechanism for 
applications to provide an entity ID prefix which can be set in the 
TimelineEntity object while publishing the entity.
* This entity ID prefix will be part of the row key in the entity table. As 
the name suggests, it will sit just before the entity ID. Applications can 
choose to provide no entity ID prefix if they are happy with the lexicographic 
sort order. So the row key now will be 
{{cluster!user!flow!flowrun!app!entitytype!\{entityidprefix\}!\{entityid\}}} 
(see the sketch at the end of this comment).
* Entity ID will also be stored under a column qualifier (this is being done 
already).
* Entity ID prefix can be a number (say, a long), as numbers generally provide 
a natural sort ordering. However, this needs to be finalized. Keep it as a 
string?
* When querying multiple entities, we will return the top N entities decided by 
limit in a lexicographic order of entity ID prefix + entity ID (i.e. if entity 
ID prefix is supplied). fromID filter can now be something like fromIDPrefix 
(say) or a similar filter which provides prefix + ID to support pagination.
* While querying a single entity, the prefix can be supplied as a query param. 
If supplied, it will be a Get; otherwise we need a Scan with a 
SingleColumnValueFilter on entity ID (this will be comparatively slower). We 
can have a separate REST endpoint to distinguish between prefix based queries 
and non prefix based queries. We need to distinguish between the case where no 
prefix was specified for an entity on the write path and the case where the 
prefix merely was not supplied at read time (even though it was supplied at 
the write path). This needs to be finalized.
* Prefix will also be returned as part of TimelineEntity object in response.

cc [~jrottinghuis], [~sjlee0], [~vrushalic], [~gtCarrera9]. Hope this covers 
everything.

The reason this solution was chosen is that we thought, in UI use cases, a 
single entity read would typically follow a listing of multiple entities, and 
hence the prefix would be known. This does not mean, however, that we will not 
provide a mechanism to fetch an entity if the prefix wasn't given; we can use 
a single column value filter then.
Moreover, this solution overall had a lower write and read penalty compared to 
the solutions listed above.
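For illustration, a hypothetical sketch of how such a row key could be 
assembled; the class and method names are made up and not from the actual 
ATSv2 schema code.

{code:title=EntityRowKeySketch.java}
/**
 * Hypothetical sketch of the proposed entity-table row key, with an optional
 * entity ID prefix slotted in just before the entity ID.
 */
final class EntityRowKeySketch {
  private static final char SEPARATOR = '!';

  static String build(String cluster, String user, String flow,
      long flowRunId, String appId, String entityType,
      Long entityIdPrefix, String entityId) {
    StringBuilder key = new StringBuilder();
    for (String part : new String[] {cluster, user, flow,
        Long.toString(flowRunId), appId, entityType}) {
      key.append(part).append(SEPARATOR);
    }
    // Entities written without a prefix fall back to plain lexicographic
    // ordering of entity IDs.
    if (entityIdPrefix != null) {
      key.append(entityIdPrefix).append(SEPARATOR);
    }
    return key.append(entityId).toString();
  }
}
{code}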



was (Author: varun_saxena):
Summarizing the solution we decided upon in the call.

* We will now return entities from the entity table in lexicographic order of 
entity IDs.
* To achieve a different sort order, we will provide a mechanism for 
applications to provide an entity ID prefix which can be set in the 
TimelineEntity object while writing the entity to the backend.
* This entity ID prefix will be part of the row key in the entity table. As 
the name suggests, it will sit just before the entity ID. Applications can 
choose to provide no entity ID prefix if they are happy with the lexicographic 
sort order. So the row key now will be 
{{cluster!user!flow!flowrun!app!entitytype!\{entityidprefix\}!\{entityid\}}}
* Entity ID will also be stored under a column qualifier (this is being done 
already).
* Entity ID prefix can be a number (say, a long), as numbers generally provide 
a natural sort ordering. However, this needs to be finalized. Keep it as a 
string?
* When querying multiple entities, we will return the top N entities decided by 
limit in a lexicographic order of entity ID prefix + entity ID (i.e. if entity 
ID prefix is supplied). fromID filter can now be something like fromIDPrefix 
(say) or a similar filter which provides prefix + ID to support pagination.
* While querying a single entity, the prefix can be supplied as a query param. 
If supplied, it will be a Get; otherwise we need a Scan with a 
SingleColumnValueFilter on entity ID (this will be comparatively slower). We 
can have a separate REST endpoint to distinguish between prefix based queries 
and non prefix based queries. We need to distinguish between the case where no 
prefix was specified for an entity on the write path and the case where the 
prefix merely was not supplied at read time (even though it was supplied at 
the write path). This needs to be finalized.
* Prefix will also be returned as part of TimelineEntity object in response.

cc [~jrottinghuis], [~sjlee0], [~vrushalic], [~gtCarrera9]. Hope this covers 
everything.

The reason this solution was chosen is that we thought, in UI use cases, a 
single entity read would typically follow a listing of multiple entities, and 
hence the prefix would be known. This does not mean, however, that we will not 
provide a mechanism to fetch entity

[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514243#comment-15514243
 ] 

Varun Saxena commented on YARN-5585:


Summarizing the solution we decided upon in the call.

* We will now return entities from the entity table in lexicographic order of 
entity IDs.
* To achieve a different sort order, we will provide a mechanism for 
applications to provide an entity ID prefix which can be set in the 
TimelineEntity object while writing the entity to the backend.
* This entity ID prefix will be part of the row key in the entity table. As 
the name suggests, it will sit just before the entity ID. Applications can 
choose to provide no entity ID prefix if they are happy with the lexicographic 
sort order. So the row key now will be 
{{cluster!user!flow!flowrun!app!entitytype!\{entityidprefix\}!\{entityid\}}}
* Entity ID will also be stored under a column qualifier (this is being done 
already).
* Entity ID prefix can be a number (say, a long), as numbers generally provide 
a natural sort ordering. However, this needs to be finalized. Keep it as a 
string?
* When querying multiple entities, we will return the top N entities decided by 
limit in a lexicographic order of entity ID prefix + entity ID (i.e. if entity 
ID prefix is supplied). fromID filter can now be something like fromIDPrefix 
(say) or a similar filter which provides prefix + ID to support pagination.
* While querying a single entity, the prefix can be supplied as a query param. 
If supplied, it will be a Get; otherwise we need a Scan with a 
SingleColumnValueFilter on entity ID (this will be comparatively slower; see 
the sketch after this comment). We can have a separate REST endpoint to 
distinguish between prefix based queries and non prefix based queries. We need 
to distinguish between the case where no prefix was specified for an entity on 
the write path and the case where the prefix merely was not supplied at read 
time (even though it was supplied at the write path). This needs to be 
finalized.
* Prefix will also be returned as part of TimelineEntity object in response.

cc [~jrottinghuis], [~sjlee0], [~vrushalic], [~gtCarrera9]. Hope this covers 
everything.

The reason this solution was chosen is that we thought, in UI use cases, a 
single entity read would typically follow a listing of multiple entities, and 
hence the prefix would be known. This does not mean, however, that we will not 
provide a mechanism to fetch an entity if the prefix wasn't given; we can use 
a single column value filter then.
Moreover, this solution overall had a lower write and read penalty compared to 
the solutions listed above.
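As a hedged illustration of the Get-vs-Scan distinction above, using the plain 
HBase client API; the column family "i" and qualifier "id" are placeholders, 
not the actual schema.

{code:title=EntityLookupSketch.java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

final class EntityLookupSketch {
  static Result readEntity(Table entityTable, byte[] fullRowKey,
      String entityId) throws IOException {
    if (fullRowKey != null) {
      // Prefix known: the complete row key can be built, so a point Get works.
      return entityTable.get(new Get(fullRowKey));
    }
    // Prefix unknown: fall back to a Scan filtered on the entity-id column,
    // which is comparatively slower.
    Scan scan = new Scan();
    scan.setFilter(new SingleColumnValueFilter(Bytes.toBytes("i"),
        Bytes.toBytes("id"), CompareOp.EQUAL, Bytes.toBytes(entityId)));
    try (ResultScanner results = entityTable.getScanner(scan)) {
      for (Result result : results) {
        return result; // first match is the entity
      }
    }
    return null;
  }
}
{code}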


> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, then the REST call gives the first/last 100 entities. How do we 
> retrieve the next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2, ..., 
> app-10, *getApps?limit=5* gives app-1 to app-5, but there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.






[jira] [Updated] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4464:
---
Attachment: YARN-4464.006.patch

The docs are already correct.  I foolishly assumed that meant the code was 
correct, too.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}},
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration.
> I expected the RM to restart immediately, but the recovery process was very 
> slow; I waited about 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is huge.
> We need to change it to a lower value, or document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Updated] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-09-22 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4329:
---
Attachment: YARN-4329.001.patch

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: YARN-4329.001.patch
>
>
> Similar to YARN-3946, it would be useful to capture possible reason why the 
> Application is in accepted state in FairScheduler






[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-09-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514003#comment-15514003
 ] 

Wangda Tan commented on YARN-5145:
--

Thanks [~kaisasak] and [~sunilg] for looking into the issue.

I'm not sure how to solve this issue:

To me, it is fine to put configs.env in any place for a development 
environment, for example under the source tree.

But it is still very important to be able to configure envs from 
HADOOP_CONF_DIR. And I think it's better not to assume that the source dir 
{{$HADOOP_PREFIX/share/hadoop/yarn/webapps/}} will be writable in a real 
environment. What I mostly see in deployments is that the yarn user doesn't 
have permission to write to $HADOOP_PREFIX.

Can we pass some parameter/env to the HTTP server which hosts the YARN UI code 
in the RM? 

[~Sreenath], could you also share some thoughts?



> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: YARN-5145-YARN-3368.01.patch
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.






[jira] [Comment Edited] (YARN-4743) ResourceManager crash because TimSort

2016-09-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513806#comment-15513806
 ] 

Yufei Gu edited comment on YARN-4743 at 9/22/16 5:46 PM:
-

Hi [~imstefanlee], continuous scheduling uses the same code, so I guess you got 
a similar stack trace. Please check your hadoop version to see if it has 
YARN-3547. 


was (Author: yufeigu):
Hi [~imstefanlee], continuous scheduling uses the same code to do the 
scheduling. Please check your hadoop version to see if it has YARN-3547. 

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.4
>Reporter: Zephyr Guo
>Assignee: Yufei Gu
> Attachments: YARN-4743-cdh5.4.7.patch
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this issue was found in 2.6.0-cdh5.4.7.
> I think the cause is that we modify {{Resource}} while we are sorting 
> {{runnableApps}}.
> {code:title=FSLeafQueue.java}
> Comparator<Schedulable> comparator = policy.getComparator();
> writeLock.lock();
> try {
>   Collections.sort(runnableApps, comparator);
> } finally {
>   writeLock.unlock();
> }
> readLock.lock();
> {code}
> {code:title=FairShareComparator}
> public int compare(Schedulable s1, Schedulable s2) {
> ..
>   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s1.getResourceUsage(), minShare1);
>   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s2.getResourceUsage(), minShare2);
>   minShareRatio1 = (double) s1.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
> ONE).getMemory();
>   minShareRatio2 = (double) s2.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
> ONE).getMemory();
> ..
> {code}
> {{getResourceUsage}} will return the current Resource, and the current 
> Resource is unstable. 
> {code:title=FSAppAttempt.java}
> @Override
>   public Resource getResourceUsage() {
> // Here the getPreemptedResources() always return zero, except in
> // a preemption round
> return Resources.subtract(getCurrentConsumption(), 
> getPreemptedResources());
>   }
> {code}
> {code:title=SchedulerApplicationAttempt}
>  public Resource getCurrentConsumption() {
> return currentConsumption;
>   }
> // This method may modify current Resource.
> public synchronized void recoverContainer(RMContainer rmContainer) {
> ..
> Resources.addTo(currentConsumption, rmContainer.getContainer()
>   .getResource());
> ..
>   }
> {code}
> I suggest using a stable Resource in the comparator.
> Is there something wrong in my thinking?
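One way to read the suggestion above: sort on an immutable snapshot of usage so 
the comparator's view cannot change mid-sort. A minimal sketch with stand-in 
types, not the actual fix.

{code:title=StableSortSketch.java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

final class StableSortSketch {
  /** Minimal stand-ins for the YARN types quoted above. */
  interface Resource { long getMemory(); }
  interface Schedulable { Resource getResourceUsage(); }

  /** Pairs a schedulable with its usage captured once, before the sort. */
  static final class Snapshot {
    final Schedulable app;
    final long usedMemory;
    Snapshot(Schedulable app) {
      this.app = app;
      this.usedMemory = app.getResourceUsage().getMemory();
    }
  }

  static List<Schedulable> sortByUsage(List<Schedulable> apps) {
    List<Snapshot> snapshots = new ArrayList<Snapshot>(apps.size());
    for (Schedulable app : apps) {
      snapshots.add(new Snapshot(app));
    }
    // Comparing frozen values keeps compare() self-consistent even if another
    // thread mutates the live Resource objects during the sort, which is what
    // triggers TimSort's "Comparison method violates its general contract!".
    Collections.sort(snapshots, new Comparator<Snapshot>() {
      @Override
      public int compare(Snapshot a, Snapshot b) {
        return Long.compare(a.usedMemory, b.usedMemory);
      }
    });
    List<Schedulable> sorted = new ArrayList<Schedulable>(snapshots.size());
    for (Snapshot s : snapshots) {
      sorted.add(s.app);
    }
    return sorted;
  }
}
{code}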






[jira] [Comment Edited] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513926#comment-15513926
 ] 

Daniel Templeton edited comment on YARN-4266 at 9/22/16 5:40 PM:
-

After some discussion and thought, I think that 3.2 is OK as long as the user 
has the option to turn it off.  In other words, the contract is that the user 
either takes care of making sure the container has the required user, or lets 
YARN do it.

It might be worth exploring how far Docker's logging could take us in dealing 
with the logging end of the problem.

Any comments, [~sidharta-s], [~vvasudev], or [~shaneku...@gmail.com]?


was (Author: templedf):
After some discussion and thought, I think that 3.2 is OK as long as the user 
has the option to turn it off.  In other words, the contract is that the user 
either takes care of making sure the container has the required user, or lets 
YARN do it.

It might be worth exploring how far Docker's logging could take us in dealing 
with the logging end of the problem.

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.






[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513926#comment-15513926
 ] 

Daniel Templeton commented on YARN-4266:


After some discussion and thought, I think that 3.2 is OK as long as the user 
has the option to turn it off.  In other words, the contract is that the user 
either takes care of making sure the container has the required user, or lets 
YARN do it.

It might be worth exploring how far Docker's logging could take us in dealing 
with the logging end of the problem.

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.






[jira] [Commented] (YARN-5605) Preempt containers (all on one node) to meet the requirement of starved applications

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513914#comment-15513914
 ] 

Daniel Templeton commented on YARN-5605:


I think I'm happy with the last patch.  +1

> Preempt containers (all on one node) to meet the requirement of starved 
> applications
> 
>
> Key: YARN-5605
> URL: https://issues.apache.org/jira/browse/YARN-5605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5605-1.patch, yarn-5605-2.patch, yarn-5605-3.patch, 
> yarn-5605-4.patch
>
>
> Required items:
> # Identify starved applications
> # Identify a node that has enough containers from applications over their 
> fairshare.
> # Preempt those containers






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513883#comment-15513883
 ] 

Karthik Kambatla commented on YARN-3139:


Sorry [~leftnoteasy]. I will be traveling and won't have a chance to look at 
this until about 2 weeks from now. 

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513840#comment-15513840
 ] 

Sunil G commented on YARN-2009:
---

Hi [~eepayne]

A couple of doubts on the comments.

bq. Shouldn't the following be tq.getUsed() - tq.getActuallyToBePreempted()? 
tq.getGuaranteed() only returns the queue's guaranteed capacity but if apps in 
the queue are using extra resources, then you want to subtract from the total 
usage.
If we pick getUsed, we will be working from a variable base value ("used"), so 
it will be hard to predict or analyze the preemption logic. But I agree that 
*used* can be more than *guaranteed*, and *unallocated* could even go negative 
in specific cases. So we could take *max(guaranteed, used)*. Will this be fine?

bq. Shouldn't this also take into consideration used capacity of all parent 
queues as well?
I think we need to consider only the LeafQueue, as we are working on each 
LeafQueue one by one for intra-queue preemption.

bq. If user1 has used the entire queue with a low priority app, user1's 
headroom will be 0. But, if that same user starts a higher priority app, that 
higher priority app needs to preempt from the lower priority app, doesn't it?
I am doing this check in the first loop, which runs from the highest priority 
app to the lowest, to calculate each app's idealAssigned. If an app's 
user-limit is already met, then we need not consider any more demand from that 
app; hence such an app's idealAssigned can be kept at 0. Along the same lines, 
assume we are doing preemption here: app1 of user1 used the entire queue, and 
app2 of user2 asks for more resources. If we preempt some containers from 
app1, will the scheduler allocate them to app2 (provided there is demand from 
other users)? If I am not wrong, it will not. Please correct me if I am wrong.

I think you mentioned this same scenario in the comments above. Maybe I could 
consider app2's demand provided there are no other users waiting in the queue 
with apps.



> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513827#comment-15513827
 ] 

Wangda Tan commented on YARN-3139:
--

Thanks [~templedf], but I really want to get this done sooner, as it blocks 
review of other patches. [~asuresh] / [~kasha], do you have time to look at 
this patch? It is a big patch, but the logic should be fairly straightforward 
:)

Thanks,

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513822#comment-15513822
 ] 

Wangda Tan commented on YARN-3139:
--

[~ozawa],

Yeah, we plan to add a multi-threaded scheduling implementation. This change is 
the first step of YARN-5139. You can check the [design 
doc|https://issues.apache.org/jira/secure/attachment/12825344/YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf],
 and the [implementation 
notes|https://github.com/leftnoteasy/hadoop/blob/global-scheduling-3/global-scheduling-explaination.md]
 for details.

To avoid disruptive changes, we just want to convert the synchronized locks to 
R/W locks in this patch; more fine-grained improvements will be done in 
separate patches (a sketch of the conversion pattern follows below).
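For readers following along, a minimal sketch of the conversion pattern being 
described; the field and method names are made up, not from the patch.

{code:title=RwLockConversionSketch.java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwLockConversionSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long usedMemory;

  // Before: public synchronized long getUsedMemory() { return usedMemory; }
  long getUsedMemory() {
    lock.readLock().lock();
    try {
      return usedMemory; // many readers may hold the read lock concurrently
    } finally {
      lock.readLock().unlock();
    }
  }

  // Before: public synchronized void addUsedMemory(long delta) { ... }
  void addUsedMemory(long delta) {
    lock.writeLock().lock();
    try {
      usedMemory += delta; // writers get exclusive access, as with synchronized
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}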

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513815#comment-15513815
 ] 

Wangda Tan commented on YARN-3139:
--

Actually, we need to hold the application's lock when this is called. I will 
update the patch later.

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Commented] (YARN-4743) ResourceManager crash because TimSort

2016-09-22 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513806#comment-15513806
 ] 

Yufei Gu commented on YARN-4743:


Hi [~imstefanlee], continuous scheduling uses the same code to do the 
scheduling. Please check your hadoop version to see if it has YARN-3547. 

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.4
>Reporter: Zephyr Guo
>Assignee: Yufei Gu
> Attachments: YARN-4743-cdh5.4.7.patch
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this issue was found in 2.6.0-cdh5.4.7.
> I think the cause is that we modify {{Resource}} while we are sorting 
> {{runnableApps}}.
> {code:title=FSLeafQueue.java}
> Comparator<Schedulable> comparator = policy.getComparator();
> writeLock.lock();
> try {
>   Collections.sort(runnableApps, comparator);
> } finally {
>   writeLock.unlock();
> }
> readLock.lock();
> {code}
> {code:title=FairShareComparator}
> public int compare(Schedulable s1, Schedulable s2) {
> ..
>   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s1.getResourceUsage(), minShare1);
>   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s2.getResourceUsage(), minShare2);
>   minShareRatio1 = (double) s1.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
> ONE).getMemory();
>   minShareRatio2 = (double) s2.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
> ONE).getMemory();
> ..
> {code}
> {{getResourceUsage}} will return the current Resource, and the current 
> Resource is unstable. 
> {code:title=FSAppAttempt.java}
> @Override
>   public Resource getResourceUsage() {
> // Here the getPreemptedResources() always return zero, except in
> // a preemption round
> return Resources.subtract(getCurrentConsumption(), 
> getPreemptedResources());
>   }
> {code}
> {code:title=SchedulerApplicationAttempt}
>  public Resource getCurrentConsumption() {
> return currentConsumption;
>   }
> // This method may modify current Resource.
> public synchronized void recoverContainer(RMContainer rmContainer) {
> ..
> Resources.addTo(currentConsumption, rmContainer.getContainer()
>   .getResource());
> ..
>   }
> {code}
> I suggest using a stable Resource in the comparator.
> Is there something wrong in my thinking?






[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-22 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513781#comment-15513781
 ] 

Sangjin Lee commented on YARN-5585:
---

Thanks for your comments [~varun_saxena]. Yes, we should discuss this during 
the call and report back here.

Before we go into how to implement this, I think we need to have a consensus 
on the requirements first. Querying for entities is a fairly generic thing, 
and IMO there should be a clear expectation of the order in which they are 
returned. It affects *which* entities get selected as well as the order in 
which they are sorted. As I mentioned, I don't think it would be desirable to 
leave this order completely arbitrary, or things could get quite confusing 
really quickly.

My preference for this sorting order is either the entity ID (descending) 
order or the chronological order. I think the entity ID order is the simplest 
and easiest to understand, and for the most part identical to the 
chronological order. YARN entities are mostly compliant (so are MR entities), 
and it would not be unreasonable to ask frameworks to maintain entity IDs that 
way. Even if that is not feasible, there would be a very consistent 
understanding of how entities would be returned to the reader. That's the 
default sorting order in the current YARN RM web UI too. Can Tez adopt a 
stricter entity ID scheme? If not, would it at least be acceptable if entities 
are consistently returned in that order?

If we go with the chronological order (created time), then I would want it to 
be consistent. Then we should do it not only for framework entities but also 
YARN entities and change the row key schema for all. And I think that may 
require the secondary lookup table (yes, I understand this would be only for 
lookups and not for data).

Another point about sorting within the timeline reader code. If the query is 
specified with a limit, the limit is passed to the hbase client, and as such 
it will only return that number of entities (or fewer), right? I don't think 
hbase will return more than the specified limit, no? Then I don't understand 
how you would get a *different* set of Tez entities than what you expected. 
For example, if there are entities 1 through 10 and your limit is 5, I would 
expect hbase to still return 6 through 10. The reader code may rearrange them 
so that 6 is at the top, but I don't expect hbase to return anything other 
than 6 through 10. [~rohithsharma], could you confirm? Did I understand this 
right?

Also, apart from fixing the sorting in {{TimelineEntity.compareTo()}}, I am 
not sure we need to re-sort the entities returned by hbase in the timeline 
reader code. The result set from hbase should already be in the right order, 
right? Then I think we should simply return the entities in that order without 
applying any further sorting. In other words, instead of using a sorted set, 
we should use an insertion-order set (see the sketch below). Thoughts? 
[~varun_saxena]
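A tiny illustration of the sorted-set vs insertion-order-set point; the class 
is a generic placeholder, not the actual timeline reader code.

{code:title=InsertionOrderSketch.java}
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.TreeSet;

class InsertionOrderSketch {
  public static void main(String[] args) {
    // Pretend the backend returned these in its own (descending) order.
    String[] fromBackend = {"entity-9", "entity-8", "entity-7"};

    Set<String> sortedSet = new TreeSet<String>();             // re-sorts
    Set<String> insertionOrder = new LinkedHashSet<String>();  // preserves
    for (String e : fromBackend) {
      sortedSet.add(e);
      insertionOrder.add(e);
    }
    System.out.println(sortedSet);      // [entity-7, entity-8, entity-9]
    System.out.println(insertionOrder); // [entity-9, entity-8, entity-7]
  }
}
{code}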



> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, then the REST call gives the first/last 100 entities. How do we 
> retrieve the next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2, ..., 
> app-10, *getApps?limit=5* gives app-1 to app-5, but there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.






[jira] [Commented] (YARN-5663) Possible resource leak

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513754#comment-15513754
 ] 

Daniel Templeton commented on YARN-5663:


I don't think that's actually a leak, but I don't mind the refactor to make it 
simpler.  Please remove the semicolon from the _try_ line.  Also, the 
indentation on line 864 was correct before you modified it.  Please put it back 
where it was.  I always like to see unit tests, especially with refactoring, 
but this one's fundamental enough that it's probably OK to let it slide.
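For context, a hypothetical before/after of the kind of refactor under review: 
wrapping the stream in try-with-resources so it is closed on every path. The 
method and variable names are illustrative, not from the actual patch.

{code:title=SerializeSketch.java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

class SerializeSketch {
  static byte[] serialize(Object obj) throws IOException {
    // Both streams are closed on every exit path, including when
    // writeObject throws (obj must be Serializable at runtime).
    try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
         ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(obj);
      oos.flush();
      return bos.toByteArray();
    }
  }
}
{code}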

> Possible resource leak
> --
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch
>
>
> The ByteArrayOutputStream resource will not be freed if an error occurs in 
> the write method.






[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-22 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513725#comment-15513725
 ] 

Eric Payne commented on YARN-2009:
--

[~leftnoteasy], I have some concerns about this algorithm from above:
{code}
for app in sort-by-fifo-or-priority(apps) {
  if (user-to-allocated.get(app.user) < user-limit-resource) {
    app.allocated = min(app.used + pending,
        user-limit-resource - user-to-allocated.get(app.user));
    user-to-allocated.get(app.user) += app.allocated;
  } else {
    // skip this app because user-limit reached
  }
}
{code}
If {{Queue1}} has 100 resources, and if {{user1}} starts {{app1}} at priority 1 
that consumes the whole queue, won't {{user1}}'s {{user-limit-resource}} be 
100? Then, if {{user1}} starts another app ({{app2}}) at priority 2, won't the 
above algorithm skip over {{app2}} because {{user1}} has already achieved its 
{{user-limit-resource}}?
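To make the concern concrete, here is a toy, self-contained rendering of the 
loop above run on that exact scenario, assuming FIFO ordering puts app1 before 
app2. It is purely illustrative, not scheduler code.

{code:title=UserLimitDemo.java}
import java.util.HashMap;
import java.util.Map;

public class UserLimitDemo {
  static final class App {
    final String name; final String user; final int used; final int pending;
    App(String name, String user, int used, int pending) {
      this.name = name; this.user = user;
      this.used = used; this.pending = pending;
    }
  }

  public static void main(String[] args) {
    final int userLimitResource = 100; // user1's computed user limit
    App[] apps = {
        new App("app1", "user1", 100, 0), // priority 1, holds the whole queue
        new App("app2", "user1", 0, 30)   // priority 2, submitted later
    };
    Map<String, Integer> userToAllocated = new HashMap<String, Integer>();
    for (App app : apps) {
      int already = userToAllocated.containsKey(app.user)
          ? userToAllocated.get(app.user) : 0;
      if (already < userLimitResource) {
        int allocated = Math.min(app.used + app.pending,
            userLimitResource - already);
        userToAllocated.put(app.user, already + allocated);
        System.out.println(app.name + " idealAssigned=" + allocated);
      } else {
        // app2 lands here: user1 already reached its user limit via app1
        System.out.println(app.name + " skipped: user-limit reached");
      }
    }
  }
}
{code}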

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513717#comment-15513717
 ] 

Daniel Templeton commented on YARN-3139:


Yep.  I added it to my todo list, but I won't get there until next week.

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513703#comment-15513703
 ] 

Tsuyoshi Ozawa commented on YARN-3139:
--

[~leftnoteasy] [~jianhe] thanks for taking up this issue. 

{quote}
Summary:
No regression in performance, didn't see deadlock happens.
No significant performance improvement either, because existing scheduler 
allocation is still in single thread.
{quote}

If the performance doesn't change, could you clarify the reason to change this? 
Do you plan to make the scheduler allocation multi-threaded? 

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Updated] (YARN-5663) Possible resource leak

2016-09-22 Thread Oleksii Dymytrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksii Dymytrov updated YARN-5663:
---
Attachment: YARN_5663_v1_001_patch.patch

> Possible resource leak
> --
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch
>
>
> The ByteArrayOutputStream resource will not be freed if an error occurs in 
> the write method.






[jira] [Created] (YARN-5663) Possible resource leak

2016-09-22 Thread Oleksii Dymytrov (JIRA)
Oleksii Dymytrov created YARN-5663:
--

 Summary: Possible resource leak
 Key: YARN-5663
 URL: https://issues.apache.org/jira/browse/YARN-5663
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0-alpha1
Reporter: Oleksii Dymytrov
Priority: Minor


The ByteArrayOutputStream resource will not be freed if an error occurs in the 
write method.






[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-22 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513545#comment-15513545
 ] 

Sunil G commented on YARN-2009:
---

Thanks [~eepayne] for the comments. That mostly makes sense; I will upload a 
new patch soon. 

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.






[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513480#comment-15513480
 ] 

Jian He edited comment on YARN-5609 at 9/22/16 2:44 PM:


Looks good to me overall. 
I think endReInitingContainer should be called wherever 
{{setIsReInitializing(false)}} is called; otherwise it's possible that 
endReInitingContainer is not invoked.
Maybe have a common method for both (see the sketch below).
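A minimal sketch of the "common method" suggestion: one helper that clears the 
flag and ends the re-init scope together, so neither call can be missed. The 
class is made up; only the two method names mirror the ones quoted above.

{code:title=ReInitHelperSketch.java}
class ReInitHelperSketch {
  private volatile boolean reInitializing;

  void setIsReInitializing(boolean flag) {
    reInitializing = flag;
  }

  void endReInitingContainer() {
    // bookkeeping for the end of a re-initialization (placeholder)
  }

  /** Call this instead of the two methods separately. */
  void finishReInitialization() {
    setIsReInitializing(false);
    endReInitingContainer();
  }
}
{code}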


was (Author: jianhe):
Looks good to me overall. 
I think endReInitingContainer should be called wherever 
{{setIsReInitializing(false)}} is called; otherwise it's possible that 
endReInitingContainer is not invoked.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513480#comment-15513480
 ] 

Jian He commented on YARN-5609:
---

Looks good to me overall. 
I think endReInitingContainer should be called wherever 
{{setIsReInitializing(false)}} is called; otherwise it's possible that 
endReInitingContainer is not invoked.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513402#comment-15513402
 ] 

Jian He commented on YARN-3139:
---

There are also similar comments like this... I just couldn't remember why 
these comments were added. Anyway, maybe it's fine:
{code}
  /**
   * Validate increase/decrease request. This function must be called under
   * the queue lock to make sure that the access to container resource is
   * atomic. Refer to LeafQueue.decreaseContainer() and
   * CapacityScheduler.updateIncreaseRequests()
   *
   * - Throw exception when any other error happens
   */
  public static void checkSchedContainerChangeRequest(
{code}

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler, as 
> mentioned in YARN-3091, a possible solution is using read/write lock. Other 
> fine-graind locks for specific purposes / bugs should be addressed in 
> separated tickets.






[jira] [Issue Comment Deleted] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-3139:
--
Comment: was deleted

(was: Maybe it's related to some consistency issues with respect to queue 
stats?)

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513402#comment-15513402
 ] 

Jian He edited comment on YARN-3139 at 9/22/16 2:20 PM:


There are also similar comments like this... I just couldn't remember why 
these comments were added. Anyway, maybe it's fine.
{code}
  /**
   * Validate increase/decrease request. This function must be called under
   * the queue lock to make sure that the access to container resource is
   * atomic. Refer to LeafQueue.decreaseContainer() and
   * CapacityScheduler.updateIncreaseRequests()
   * 
   * - Throw exception when any other error happens
   * 
   */
  public static void checkSchedContainerChangeRequest(
{code}
Maybe it's related to some consistency issues with respect to queue stats?


was (Author: jianhe):
There are also similar comments like this... I just couldn't remember why 
these comments were added. Anyway, maybe it's fine.
{code}
  /**
   * Validate increase/decrease request. This function must be called under
   * the queue lock to make sure that the access to container resource is
   * atomic. Refer to LeafQueue.decreaseContainer() and
   * CapacityScheduler.updateIncreaseRequests()
   * 
   * - Throw exception when any other error happens
   * 
   */
  public static void checkSchedContainerChangeRequest(
{code}

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513416#comment-15513416
 ] 

Jian He commented on YARN-3139:
---

Maybe it's related to some consistency issues with respect to queue stats?

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513407#comment-15513407
 ] 

Jian He commented on YARN-3139:
---

[~templedf], would you like to check the FairScheduler changes?

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4743) ResourceManager crash because TimSort

2016-09-22 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513222#comment-15513222
 ] 

stefanlee commented on YARN-4743:
-

Thanks [~yufeigu], I also met this problem. My scenario is that I 
decommissioned all of my cluster's NodeManagers; after that, the "continuous 
scheduling" thread went down, and I found that this exception happened in 
"Collections.sort" in the "continuous scheduling" thread. 

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.4
>Reporter: Zephyr Guo
>Assignee: Yufei Gu
> Attachments: YARN-4743-cdh5.4.7.patch
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this issue was found in 2.6.0-cdh5.4.7.
> I think the cause is that we modify {{Resource}} while we are sorting 
> {{runnableApps}}.
> {code:title=FSLeafQueue.java}
> Comparator comparator = policy.getComparator();
> writeLock.lock();
> try {
>   Collections.sort(runnableApps, comparator);
> } finally {
>   writeLock.unlock();
> }
> readLock.lock();
> {code}
> {code:title=FairShareComparator}
> public int compare(Schedulable s1, Schedulable s2) {
> ..
>   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s1.getResourceUsage(), minShare1);
>   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
>   s2.getResourceUsage(), minShare2);
>   minShareRatio1 = (double) s1.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
> ONE).getMemory();
>   minShareRatio2 = (double) s2.getResourceUsage().getMemory()
>   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
> ONE).getMemory();
> ..
> {code}
> {{getResourceUsage}} will return the current Resource. The current Resource 
> is unstable, since it can change while the sort is in progress. 
> {code:title=FSAppAttempt.java}
> @Override
>   public Resource getResourceUsage() {
> // Here the getPreemptedResources() always return zero, except in
> // a preemption round
> return Resources.subtract(getCurrentConsumption(), 
> getPreemptedResources());
>   }
> {code}
> {code:title=SchedulerApplicationAttempt}
>  public Resource getCurrentConsumption() {
> return currentConsumption;
>   }
> // This method may modify current Resource.
> public synchronized void recoverContainer(RMContainer rmContainer) {
> ..
> Resources.addTo(currentConsumption, rmContainer.getContainer()
>   .getResource());
> ..
>   }
> {code}
> I suggest using a stable Resource snapshot in the comparator.
> Is there something wrong with my reasoning?
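
A minimal sketch of the suggested direction (sorting over an immutable 
snapshot of each Schedulable's usage, so the comparator's inputs cannot change 
while TimSort is running); the class and method names below are assumptions, 
not from a patch:

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.ToLongFunction;

// Sketch: sort over an immutable snapshot of usage so the comparator's
// inputs cannot be mutated mid-sort, which would violate its contract.
final class SnapshotSortSketch {

  private static final class Snapshot<T> {
    final T app;            // the underlying Schedulable
    final long memoryUsed;  // usage frozen at snapshot time

    Snapshot(T app, long memoryUsed) {
      this.app = app;
      this.memoryUsed = memoryUsed;
    }
  }

  static <T> List<T> sortByUsage(List<T> apps, ToLongFunction<T> usageOf) {
    // Freeze each app's usage exactly once, before sorting.
    List<Snapshot<T>> view = new ArrayList<>();
    for (T app : apps) {
      view.add(new Snapshot<>(app, usageOf.applyAsLong(app)));
    }
    // The comparator reads only immutable fields, so its contract holds
    // even if the live Resource objects are changed concurrently.
    view.sort(Comparator.comparingLong(s -> s.memoryUsed));
    List<T> sorted = new ArrayList<>(view.size());
    for (Snapshot<T> s : view) {
      sorted.add(s.app);
    }
    return sorted;
  }
}
{code}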



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513130#comment-15513130
 ] 

Hadoop QA commented on YARN-5539:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 50s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829828/YARN-5539.patch |
| JIRA Issue | YARN-5539 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0fe56146ad40 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 537095d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13187/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13187/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> 

[jira] [Updated] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-22 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-5539:
-
Attachment: YARN-5539.patch

Attaching a quick patch to fix the corner case here. The fix is very 
straightforward; however, adding a unit test is not, as the exception handling 
is embedded. It should be fine to simply fix this without a UT.
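
For context, the corner case is that the retry logic treats only 
connection-setup failures as retriable, so a read timeout escapes the retry 
loop and surfaces as a fatal ClientHandlerException. A minimal sketch of the 
kind of predicate change involved (names assumed, not taken from the patch):

{code}
import java.net.ConnectException;
import java.net.SocketTimeoutException;

// Sketch: decide whether a failed timeline request should be retried.
final class RetryPredicateSketch {

  static boolean shouldRetryOn(Throwable e) {
    // Connection refused / server not yet up: retriable.
    if (e instanceof ConnectException) {
      return true;
    }
    // The corner case: a read timeout is also transient and should be
    // retried rather than propagated to the caller.
    return e instanceof SocketTimeoutException;
  }
}
{code}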

> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-5539.patch
>
>
> AM fails with the following exception
> {code}
> FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:247)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:345)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishApplicationAttemptEvent(ApplicationMaster.java:1166)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:567)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:170)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
>   at 
> org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClient

[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15513003#comment-15513003
 ] 

Naganarasimha G R commented on YARN-3877:
-

Thanks for the patch [~varun_saxena], I will get this patch committed if there 
are no further comments!

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch, YARN-3877.04.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its Sleep() calls is logged and 
> swallowed.
> This makes it hard to interrupt the thread during shutdown. Really, it 
> should throw some form of exception and let the caller deal with it.
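
A minimal sketch of the behavior the description asks for: restore the 
interrupt status and surface the interruption to the caller instead of 
swallowing it (the wrapper exception and names below are assumptions, not 
from a patch):

{code}
import java.io.IOException;

// Sketch: propagate interruption out of a submit/poll loop.
class SubmitLoopSketch {
  // Set elsewhere once the RM reports the application as accepted
  // (placeholder for the real state check).
  private volatile boolean accepted;

  void waitUntilAccepted(long pollIntervalMs) throws IOException {
    while (!accepted) {
      try {
        Thread.sleep(pollIntervalMs);
      } catch (InterruptedException e) {
        // Restore the flag so callers further up still see the interrupt,
        // then fail fast instead of silently continuing to spin.
        Thread.currentThread().interrupt();
        throw new IOException(
            "Interrupted while waiting for application acceptance", e);
      }
    }
  }
}
{code}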



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512997#comment-15512997
 ] 

Hadoop QA commented on YARN-3877:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 26s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829805/YARN-3877.04.patch |
| JIRA Issue | YARN-3877 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 08d29eebd6b5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 537095d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13186/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13186/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch, YARN-3877.04.patch
>
>
> When {{YarnClientImpl.submitApplica

[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512989#comment-15512989
 ] 

Steve Loughran commented on YARN-3877:
--

Was this ever committed? I guess not.

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch, YARN-3877.04.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its Sleep() calls is logged and 
> swallowed.
> This makes it hard to interrupt the thread during shutdown. Really, it 
> should throw some form of exception and let the caller deal with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3877:
---
Attachment: YARN-3877.04.patch

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch, YARN-3877.04.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its Sleep() calls is logged and 
> swallowed.
> This makes it hard to interrupt the thread during shutdown. Really, it 
> should throw some form of exception and let the caller deal with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512728#comment-15512728
 ] 

Hadoop QA commented on YARN-5662:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 226 unchanged - 5 fixed = 226 total (was 231) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 28s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 20s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829793/YARN-5662.1.patch |
| JIRA Issue | YARN-5662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1bb23d8cd21a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 537095d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13185/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13185/artifact/patchprocess/

[jira] [Commented] (YARN-2255) YARN Audit logging not added to log4j.properties

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512721#comment-15512721
 ] 

Varun Saxena commented on YARN-2255:


Ok...Please go ahead.

> YARN Audit logging not added to log4j.properties
> 
>
> Key: YARN-2255
> URL: https://issues.apache.org/jira/browse/YARN-2255
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>
> The log4j.properties file, which is part of the Hadoop package, doesn't have 
> YARN audit logging tied to it. This leads to audit logs getting generated in 
> the normal log files. Audit logs should be generated in a separate log file.
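
For illustration, the kind of log4j.properties addition under discussion would 
route the audit logger category to its own appender. The file name, appender 
type, and pattern below are assumptions for this sketch; RMAuditLogger is the 
ResourceManager audit logger class in the YARN code base, and the 
NodeManager's NMAuditLogger could be wired up the same way.

{code}
# Sketch: send RM audit logs to a dedicated file (values are illustrative).
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=INFO,RMAUDIT
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=false
log4j.appender.RMAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RMAUDIT.File=${hadoop.log.dir}/rm-audit.log
log4j.appender.RMAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}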



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512698#comment-15512698
 ] 

Varun Saxena commented on YARN-3877:


Yes. Somebody needs to commit it. Will look at killApplication too

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its Sleep() calls is logged and 
> swallowed.
> This makes it hard to interrupt the thread during shutdown. Really, it 
> should throw some form of exception and let the caller deal with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3877) YarnClientImpl.submitApplication swallows exceptions

2016-09-22 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512696#comment-15512696
 ] 

Naganarasimha G R commented on YARN-3877:
-

Thanks for the comments [~Ying Zhang]. We need to fix 
{{YarnClientImpl.killApplication}} as well, and we need to handle the 
checkstyle issues. 

> YarnClientImpl.submitApplication swallows exceptions
> 
>
> Key: YARN-3877
> URL: https://issues.apache.org/jira/browse/YARN-3877
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: YARN-3877.01.patch, YARN-3877.02.patch, 
> YARN-3877.03.patch
>
>
> When {{YarnClientImpl.submitApplication}} spins waiting for the application 
> to be accepted, any interruption during its Sleep() calls is logged and 
> swallowed.
> This makes it hard to interrupt the thread during shutdown. Really, it 
> should throw some form of exception and let the caller deal with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5661) transitionToActive blocks when enable work-preserving-recovery

2016-09-22 Thread fengyongshe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengyongshe resolved YARN-5661.
---
Resolution: Duplicate

> transitionToActive blocks when enable work-preserving-recovery
> --
>
> Key: YARN-5661
> URL: https://issues.apache.org/jira/browse/YARN-5661
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: fengyongshe
>
> Consider this scenario: restart YARN with some applications running or 
> accepted. When work-preserving recovery is enabled, the RM fails to 
> transition to Active because of a NullPointerException.
> 16-09-21 14:32:10,536 INFO  fair.AllocationFileLoaderService 
> (AllocationFileLoaderService.java:reloadAllocations(209)) - Loading 
> allocation file /cmss/bch/bc1.3.1/hadoop/etc/hadoop/fair-scheduler.xml.
> 2016-09-21 14:32:10,543 WARN  resourcemanager.RMAuditLogger 
> (RMAuditLogger.java:logFailure(287)) - USER=yarn
> OPERATION=transitionToActiveTARGET=RMHAProtocolService  
> RESULT=FAILURE  DESCRIPTION=Exception transitioning to active   PERMISSIONS=
> 2016-09-21 14:32:10,543 WARN  ha.ActiveStandbyElector 
> (ActiveStandbyElector.java:becomeActive(808)) - Exception handling the 
> winning of election
> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:124)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:804)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:415)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
> 2016-09-21 14:32:10,534 INFO  util.HostsFileReader 
> (HostsFileReader.java:refresh(129)) - Refreshing hosts (include/exclude) list
> 2016-09-21 14:32:10,537 INFO  fair.QueueManager 
> (QueueManager.java:updateAllocationConfiguration(385)) -  there is queue 
> root.default in fair scheduer used for test
> 2016-09-21 14:32:10,537 INFO  fair.QueueManager 
> (QueueManager.java:updateAllocationConfiguration(385)) -  there is queue 
> root in fair scheduer used for test
> 2016-09-21 14:32:10,543 WARN  ha.ActiveStandbyElector 
> (ActiveStandbyElector.java:becomeActive(808)) - Exception handling the 
> winning of election
> org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:124)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:804)
> at 
> org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:415)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when 
> transitioning to Active mode
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:122)
> ... 4 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplicationAttempt(FairScheduler.java:748)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AttemptRecoveredTransition.transition(RMAppAttemptImpl.java:1045)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AttemptRecoveredTransition.transition(RMAppAttemptImpl.java:1009)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:760)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.access$1900(RMAppImpl.java:105)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$RMAppRecoveredTransition.transition(RMAppImpl.java:877)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$RMAppRecoveredTransition.transition(RMAppImpl.java:867)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:742)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApp

[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-22 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15512673#comment-15512673
 ] 

Varun Saxena commented on YARN-5585:


Just to clarify, what I meant by having another index table was not to store 
entity data in it. It would only store the entity ID for 
cluster!user!flow!run!app!entitytype and the inverted created time.
The write to this table would happen only when the created time is reported, 
i.e. when the application reports the created time on its start event (most 
probably).

As part of the interface, we are claiming that entities will be returned 
sorted descendingly by created time, so I felt this is a use case we should 
definitely support, whether or not we support sorting by some other parameter.
Currently we iterate over all the entities within the scope of the entity 
type to arrive at the sorted set of entities. So, IMO, this should definitely 
be fixed by providing some sort of index table.

As per the 2nd point in my [comment above | 
https://issues.apache.org/jira/browse/YARN-5585?focusedCommentId=15494251&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15494251], 
we can query entity-ID-specific entities directly from the entity table.
One more suggestion was to open up an interface which can be used to provide 
encoding and decoding of specific entity IDs (based on entity type) as part 
of the row key.
This would not require any extra write or read. However, Li and Rohith seemed 
a little reluctant about that solution, as Tez or Spark would have to add 
code for it, albeit only a little bit.

However, as [~vrushalic] suggested, we can also create an auxiliary table and 
specify the key in the timeline entity. The issue with this is that we are 
sort of exposing the internal implementation.
This, however, can be useful if we want to sort by something else as well, as 
pointed out, and not merely by created time. The problem, though, can be the 
double write. How about having this auxiliary table as an index table, with 
one extra write just to make an entry into it?
On the read side, we can refer to this index table along the lines of the 
suggestion made by Vrushali, i.e. specify the index table and the start row 
key, and then use a MultiRowRangeFilter to get records from the entity table.
Thoughts?

However, I do feel we inherently need to support the created-time-based 
sorting scenario (i.e. have a created-time-based index table as a mandatory 
table, without the user needing to specify it in REST), as we promise in the 
interface that entities will be sorted in that fashion.

We can probably discuss this further on the call today.
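
To make the inverted-created-time idea concrete, a small sketch of how such 
an index row key could be laid out (the separator, fixed-width encoding, and 
helper names are assumptions for illustration, not the actual ATSv2 row-key 
code); "prefix" here stands for the cluster!user!flow!run!app!entitytype part 
mentioned above:

{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: index row = prefix + '!' + inverted created time + '!' + entity id,
// so a plain forward scan returns entities newest-first.
final class IndexRowKeySketch {
  private static final byte SEP = (byte) '!';

  static byte[] rowKey(String prefix, long createdTime, String entityId) {
    byte[] prefixBytes = prefix.getBytes(StandardCharsets.UTF_8);
    byte[] idBytes = entityId.getBytes(StandardCharsets.UTF_8);
    ByteBuffer buf =
        ByteBuffer.allocate(prefixBytes.length + 1 + 8 + 1 + idBytes.length);
    buf.put(prefixBytes).put(SEP);
    // Fixed-width big-endian encoding keeps byte order equal to numeric
    // order; subtracting from Long.MAX_VALUE makes newer entities sort first.
    buf.putLong(Long.MAX_VALUE - createdTime);
    buf.put(SEP).put(idBytes);
    return buf.array();
  }
}
{code}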

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, then the REST call gives the first/last 100 entities. How do we 
> retrieve the next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if the applications stored in the database are app-1, app-2, ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5662:
--
Attachment: YARN-5662.1.patch

> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5662.1.patch
>
>
> Currently, if vmem/pmem check is not enabled, ContainerMonitor would not run. 
>  In certain cases, ContainerMonitor also needs to run to monitor things like 
> container-metrics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-22 Thread Jian He (JIRA)
Jian He created YARN-5662:
-

 Summary: Provide an option to enable ContainerMonitor 
 Key: YARN-5662
 URL: https://issues.apache.org/jira/browse/YARN-5662
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5662:
--
Description: Currently, if vmem/pmem check is not enabled, ContainerMonitor 
would not run.  In certain cases, ContainerMonitor also needs to run to monitor 
things like container-metrics. 

> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>
> Currently, if vmem/pmem check is not enabled, ContainerMonitor would not run. 
>  In certain cases, ContainerMonitor also needs to run to monitor things like 
> container-metrics. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5662:
--
Release Note:   (was: Currently, if vmem/pmem check is not enabled, 
ContainerMonitor would not run.  In certain cases, ContainerMonitor also needs 
to run to monitor things like container-metrics. )

> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5661) transitionToActive blocks when enable work-preserving-recovery

2016-09-22 Thread fengyongshe (JIRA)
fengyongshe created YARN-5661:
-

 Summary: transitionToActive blocks when enable 
work-preserving-recovery
 Key: YARN-5661
 URL: https://issues.apache.org/jira/browse/YARN-5661
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: fengyongshe


Consider this scenario: restart YARN with some applications running or 
accepted. When work-preserving recovery is enabled, the RM fails to 
transition to Active because of a NullPointerException.

16-09-21 14:32:10,536 INFO  fair.AllocationFileLoaderService 
(AllocationFileLoaderService.java:reloadAllocations(209)) - Loading allocation 
file /cmss/bch/bc1.3.1/hadoop/etc/hadoop/fair-scheduler.xml.
2016-09-21 14:32:10,543 WARN  resourcemanager.RMAuditLogger 
(RMAuditLogger.java:logFailure(287)) - USER=yarn
OPERATION=transitionToActiveTARGET=RMHAProtocolService  RESULT=FAILURE  
DESCRIPTION=Exception transitioning to active   PERMISSIONS=
2016-09-21 14:32:10,543 WARN  ha.ActiveStandbyElector 
(ActiveStandbyElector.java:becomeActive(808)) - Exception handling the winning 
of election
org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
at 
org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:124)
at 
org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:804)
at 
org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:415)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
2016-09-21 14:32:10,534 INFO  util.HostsFileReader 
(HostsFileReader.java:refresh(129)) - Refreshing hosts (include/exclude) list
2016-09-21 14:32:10,537 INFO  fair.QueueManager 
(QueueManager.java:updateAllocationConfiguration(385)) -  there is queue 
root.default in fair scheduer used for test
2016-09-21 14:32:10,537 INFO  fair.QueueManager 
(QueueManager.java:updateAllocationConfiguration(385)) -  there is queue 
root in fair scheduer used for test
2016-09-21 14:32:10,543 WARN  ha.ActiveStandbyElector 
(ActiveStandbyElector.java:becomeActive(808)) - Exception handling the winning 
of election
org.apache.hadoop.ha.ServiceFailedException: RM could not transition to Active
at 
org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:124)
at 
org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:804)
at 
org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:415)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
Caused by: org.apache.hadoop.ha.ServiceFailedException: Error when 
transitioning to Active mode
at 
org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:307)
at 
org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:122)
... 4 more
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplicationAttempt(FairScheduler.java:748)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AttemptRecoveredTransition.transition(RMAppAttemptImpl.java:1045)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AttemptRecoveredTransition.transition(RMAppAttemptImpl.java:1009)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:760)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.access$1900(RMAppImpl.java:105)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$RMAppRecoveredTransition.transition(RMAppImpl.java:877)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$RMAppRecoveredTransition.transition(RMAppImpl.java:867)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:742)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:313)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:419)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1201)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(Resour