[jira] [Commented] (YARN-877) Allow for black-listing resources in FifoScheduler

2013-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695954#comment-13695954
 ] 

Hadoop QA commented on YARN-877:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590115/YARN-877-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1409//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1409//console

This message is automatically generated.

> Allow for black-listing resources in FifoScheduler
> --
>
> Key: YARN-877
> URL: https://issues.apache.org/jira/browse/YARN-877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-877-2.patch, YARN-877.patch
>
>
> YARN-750 already added black-list support to the YARN API and the CS 
> scheduler; this JIRA adds the implementation for the FifoScheduler.
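For context, a hedged sketch of what per-application blacklisting amounts to 
inside a FIFO-style allocation loop; all identifiers below are illustrative, 
not the actual FifoScheduler API:

{code:java}
import java.util.List;
import java.util.Set;

// Illustrative only: pick the first node that the application has not
// blacklisted. A real scheduler integrates this check into its existing
// node-assignment path.
final class BlacklistAwareAssignment {
    static String pickHost(List<String> hosts, Set<String> appBlacklist) {
        for (String host : hosts) {
            if (appBlacklist.contains(host)) {
                continue; // never place this app's containers here
            }
            return host;
        }
        return null; // no eligible node at the moment
    }
}
{code}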



[jira] [Updated] (YARN-877) Allow for black-listing resources in FifoScheduler

2013-06-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-877:


Attachment: YARN-877-2.patch

Removed some unnecessary changes that fix warnings; we can address them 
separately later.

> Allow for black-listing resources in FifoScheduler
> --
>
> Key: YARN-877
> URL: https://issues.apache.org/jira/browse/YARN-877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-877-2.patch, YARN-877.patch
>
>
> YARN-750 already added black-list support to the YARN API and the CS 
> scheduler; this JIRA adds the implementation for the FifoScheduler.



[jira] [Commented] (YARN-62) AM should not be able to abuse container tokens for repetitive container launches

2013-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-62?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695908#comment-13695908
 ] 

Hadoop QA commented on YARN-62:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12590108/YARN-62-20130628.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1408//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1408//console

This message is automatically generated.

> AM should not be able to abuse container tokens for repetitive container 
> launches
> -
>
> Key: YARN-62
> URL: https://issues.apache.org/jira/browse/YARN-62
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-62-20130621.1.patch, YARN-62-20130621.patch, 
> YARN-62-20130628.patch
>
>
> Clone of YARN-51.
> An ApplicationMaster should not be able to store container tokens and reuse 
> the same set of tokens for repetitive container launches. The current code 
> allows such abuse for a window of 1d+10min; we need to fix this.
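A minimal sketch of the kind of NM-side guard this calls for, assuming the NM 
can see a container ID and the token's expiry time; the class and method names 
below are hypothetical, not the patch's actual code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical guard: remember container tokens already used for a launch
// and reject repeats until the token itself has expired.
final class UsedTokenTracker {
    // containerId -> token expiry (ms since epoch)
    private final Map<String, Long> used = new ConcurrentHashMap<>();

    boolean mayLaunch(String containerId, long tokenExpiryMs) {
        long now = System.currentTimeMillis();
        if (now > tokenExpiryMs) {
            return false;                          // token already expired
        }
        // A previous mapping means this token was used before: reject.
        if (used.putIfAbsent(containerId, tokenExpiryMs) != null) {
            return false;
        }
        used.values().removeIf(expiry -> expiry < now); // prune stale entries
        return true;
    }
}
{code}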



[jira] [Updated] (YARN-62) AM should not be able to abuse container tokens for repetitive container launches

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-62?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-62:
--

Attachment: (was: YARN-62-20130628.patch)

> AM should not be able to abuse container tokens for repetitive container 
> launches
> -
>
> Key: YARN-62
> URL: https://issues.apache.org/jira/browse/YARN-62
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-62-20130621.1.patch, YARN-62-20130621.patch, 
> YARN-62-20130628.patch
>
>
> Clone of YARN-51.
> An ApplicationMaster should not be able to store container tokens and reuse 
> the same set of tokens for repetitive container launches. The current code 
> allows such abuse for a window of 1d+10min; we need to fix this.



[jira] [Updated] (YARN-62) AM should not be able to abuse container tokens for repetitive container launches

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-62?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-62:
--

Attachment: YARN-62-20130628.patch

> AM should not be able to abuse container tokens for repetitive container 
> launches
> -
>
> Key: YARN-62
> URL: https://issues.apache.org/jira/browse/YARN-62
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-62-20130621.1.patch, YARN-62-20130621.patch, 
> YARN-62-20130628.patch
>
>
> Clone of YARN-51.
> An ApplicationMaster should not be able to store container tokens and reuse 
> the same set of tokens for repetitive container launches. The current code 
> allows such abuse for a window of 1d+10min; we need to fix this.



[jira] [Commented] (YARN-890) The roundup for memory values on resource manager UI is misleading

2013-06-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695891#comment-13695891
 ] 

Karthik Kambatla commented on YARN-890:
---

Hi Trupti,

Can you provide more information, maybe a screenshot of the resource manager 
UI you are referring to?

> The roundup for memory values on resource manager UI is misleading
> --
>
> Key: YARN-890
> URL: https://issues.apache.org/jira/browse/YARN-890
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Trupti Dhavle
>
> From yarn-site.xml, I see the following values:
> <property>
>   <name>yarn.nodemanager.resource.memory-mb</name>
>   <value>4192</value>
> </property>
> <property>
>   <name>yarn.scheduler.maximum-allocation-mb</name>
>   <value>4192</value>
> </property>
> <property>
>   <name>yarn.scheduler.minimum-allocation-mb</name>
>   <value>1024</value>
> </property>
> However, the resourcemanager UI shows total memory as 5MB.



[jira] [Updated] (YARN-62) AM should not be able to abuse container tokens for repetitive container launches

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-62?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-62:
--

Attachment: YARN-62-20130628.patch

> AM should not be able to abuse container tokens for repetitive container 
> launches
> -
>
> Key: YARN-62
> URL: https://issues.apache.org/jira/browse/YARN-62
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-62-20130621.1.patch, YARN-62-20130621.patch, 
> YARN-62-20130628.patch
>
>
> Clone of YARN-51.
> An ApplicationMaster should not be able to store container tokens and reuse 
> the same set of tokens for repetitive container launches. The current code 
> allows such abuse for a window of 1d+10min; we need to fix this.



[jira] [Commented] (YARN-62) AM should not be able to abuse container tokens for repetitive container launches

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-62?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695883#comment-13695883
 ] 

Omkar Vinit Joshi commented on YARN-62:
---

Fixing test issues.

> AM should not be able to abuse container tokens for repetitive container 
> launches
> -
>
> Key: YARN-62
> URL: https://issues.apache.org/jira/browse/YARN-62
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-62-20130621.1.patch, YARN-62-20130621.patch
>
>
> Clone of YARN-51.
> An ApplicationMaster should not be able to store container tokens and reuse 
> the same set of tokens for repetitive container launches. The current code 
> allows such abuse for a window of 1d+10min; we need to fix this.



[jira] [Commented] (YARN-814) Difficult to diagnose a failed container launch when error due to invalid environment variable

2013-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695874#comment-13695874
 ] 

Hadoop QA commented on YARN-814:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590098/YARN-814.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1407//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1407//console

This message is automatically generated.

> Difficult to diagnose a failed container launch when error due to invalid 
> environment variable
> --
>
> Key: YARN-814
> URL: https://issues.apache.org/jira/browse/YARN-814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Shah
>Assignee: Jian He
> Attachments: YARN-814.1.patch, YARN-814.2.patch, YARN-814.patch
>
>
> The container's launch script sets up environment variables, symlinks, etc. 
> If there is any failure when setting up this basic context (before the 
> actual user's process is launched), nothing is captured by the NM. This 
> makes it impossible to diagnose the reason for the failure. 
> To reproduce, set an env var whose value contains characters that cause 
> syntax errors in bash. 
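For illustration, a sketch of why a bad value breaks the launch before 
anything useful is logged; the export format here is a simplification of what 
the NM actually writes into the launch script:

{code:java}
// Simplified: the NM writes each environment entry into the container
// launch script as an export line. A value with an unmatched quote turns
// the generated line into a bash syntax error, so the script dies before
// the user's command runs and before any diagnostics are captured.
public class BrokenExportDemo {
    public static void main(String[] args) {
        String key = "BROKEN_VAR";
        String val = "foo\"bar";                        // unmatched quote
        String line = "export " + key + "=\"" + val + "\"";
        System.out.println(line);
        // prints: export BROKEN_VAR="foo"bar"
        // -> bash: unexpected EOF while looking for matching `"'
    }
}
{code}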



[jira] [Commented] (YARN-513) Create common proxy client for communicating with RM

2013-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695859#comment-13695859
 ] 

Hadoop QA commented on YARN-513:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590095/YARN-513.13.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1406//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1406//console

This message is automatically generated.

> Create common proxy client for communicating with RM
> 
>
> Key: YARN-513
> URL: https://issues.apache.org/jira/browse/YARN-513
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Jian He
> Attachments: YARN-513.10.patch, YARN-513.11.patch, YARN-513.12.patch, 
> YARN-513.13.patch, YARN-513.1.patch, YARN-513.2.patch, YARN-513.3.patch, 
> YARN-513.4.patch, YARN.513.5.patch, YARN-513.6.patch, YARN-513.7.patch, 
> YARN-513.8.patch, YARN-513.9.patch
>
>
> When the RM is restarting, the NM, AM and Clients should wait for some time 
> for the RM to come back up.
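A hedged sketch of the usual retry-proxy pattern behind such a client; this is 
not the patch's actual code, just one way to express "retry while the RM is 
down", with all names illustrative:

{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.net.ConnectException;

// Hypothetical retry proxy: wrap any protocol interface so that calls
// failing with ConnectException (RM unreachable) are retried after a wait.
final class RetryProxySketch {
    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> protocol, T target, int maxRetries, long waitMs) {
        InvocationHandler h = (proxy, method, args) -> {
            for (int attempt = 0; ; attempt++) {
                try {
                    return method.invoke(target, args);
                } catch (InvocationTargetException e) {
                    if (!(e.getCause() instanceof ConnectException)
                        || attempt >= maxRetries) {
                        throw e.getCause();   // give up, surface real error
                    }
                    Thread.sleep(waitMs);     // RM may be restarting; wait
                }
            }
        };
        return (T) Proxy.newProxyInstance(
            protocol.getClassLoader(), new Class<?>[] {protocol}, h);
    }
}
{code}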



[jira] [Updated] (YARN-814) Difficult to diagnose a failed container launch when error due to invalid environment variable

2013-06-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-814:
-

Attachment: YARN-814.2.patch

> Difficult to diagnose a failed container launch when error due to invalid 
> environment variable
> --
>
> Key: YARN-814
> URL: https://issues.apache.org/jira/browse/YARN-814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Shah
>Assignee: Jian He
> Attachments: YARN-814.1.patch, YARN-814.2.patch, YARN-814.patch
>
>
> The container's launch script sets up environment variables, symlinks, etc. 
> If there is any failure when setting up this basic context (before the 
> actual user's process is launched), nothing is captured by the NM. This 
> makes it impossible to diagnose the reason for the failure. 
> To reproduce, set an env var whose value contains characters that cause 
> syntax errors in bash. 



[jira] [Updated] (YARN-513) Create common proxy client for communicating with RM

2013-06-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-513:
-

Attachment: YARN-513.13.patch

Rebased on the latest trunk.

> Create common proxy client for communicating with RM
> 
>
> Key: YARN-513
> URL: https://issues.apache.org/jira/browse/YARN-513
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Jian He
> Attachments: YARN-513.10.patch, YARN-513.11.patch, YARN-513.12.patch, 
> YARN-513.13.patch, YARN-513.1.patch, YARN-513.2.patch, YARN-513.3.patch, 
> YARN-513.4.patch, YARN.513.5.patch, YARN-513.6.patch, YARN-513.7.patch, 
> YARN-513.8.patch, YARN-513.9.patch
>
>
> When the RM is restarting, the NM, AM and Clients should wait for some time 
> for the RM to come back up.



[jira] [Created] (YARN-890) The roundup for memory values on resource manager UI is misleading

2013-06-28 Thread Trupti Dhavle (JIRA)
Trupti Dhavle created YARN-890:
--

 Summary: The roundup for memory values on resource manager UI is 
misleading
 Key: YARN-890
 URL: https://issues.apache.org/jira/browse/YARN-890
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Trupti Dhavle



From yarn-site.xml, I see the following values:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4192</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4192</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>

However, the resourcemanager UI shows total memory as 5MB.
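Presumably (an assumption, not confirmed in this thread) the UI rounds the 
node's memory up to a multiple of yarn.scheduler.minimum-allocation-mb before 
rendering it in GB, so 4192 MB becomes 5120 MB and is shown as "5"; a minimal 
sketch of that roundup:

{code:java}
// Assumed roundup behaviour: ceiling to the nearest multiple of the
// minimum allocation. 4192 MB with a 1024 MB step yields 5120 MB (5 GB),
// which would explain the "5" on the UI (the report's "MB" unit reads
// like a typo for GB).
public class RoundupDemo {
    static long roundUpMb(long valueMb, long stepMb) {
        return ((valueMb + stepMb - 1) / stepMb) * stepMb;
    }
    public static void main(String[] args) {
        System.out.println(roundUpMb(4192, 1024)); // 5120
    }
}
{code}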




[jira] [Commented] (YARN-62) AM should not be able to abuse container tokens for repetitive container launches

2013-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-62?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695766#comment-13695766
 ] 

Hadoop QA commented on YARN-62:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12589212/YARN-62-20130621.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.TestApplication
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerReboot
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1405//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1405//console

This message is automatically generated.

> AM should not be able to abuse container tokens for repetitive container 
> launches
> -
>
> Key: YARN-62
> URL: https://issues.apache.org/jira/browse/YARN-62
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-62-20130621.1.patch, YARN-62-20130621.patch
>
>
> Clone of YARN-51.
> An ApplicationMaster should not be able to store container tokens and reuse 
> the same set of tokens for repetitive container launches. The current code 
> allows such abuse for a window of 1d+10min; we need to fix this.



[jira] [Commented] (YARN-744) Race condition in ApplicationMasterService.allocate .. It might process same allocate request twice resulting in additional containers getting allocated.

2013-06-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695757#comment-13695757
 ] 

Hadoop QA commented on YARN-744:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590077/YARN-744.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1404//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1404//console

This message is automatically generated.

> Race condition in ApplicationMasterService.allocate .. It might process same 
> allocate request twice resulting in additional containers getting allocated.
> -
>
> Key: YARN-744
> URL: https://issues.apache.org/jira/browse/YARN-744
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Omkar Vinit Joshi
> Attachments: MAPREDUCE-3899-branch-0.23.patch, YARN-744.patch
>
>
> The locking here looks broken. The code takes a lock on the lastResponse 
> object and then puts a new lastResponse object into the map. At that point a 
> new thread entering this function will get the new lastResponse object, take 
> its lock, and enter the critical section. Presumably we want to limit 
> responses to one per app attempt, so the lock could instead be taken on the 
> ApplicationAttemptId key of the response map.



[jira] [Resolved] (YARN-718) Remove RemoteUGI.getRemoteUser check from startContainer in ContainerManagerImpl as this is no longer required

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi resolved YARN-718.


Resolution: Invalid

YARN-613 fixes this: getUser is required, and containerId is replaced by 
applicationAttemptId.

> Remove RemoteUGI.getRemoteUser check from startContainer in 
> ContainerManagerImpl as this is no longer required
> --
>
> Key: YARN-718
> URL: https://issues.apache.org/jira/browse/YARN-718
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
>
> Earlier there was a check in startContainer validating that 
> RemoteUGI.getRemoteUser is the same as containerId. However, this check is 
> no longer required and should be removed. YARN-699 and YARN-715 will be 
> fixed by this.



[jira] [Updated] (YARN-744) Race condition in ApplicationMasterService.allocate .. It might process same allocate request twice resulting in additional containers getting allocated.

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-744:
---

Attachment: YARN-744.patch

> Race condition in ApplicationMasterService.allocate .. It might process same 
> allocate request twice resulting in additional containers getting allocated.
> -
>
> Key: YARN-744
> URL: https://issues.apache.org/jira/browse/YARN-744
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Omkar Vinit Joshi
> Attachments: MAPREDUCE-3899-branch-0.23.patch, YARN-744.patch
>
>
> The locking here looks broken. The code takes a lock on the lastResponse 
> object and then puts a new lastResponse object into the map. At that point a 
> new thread entering this function will get the new lastResponse object, take 
> its lock, and enter the critical section. Presumably we want to limit 
> responses to one per app attempt, so the lock could instead be taken on the 
> ApplicationAttemptId key of the response map.



[jira] [Commented] (YARN-744) Race condition in ApplicationMasterService.allocate .. It might process same allocate request twice resulting in additional containers getting allocated.

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695706#comment-13695706
 ] 

Omkar Vinit Joshi commented on YARN-744:


The problem here is that we retrieve the last response from the response map 
and then try to grab a lock on it. However, after grabbing the lock we don't 
check whether the last response in the map was itself updated, which results 
in the race condition I am trying to solve here. After grabbing the lock, an 
additional check has to be made to ensure that lastResponse was not changed in 
between, i.e. that no other AM requests were processed.
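A hedged sketch of the fix being described, with illustrative names rather 
than the actual ApplicationMasterService code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only. The broken pattern locks the very object it then
// replaces in the map, so a second thread can lock the *new* object and
// allocate again. The fix: after taking the lock, re-read the map and
// bail out if lastResponse was swapped in the meantime.
final class AllocateSketch {
    static final class Response { /* payload elided */ }

    private final Map<String, Response> responseMap = new ConcurrentHashMap<>();

    void register(String attemptId) {
        responseMap.put(attemptId, new Response()); // seeded at attempt start
    }

    Response allocate(String attemptId) {
        Response lastResponse = responseMap.get(attemptId);
        synchronized (lastResponse) {
            // Re-check after locking: did another thread already respond?
            if (responseMap.get(attemptId) != lastResponse) {
                return responseMap.get(attemptId); // already processed
            }
            Response fresh = new Response();       // do the real allocation
            responseMap.put(attemptId, fresh);
            return fresh;
        }
    }
}
{code}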

> Race condition in ApplicationMasterService.allocate .. It might process same 
> allocate request twice resulting in additional containers getting allocated.
> -
>
> Key: YARN-744
> URL: https://issues.apache.org/jira/browse/YARN-744
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Omkar Vinit Joshi
> Attachments: MAPREDUCE-3899-branch-0.23.patch
>
>
> The locking here looks broken. The code takes a lock on the lastResponse 
> object and then puts a new lastResponse object into the map. At that point a 
> new thread entering this function will get the new lastResponse object, take 
> its lock, and enter the critical section. Presumably we want to limit 
> responses to one per app attempt, so the lock could instead be taken on the 
> ApplicationAttemptId key of the response map.



[jira] [Updated] (YARN-744) Race condition in ApplicationMasterService.allocate .. It might process same allocate request twice resulting in additional containers getting allocated.

2013-06-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-744:
---

Summary: Race condition in ApplicationMasterService.allocate .. It might 
process same allocate request twice resulting in additional containers getting 
allocated.  (was: Locking not correct in 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(AllocateRequest
 request))

> Race condition in ApplicationMasterService.allocate .. It might process same 
> allocate request twice resulting in additional containers getting allocated.
> -
>
> Key: YARN-744
> URL: https://issues.apache.org/jira/browse/YARN-744
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Omkar Vinit Joshi
> Attachments: MAPREDUCE-3899-branch-0.23.patch
>
>
> The locking here looks broken. The code takes a lock on the lastResponse 
> object and then puts a new lastResponse object into the map. At that point a 
> new thread entering this function will get the new lastResponse object, take 
> its lock, and enter the critical section. Presumably we want to limit 
> responses to one per app attempt, so the lock could instead be taken on the 
> ApplicationAttemptId key of the response map.



[jira] [Commented] (YARN-883) Expose Fair Scheduler-specific queue metrics

2013-06-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695685#comment-13695685
 ] 

Hudson commented on YARN-883:
-

Integrated in Hadoop-trunk-Commit #4018 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4018/])
YARN-883. Expose Fair Scheduler-specific queue metrics. (sandyr via tucu) 
(Revision 1497884)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1497884
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AppSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/QueueManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerPage.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSLeafQueue.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


> Expose Fair Scheduler-specific queue metrics
> 
>
> Key: YARN-883
> URL: https://issues.apache.org/jira/browse/YARN-883
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.0.5-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Fix For: 2.2.0
>
> Attachments: YARN-883-1.patch, YARN-883-1.patch, YARN-883.patch
>
>
> When the Fair Scheduler is enabled, QueueMetrics should include fair share, 
> minimum share, and maximum share.
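For context, the kind of gauges this adds looks roughly like the following; 
the field names are illustrative, not necessarily those in the committed 
FSQueueMetrics:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Illustrative sketch of Fair Scheduler queue gauges; the real subclass of
// QueueMetrics is registered with the metrics system, which injects the
// @Metric fields.
@Metrics(context = "yarn")
class FSQueueMetricsSketch {
    @Metric("Fair share of memory in MB")    MutableGaugeLong fairShareMB;
    @Metric("Minimum share of memory in MB") MutableGaugeLong minShareMB;
    @Metric("Maximum share of memory in MB") MutableGaugeLong maxShareMB;

    void setFairShareMB(long mb) { fairShareMB.set(mb); }
}
{code}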



[jira] [Commented] (YARN-461) Fair scheduler accepts apps with empty string names and queues

2013-06-28 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695679#comment-13695679
 ] 

Wei Yan commented on YARN-461:
--

Got it. I'll update the patch.

> Fair scheduler accepts apps with empty string names and queues
> --
>
> Key: YARN-461
> URL: https://issues.apache.org/jira/browse/YARN-461
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-461.patch, YARN-461.patch, YARN-461.patch
>
>
> When an app is submitted with "" for the name or queue, the RMAppManager 
> passes it on like it does with any other string.
> Instead it should probably use the default name and queue, as it does when it 
> encounters a null name or queue.



[jira] [Commented] (YARN-814) Difficult to diagnose a failed container launch when error due to invalid environment variable

2013-06-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695678#comment-13695678
 ] 

Jian He commented on YARN-814:
--

Kicking Jenkins.

> Difficult to diagnose a failed container launch when error due to invalid 
> environment variable
> --
>
> Key: YARN-814
> URL: https://issues.apache.org/jira/browse/YARN-814
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Shah
>Assignee: Jian He
> Attachments: YARN-814.1.patch, YARN-814.patch
>
>
> The container's launch script sets up environment variables, symlinks, etc. 
> If there is any failure when setting up this basic context (before the 
> actual user's process is launched), nothing is captured by the NM. This 
> makes it impossible to diagnose the reason for the failure. 
> To reproduce, set an env var whose value contains characters that cause 
> syntax errors in bash. 



[jira] [Commented] (YARN-461) Fair scheduler accepts apps with empty string names and queues

2013-06-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695675#comment-13695675
 ] 

Sandy Ryza commented on YARN-461:
-

Per Arun's suggestion, we don't need to make changes to the common code 
(RMAppManager).  We should reject the application inside the Fair Scheduler.
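A minimal sketch of that rejection inside the scheduler's submission path, 
with hypothetical names rather than the actual FairScheduler API:

{code:java}
// Hypothetical validation at app submission: reject empty names/queues in
// the Fair Scheduler itself instead of patching RMAppManager.
final class SubmissionValidator {
    /** Returns an error message for an invalid submission, or null if OK. */
    static String validate(String appName, String queueName) {
        if (queueName == null || queueName.isEmpty()) {
            return "Rejecting app: empty queue name";
        }
        if (appName == null || appName.isEmpty()) {
            return "Rejecting app: empty application name";
        }
        return null;
    }
}
{code}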

> Fair scheduler accepts apps with empty string names and queues
> --
>
> Key: YARN-461
> URL: https://issues.apache.org/jira/browse/YARN-461
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-461.patch, YARN-461.patch, YARN-461.patch
>
>
> When an app is submitted with "" for the name or queue, the RMAppManager 
> passes it on like it does with any other string.
> Instead it should probably use the default name and queue, as it does when it 
> encounters a null name or queue.



[jira] [Created] (YARN-889) Make bars in Fair Scheduler web UI show multiple resources

2013-06-28 Thread Sandy Ryza (JIRA)
Sandy Ryza created YARN-889:
---

 Summary: Make bars in Fair Scheduler web UI show multiple resources
 Key: YARN-889
 URL: https://issues.apache.org/jira/browse/YARN-889
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.5-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza


The Fair Scheduler web UI contains bars that make it easier to visualize 
resource usage relative to fair share and capacity.  Currently these bars 
reflect only memory metrics; they should be augmented to display other 
resources as well.



[jira] [Commented] (YARN-883) Expose Fair Scheduler-specific queue metrics

2013-06-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695663#comment-13695663
 ] 

Sandy Ryza commented on YARN-883:
-

Thanks, just filed YARN-889

> Expose Fair Scheduler-specific queue metrics
> 
>
> Key: YARN-883
> URL: https://issues.apache.org/jira/browse/YARN-883
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.0.5-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-883-1.patch, YARN-883-1.patch, YARN-883.patch
>
>
> When the Fair Scheduler is enabled, QueueMetrics should include fair share, 
> minimum share, and maximum share.



[jira] [Commented] (YARN-883) Expose Fair Scheduler-specific queue metrics

2013-06-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695661#comment-13695661
 ] 

Alejandro Abdelnur commented on YARN-883:
-

+1, LGTM. Are you following up with a JIRA to improve the UI so the collapsed 
tree shows both CPU and MEM? Please do so.

> Expose Fair Scheduler-specific queue metrics
> 
>
> Key: YARN-883
> URL: https://issues.apache.org/jira/browse/YARN-883
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.0.5-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-883-1.patch, YARN-883-1.patch, YARN-883.patch
>
>
> When the Fair Scheduler is enabled, QueueMetrics should include fair share, 
> minimum share, and maximum share.



[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-06-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695656#comment-13695656
 ] 

Karthik Kambatla commented on YARN-353:
---

Thanks, Bikas. Mostly looks good. Can you address the findbugs warnings and 
rebase against trunk? I will post a detailed review (a couple of nits) on the 
updated patch.

> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Bikas Saha
> Attachments: YARN-353.1.patch
>
>
> Add a store that writes RM state data to ZK.



[jira] [Commented] (YARN-888) clean up POM dependencies

2013-06-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695649#comment-13695649
 ] 

Alejandro Abdelnur commented on YARN-888:
-

This JIRA should be followed up by a JIRA that fixes dependencies based on the 
dependencies:analyze report.

> clean up POM dependencies
> -
>
> Key: YARN-888
> URL: https://issues.apache.org/jira/browse/YARN-888
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Alejandro Abdelnur
>
> Intermediate 'pom' modules define dependencies that are inherited by leaf 
> modules.
> This is causing issues in the IntelliJ IDE.
> We should normalize the leaf modules as in common, hdfs, and tools, where 
> all dependencies are defined in each leaf module and the intermediate 'pom' 
> modules do not define any dependencies.



[jira] [Created] (YARN-888) clean up POM dependencies

2013-06-28 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created YARN-888:
---

 Summary: clean up POM dependencies
 Key: YARN-888
 URL: https://issues.apache.org/jira/browse/YARN-888
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Alejandro Abdelnur


Intermediate 'pom' modules define dependencies that are inherited by leaf 
modules.

This is causing issues in the IntelliJ IDE.

We should normalize the leaf modules as in common, hdfs, and tools, where all 
dependencies are defined in each leaf module and the intermediate 'pom' 
modules do not define any dependencies.
