[jira] [Commented] (YARN-568) FairScheduler: support for work-preserving preemption

2013-05-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661796#comment-13661796
 ] 

Hudson commented on YARN-568:
-

Integrated in Hadoop-trunk-Commit #3768 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3768/])
YARN-568. Followup; remove an unused field added to Allocation (Revision 
1484375)

 Result = SUCCESS
cdouglas : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1484375
Files : 
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Allocation.java


> FairScheduler: support for work-preserving preemption 
> --
>
> Key: YARN-568
> URL: https://issues.apache.org/jira/browse/YARN-568
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.0.5-beta
>
> Attachments: YARN-568-1.patch, YARN-568-2.patch, YARN-568-2.patch, 
> YARN-568.patch, YARN-568.patch
>
>
> In the attached patch, we modified the FairScheduler to substitute its 
> preemption-by-killing with a work-preserving version of preemption (followed 
> by killing if the AMs do not respond quickly enough). This should allow us to 
> run preemption checking more often, but kill less often (proper tuning to be 
> investigated). It depends on YARN-567 and YARN-45 and is related to YARN-569.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-568) FairScheduler: support for work-preserving preemption

2013-05-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661793#comment-13661793
 ] 

Chris Douglas commented on YARN-568:


bq. From the code in generatePreemptionMessage() the overlap between strict and 
fungible is not obvious. Can both be sent?

Yes. From the discussion in YARN-45, it seemed the consensus was that the RM 
may want to send a mix of both requests. Does that still make sense?
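For reference, here is a minimal sketch of how an AM might consume a message carrying 
both contracts, using the PreemptionMessage accessors from the YARN-45 records (the 
helper methods are placeholders for AM-specific logic):
{code}
// Minimal sketch (helper methods and the surrounding class are placeholders).
// The strict contract lists containers the RM will kill regardless; the
// fungible contract may be satisfied by returning either the enumerated
// containers or equivalent resources.
void handlePreemption(AllocateResponse response) {
  PreemptionMessage msg = response.getPreemptionMessage();
  if (msg == null) {
    return;                                  // nothing requested this heartbeat
  }
  if (msg.getStrictContract() != null) {
    for (PreemptionContainer c : msg.getStrictContract().getContainers()) {
      checkpointAndRelease(c.getId());       // will be killed; save state now
    }
  }
  if (msg.getContract() != null) {
    for (PreemptionContainer c : msg.getContract().getContainers()) {
      considerReleasing(c.getId());          // or free equivalent resources
    }
  }
}
{code}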

bq. Unused new member seems to have been added: recordFactory?

Sorry, an artifact of a previous version. Cleaned up in a followup commit.

> FairScheduler: support for work-preserving preemption 
> --
>
> Key: YARN-568
> URL: https://issues.apache.org/jira/browse/YARN-568
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.0.5-beta
>
> Attachments: YARN-568-1.patch, YARN-568-2.patch, YARN-568-2.patch, 
> YARN-568.patch, YARN-568.patch
>
>
> In the attached patch, we modified the FairScheduler to substitute its 
> preemption-by-killing with a work-preserving version of preemption (followed 
> by killing if the AMs do not respond quickly enough). This should allow us to 
> run preemption checking more often, but kill less often (proper tuning to be 
> investigated). It depends on YARN-567 and YARN-45 and is related to YARN-569.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-392) Make it possible to specify hard locality constraints in resource requests

2013-05-19 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661767#comment-13661767
 ] 

Sandy Ryza commented on YARN-392:
-

We currently have a working patch that has gone through multiple phases of 
review.  This patch implements the proposal made by Arun on YARN-398, which 
many comments led me to believe we had consensus on. The approach enables 
whitelisting by setting a disable-allocation flag on certain requests, so some 
form of "blacklisting" is a natural extension of it. The changes to the 
scheduler are about 10 lines. Modifying the proposal to support *only* 
whitelisting would require many additional changes, and do nothing to simplify 
the current changes.
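To illustrate the shape of the API (the setter name and the createReq() helper are 
hypothetical, pending the final patch), whitelisting a single node would look roughly 
like:
{code}
// Sketch of the whitelisting pattern under the proposed flag. The node-level
// request stays schedulable, while the rack and ANY ("*") requests carry the
// flag so the scheduler never relaxes locality past the node.
ResourceRequest nodeReq = createReq("node1.example.com", priority, capability, 1);
ResourceRequest rackReq = createReq("/rack1", priority, capability, 1);
ResourceRequest anyReq  = createReq("*", priority, capability, 1);
rackReq.setDisableAllocation(true);   // do not fall back to the rack
anyReq.setDisableAllocation(true);    // do not fall back off-rack
{code}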

As I said, if everyone else participating agrees on these additional changes, I 
am happy to implement them. But my opinion is that the best way to get this 
into 2.0.5, both in terms of soundness of the approach and in terms of 
punctuality, is to go with what we have worked on so far.

> Make it possible to specify hard locality constraints in resource requests
> --
>
> Key: YARN-392
> URL: https://issues.apache.org/jira/browse/YARN-392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Sandy Ryza
> Attachments: YARN-392-1.patch, YARN-392-2.patch, YARN-392-2.patch, 
> YARN-392-2.patch, YARN-392-3.patch, YARN-392-4.patch, YARN-392.patch
>
>
> Currently it's not possible to specify scheduling requests for specific nodes 
> and nowhere else. The RM automatically relaxes locality to rack and * and 
> assigns non-specified machines to the app.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-701) ApplicationTokens should be used irrespective of Kerberos

2013-05-19 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created YARN-701:


 Summary: ApplicationTokens should be used irrespective of Kerberos
 Key: YARN-701
 URL: https://issues.apache.org/jira/browse/YARN-701
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli


 - A single code path for secure and non-secure cases is useful for testing and 
coverage.
 - Having this in non-secure mode will help us avoid accidental bugs in AMs 
DDoS'ing and bringing down the RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-624) Support gang scheduling in the AM RM protocol

2013-05-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661750#comment-13661750
 ] 

Vinod Kumar Vavilapalli commented on YARN-624:
--

While I see a general use for this, and that having it will be 'exciting', I'd 
suggest that we first figure out real use cases, like an application that 
already needs this. We need to have some ML-type applications that Carlo 
mentioned on board before we attempt this. Otherwise, we'll just be 
implementing, out of priority order, random features that no one needs. We 
could implement tons of scheduling features; let's prioritize and implement 
them in that order.

> Support gang scheduling in the AM RM protocol
> -
>
> Key: YARN-624
> URL: https://issues.apache.org/jira/browse/YARN-624
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>
> Per discussion on YARN-392 and elsewhere, gang scheduling, in which a 
> scheduler runs a set of tasks when they can all be run at the same time, 
> would be a useful feature for YARN schedulers to support.
> Currently, AMs can approximate this by holding on to containers until they 
> get all the ones they need.  However, this lends itself to deadlocks when 
> different AMs are waiting on the same containers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-392) Make it possible to specify hard locality constraints in resource requests

2013-05-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661747#comment-13661747
 ] 

Vinod Kumar Vavilapalli commented on YARN-392:
--

bq. The most common use case of black listing is to specify a set of nodes on 
which no allocations should be made
bq. I am not suggesting that this blacklisting mechanism is there to address 
the most common case. ...
IIUC, there is no point in supporting black-listing per resource-type. I don't 
see a use-case for it. When you blacklist a node or a rack, you blacklist it. 
You don't blacklist it for 5GB/5-core containers but want to use it for 
1GB/1-core containers.

Still catching up on the discussion, but I wanted to say that this has gone on 
for too long. We should try and get this into 2.0.5.

Sandy/Bikas, can we just focus this on 'white-listing' per resource type 
through the flag that was proposed (which seems to have been the consensus 
earlier) and use YARN-395 for blacklisting? I can close YARN-398 as a 
duplicate.

> Make it possible to specify hard locality constraints in resource requests
> --
>
> Key: YARN-392
> URL: https://issues.apache.org/jira/browse/YARN-392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Sandy Ryza
> Attachments: YARN-392-1.patch, YARN-392-2.patch, YARN-392-2.patch, 
> YARN-392-2.patch, YARN-392-3.patch, YARN-392-4.patch, YARN-392.patch
>
>
> Currently it's not possible to specify scheduling requests for specific nodes 
> and nowhere else. The RM automatically relaxes locality to rack and * and 
> assigns non-specified machines to the app.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-392) Make it possible to specify hard locality constraints in resource requests

2013-05-19 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661705#comment-13661705
 ] 

Sandy Ryza commented on YARN-392:
-

As I mentioned, any mechanism that allows whitelisting also allows blacklisting 
by definition, as it is always possible to whitelist all the nodes except the 
one that isn't wanted.  So I don't see it as overloading. 

bq. The most common use case of black listing is to specify a set of nodes on 
which no allocations should be made
I am not suggesting that this blacklisting mechanism is there to address the 
most common case.  Just as the most common use of delay scheduling is probably 
through a cluster-wide setting, yet per-request customization of the kind you 
suggested earlier on this thread would still be useful, the ability to 
blacklist nodes for specific requests does not preclude a cluster-wide setting 
that addresses the common case.

Does the following seem like a fair representation of the mechanics of the 
alternative?  When a node-level request comes with disableAllocation=true, an 
InvalidAllocationException is thrown.  When a rack-level request comes with 
disableAllocation=true, we check to make sure that there are nodes under it.  
If not, an InvalidAllocationException is thrown.  When a node-level request is 
cancelled, we check the rack above it to make sure that if its 
disableAllocation=true, there are other non-zero node-level requests below it.  
If not, we throw an InvalidAllocationException.  To me, this seems both more 
complicated and unnecessarily limited in functionality.  That said, if we can 
get some consensus on an alternative, I am happy to implement that instead.
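
For concreteness, a rough sketch of that validation (names hypothetical, since 
InvalidAllocationException is part of the alternative proposal, not an existing 
class):
{code}
// Rough shape of the validation described above. Names are hypothetical, and
// InvalidAllocationException belongs to the alternative proposal, not to an
// existing YARN class.
void validate(ResourceRequest req, AppRequests app) {
  if (!req.getDisableAllocation()) {
    return;
  }
  String name = req.getResourceName();
  if (isNode(name)) {
    // A node-level request that can never be allocated is meaningless.
    throw new InvalidAllocationException("disableAllocation on node " + name);
  }
  if (isRack(name) && !app.hasNonZeroNodeRequestsUnder(name)) {
    // A disabled rack needs at least one schedulable node request below it.
    throw new InvalidAllocationException("disabled rack with no node requests");
  }
  // Cancelling a node-level request re-triggers the rack check above.
}
{code}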


> Make it possible to specify hard locality constraints in resource requests
> --
>
> Key: YARN-392
> URL: https://issues.apache.org/jira/browse/YARN-392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Sandy Ryza
> Attachments: YARN-392-1.patch, YARN-392-2.patch, YARN-392-2.patch, 
> YARN-392-2.patch, YARN-392-3.patch, YARN-392-4.patch, YARN-392.patch
>
>
> Currently it's not possible to specify scheduling requests for specific nodes 
> and nowhere else. The RM automatically relaxes locality to rack and * and 
> assigns non-specified machines to the app.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-568) FairScheduler: support for work-preserving preemption

2013-05-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661691#comment-13661691
 ] 

Bikas Saha commented on YARN-568:
-

From the code in generatePreemptionMessage(), the overlap between strict and 
fungible is not obvious. Can both be sent?
{code}
+  private PreemptionMessage generatePreemptionMessage(Allocation allocation){
{code}

Unused new member seems to have been added: recordFactory?
{code}
 public class Allocation {
+  
+  private final RecordFactory recordFactory =
+  RecordFactoryProvider.getRecordFactory(null);
+
{code}

> FairScheduler: support for work-preserving preemption 
> --
>
> Key: YARN-568
> URL: https://issues.apache.org/jira/browse/YARN-568
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.0.5-beta
>
> Attachments: YARN-568-1.patch, YARN-568-2.patch, YARN-568-2.patch, 
> YARN-568.patch, YARN-568.patch
>
>
> In the attached patch, we modified the FairScheduler to substitute its 
> preemption-by-killing with a work-preserving version of preemption (followed 
> by killing if the AMs do not respond quickly enough). This should allow us to 
> run preemption checking more often, but kill less often (proper tuning to be 
> investigated). It depends on YARN-567 and YARN-45 and is related to YARN-569.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-700) TestInfoBlock fails on Windows because of line ending mismatch

2013-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661675#comment-13661675
 ] 

Hadoop QA commented on YARN-700:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12583782/YARN-700.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/956//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/956//console

This message is automatically generated.

> TestInfoBlock fails on Windows because of line ending mismatch
> ---
>
> Key: YARN-700
> URL: https://issues.apache.org/jira/browse/YARN-700
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: YARN-700.patch
>
>
> Exception:
> {noformat}
> Running org.apache.hadoop.yarn.webapp.view.TestInfoBlock
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec <<< 
> FAILURE!
> testMultilineInfoBlock(org.apache.hadoop.yarn.webapp.view.TestInfoBlock)  
> Time elapsed: 873 sec  <<< FAILURE!
> java.lang.AssertionError: 
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at org.junit.Assert.assertTrue(Assert.java:54)
>   at 
> org.apache.hadoop.yarn.webapp.view.TestInfoBlock.testMultilineInfoBlock(TestInfoBlock.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-617) In unsecure mode, AM can fake resource requirements

2013-05-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661666#comment-13661666
 ] 

Hudson commented on YARN-617:
-

Integrated in Hadoop-trunk-Commit #3767 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3767/])
MAPREDUCE-5257. Fix issues in TestContainerLauncherImpl after YARN-617. 
Contributed by Omkar Vinit Joshi. (Revision 1484349)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1484349
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java


> In unsecure mode, AM can fake resource requirements 
> -
>
> Key: YARN-617
> URL: https://issues.apache.org/jira/browse/YARN-617
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Omkar Vinit Joshi
>Priority: Minor
> Fix For: 2.0.5-beta
>
> Attachments: YARN-617.20130501.1.patch, YARN-617.20130501.patch, 
> YARN-617.20130502.patch, YARN-617-20130507.patch, YARN-617.20130508.patch, 
> YARN-617-20130513.patch, YARN-617-20130515.patch, 
> YARN-617-20130516.branch-2.patch, YARN-617-20130516.trunk.patch
>
>
> Without security, it is impossible to completely avoid AMs faking resources. 
> We can at least make it as difficult as possible by using the same container 
> tokens and the RM-NM shared-key mechanism over the unauthenticated RM-NM 
> channel.
> At a minimum, this will avoid accidental bugs in AMs in unsecure mode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-700) TestInfoBlock fails on Windows because of line ending mismatch

2013-05-19 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated YARN-700:


Attachment: YARN-700.patch

Attaching the patch.

> TestInfoBlock fails on Windows because of line ending mismatch
> ---
>
> Key: YARN-700
> URL: https://issues.apache.org/jira/browse/YARN-700
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: YARN-700.patch
>
>
> Exception:
> {noformat}
> Running org.apache.hadoop.yarn.webapp.view.TestInfoBlock
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec <<< 
> FAILURE!
> testMultilineInfoBlock(org.apache.hadoop.yarn.webapp.view.TestInfoBlock)  
> Time elapsed: 873 sec  <<< FAILURE!
> java.lang.AssertionError: 
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at org.junit.Assert.assertTrue(Assert.java:54)
>   at 
> org.apache.hadoop.yarn.webapp.view.TestInfoBlock.testMultilineInfoBlock(TestInfoBlock.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-700) TestInfoBlock fails on Windows because of line ending mismatch

2013-05-19 Thread Ivan Mitic (JIRA)
Ivan Mitic created YARN-700:
---

 Summary: TestInfoBlock fails on Windows because of line ending 
mismatch
 Key: YARN-700
 URL: https://issues.apache.org/jira/browse/YARN-700
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Exception:
{noformat}
Running org.apache.hadoop.yarn.webapp.view.TestInfoBlock
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec <<< 
FAILURE!
testMultilineInfoBlock(org.apache.hadoop.yarn.webapp.view.TestInfoBlock)  Time 
elapsed: 873 sec  <<< FAILURE!
java.lang.AssertionError: 
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.hadoop.yarn.webapp.view.TestInfoBlock.testMultilineInfoBlock(TestInfoBlock.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
{noformat}
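
For context, a minimal illustration of the usual fix for this class of failure 
(hypothetical, not taken from the attached patch): normalize line endings before 
comparing multiline output.
{code}
// Illustrative only, not taken from the attached patch: normalizing line
// endings lets a multiline assertion hold under both "\n" and "\r\n".
private static String normalizeLineEndings(String s) {
  return s.replace("\r\n", "\n");
}

// e.g. assertTrue(normalizeLineEndings(output).contains(expected));
{code}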

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-699) TestUnmanagedAMLauncher fails with: Unauthorized request to start container

2013-05-19 Thread Ivan Mitic (JIRA)
Ivan Mitic created YARN-699:
---

 Summary: TestUnmanagedAMLauncher fails with: Unauthorized request 
to start container
 Key: YARN-699
 URL: https://issues.apache.org/jira/browse/YARN-699
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Just ran into this. Looks like YARN-617 regressed TestUnmanagedAMLauncher.

From the test log:
{noformat}
2013-05-19 12:39:10,631 INFO  distributedshell.ApplicationMaster 
(ApplicationMaster.java:run(682)) - Setting up container launch container for 
containerid=container_1368992334149_0001_01_01
2013-05-19 12:39:10,647 INFO  distributedshell.ApplicationMaster 
(ApplicationMaster.java:run(690)) - Setting user in ContainerLaunchContext to: 
ivanmi
2013-05-19 12:39:10,678 ERROR containermanager.ContainerManagerImpl 
(ContainerManagerImpl.java:authorizeRequest(412)) - Unauthorized request to 
start container. 
Expected containerId: ivanmi Found: container_1368992334149_0001_01_01
2013-05-19 12:39:10,678 ERROR security.UserGroupInformation 
(UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:ivanmi 
(auth:SIMPLE) cause:org.apache.hadoop.yarn.exceptions.YarnRemoteException: 
Unauthorized request to start container. 
Expected containerId: ivanmi Found: container_1368992334149_0001_01_01
2013-05-19 12:39:10,678 INFO  ipc.Server (Server.java:run(1864)) - IPC Server 
handler 5 on 49529, call 
org.apache.hadoop.yarn.api.ContainerManagerPB.startContainer from 
10.120.19.109:49566: error: 
org.apache.hadoop.yarn.exceptions.YarnRemoteException: Unauthorized request to 
start container. 
Expected containerId: ivanmi Found: container_1368992334149_0001_01_01
org.apache.hadoop.yarn.exceptions.YarnRemoteException: Unauthorized request to 
start container. 
Expected containerId: ivanmi Found: container_1368992334149_0001_01_01
at 
org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:43)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeRequest(ContainerManagerImpl.java:413)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:440)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagerPBServiceImpl.startContainer(ContainerManagerPBServiceImpl.java:72)
at 
org.apache.hadoop.yarn.proto.ContainerManager$ContainerManagerService$2.callBlockingMethod(ContainerManager.java:83)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
2013-05-19 12:39:10,678 INFO  distributedshell.ApplicationMaster 
(ApplicationMaster.java:run(761)) - Start container failed for :, 
containerId=container_1368992334149_0001_01_01
{noformat}

ContainerManagerImpl expected the containerId to be equal to the remote UGI's 
user name and, since this was not the case, failed the authorization:
{noformat}
Unauthorized request to start container. 
Expected containerId: ivanmi Found: container_1368992334149_0001_01_01
{noformat}
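
Reconstructed from the log above (not the exact source), the failing check in 
ContainerManagerImpl.authorizeRequest() has roughly this shape:
{code}
// Reconstructed from the log, not the exact source: in non-secure mode the
// NM expects the remote UGI's user name to carry the container id, so an
// unmanaged AM connecting as a plain user ("ivanmi") fails the comparison.
if (!remoteUgi.getUserName().equals(containerId.toString())) {
  throw RPCUtil.getRemoteException("Unauthorized request to start container. "
      + "\nExpected containerId: " + remoteUgi.getUserName()
      + " Found: " + containerId.toString());
}
{code}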

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-392) Make it possible to specify hard locality constraints in resource requests

2013-05-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661513#comment-13661513
 ] 

Bikas Saha commented on YARN-392:
-

I am not sure how we are making the semantics consistent by overloading one 
thing for two purposes. When the flag is set at a network hierarchy level, it 
means the scheduler will not relax locality beyond that level. The same flag 
can also be used to blacklist locations.
The most common use case of blacklisting is to specify a set of nodes on which 
no allocations should be made (e.g., they are badly behaving nodes). How does 
this scheme address that case? Will we have to specify the same blacklist 
information for every priority used by an application (because resource 
requests are per priority)? Every time an app uses a new priority, will we have 
to issue a new set of resource requests to blacklist at that priority?

> Make it possible to specify hard locality constraints in resource requests
> --
>
> Key: YARN-392
> URL: https://issues.apache.org/jira/browse/YARN-392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Sandy Ryza
> Attachments: YARN-392-1.patch, YARN-392-2.patch, YARN-392-2.patch, 
> YARN-392-2.patch, YARN-392-3.patch, YARN-392-4.patch, YARN-392.patch
>
>
> Currently it's not possible to specify scheduling requests for specific nodes 
> and nowhere else. The RM automatically relaxes locality to rack and * and 
> assigns non-specified machines to the app.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira