[jira] [Updated] (YARN-1128) FifoPolicy.computeShares throws NPE on empty list of Schedulables

2013-09-18 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1128:
---

Attachment: yarn-1128-1.patch

Straightforward patch.
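
A minimal sketch of the likely shape of the fix, assuming it simply guards the empty case (the actual change is in yarn-1128-1.patch; names follow the snippet quoted below):

{code}
// Hypothetical guard in FifoPolicy.computeShares.
if (schedulables.isEmpty()) {
  return;  // no schedulables, nothing to assign the fair share to
}
Schedulable earliest = null;
for (Schedulable schedulable : schedulables) {
  if (earliest == null ||
      schedulable.getStartTime() < earliest.getStartTime()) {
    earliest = schedulable;
  }
}
earliest.setFairShare(Resources.clone(totalResources));
{code}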

> FifoPolicy.computeShares throws NPE on empty list of Schedulables
> -
>
> Key: YARN-1128
> URL: https://issues.apache.org/jira/browse/YARN-1128
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Karthik Kambatla
> Attachments: yarn-1128-1.patch
>
>
> FifoPolicy gives all of a queue's share to the earliest-scheduled application.
> {code}
> Schedulable earliest = null;
> for (Schedulable schedulable : schedulables) {
>   if (earliest == null ||
>   schedulable.getStartTime() < earliest.getStartTime()) {
> earliest = schedulable;
>   }
> }
> earliest.setFairShare(Resources.clone(totalResources));
> {code}
> If the queue has no schedulables in it, earliest will be left null, leading 
> to an NPE on the last line.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1128) FifoPolicy.computeShares throws NPE on empty list of Schedulables

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770575#comment-13770575
 ] 

Hadoop QA commented on YARN-1128:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603779/yarn-1128-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1955//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1955//console

This message is automatically generated.

> FifoPolicy.computeShares throws NPE on empty list of Schedulables
> -
>
> Key: YARN-1128
> URL: https://issues.apache.org/jira/browse/YARN-1128
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Karthik Kambatla
> Attachments: yarn-1128-1.patch
>
>
> FifoPolicy gives all of a queue's share to the earliest-scheduled application.
> {code}
> Schedulable earliest = null;
> for (Schedulable schedulable : schedulables) {
>   if (earliest == null ||
>   schedulable.getStartTime() < earliest.getStartTime()) {
> earliest = schedulable;
>   }
> }
> earliest.setFairShare(Resources.clone(totalResources));
> {code}
> If the queue has no schedulables in it, earliest will be left null, leading 
> to an NPE on the last line.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1128) FifoPolicy.computeShares throws NPE on empty list of Schedulables

2013-09-18 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770823#comment-13770823
 ] 

Sandy Ryza commented on YARN-1128:
--

+1

> FifoPolicy.computeShares throws NPE on empty list of Schedulables
> -
>
> Key: YARN-1128
> URL: https://issues.apache.org/jira/browse/YARN-1128
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Karthik Kambatla
> Attachments: yarn-1128-1.patch
>
>
> FifoPolicy gives all of a queue's share to the earliest-scheduled application.
> {code}
> Schedulable earliest = null;
> for (Schedulable schedulable : schedulables) {
>   if (earliest == null ||
>   schedulable.getStartTime() < earliest.getStartTime()) {
> earliest = schedulable;
>   }
> }
> earliest.setFairShare(Resources.clone(totalResources));
> {code}
> If the queue has no schedulables in it, earliest will be left null, leading 
> to an NPE on the last line.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-819) ResourceManager and NodeManager should check for a minimum allowed version

2013-09-18 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated YARN-819:
---

Attachment: YARN-819-2.patch

> ResourceManager and NodeManager should check for a minimum allowed version
> --
>
> Key: YARN-819
> URL: https://issues.apache.org/jira/browse/YARN-819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
> Attachments: YARN-819-1.patch, YARN-819-2.patch
>
>
> Our use case is that, during an upgrade on a large cluster, several NodeManagers 
> may not restart with the new version. Once the RM comes back up, the NodeManagers 
> will re-register with the RM without issue.
> The NM should report its version to the RM. The RM should have a configuration 
> for the check: disabled (default), equal to the RM's version (to prevent a config 
> change for each release), equal to or greater than the RM's (to allow NM 
> upgrades), or an explicit version or version range.
> The RM should also have a configuration for how to treat a mismatch: 
> REJECT, or REBOOT the NM.
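
For illustration, a hedged sketch of what such a policy check might look like at NM registration time (the enum, method, and helper names are illustrative, not from YARN-819-2.patch; the explicit version/range case is omitted):

{code}
// Illustrative sketch only, not the actual patch.
enum VersionPolicy { NONE, EQUAL, EQUAL_OR_GREATER }

static boolean isVersionAllowed(VersionPolicy policy,
                                String nmVersion, String rmVersion) {
  switch (policy) {
    case NONE:             return true;                        // check disabled (default)
    case EQUAL:            return nmVersion.equals(rmVersion); // must match the RM exactly
    case EQUAL_OR_GREATER: return compare(nmVersion, rmVersion) >= 0; // allows upgraded NMs
    default:               return false;
  }
}

// Piecewise comparison of version strings such as "2.0.4-alpha" vs "2.1.0-beta".
static int compare(String a, String b) {
  String[] as = a.split("[.-]"), bs = b.split("[.-]");
  for (int i = 0; i < Math.min(as.length, bs.length); i++) {
    int c = as[i].compareTo(bs[i]); // lexicographic; a real check would parse numbers
    if (c != 0) return c;
  }
  return Integer.compare(as.length, bs.length);
}
{code}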

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1063) Winutils needs ability to create task as domain user

2013-09-18 Thread Kyle Leckie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770984#comment-13770984
 ] 

Kyle Leckie commented on YARN-1063:
---

Hi Bikas,
An isolated container is one of the requirements for a secure cluster. I have 
not evaluated using this in a non-secure cluster. The issue you would run 
into is that, in a non-secure cluster, a user may run jobs in a user context that 
does not match any user on the domain controller. 

The Node Manager runs the winutils exe in order to perform the task launch. 
This mirrors the behaviour of the setuid-enabled container-executor binary in a 
secure Linux cluster.
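
For context, a rough sketch of how the NM side might shell out to winutils (illustrative Java; the variable names are assumptions, following the createAsUser syntax documented in the issue below):

{code}
// Illustrative sketch, not the actual container-executor code.
// Syntax: winutils task createAsUser [TASKNAME] [USERNAME] [COMMAND_LINE]
ProcessBuilder pb = new ProcessBuilder(
    "winutils.exe", "task", "createAsUser",
    taskName,                  // job/task name
    userName,                  // "user@domain" format
    commandLine);              // the container launch command
pb.redirectErrorStream(true);  // fold stderr into stdout for the NM logs
Process p = pb.start();
int exitCode = p.waitFor();    // winutils itself waits for the job to exit
{code}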

> Winutils needs ability to create task as domain user
> 
>
> Key: YARN-1063
> URL: https://issues.apache.org/jira/browse/YARN-1063
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: trunk-win
> Environment: Windows
>Reporter: Kyle Leckie
>  Labels: security
> Fix For: trunk-win
>
> Attachments: YARN-1063.patch
>
>
> h1. Summary:
> Securing a Hadoop cluster requires constructing some form of security 
> boundary around the processes executed in YARN containers. Isolation based on 
> Windows user isolation seems most feasible. This approach is similar to the 
> approach taken by the existing LinuxContainerExecutor. The current patch to 
> winutils.exe adds the ability to create a process as a domain user. 
> h1. Alternative Methods considered:
> h2. Process rights limited by security token restriction:
> On Windows access decisions are made by examining the security token of a 
> process. It is possible to spawn a process with a restricted security token. 
> Any of the rights granted by SIDs of the default token may be restricted. It 
> is possible to see this in action by examining the security token of a 
> sandboxed process launched by a web browser. Typically the launched process 
> will have a fully restricted token and need to access machine resources 
> through a dedicated broker process that enforces a custom security policy. 
> This broker process mechanism would break compatibility with the typical 
> Hadoop container process. The Container process must be able to utilize 
> standard function calls for disk and network IO. I performed some work 
> looking at ways to ACL the local files to the specific launched process without 
> granting rights to other processes launched on the same machine, but found 
> this to be an overly complex solution. 
> h2. Relying on APP containers:
> Recent versions of Windows have the ability to launch processes within an 
> isolated container. Application containers are supported for execution of 
> WinRT-based executables. This method was ruled out due to the lack of 
> official support for standard Windows APIs. At some point in the future 
> Windows may support functionality similar to BSD jails or Linux containers; 
> at that point support for containers should be added.
> h1. Create As User Feature Description:
> h2. Usage:
> A new sub command was added to the set of task commands. Here is the syntax:
> winutils task createAsUser [TASKNAME] [USERNAME] [COMMAND_LINE]
> Some notes:
> * The username specified is in the format of "user@domain"
> * The machine executing this command must be joined to the domain of the user 
> specified
> * The domain controller must allow the account executing the command access 
> to the user information. For this, join the account to the predefined group 
> labeled "Pre-Windows 2000 Compatible Access"
> * The account running the command must have several rights on the local 
> machine. These can be managed manually using secpol.msc: 
> ** "Act as part of the operating system" - SE_TCB_NAME
> ** "Replace a process-level token" - SE_ASSIGNPRIMARYTOKEN_NAME
> ** "Adjust memory quotas for a process" - SE_INCREASE_QUOTA_NAME
> * The launched process will not have rights to the desktop, so it will not be 
> able to display any information or create UI.
> * The launched process will have no network credentials. Any access of 
> network resources that requires domain authentication will fail.
> h2. Implementation:
> Winutils performs the following steps:
> # Enable the required privileges for the current process.
> # Register as a trusted process with the Local Security Authority (LSA).
> # Create a new logon for the user passed on the command line.
> # Load/Create a profile on the local machine for the new logon.
> # Create a new environment for the new logon.
> # Launch the new process in a job with the task name specified and using the 
> created logon.
> # Wait for the JOB to exit.
> h2. Future work:
> The following work was scoped out of this check in:
> * Support for non-domain users or machine that are n

[jira] [Commented] (YARN-819) ResourceManager and NodeManager should check for a minimum allowed version

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13770957#comment-13770957
 ] 

Hadoop QA commented on YARN-819:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603853/YARN-819-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1956//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1956//console

This message is automatically generated.

> ResourceManager and NodeManager should check for a minimum allowed version
> --
>
> Key: YARN-819
> URL: https://issues.apache.org/jira/browse/YARN-819
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
> Attachments: YARN-819-1.patch, YARN-819-2.patch
>
>
> Our use case is that, during an upgrade on a large cluster, several NodeManagers 
> may not restart with the new version. Once the RM comes back up, the NodeManagers 
> will re-register with the RM without issue.
> The NM should report its version to the RM. The RM should have a configuration 
> for the check: disabled (default), equal to the RM's version (to prevent a config 
> change for each release), equal to or greater than the RM's (to allow NM 
> upgrades), or an explicit version or version range.
> The RM should also have a configuration for how to treat a mismatch: 
> REJECT, or REBOOT the NM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-779) AMRMClient should clean up dangling unsatisfied request

2013-09-18 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-779:
---

Assignee: Maysam Yabandeh

> AMRMClient should clean up dangling unsatisfied request
> ---
>
> Key: YARN-779
> URL: https://issues.apache.org/jira/browse/YARN-779
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Maysam Yabandeh
>Priority: Critical
> Attachments: YARN-779.patch, YARN-779.patch
>
>
> If a ContainerRequest for 10 containers on node1 or node2 is placed via 
> AMRMClient (assuming a single rack), the resulting ResourceRequests will 
> be:
> {code}
> location - containers
> -
> node1- 10
> node2- 10
> rack - 10
> ANY  - 10
> {code}
> Assuming 5 containers are allocated in node1 and 5 containers are allocated 
> in node2, the following ResourceRequests will be outstanding on the RM.
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> {code}
> If the AMRMClient does a new ContainerRequest allocation, this time for 5 
> containers in node3, the resulting outstanding ResourceRequests on the RM 
> will be:
> {code}
> location - containers
> -
> node1- 5
> node2- 5
> node3- 5
> rack - 5
> ANY  - 5
> {code}
> At this point, the scheduler may assign 5 containers to node1 and it will 
> never assign the 5 containers node3 asked for.
> AMRMClient should keep track of the outstanding allocation count per 
> ContainerRequest, and when it gets to zero it should update the RACK/ANY 
> requests, decrementing the dangling ones. 
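
A hedged sketch of the proposed bookkeeping (illustrative; AMRMClient's real internals and the helper names here differ):

{code}
// Illustrative per-request tracking, not actual AMRMClient code.
Map<ContainerRequest, Integer> outstanding = new HashMap<ContainerRequest, Integer>();

void onContainerAllocated(ContainerRequest req) {
  int remaining = outstanding.get(req) - 1;
  if (remaining > 0) {
    outstanding.put(req, remaining);
  } else {
    outstanding.remove(req);
    // Request fully satisfied: also decrement the matching RACK and ANY
    // ResourceRequests so no dangling node-level asks are left behind.
    decrementRequest(rackOf(req), req);         // hypothetical helpers
    decrementRequest(ResourceRequest.ANY, req);
  }
}
{code}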

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1044) used/min/max resources do not display info in the scheduler page

2013-09-18 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-1044:


Assignee: Sangjin Lee

> used/min/max resources do not display info in the scheduler page
> 
>
> Key: YARN-1044
> URL: https://issues.apache.org/jira/browse/YARN-1044
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.0.5-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
>  Labels: newbie
> Attachments: screenshot.png, yarn-1044-20130815.3.patch, 
> yarn-1044.patch, yarn-1044.patch
>
>
> Go to the scheduler page in RM, and click any queue to display the detailed 
> info. You'll find that none of the resource entries (used, min, or max) 
> displays a value.
> This is because the values contain angle brackets ("<" and ">") and are not 
> properly HTML-escaped.
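
A minimal illustration of the kind of fix implied, assuming commons-lang's escaping utility (where the actual patch performs the escaping may differ):

{code}
// Hypothetical: escape a resource string such as "<memory:1024, vCores:1>"
// before rendering, so the browser displays it instead of parsing it as a tag.
String safe = StringEscapeUtils.escapeHtml(usedResources.toString());
// yields "&lt;memory:1024, vCores:1&gt;"
{code}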

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771030#comment-13771030
 ] 

Alejandro Abdelnur commented on YARN-1068:
--

Unless I'm missing something, the HAAdmin service is always on and listening; 
is this correct? Always-on is OK, but do we need it listening on a port?

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1199) Make NM/RM Versions Available

2013-09-18 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771056#comment-13771056
 ] 

Robert Parker commented on YARN-1199:
-

+1 lgtm (non-binding)

> Make NM/RM Versions Available
> -
>
> Key: YARN-1199
> URL: https://issues.apache.org/jira/browse/YARN-1199
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Mit Desai
>Assignee: Mit Desai
> Attachments: YARN-1199.patch, YARN-1199.patch
>
>
> Now that we have the NM and RM versions available, we can display the YARN 
> version of the nodes running in the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1188) The context of QueueMetrics becomes 'default' when using FairScheduler

2013-09-18 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-1188:
-

Attachment: YARN-1188.1.patch

Added @Metrics(context="yarn") to the FSQueueMetrics class. One alternative is to 
make the @Metrics annotation @Inherited, but the extent of the impact could be 
large, so I didn't choose that approach. 
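
For reference, the resulting class declaration (annotations are not inherited by subclasses unless the annotation itself is meta-annotated @Inherited, which is why FSQueueMetrics fell back to the 'default' context):

{code}
@Metrics(context="yarn")  // overrides the implicit "default" metrics context
public class FSQueueMetrics extends QueueMetrics {
  // existing class body unchanged
}
{code}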

> The context of QueueMetrics becomes 'default' when using FairScheduler
> --
>
> Key: YARN-1188
> URL: https://issues.apache.org/jira/browse/YARN-1188
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: metrics, newbie
> Attachments: YARN-1188.1.patch
>
>
> I found the context of QueueMetrics changed to 'default' from 'yarn' when I 
> was using FairScheduler.
> The context should always be 'yarn'; this can be ensured by adding an 
> annotation to FSQueueMetrics like below:
> {code}
> + @Metrics(context="yarn")
> public class FSQueueMetrics extends QueueMetrics {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-451) Add more metrics to RM page

2013-09-18 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-451:
---

Assignee: Sangjin Lee

> Add more metrics to RM page
> ---
>
> Key: YARN-451
> URL: https://issues.apache.org/jira/browse/YARN-451
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Lohit Vijayarenu
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: in_progress_2x.png, yarn-451-trunk-20130916.1.patch
>
>
> The ResourceManager web UI shows the list of RUNNING applications, but it does 
> not tell which applications are requesting more resources than others. With a 
> cluster running hundreds of applications at once, it would be useful to have 
> some kind of metric showing high-resource-usage applications vs. low-resource-usage 
> ones. At a minimum, showing the number of containers is a good option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1021:
--

Attachment: YARN-1021.patch

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations are also made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features, and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantee, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm well before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm. Evaluating in a real cluster is always time- and cost-consuming, 
> and it is also very hard to find a large-enough cluster. Hence, a simulator 
> which can predict how well a scheduler algorithm works for some specific 
> workload would be quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and each queue, which can be used to 
> configure the cluster's and each queue's capacity.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantee, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which can be used by Hadoop 
> developers to find hot spots in the code and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-451) Add more metrics to RM page

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771073#comment-13771073
 ] 

Hadoop QA commented on YARN-451:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603493/in_progress_2x.png
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1959//console

This message is automatically generated.

> Add more metrics to RM page
> ---
>
> Key: YARN-451
> URL: https://issues.apache.org/jira/browse/YARN-451
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Lohit Vijayarenu
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: in_progress_2x.png, yarn-451-trunk-20130916.1.patch
>
>
> The ResourceManager web UI shows the list of RUNNING applications, but it does 
> not tell which applications are requesting more resources than others. With a 
> cluster running hundreds of applications at once, it would be useful to have 
> some kind of metric showing high-resource-usage applications vs. low-resource-usage 
> ones. At a minimum, showing the number of containers is a good option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1068:
---

Attachment: yarn-1068-4.patch

Thanks Alejandro for catching it; I completely missed that.

Uploading a patch that creates RMHAAdminService only when HA is enabled.
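
A sketch of the conditional wiring that implies (illustrative; the config key and service composition are assumptions, not necessarily how the patch does it):

{code}
// Illustrative sketch inside the RM's CompositeService init, not the actual patch.
@Override
protected void serviceInit(Configuration conf) throws Exception {
  boolean haEnabled = conf.getBoolean("yarn.resourcemanager.ha.enabled", false); // assumed key
  if (haEnabled) {
    addService(new RMHAAdminService()); // extra RPC listener exists only in HA mode
  }
  super.serviceInit(conf);
}
{code}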

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1188) The context of QueueMetrics becomes 'default' when using FairScheduler

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771103#comment-13771103
 ] 

Hadoop QA commented on YARN-1188:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603877/YARN-1188.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1957//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1957//console

This message is automatically generated.

> The context of QueueMetrics becomes 'default' when using FairScheduler
> --
>
> Key: YARN-1188
> URL: https://issues.apache.org/jira/browse/YARN-1188
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: metrics, newbie
> Attachments: YARN-1188.1.patch
>
>
> I found the context of QueueMetrics changed to 'default' from 'yarn' when I 
> was using FairScheduler.
> The context should always be 'yarn'; this can be ensured by adding an 
> annotation to FSQueueMetrics like below:
> {code}
> + @Metrics(context="yarn")
> public class FSQueueMetrics extends QueueMetrics {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771109#comment-13771109
 ] 

Hadoop QA commented on YARN-1021:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603882/YARN-1021.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-assemblies hadoop-tools/hadoop-sls hadoop-tools/hadoop-tools-dist:

  org.apache.hadoop.yarn.sls.TestSLSRunner

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1958//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1958//console

This message is automatically generated.

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations are also made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features, and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantee, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm well before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm. Evaluating in a real cluster is always time- and cost-consuming, 
> and it is also very hard to find a large-enough cluster. Hence, a simulator 
> which can predict how well a scheduler algorithm works for some specific 
> workload would be quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and each queue, which can be used to 
> configure the cluster's and each queue's capacity.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantee, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which can be used by Hadoop 
> developers to find hot spots in the code and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771135#comment-13771135
 ] 

Hadoop QA commented on YARN-1068:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603885/yarn-1068-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1960//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1960//console

This message is automatically generated.

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1021:
--

Attachment: YARN-1021.patch

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations are also made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features, and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantee, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm well before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm. Evaluating in a real cluster is always time- and cost-consuming, 
> and it is also very hard to find a large-enough cluster. Hence, a simulator 
> which can predict how well a scheduler algorithm works for some specific 
> workload would be quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and each queue, which can be used to 
> configure the cluster's and each queue's capacity.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantee, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which can be used by Hadoop 
> developers to find hot spots in the code and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-451) Add more metrics to RM page

2013-09-18 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771163#comment-13771163
 ] 

Arun C Murthy commented on YARN-451:


I'm not clear if this is the right approach... 

How about something simpler:
# The RM UI displays current usage of resources (memory, CPU, etc.)
# The application can pass in a string (along with progress) whereby we can 
annotate it with something app-specific like: "100 maps total, 5 finished, 5 
running"

> Add more metrics to RM page
> ---
>
> Key: YARN-451
> URL: https://issues.apache.org/jira/browse/YARN-451
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Lohit Vijayarenu
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: in_progress_2x.png, yarn-451-trunk-20130916.1.patch
>
>
> The ResourceManager web UI shows the list of RUNNING applications, but it does 
> not tell which applications are requesting more resources than others. With a 
> cluster running hundreds of applications at once, it would be useful to have 
> some kind of metric showing high-resource-usage applications vs. low-resource-usage 
> ones. At a minimum, showing the number of containers is a good option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771196#comment-13771196
 ] 

Bikas Saha commented on YARN-1068:
--

What are the pros and cons of making the new service part of the existing 
RMHAService instead of creating another RPC endpoint?

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771185#comment-13771185
 ] 

Hadoop QA commented on YARN-1021:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603903/YARN-1021.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-assemblies hadoop-tools/hadoop-sls hadoop-tools/hadoop-tools-dist:

  org.apache.hadoop.yarn.sls.TestSLSRunner

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1961//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1961//console

This message is automatically generated.

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations are also made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features, and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantee, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm well before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm. Evaluating in a real cluster is always time- and cost-consuming, 
> and it is also very hard to find a large-enough cluster. Hence, a simulator 
> which can predict how well a scheduler algorithm works for some specific 
> workload would be quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and each queue, which can be used to 
> configure the cluster's and each queue's capacity.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantee, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which can be used by Hadoop 
> developers to find hot spots in the code and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771211#comment-13771211
 ] 

Karthik Kambatla commented on YARN-1068:


bq. What are the pros and cons of making the new service part of the existing 
RMHAService instead of creating another RPC endpoint?
Sorry Bikas, I am not sure I understand the question. Can you elaborate a bit? 
A little background on the thought process that went into this:
# Should RMHAAdminService be a part of RMHAProtocolService? It can be; I just 
moved it to a separate service to make the code easier to understand and 
maintain. I can definitely merge it back.
# Should RMHAAdminService be a part of AdminService and use the same port 
instead of listening on another? I initially thought of doing that, but 
refrained for two reasons: (1) it is better to have two listeners to address two 
protocols - the AdminProtocol and HAAdminProtocol; (2) AdminService itself is 
not an Always-On service at the moment, and to move it to Always-On we need to 
make it HA-aware, which could potentially take longer.

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771249#comment-13771249
 ] 

Bikas Saha commented on YARN-1068:
--

I meant the first one. I was wondering whether we should have a single 
RMHAService that does both the protocol and the admin parts. It seems to me that 
they belong in one place, so a pro/con discussion would help.

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-18 Thread Jian He (JIRA)
Jian He created YARN-1214:
-

 Summary: Register ClientToken MasterKey in SecretManager after it 
is saved
 Key: YARN-1214
 URL: https://issues.apache.org/jira/browse/YARN-1214
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He


Currently, the app attempt ClientToken master key is registered before it is 
saved. This can cause a problem: if the client gets the token and the RM crashes 
before the master key is saved, the RM cannot reload the master key after it 
restarts, as it was never saved. As a result, the client is holding an invalid 
token.

We can register the client token master key after it is saved in the store.
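
In other words, the ordering flips from register-then-save to save-then-register. A minimal sketch of the intended ordering (illustrative; the store and secret-manager method names here are assumptions, not the patch):

{code}
// Illustrative ordering, not the actual YARN-1214 patch.
// Persist the master key first...
stateStore.storeClientTokenMasterKey(appAttemptId, masterKey);  // hypothetical store call
// ...and only then register it, so a crash between the two steps can no
// longer leave clients holding a token the restarted RM cannot validate.
clientToAMTokenSecretManager.registerApplication(appAttemptId, masterKey);
{code}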

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-1021:
--

Attachment: YARN-1021.patch

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations have been made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantees, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm thoroughly before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm: evaluating in a real cluster is time-consuming and costly, and it 
> is also very hard to find a large-enough cluster. Hence, a simulator that can 
> predict how well a scheduler algorithm works for a specific workload would be 
> quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and for each queue, which can be used 
> to configure cluster and queue capacities.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantees, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which Hadoop developers can 
> use to find hot spots and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771269#comment-13771269
 ] 

Hadoop QA commented on YARN-1021:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603917/YARN-1021.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-assemblies hadoop-tools/hadoop-sls hadoop-tools/hadoop-tools-dist.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1962//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1962//console

This message is automatically generated.

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations have been made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantees, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm thoroughly before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm: evaluating in a real cluster is time-consuming and costly, and it 
> is also very hard to find a large-enough cluster. Hence, a simulator that can 
> predict how well a scheduler algorithm works for a specific workload would be 
> quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and for each queue, which can be used 
> to configure cluster and queue capacities.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantees, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which Hadoop developers can 
> use to find hot spots and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1021) Yarn Scheduler Load Simulator

2013-09-18 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771273#comment-13771273
 ] 

Wei Yan commented on YARN-1021:
---

Thanks, [~tucu00]. 

Uploaded a new patch according to [~tucu00]'s latest comments.

The simulation running time in TestSLSRunner is reduced to 45 seconds.


> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest, with different 
> implementations, e.g., the Fifo, Capacity, and Fair schedulers. Meanwhile, 
> several optimizations have been made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantees, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm thoroughly before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm: evaluating in a real cluster is time-consuming and costly, and it 
> is also very hard to find a large-enough cluster. Hence, a simulator that can 
> predict how well a scheduler algorithm works for a specific workload would be 
> quite useful.
> We want to build a Scheduler Load Simulator to simulate large-scale Yarn 
> clusters and application loads on a single machine. This would be invaluable 
> in furthering Yarn by providing a tool for researchers and developers to 
> prototype new scheduler features and predict their behavior and performance 
> with a reasonable amount of confidence, thereby aiding rapid innovation.
> The simulator will exercise the real Yarn ResourceManager, removing the 
> network factor by simulating NodeManagers and ApplicationMasters, handling 
> and dispatching NM/AM heartbeat events from within the same JVM.
> To keep track of scheduler behavior and performance, a scheduler wrapper 
> will wrap the real scheduler.
> The simulator will produce real-time metrics while executing, including:
> * Resource usage for the whole cluster and for each queue, which can be used 
> to configure cluster and queue capacities.
> * The detailed application execution trace (recorded in relation to simulated 
> time), which can be analyzed to understand/validate the scheduler behavior 
> (individual jobs' turnaround time, throughput, fairness, capacity guarantees, 
> etc.).
> * Several key metrics of the scheduler algorithm, such as the time cost of 
> each scheduler operation (allocate, handle, etc.), which Hadoop developers can 
> use to find hot spots and scalability limits.
> The simulator will provide real-time charts showing the behavior of the 
> scheduler and its performance.
> A short demo is available at http://www.youtube.com/watch?v=6thLi8q0qLE, 
> showing how to use the simulator to simulate the Fair Scheduler and the 
> Capacity Scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1215) Yarn URL should include userinfo

2013-09-18 Thread Chuan Liu (JIRA)
Chuan Liu created YARN-1215:
---

 Summary: Yarn URL should include userinfo
 Key: YARN-1215
 URL: https://issues.apache.org/jira/browse/YARN-1215
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu


In the {{org.apache.hadoop.yarn.api.records.URL}} class, we don't have a 
userinfo component as part of the URL. When converting a {{java.net.URI}} object 
into the YARN URL object in the {{ConverterUtils.getYarnUrlFromURI()}} method, 
we set the uri host as the url host. If the uri has a userinfo part, the 
userinfo is discarded. This leads to information loss when the original uri has 
a userinfo, e.g. foo://username:passw...@example.com will be converted to 
foo://example.com, and the username/password information is lost during the 
conversion.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1199) Make NM/RM Versions Available

2013-09-18 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771311#comment-13771311
 ] 

Robert Parker commented on YARN-1199:
-

Mit, when running the tests I am getting:

{noformat}
testNodesQueryNew(org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes)
  Time elapsed: 0.242 sec  <<< FAILURE!
java.lang.AssertionError: incorrect number of elements expected:<10> but 
was:<11>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes.verifyNodeInfo(TestRMWebServicesNodes.java:663)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes.testNodesQueryNew(TestRMWebServicesNodes.java:189)
{noformat}

Need to account for the extra version field.

> Make NM/RM Versions Available
> -
>
> Key: YARN-1199
> URL: https://issues.apache.org/jira/browse/YARN-1199
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Mit Desai
>Assignee: Mit Desai
> Attachments: YARN-1199.patch, YARN-1199.patch
>
>
> Now that we have the NM and RM versions available, we can display the YARN 
> version of the nodes running in the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1215) Yarn URL should include userinfo

2013-09-18 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated YARN-1215:


Attachment: YARN-1215-trunk.patch

Attaching a patch. Instead of modifying the URL class, I changed the 
{{getYarnUrlFromURI()}} method to set the userinfo as part of the URL host. A 
unit test is also included to verify the desired behavior.
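
For illustration, a minimal sketch of the host-building part of that approach; 
the YARN URL record itself is not shown, and this is not the patch verbatim.
{code}
import java.net.URI;

class YarnUrlUserinfoSketch {
  // Fold the userinfo into the host string so it survives the conversion,
  // e.g. foo://user:pass@example.com -> host "user:pass@example.com".
  static String buildHost(URI uri) {
    String userInfo = uri.getUserInfo();
    return (userInfo == null) ? uri.getHost() : userInfo + "@" + uri.getHost();
  }

  public static void main(String[] args) {
    System.out.println(buildHost(URI.create("foo://user:pass@example.com/x")));
    // prints: user:pass@example.com
  }
}
{code}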

> Yarn URL should include userinfo
> 
>
> Key: YARN-1215
> URL: https://issues.apache.org/jira/browse/YARN-1215
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: YARN-1215-trunk.patch
>
>
> In the {{org.apache.hadoop.yarn.api.records.URL}} class, we don't have a 
> userinfo component as part of the URL. When converting a {{java.net.URI}} 
> object into the YARN URL object in the {{ConverterUtils.getYarnUrlFromURI()}} 
> method, we set the uri host as the url host. If the uri has a userinfo part, 
> the userinfo is discarded. This leads to information loss when the original 
> uri has a userinfo, e.g. foo://username:passw...@example.com will be converted 
> to foo://example.com, and the username/password information is lost during 
> the conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1123) [YARN-321] Adding ContainerReport and Protobuf implementation

2013-09-18 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-1123:


Attachment: YARN-1123-2.patch

Attaching Fixed patch.

Thanks,
Mayank

> [YARN-321] Adding ContainerReport and Protobuf implementation
> -
>
> Key: YARN-1123
> URL: https://issues.apache.org/jira/browse/YARN-1123
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Mayank Bansal
> Attachments: YARN-1123-1.patch, YARN-1123-2.patch
>
>
> Like YARN-978, we need some client-oriented class to expose the container 
> history info. Neither Container nor RMContainer is the right one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771353#comment-13771353
 ] 

Bikas Saha commented on YARN-1214:
--

We can probably return a client token in getApplicationReport only after the app 
is saved. So RMAppAttemptImpl.getClientToken() and other methods like it should 
return null until the app has been saved. This might be simpler.

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
>
> Currently, the app attempt ClientToken master key is registered before it is 
> saved. This can cause a problem: if the client gets the token and the RM 
> crashes before the master key is saved, the RM cannot reload the master key 
> after it restarts, since the key was never saved. As a result, the client is 
> left holding an invalid token.
> We can instead register the client token master key after it is saved in the 
> store.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1123) [YARN-321] Adding ContainerReport and Protobuf implementation

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771355#comment-13771355
 ] 

Hadoop QA commented on YARN-1123:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603928/YARN-1123-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1963//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1963//console

This message is automatically generated.

> [YARN-321] Adding ContainerReport and Protobuf implementation
> -
>
> Key: YARN-1123
> URL: https://issues.apache.org/jira/browse/YARN-1123
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Mayank Bansal
> Attachments: YARN-1123-1.patch, YARN-1123-2.patch
>
>
> Like YARN-978, we need some client-oriented class to expose the container 
> history info. Neither Container nor RMContainer is the right one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1215) Yarn URL should include userinfo

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771381#comment-13771381
 ] 

Hadoop QA commented on YARN-1215:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603926/YARN-1215-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1964//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1964//console

This message is automatically generated.

> Yarn URL should include userinfo
> 
>
> Key: YARN-1215
> URL: https://issues.apache.org/jira/browse/YARN-1215
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: YARN-1215-trunk.patch
>
>
> In the {{org.apache.hadoop.yarn.api.records.URL}} class, we don't have a 
> userinfo component as part of the URL. When converting a {{java.net.URI}} 
> object into the YARN URL object in the {{ConverterUtils.getYarnUrlFromURI()}} 
> method, we set the uri host as the url host. If the uri has a userinfo part, 
> the userinfo is discarded. This leads to information loss when the original 
> uri has a userinfo, e.g. foo://username:passw...@example.com will be converted 
> to foo://example.com, and the username/password information is lost during 
> the conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1216) Deprecate/ mark private few methods from YarnConfiguration

2013-09-18 Thread Omkar Vinit Joshi (JIRA)
Omkar Vinit Joshi created YARN-1216:
---

 Summary: Deprecate/ mark private few methods from YarnConfiguration
 Key: YARN-1216
 URL: https://issues.apache.org/jira/browse/YARN-1216
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Priority: Minor


Today we have a few methods in YarnConfiguration that should ideally be moved to 
some utility class.
[related comment | 
https://issues.apache.org/jira/browse/YARN-1203?focusedCommentId=13771281&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13771281
 ]
* getRMWebAppURL
* getRMWebAppHostAndPort
* getProxyHostAndPort

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1216) Deprecate/ mark private few methods from YarnConfiguration

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1216:


Labels: newbie  (was: )

> Deprecate/ mark private few methods from YarnConfiguration
> --
>
> Key: YARN-1216
> URL: https://issues.apache.org/jira/browse/YARN-1216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Omkar Vinit Joshi
>Priority: Minor
>  Labels: newbie
>
> Today we have a few methods in YarnConfiguration that should ideally be moved 
> to some utility class.
> [related comment | 
> https://issues.apache.org/jira/browse/YARN-1203?focusedCommentId=13771281&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13771281
>  ]
> * getRMWebAppURL
> * getRMWebAppHostAndPort
> * getProxyHostAndPort

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771393#comment-13771393
 ] 

Karthik Kambatla commented on YARN-353:
---

[~hitesh], can you please take a look when you get a chance? I believe the 
latest patch addresses all the comments. Thanks.

> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.patch, YARN-353.1.patch, YARN-353.2.patch, 
> YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, YARN-353.6.patch, 
> YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1217) write Unit tests to verify SSL for RM/NM/ job history server and map reduce AM.

2013-09-18 Thread Omkar Vinit Joshi (JIRA)
Omkar Vinit Joshi created YARN-1217:
---

 Summary: write Unit tests to verify SSL for RM/NM/ job history 
server and map reduce AM. 
 Key: YARN-1217
 URL: https://issues.apache.org/jira/browse/YARN-1217
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi


Today there are no unit tests to verify whether SSL is working for 
RM/NM/JHS/MR-AM. This is related to YARN-1203. A few things to verify (a rough 
sketch of the first web-facing checks follows the list):
* correct permissions on the keystore / truststore
* all the links exposed by RM/NM/JHS are https and accessible
* servers are not listening on the http port
* all the links exposed by the AM to RM/NM/JHS are https and are controlled by 
hadoop.ssl.enabled
* if the truststore doesn't contain the certificate of, say, host1 (a 
nodemanager), then that nodemanager should not be able to join the network and 
the RM should reject it.
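
As a rough illustration only, a sketch of the https-link checks under assumed 
host names and ports; none of these addresses come from the actual tests.
{code}
// Sketch: assert advertised links use https and the plain-http port refuses
// connections. Host names and ports below are hypothetical.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class HttpsLinkCheckSketch {
  public static void main(String[] args) throws Exception {
    List<String> links = Arrays.asList(
        "https://rm.example.com:8090/cluster",
        "https://nm.example.com:8044/node");
    for (String link : links) {
      if (!link.startsWith("https://")) {
        throw new AssertionError("non-https link exposed: " + link);
      }
    }
    try {
      HttpURLConnection conn = (HttpURLConnection)
          new URL("http://rm.example.com:8088").openConnection();
      conn.setConnectTimeout(1000);
      conn.connect();
      throw new AssertionError("server still listening on the http port");
    } catch (java.io.IOException expected) {
      // connection refused: the http port is correctly closed
    }
  }
}
{code}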


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1068) Add admin support for HA operations

2013-09-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771411#comment-13771411
 ] 

Karthik Kambatla commented on YARN-1068:


Having a separate RMHAAdminService allows handling it through the 
super.service*() calls in RMHAProtocolService. This makes it very simple to 
create an RMHAProtocolService without the admin parts, without having to 
override all of its service*() calls. In the current patch, MockRM does the 
following. If it were all in one RMHAService, this override would be much longer 
and would need to be updated every time the actual RMHAService service* methods 
changed. Also, I don't see a particular downside to having it as a separate 
service, except that it is an extra class. The RM itself still sees a single 
service in RMHAProtocolService.
{code}
  protected RMHAProtocolService createRMHAProtocolService() {
return new RMHAProtocolService(this) {
  @Override
  protected RMHAAdminService createRMHAAdminService() {
return null;
  }
};
  }
{code}

> Add admin support for HA operations
> ---
>
> Key: YARN-1068
> URL: https://issues.apache.org/jira/browse/YARN-1068
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: ha
> Attachments: yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, 
> yarn-1068-4.patch, yarn-1068-prelim.patch
>
>
> Support HA admin operations to facilitate transitioning the RM to Active and 
> Standby states.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771410#comment-13771410
 ] 

Omkar Vinit Joshi commented on YARN-1203:
-

Thanks, Vinod.

bq. Move the MRClientService code to setup configs to the top level 
MRAppMaster. That way we can be oblivious of service-start dependencies.
Fixed.

bq. WebAppUtil.secure -> sslEnabledInYARN? Similarly (get|set)Secure -> 
isSSLEnabledInYARN, setSSLEnabledInYARN().
Renamed to isSSLEnabledInYARN().

bq. Add reference to shuffle encryption and the corresponding config key in 
mapred-default.xml in the description for mapreduce.ssl.enabled.
Done.

bq. Also mention that the new config is only for MR AM, not for JHS.
Done.

bq. Plz file a ticket to mark all the util methods in YarnConfiguration to be 
private or if needed to move them to a utils class.
Done: YARN-1216.

bq. Add documentation to AM register/finish APIs to say about the handling of 
scheme and no-scheme?
Done.

bq. HttpConfig.setSecure() no longer just visible for testing?
Removed the VisibleForTesting annotation.

Follow-up ticket for testing SSL for RM/NM/JHS/MR-AM: YARN-1217.
I have tested the following:
* all the links exposed by RM/NM/JHS are https
* web communication between
** RM - client (https)
** client - NM (https)
** client - JHS (https)
** client - AM (via proxy)
*** client - RM (WebProxy - https)
*** RM (WebProxy - AM - http): http by default, as the AM is still listening on 
http rather than https because of a keystore access problem.
*** All the links returned as part of the AM response are https for 
RM/NM/WebProxy/JHS.
*** Today we don't parse the content returned by the AM for invalid urls. The 
assumption is that the client will always talk to the AM via the proxy, but the 
AM could potentially return a link in its response that lets the client talk to 
it directly, bypassing the proxy. By doing this it could circulate a malicious 
certificate in the cluster. To avoid this we need to block it somehow. Opening 
a ticket to track this: YARN-1218.

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch
>
>
> Need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over the http protocol by setting the 
> 'hadoop.ssl.enabled' property at the job level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1218) Parse html response returned by Application master when showing it via WebProxy

2013-09-18 Thread Omkar Vinit Joshi (JIRA)
Omkar Vinit Joshi created YARN-1218:
---

 Summary: Parse html response returned by Application master when 
showing it via WebProxy
 Key: YARN-1218
 URL: https://issues.apache.org/jira/browse/YARN-1218
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1219) FSDownload changes file suffix making FileUtil.unTar() throw exception

2013-09-18 Thread shanyu zhao (JIRA)
shanyu zhao created YARN-1219:
-

 Summary: FSDownload changes file suffix making FileUtil.unTar() 
throw exception
 Key: YARN-1219
 URL: https://issues.apache.org/jira/browse/YARN-1219
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: shanyu zhao


While running a Hive join operation on Yarn, I saw the exception described 
below. It is caused by FSDownload copying the file to a temp file and changing 
the suffix to ".tmp" before unpacking it. In unpack(), it uses 
FileUtil.unTar(), which determines whether the file is gzipped by looking at 
the file suffix:
{code}
boolean gzipped = inFile.toString().endsWith("gz");
{code}

To fix this problem, we can remove the ".tmp" from the temp file name.
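
A tiny illustration of why the renamed temp file defeats the check quoted 
above; the file names here are made up for the example.
{code}
public class UnTarSuffixSketch {
  public static void main(String[] args) {
    String original = "archive.tar.gz";
    String tempCopy = "archive.tar.gz.tmp"; // the suffix FSDownload appends
    System.out.println(original.endsWith("gz")); // true  -> treated as gzipped
    System.out.println(tempCopy.endsWith("gz")); // false -> unTar reads raw bytes and fails
  }
}
{code}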

Here is the detailed exception:

org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:240)
at org.apache.hadoop.fs.FileUtil.unTarUsingJava(FileUtil.java:676)
at org.apache.hadoop.fs.FileUtil.unTar(FileUtil.java:625)
at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:203)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:287)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:50)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

at java.lang.Thread.run(Thread.java:722)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-1208) One of the WebUI Links redirected to http instead https protocol with ssl enabled

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi resolved YARN-1208.
-

Resolution: Duplicate

> One of the WebUI Links redirected to http instead https protocol with ssl 
> enabled
> -
>
> Key: YARN-1208
> URL: https://issues.apache.org/jira/browse/YARN-1208
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Fix For: 2.1.1-beta
>
>
> One of the webUI links redirects to an http link when https is enabled.
> Open the Nodemanager UI (https://nodemanager:50060/node/allContainers) and 
> click on the RM HOME link. This link redirects to 
> "http://resourcemanager:port" instead of "https://resourcemanager:port".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (YARN-1208) One of the WebUI Links redirected to http instead https protocol with ssl enabled

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi reopened YARN-1208:
-


> One of the WebUI Links redirected to http instead https protocol with ssl 
> enabled
> -
>
> Key: YARN-1208
> URL: https://issues.apache.org/jira/browse/YARN-1208
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Fix For: 2.1.1-beta
>
>
> One of the webUI links redirects to an http link when https is enabled.
> Open the Nodemanager UI (https://nodemanager:50060/node/allContainers) and 
> click on the RM HOME link. This link redirects to 
> "http://resourcemanager:port" instead of "https://resourcemanager:port".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1206) Container logs link is broken on RM web UI after application finished

2013-09-18 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771425#comment-13771425
 ] 

Jian He commented on YARN-1206:
---

Yeah, it worked.

> Container logs link is broken on RM web UI after application finished
> -
>
> Key: YARN-1206
> URL: https://issues.apache.org/jira/browse/YARN-1206
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Blocker
>
> With log aggregation disabled, while a container is running its logs link 
> works properly, but after the application finishes the link shows 'Container 
> does not exist.'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1203:


Attachment: YARN-1203.20131018.1.patch

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch
>
>
> Need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run on http protocol by setting 'hadoop.ssl.enabled' 
> property at job level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-987) Adding History Service to use Store and converting Historydata to Report

2013-09-18 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-987:
---

Attachment: YARN-987-3.patch

Fixed the patch to use ContainerReport instead of Container.

Thanks,
Mayank

> Adding History Service to use Store and converting Historydata to Report
> 
>
> Key: YARN-987
> URL: https://issues.apache.org/jira/browse/YARN-987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
> Attachments: YARN-987-1.patch, YARN-987-2.patch, YARN-987-3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-1214:
--

Attachment: YARN-1214.patch

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered before it is 
> saved. This can cause a problem: if the client gets the token and the RM 
> crashes before the master key is saved, the RM cannot reload the master key 
> after it restarts, since the key was never saved. As a result, the client is 
> left holding an invalid token.
> We can instead register the client token master key after it is saved in the 
> store.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-18 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771437#comment-13771437
 ] 

Jian He commented on YARN-1214:
---

bq. We can probably return a client token in getApplicationReport after the app 
is saved. 
For that, we would need a boolean that is set after both the Attempt_saved 
event and recovery. Either way is fine; since I had already created the patch, 
I am submitting it - not a big change either way.

The patch moves registerMasterKey to AMLaunchedTransition and modifies a few 
test cases accordingly.

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered before it is 
> saved. This can cause a problem: if the client gets the token and the RM 
> crashes before the master key is saved, the RM cannot reload the master key 
> after it restarts, since the key was never saved. As a result, the client is 
> left holding an invalid token.
> We can instead register the client token master key after it is saved in the 
> store.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1220) Yarn RM fs state store should handle safemode exceptions

2013-09-18 Thread Arpit Gupta (JIRA)
Arpit Gupta created YARN-1220:
-

 Summary: Yarn RM fs state store should handle safemode exceptions
 Key: YARN-1220
 URL: https://issues.apache.org/jira/browse/YARN-1220
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Vinod Kumar Vavilapalli


{code}
ons: 0
2013-09-18 05:41:13,542 ERROR recovery.RMStateStore 
(RMStateStore.java:handleStoreEvent(490)) - Error removing app: 
application_1379482521108_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
 Cannot delete 
/tmp/hadoop-yarn/yarn/system/rmstore/FSRMStateRoot/RMAppRoot/application_1379482521108_0003.
Name node is in safe mode.
The reported blocks 1018 has reached the threshold 1. of total blocks 1018. 
The number of live datanodes 5 has reached the minimum number 0. Safe mode will 
be turned off automatically in 20 seconds.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3124)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3083)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3067)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:697)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Clien
{code}

The issue here is that if the namenode is in safemode while we are interacting 
with the fs state store, we won't be able to update the status. In this 
particular case the app was never removed from the store, so upon rm restart 
the app was recovered when it should not have been.
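
One possible shape of a fix, sketched under the assumption that retrying the 
delete is acceptable; only RemoteException and SafeModeException are real 
Hadoop classes here, the method and class names on the store side are 
illustrative.
{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
import org.apache.hadoop.ipc.RemoteException;

class SafeModeRetrySketch {
  // Retry the state-store delete while the namenode reports safe mode.
  static void deleteWithRetry(FileSystem fs, Path appPath) throws Exception {
    while (true) {
      try {
        fs.delete(appPath, true);
        return;
      } catch (RemoteException re) {
        if (re.unwrapRemoteException(SafeModeException.class)
            instanceof SafeModeException) {
          Thread.sleep(5000); // safe mode lifts shortly per the NN message
        } else {
          throw re;
        }
      }
    }
  }
}
{code}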

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1218) Parse html response returned by Application master when showing it via WebProxy

2013-09-18 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-1218:
--

Issue Type: Sub-task  (was: Bug)
Parent: YARN-47

> Parse html response returned by Application master when showing it via 
> WebProxy
> ---
>
> Key: YARN-1218
> URL: https://issues.apache.org/jira/browse/YARN-1218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Omkar Vinit Joshi
>Assignee: Omkar Vinit Joshi
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-987) Adding History Service to use Store and converting Historydata to Report

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771444#comment-13771444
 ] 

Hadoop QA commented on YARN-987:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603952/YARN-987-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1965//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1965//console

This message is automatically generated.

> Adding History Service to use Store and converting Historydata to Report
> 
>
> Key: YARN-987
> URL: https://issues.apache.org/jira/browse/YARN-987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
> Attachments: YARN-987-1.patch, YARN-987-2.patch, YARN-987-3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1216) Deprecate/ mark private few methods from YarnConfiguration

2013-09-18 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-1216:
--

Issue Type: Sub-task  (was: Bug)
Parent: YARN-386

> Deprecate/ mark private few methods from YarnConfiguration
> --
>
> Key: YARN-1216
> URL: https://issues.apache.org/jira/browse/YARN-1216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Omkar Vinit Joshi
>Priority: Minor
>  Labels: newbie
>
> Today we have a few methods in YarnConfiguration that should ideally be moved 
> to some utility class.
> [related comment | 
> https://issues.apache.org/jira/browse/YARN-1203?focusedCommentId=13771281&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13771281
>  ]
> * getRMWebAppURL
> * getRMWebAppHostAndPort
> * getProxyHostAndPort

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1204) Need to add https port related property in Yarn

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771451#comment-13771451
 ] 

Omkar Vinit Joshi commented on YARN-1204:
-

Attaching a patch. Newly added configuration keys (a usage sketch follows the 
list):
* yarn.resourcemanager.webapp.https.address (to specify the RM https port - 
default 8090)
* yarn.nodemanager.webapp.https.address (to specify the NM https port - default 
8044)
* mapreduce.jobhistory.webapp.https.address (to specify the JHS https port - 
default 19890)
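
For reference, a small sketch of reading these keys with the defaults listed 
above; the 0.0.0.0 bind host is an assumption for the example, not from the 
patch.
{code}
import org.apache.hadoop.conf.Configuration;

public class HttpsAddressSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Fall back to the defaults noted in the comment above.
    String rm  = conf.get("yarn.resourcemanager.webapp.https.address", "0.0.0.0:8090");
    String nm  = conf.get("yarn.nodemanager.webapp.https.address", "0.0.0.0:8044");
    String jhs = conf.get("mapreduce.jobhistory.webapp.https.address", "0.0.0.0:19890");
    System.out.println(rm + "\n" + nm + "\n" + jhs);
  }
}
{code}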

> Need to add https port related property in Yarn
> ---
>
> Key: YARN-1204
> URL: https://issues.apache.org/jira/browse/YARN-1204
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1204.20131018.1.patch
>
>
> There is no yarn property available to configure the https port for the 
> Resource Manager, nodemanager, and history server. Currently, Yarn services 
> use the port defined for http [defined by 
> 'mapreduce.jobhistory.webapp.address', 'yarn.nodemanager.webapp.address', 
> 'yarn.resourcemanager.webapp.address'] when running services over the https 
> protocol.
> Yarn should have a list of properties to assign https ports for the RM, NM, 
> and JHS. They could look like the below:
> yarn.nodemanager.webapp.https.address
> yarn.resourcemanager.webapp.https.address
> mapreduce.jobhistory.webapp.https.address

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1204) Need to add https port related property in Yarn

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1204:


Attachment: YARN-1204.20131018.1.patch

> Need to add https port related property in Yarn
> ---
>
> Key: YARN-1204
> URL: https://issues.apache.org/jira/browse/YARN-1204
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1204.20131018.1.patch
>
>
> There is no yarn property available to configure the https port for the 
> Resource Manager, nodemanager, and history server. Currently, Yarn services 
> use the port defined for http [defined by 
> 'mapreduce.jobhistory.webapp.address', 'yarn.nodemanager.webapp.address', 
> 'yarn.resourcemanager.webapp.address'] when running services over the https 
> protocol.
> Yarn should have a list of properties to assign https ports for the RM, NM, 
> and JHS. They could look like the below:
> yarn.nodemanager.webapp.https.address
> yarn.resourcemanager.webapp.https.address
> mapreduce.jobhistory.webapp.https.address

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1220) Yarn App recovers when it should not as delete failed from rm fs store

2013-09-18 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated YARN-1220:
--

Summary: Yarn App recovers when it should not as delete failed from rm fs 
store  (was: Yarn RM fs state store should handle safemode exceptions)

> Yarn App recovers when it should not as delete failed from rm fs store
> --
>
> Key: YARN-1220
> URL: https://issues.apache.org/jira/browse/YARN-1220
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.1.0-beta
>Reporter: Arpit Gupta
>Assignee: Vinod Kumar Vavilapalli
>
> {code}
> ons: 0
> 2013-09-18 05:41:13,542 ERROR recovery.RMStateStore 
> (RMStateStore.java:handleStoreEvent(490)) - Error removing app: 
> application_1379482521108_0003
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>  Cannot delete 
> /tmp/hadoop-yarn/yarn/system/rmstore/FSRMStateRoot/RMAppRoot/application_1379482521108_0003.
> Name node is in safe mode.
> The reported blocks 1018 has reached the threshold 1. of total blocks 
> 1018. The number of live datanodes 5 has reached the minimum number 0. Safe 
> mode will be turned off automatically in 20 seconds.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3124)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3083)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3067)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:697)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Clien
> {code}
> The issue here is that if the namenode is in safemode while we are 
> interacting with the fs state store, we won't be able to update the status. 
> In this particular case the app was never removed from the store, so upon rm 
> restart the app was recovered when it should not have been.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-18 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1203:


Attachment: YARN-1203.20131018.2.patch

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch
>
>
> We need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over the http protocol by setting the 
> 'hadoop.ssl.enabled' property at the job level.
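
For illustration, this is how a job could opt out once such support exists -- 
a sketch under the assumption that the framework honors a per-job override of 
this property, which is exactly the feature being requested here:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: override hadoop.ssl.enabled in the job's own configuration so
// its web UIs run over plain http. Per-job support for this override is
// what this JIRA asks for, not existing behavior.
public class HttpOnlyJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("hadoop.ssl.enabled", false); // job-level override
    Job job = Job.getInstance(conf, "http-only-job");
    job.submit();
  }
}
{code}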

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1214) Register ClientToken MasterKey in SecretManager after it is saved

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771472#comment-13771472
 ] 

Hadoop QA commented on YARN-1214:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12603953/YARN-1214.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1149 javac 
compiler warnings (more than the trunk's current 1145 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1967//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1967//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1967//console

This message is automatically generated.

> Register ClientToken MasterKey in SecretManager after it is saved
> -
>
> Key: YARN-1214
> URL: https://issues.apache.org/jira/browse/YARN-1214
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-1214.patch
>
>
> Currently, the app attempt ClientToken master key is registered in the 
> SecretManager before it is saved. This can cause a problem: if a client 
> obtains the token before the master key is saved and the RM then crashes, 
> the RM cannot reload the master key after it restarts because it was never 
> saved. As a result, the client is left holding an invalid token.
> We can instead register the client token master key only after it has been 
> saved in the store.
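
A minimal sketch of the proposed ordering; the store and secret-manager calls 
below are illustrative stand-ins, not the exact RM method signatures:

{code}
// 1. Persist the master key first...
stateStore.storeClientTokenMasterKey(appAttemptId, masterKey);
// 2. ...and only then register it, so any token a client obtains is
// backed by a key that will survive an RM restart.
clientToAMTokenSecretManager.registerApplication(appAttemptId, masterKey);
// Names above (stateStore, clientToAMTokenSecretManager, masterKey) are
// hypothetical stand-ins for the RM internals.
{code}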

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771478#comment-13771478
 ] 

Hadoop QA commented on YARN-1203:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12603951/YARN-1203.20131018.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1966//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1966//console

This message is automatically generated.

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch
>
>
> We need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over the http protocol by setting the 
> 'hadoop.ssl.enabled' property at the job level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1219) FSDownload changes file suffix making FileUtil.unTar() throw exception

2013-09-18 Thread shanyu zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shanyu zhao updated YARN-1219:
--

Assignee: shanyu zhao

> FSDownload changes file suffix making FileUtil.unTar() throw exception
> --
>
> Key: YARN-1219
> URL: https://issues.apache.org/jira/browse/YARN-1219
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: shanyu zhao
>Assignee: shanyu zhao
>
> While running a Hive join operation on Yarn, I saw the exception described 
> below. It is caused by FSDownload copying the files to a temp file and 
> changing the suffix to ".tmp" before unpacking. In unpack(), it uses 
> FileUtil.unTar(), which determines whether the file is gzipped by looking at 
> the file suffix:
> {code}
> boolean gzipped = inFile.toString().endsWith("gz");
> {code}
> To fix this problem, we can remove the ".tmp" from the temp file name.
> Here is the detailed exception:
> org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:240)
>   at org.apache.hadoop.fs.FileUtil.unTarUsingJava(FileUtil.java:676)
>   at org.apache.hadoop.fs.FileUtil.unTar(FileUtil.java:625)
>   at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:203)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:287)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:50)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
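
A sketch of the fix direction -- keep the archive's original name for the 
local temp copy so the suffix check in FileUtil.unTar() still works. Variable 
names below are hypothetical, not the exact FSDownload code:

{code}
// foo.tar.gz renamed to foo.tar.gz.tmp no longer ends with "gz", so
// unTar() skips gunzip and TarArchiveInputStream chokes on gzip bytes.
// Naming the temp copy after the original resource preserves the suffix:
Path localCopy = new Path(destDirPath, sCopy.getName()); // keeps ".tar.gz"
FileUtil.copy(sourceFs, sCopy, destFs, localCopy, false, conf);
{code}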

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1203) Application Manager UI does not appear with Https enabled

2013-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771512#comment-13771512
 ] 

Hadoop QA commented on YARN-1203:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12603961/YARN-1203.20131018.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1968//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1968//console

This message is automatically generated.

> Application Manager UI does not appear with Https enabled
> -
>
> Key: YARN-1203
> URL: https://issues.apache.org/jira/browse/YARN-1203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Omkar Vinit Joshi
> Attachments: YARN-1203.20131017.1.patch, YARN-1203.20131017.2.patch, 
> YARN-1203.20131017.3.patch, YARN-1203.20131018.1.patch, 
> YARN-1203.20131018.2.patch
>
>
> We need to add support to disable 'hadoop.ssl.enabled' for MR jobs.
> A job should be able to run over the http protocol by setting the 
> 'hadoop.ssl.enabled' property at the job level.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1216) Deprecate/ mark private few methods from YarnConfiguration

2013-09-18 Thread Raghavendra Nandagopal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771561#comment-13771561
 ] 

Raghavendra Nandagopal commented on YARN-1216:
--

Hi Joshi,
  I am new to the YARN community and would like to start contributing.

Looking at the above change, here are my questions (see the sketch after this 
list for one possible shape):
1) We should move the methods to a new class under the 
org.apache.hadoop.yarn.util package.
2) When creating the new class, what should its name be (say, YarnManagerURLs 
or YarnURLs)?
3) It is also mentioned to mark the methods private; should we use a singleton 
instance and call the private methods through it?
4) Will this class be a wrapper, with calls still going through the existing 
methods declared in YarnConfiguration, or do we need to change every place the 
methods are called?
5) Should we write any additional test methods after the change, depending on 
the outcome of point 4?
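
One possible shape, sketched under the assumption that the moved methods 
become static utilities and YarnConfiguration keeps deprecated delegates. The 
class name and bodies below are hypothetical:

{code}
package org.apache.hadoop.yarn.util;

import org.apache.hadoop.conf.Configuration;

// Hypothetical utility class; a static holder needs no singleton.
public final class YarnWebAppUtils {
  private YarnWebAppUtils() {} // no instances

  public static String getRMWebAppURL(Configuration conf) {
    // ... implementation moved verbatim from YarnConfiguration ...
    return null; // placeholder
  }
}

// YarnConfiguration could keep a deprecated delegate so existing call
// sites keep compiling while new code migrates:
//   @Deprecated
//   public static String getRMWebAppURL(Configuration conf) {
//     return YarnWebAppUtils.getRMWebAppURL(conf);
//   }
{code}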

> Deprecate/ mark private few methods from YarnConfiguration
> --
>
> Key: YARN-1216
> URL: https://issues.apache.org/jira/browse/YARN-1216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Omkar Vinit Joshi
>Priority: Minor
>  Labels: newbie
>
> Today we have few methods in YarnConfiguration which should ideally be moved 
> to some utility class.
> [related comment | 
> https://issues.apache.org/jira/browse/YARN-1203?focusedCommentId=13771281&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13771281
>  ]
> * getRMWebAppURL
> * getRMWebAppHostAndPort
> * getProxyHostAndPort

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1221) With Fair Scheduler, reserved MB reported in RM web UI increases indefinitely

2013-09-18 Thread Sandy Ryza (JIRA)
Sandy Ryza created YARN-1221:


 Summary: With Fair Scheduler, reserved MB reported in RM web UI 
increases indefinitely
 Key: YARN-1221
 URL: https://issues.apache.org/jira/browse/YARN-1221
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-09-18 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-353:
-

Attachment: YARN-353.16.1.patch

[~kkambatl] Thanks for the patience. Looks good. I missed a minor point in my 
previous comments.

{code}
+} catch (Exception e) {
+  LOG.error("Failed to load state.", e);
+  throw e;
+}
{code}

I removed the above snippet as it was not adding any useful information and 
attached the 16.1 patch.
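
For reference, the load path after the removal simply lets the exception 
propagate to the caller, which logs it once (a sketch; the internal helper 
name is illustrative):

{code}
// After dropping the catch-log-rethrow, loadState() propagates directly;
// logging happens once, at the call site that handles the failure.
public RMState loadState() throws Exception {
  return loadInternal(); // hypothetical internal helper
}
{code}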

Let me know if you have any concerns. 

Will wait for jenkins to +1 and commit sometime tomorrow.  



> Add Zookeeper-based store implementation for RMStateStore
> -
>
> Key: YARN-353
> URL: https://issues.apache.org/jira/browse/YARN-353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Hitesh Shah
>Assignee: Karthik Kambatla
> Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
> yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
> YARN-353.15.patch, YARN-353.16.1.patch, YARN-353.16.patch, YARN-353.1.patch, 
> YARN-353.2.patch, YARN-353.3.patch, YARN-353.4.patch, YARN-353.5.patch, 
> YARN-353.6.patch, YARN-353.7.patch, YARN-353.8.patch, YARN-353.9.patch
>
>
> Add a store that writes RM state data to ZK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira