[jira] [Updated] (YARN-4793) [Umbrella] Simplified API layer for services and beyond

2016-08-30 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-4793:

Attachment: YARN-4793-yarn-native-services.001.patch

First version of the REST API service implementation, based on the Swagger specification.

> [Umbrella] Simplified API layer for services and beyond
> ---
>
> Key: YARN-4793
> URL: https://issues.apache.org/jira/browse/YARN-4793
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Gour Saha
> Attachments: 20160603-YARN-Simplified-V1-API-Examples.adoc, 
> 20160603-YARN-Simplified-V1-API-Layer-For-Services.pdf, 
> 20160603-YARN-Simplified-V1-API-Layer-For-Services.yaml, 
> YARN-4793-yarn-native-services.001.patch
>
>
> [See the overview doc at YARN-4692; modifying and copy-pasting some of the 
> relevant pieces and sub-section 3.3.2 to track the specific sub-item.]
> Bringing a new service onto YARN today is not a simple experience. The APIs of 
> existing frameworks are either too low-level (native YARN), require writing 
> new code (for frameworks with programmatic APIs), or require writing a complex 
> spec (for declarative frameworks).
> In addition to building critical building blocks inside YARN (as part of 
> other efforts at YARN-4692), we should also look at simplifying the 
> user-facing story for building services. The experience of projects like 
> Slider building real-life services like HBase, Storm, Accumulo, Solr etc. 
> gives us some very good insight into what simplified APIs for building 
> services should look like.
> To this end, we should look at a new simple-services API layer backed by REST 
> interfaces. The REST layer can act as a single point of entry for creation 
> and lifecycle management of YARN services. Services here can range from 
> simple single-component apps to the most complex, multi-component 
> applications with special orchestration needs.
> We should also look at making this a unified REST-based entry point for other 
> important features like resource-profile management (YARN-3926), 
> package-definitions' lifecycle-management, and service-discovery (YARN-913 / 
> YARN-4757). We also need to flesh out its relation to our present, much 
> lower-level REST APIs (YARN-1695) in YARN for application-submission and 
> management.






[jira] [Created] (YARN-5582) SchedulerUtils#validate vcores even for DefaultResourceCalculator

2016-08-30 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-5582:
--

 Summary: SchedulerUtils#validate vcores even for 
DefaultResourceCalculator
 Key: YARN-5582
 URL: https://issues.apache.org/jira/browse/YARN-5582
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt


Configure memory = 20 GB and vcores = 3.
Submit a request for 5 containers, each with 4 GB memory and 5 vcores, from a 
MapReduce application.

{noformat}
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
 Invalid resource request, requested virtual cores < 0, or requested virtual 
cores > max configured, requestedVirtualCores=5, maxVirtualCores=3
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:274)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:105)
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:703)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:65)
at 
org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:115)
{noformat}

Should not validate vcores when the resource calculator is 
{{org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator}}.
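
A minimal sketch of the kind of guard being proposed, assuming the validation code has access to the configured ResourceCalculator (the class and method layout here are illustrative, not the actual SchedulerUtils change):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;

// Illustrative sketch: skip the vcore bounds check when the scheduler uses
// DefaultResourceCalculator, which only accounts for memory.
class VcoreValidationSketch {
  static void validateVcores(Resource requested, Resource maximum,
      ResourceCalculator rc) throws InvalidResourceRequestException {
    if (rc instanceof DefaultResourceCalculator) {
      return; // vcores are ignored by this calculator, so do not validate them
    }
    if (requested.getVirtualCores() < 0
        || requested.getVirtualCores() > maximum.getVirtualCores()) {
      throw new InvalidResourceRequestException("Invalid resource request,"
          + " requestedVirtualCores=" + requested.getVirtualCores()
          + ", maxVirtualCores=" + maximum.getVirtualCores());
    }
  }
}
{code}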






[jira] [Created] (YARN-5583) [YARN-3368] Fix paths in .gitignore

2016-08-30 Thread Sreenath Somarajapuram (JIRA)
Sreenath Somarajapuram created YARN-5583:


 Summary: [YARN-3368] Fix paths in .gitignore
 Key: YARN-5583
 URL: https://issues.apache.org/jira/browse/YARN-5583
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sreenath Somarajapuram
Assignee: Sreenath Somarajapuram


The paths for npm-debug.log & testem.log are specified incorrectly.






[jira] [Updated] (YARN-5583) [YARN-3368] Fix paths in .gitignore

2016-08-30 Thread Sreenath Somarajapuram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreenath Somarajapuram updated YARN-5583:
-
Attachment: YARN-5583-YARN-3368-0001.patch

[~sunilg] [~wangda] [~vvasudev]
Please help with this root-level patch.

This would trigger a complete build, and might cause unrelated pre-commit 
failures.

> [YARN-3368] Fix paths in .gitignore
> ---
>
> Key: YARN-5583
> URL: https://issues.apache.org/jira/browse/YARN-5583
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5583-YARN-3368-0001.patch
>
>
> The paths for npm-debug.log & testem.log are specified incorrectly.






[jira] [Updated] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-30 Thread Sreenath Somarajapuram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreenath Somarajapuram updated YARN-5503:
-
Description: 
- It might be good to have a readme file with the basic instructions.
- Change the package type to war, as ours is a web application.
- Just noticed that the hidden files that must be present in the base directory 
of an Ember app are missing. Most of them are used for configuration, and when 
they are missing Ember falls back to the default values.
-- They include: .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
.travis.yml, .watchmanconfig


  was:
- It might be good to have a readme file with the basic instructions.
- Just noticed that the hidden files that must be present in the base directory 
of an Ember app are missing. Most of them are used for configuration, and when 
they are missing Ember falls back to the default values.
-- They include: .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
.travis.yml, .watchmanconfig



> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0002.patch, YARN-5503-YARN-3368-0003.patch, 
> YARN-5503-YARN-3368-0004.patch, YARN-5503-YARN-3368.0005.patch
>
>
> - It might be good to have a readme file with the basic instructions.
> - Change the package type to war, as ours is a web application.
> - Just noticed that the hidden files that must be present in the base 
> directory of an Ember app are missing. Most of them are used for configuration, 
> and when they are missing Ember falls back to the default values.
> -- They include: .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig






[jira] [Commented] (YARN-5583) [YARN-3368] Fix paths in .gitignore

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448568#comment-15448568
 ] 

Hadoop QA commented on YARN-5583:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 33s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 2m 10s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:f62df43 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826153/YARN-5583-YARN-3368-0001.patch
 |
| JIRA Issue | YARN-5583 |
| Optional Tests |  asflicense  |
| uname | Linux c5723894b0a3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 91efda9 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12938/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix paths in .gitignore
> ---
>
> Key: YARN-5583
> URL: https://issues.apache.org/jira/browse/YARN-5583
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5583-YARN-3368-0001.patch
>
>
> The paths for npm-debug.log & testem.log are specified incorrectly.






[jira] [Commented] (YARN-5576) Core change to localize resource while container is running

2016-08-30 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448580#comment-15448580
 ] 

Varun Vasudev commented on YARN-5576:
-

Thanks for the patch [~jianhe]! 
1) Can you please look at the javac errors?

2) 
{code}
+if (!container.getContainerState().equals(
+org.apache.hadoop.yarn.server.nodemanager.
+containermanager.container.ContainerState.RUNNING)) {
+  throw new YarnException(
+  containerId + " is at " + container.getContainerState()
+  + " state. Not able to localize new resources.");
+}
{code}

Can we move this logic into the ContainerImpl class? Something like {code} 
if (!container.canLocalizeResources()) { {code}
That would keep all of this logic in one place. It looks like similar logic 
is required in the ResourceLocalization class?

{code}
+EnumSet set =
+EnumSet.of(ContainerState.LOCALIZING, ContainerState.RUNNING);
+if (!set.contains(c.getContainerState())) {
+  LOG.warn(c.getContainerId() + " is at " + c.getContainerState()
+  + " state, do not localize resources.");
+  return;
+}
{code}
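
For illustration, a minimal sketch of the suggested helper as it might look inside ContainerImpl (the method name comes from the suggestion above and is hypothetical, not part of the current patch):

{code}
// Hypothetical helper in ContainerImpl: one place that decides whether new
// resources may still be localized for this container.
public boolean canLocalizeResources() {
  ContainerState state = getContainerState();
  // Localizing additional resources only makes sense while the container is
  // still localizing or already running.
  return state == ContainerState.LOCALIZING
      || state == ContainerState.RUNNING;
}
{code}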

3)
{code}
@@ -844,22 +772,21 @@ public ContainerState transition(ContainerImpl container,
   ContainerResourceLocalizedEvent rsrcEvent = 
(ContainerResourceLocalizedEvent) event;
{code}
{code}
-  container.localizedResources.put(location, sys);
{code}
I couldn't figure out where this logic was moved to - can you please explain?

4)
{code}
+  if (new File(linkFile).exists()) {
+LOG.info("Symlink file already exists: " + linkFile);
+  } else {
{code}
We should throw an error here or at least flag this as a failed localization?

5)
{code}
+  container.diagnostics.append(failedEvent.getDiagnosticMessage());
{code}
We need to check the diagnostics string size: if an AM sends us too many failed 
requests, the diagnostics string will just balloon in size.
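
A minimal sketch of the kind of cap being suggested (the limit constant and helper are hypothetical):

{code}
// Hypothetical guard: bound the diagnostics buffer so repeated failed
// localization requests from an AM cannot grow it without limit.
private static final int MAX_DIAGNOSTICS_LENGTH = 64 * 1024;

private void appendDiagnostics(StringBuilder diagnostics, String message) {
  int remaining = MAX_DIAGNOSTICS_LENGTH - diagnostics.length();
  if (remaining <= 0) {
    return; // already at the cap, drop the new message
  }
  diagnostics.append(message.length() > remaining
      ? message.substring(0, remaining) : message);
}
{code}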

6)
{code}
+if (localizer != null && localizer.killContainerLocalizer.get()) {
+  LOG.info("New " + event.getType() + " localize request for "
+  + locId + ", remove old private localizer.");
+  cleanupPrivLocalizers(locId);
+  localizer = null;
+}
{code}
Can you explain the logic for this? I couldn't figure out why we need this.

7)
{code}
+System.out.println("==");
+System.out.println(appDir.getAbsolutePath());
+System.out.println(appSysDir.getAbsolutePath());
+System.out.println(containerDir.getAbsolutePath());
+System.out.println(containerSysDir.getAbsolutePath());
+System.out.println(targetFile.getAbsolutePath());
+System.out.println("==");
{code}
Do we need these lines in the test code? Maybe move them to LOG.debug?

8)
Can you also add a check in testLocalingResourceWhileContainerRunning to make 
sure we can’t localize for non-running containers?

> Core change to localize resource while container is running
> ---
>
> Key: YARN-5576
> URL: https://issues.apache.org/jira/browse/YARN-5576
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5576.1.patch
>
>







[jira] [Created] (YARN-5584) Include name of JAR/Tar/Zip on failure to expand artifact on download

2016-08-30 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-5584:


 Summary: Include name of JAR/Tar/Zip on failure to expand artifact 
on download
 Key: YARN-5584
 URL: https://issues.apache.org/jira/browse/YARN-5584
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.2
Reporter: Steve Loughran


If YARN can't expand a JAR/ZIP/tar file on download, the exception is passed 
back to the AM, but not the name of the file that failed. This makes the 
problem harder to track down than one would like.
{code}
java.util.zip.ZipException: invalid CEN header (bad signature)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.(ZipFile.java:215)
at java.util.zip.ZipFile.(ZipFile.java:145)
at java.util.zip.ZipFile.(ZipFile.java:159)
at org.apache.hadoop.fs.FileUtil.unZip(FileUtil.java:589)
at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:277)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:362)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
{code}
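
A minimal sketch of the improvement being asked for, assuming the unpack step can catch and rethrow with the local path attached (names are illustrative, not the actual FSDownload code):

{code}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;

// Illustrative only: wrap the unpack failure so the offending archive's name
// travels back to the AM along with the original exception.
final class UnpackSketch {
  static void unZipWithContext(File archive, File destDir) throws IOException {
    try {
      FileUtil.unZip(archive, destDir);
    } catch (IOException e) {
      throw new IOException("Failed to unpack " + archive.getAbsolutePath()
          + " into " + destDir.getAbsolutePath(), e);
    }
  }
}
{code}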






[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-08-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448595#comment-15448595
 ] 

Steve Loughran commented on YARN-3692:
--

Seems reasonable to me. Can someone with more knowledge of the codebase review 
this?

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}
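
For context, a minimal sketch of what the proposed call could look like against the RM web services, assuming the existing application-state endpoint (/ws/v1/cluster/apps/{appid}/state) and treating diagnosticMessage as the new field this JIRA proposes:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch only: kill an application through the RM REST API, carrying the
// proposed diagnosticMessage field in the request body.
public class KillWithDiagnosticSketch {
  public static void main(String[] args) throws Exception {
    String rm = "http://resourcemanager:8088";        // assumed RM web address
    String appId = "application_1472500000000_0001";  // example application id
    String body = "{\"state\":\"KILLED\","
        + "\"diagnosticMessage\":\"some message added by admin/workflow\"}";

    URL url = new URL(rm + "/ws/v1/cluster/apps/" + appId + "/state");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("RM responded with HTTP " + conn.getResponseCode());
  }
}
{code}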






[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448604#comment-15448604
 ] 

Hadoop QA commented on YARN-3692:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s {color} 
| {color:red} YARN-3692 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798487/0001-YARN-3692.patch |
| JIRA Issue | YARN-3692 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12939/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}






[jira] [Updated] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-30 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5561:

Attachment: YARN-5561.patch

Attached the patch for /containers and /appattempts. These are similar to 
retrieving a general entity, so a full table scan does not happen. Tested the 
APIs on a real cluster and verified them.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to know about all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Commented] (YARN-5583) [YARN-3368] Fix paths in .gitignore

2016-08-30 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448634#comment-15448634
 ] 

Sunil G commented on YARN-5583:
---

Looks good to me. I will commit this patch if there are no objections in a 
day.

> [YARN-3368] Fix paths in .gitignore
> ---
>
> Key: YARN-5583
> URL: https://issues.apache.org/jira/browse/YARN-5583
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5583-YARN-3368-0001.patch
>
>
> The paths for npm-debug.log & testem.log are specified incorrectly.






[jira] [Updated] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-30 Thread Sreenath Somarajapuram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreenath Somarajapuram updated YARN-5503:
-
Attachment: YARN-5503-YARN-3368-0001.patch

Attaching a fresh patch without root-level changes. Those will be added as part 
of YARN-5583.

> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch
>
>
> - It might be good to have a readme file with the basic instructions.
> - Change the package type to war, as ours is a web application.
> - Just noticed that the hidden files that must be present in the base 
> directory of an Ember app are missing. Most of them are used for configuration, 
> and when they are missing Ember falls back to the default values.
> -- They include: .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig






[jira] [Commented] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448692#comment-15448692
 ] 

Hadoop QA commented on YARN-5503:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
1s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:f62df43 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826161/YARN-5503-YARN-3368-0001.patch
 |
| JIRA Issue | YARN-5503 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux dc13dbbb52c7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 91efda9 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12940/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12940/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch
>
>
> - It might be good to have a readme file with the basic instructions.
> - Change the package type to war, as ours is a web application.
> - Just noticed that the hidden files that must be present in the base 
> directory of an Ember app are missing. Most of them are 

[jira] [Commented] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-30 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448777#comment-15448777
 ] 

Sreenath Somarajapuram commented on YARN-5503:
--

Regarding test4tests: The patch doesn't add or modify any business logic. So 
tests are not required.

> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch
>
>
> - It might be good to have a readme file with the basic instructions.
> - Change the package type to war, as ours is a web application.
> - Just noticed that the hidden files that must be present in the base 
> directory of an Ember app are missing. Most of them are used for configuration, 
> and when they are missing Ember falls back to the default values.
> -- They include: .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig






[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448799#comment-15448799
 ] 

Junping Du commented on YARN-5566:
--

Hi [~rkanter], sorry for the late reply as I am traveling. The above analysis 
makes sense to me. However, I need a bit more time to check the code. Will give 
it a review before EOD tomorrow.

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.






[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-08-30 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448804#comment-15448804
 ] 

Naganarasimha G R commented on YARN-3692:
-

Seems like the patch is not applying on trunk. Mind rebasing, 
[~rohithsharma]?

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}






[jira] [Commented] (YARN-5547) NMLeveldbStateStore should be more tolerant of unknown keys

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448813#comment-15448813
 ] 

Hadoop QA commented on YARN-5547:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 19 unchanged - 0 fixed = 22 total (was 19) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 12s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825646/YARN-5547.01.patch |
| JIRA Issue | YARN-5547 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c179b52ad91b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4bd45f5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12941/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/12941/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12941/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12941/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> NMLeveldbStateStore should be 

[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-08-30 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448832#comment-15448832
 ] 

Rohith Sharma K S commented on YARN-3692:
-

Sure, I will update the patch!

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}






[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-08-30 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448840#comment-15448840
 ] 

Rohith Sharma K S commented on YARN-4205:
-

It's been a long time since the patch was rebased. I will rebase it.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.
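
A minimal sketch of such a monitor (all names and the bookkeeping are hypothetical; the real service would read submit times and configured lifetimes from the RM's application state):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: periodically scan applications that have a configured lifetime
// and kill the ones that have run past it, measured from submit time.
public class LifetimeMonitorSketch {
  static final class AppInfo {
    final long submitTimeMs;
    final long lifetimeMs;
    AppInfo(long submitTimeMs, long lifetimeMs) {
      this.submitTimeMs = submitTimeMs;
      this.lifetimeMs = lifetimeMs;
    }
  }

  private final Map<String, AppInfo> monitoredApps = new ConcurrentHashMap<>();
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  void start(long intervalMs) {
    // The monitoring interval is configurable, as the JIRA describes.
    scheduler.scheduleWithFixedDelay(this::scan, intervalMs, intervalMs,
        TimeUnit.MILLISECONDS);
  }

  private void scan() {
    long now = System.currentTimeMillis();
    for (Map.Entry<String, AppInfo> e : monitoredApps.entrySet()) {
      AppInfo app = e.getValue();
      if (now - app.submitTimeMs > app.lifetimeMs) {
        kill(e.getKey()); // application exceeded its configured lifetime
        monitoredApps.remove(e.getKey());
      }
    }
  }

  private void kill(String appId) {
    System.out.println("Killing " + appId + " (lifetime exceeded)");
  }
}
{code}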






[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448855#comment-15448855
 ] 

Hadoop QA commented on YARN-4205:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} 
| {color:red} YARN-4205 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12764043/YARN-4205_03.patch |
| JIRA Issue | YARN-4205 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12942/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.






[jira] [Assigned] (YARN-5584) Include name of JAR/Tar/Zip on failure to expand artifact on download

2016-08-30 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S reassigned YARN-5584:
-

Assignee: Ajith S

> Include name of JAR/Tar/Zip on failure to expand artifact on download
> -
>
> Key: YARN-5584
> URL: https://issues.apache.org/jira/browse/YARN-5584
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Ajith S
>
> If YARN can't expand a JAR/ZIP/tar file on download, the exception is passed 
> back to the AM, but not the name of the file that failed. This makes the 
> problem harder to track down than one would like.
> {code}
> java.util.zip.ZipException: invalid CEN header (bad signature)
>   at java.util.zip.ZipFile.open(Native Method)
>   at java.util.zip.ZipFile.(ZipFile.java:215)
>   at java.util.zip.ZipFile.(ZipFile.java:145)
>   at java.util.zip.ZipFile.(ZipFile.java:159)
>   at org.apache.hadoop.fs.FileUtil.unZip(FileUtil.java:589)
>   at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:277)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:362)
>   at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> {code}






[jira] [Commented] (YARN-5582) SchedulerUtils#validate vcores even for DefaultResourceCalculator

2016-08-30 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448886#comment-15448886
 ] 

Naganarasimha G R commented on YARN-5582:
-

[~bibinchundatt], Makes sense not to verify for vcores when 
DefaultResourceCalculator is configured.

> SchedulerUtils#validate vcores even for DefaultResourceCalculator
> -
>
> Key: YARN-5582
> URL: https://issues.apache.org/jira/browse/YARN-5582
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> Configure memory = 20 GB and vcores = 3.
> Submit a request for 5 containers, each with 4 GB memory and 5 vcores, from a 
> MapReduce application.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
>  Invalid resource request, requested virtual cores < 0, or requested virtual 
> cores > max configured, requestedVirtualCores=5, maxVirtualCores=3
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:274)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:105)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:703)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:65)
> at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:115)
> {noformat}
> Should not validate vcores when the resource calculator is 
> {{org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator}}.






[jira] [Commented] (YARN-4793) [Umbrella] Simplified API layer for services and beyond

2016-08-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448934#comment-15448934
 ] 

Jian He commented on YARN-4793:
---

Thanks, Gour! Some comments I have on the patch:

*API Models*
- {artifact, resource, launch_command, number_of_containers} in Application 
seem duplicated with those inside the component. I feel that in this scenario a 
default global setting for artifacts, launch_command, etc. is not that 
appropriate, since different components will likely have different requirements. 
IMHO, we only need the ones in Component; this makes the interface cleaner and 
the underlying implementation simpler?
- unique_component_support: what is the primary use-case for having distinct 
component names?
- What is the BaseResource object for? Why do Application, ApplicationStatus, 
Container, and Resource need to extend this class?
- What does Artifact#APPLICATION mean?
- ApplicationState: what is the difference between RUNNING vs STARTED, and FINISHED vs STOPPED?
{code}
ACCEPTED, RUNNING, FINISHED, FAILED, STOPPED, STARTED;
{code}
- Application#lifetime: it is of String type. Does this mean we have to define a 
scheme for the user to specify the time in string format? How about just using a 
long type?
- ApplicationStatus#errorMessage: how about calling it diagnostics? Sometimes we 
may also return non-error messages.

*Implementation*
- “hadoop-yarn-services-api” should be under the hadoop-yarn-slider module, as a 
peer to hadoop-yarn-slider-core.
- Why are the changes in hadoop-project/pom.xml needed?
- We should not use the deprecated getPort() method {{logger.info("Listening at 
port = {}", applicationApiServer.getPort());}}; Jenkins will report an error.
- A couple of things for the code below:
{code}
HADOOP_CONFIG = getHadoopConfigs();

SLIDER_CONFIG = getSliderClientConfiguration();
{code}
-- We cannot load the hdfs config; that's for hdfs servers. Any reason you need 
the hdfs configs?
-- Instead of calling these two methods, I think we can just call 
{{YarnConfiguration yarnConf = new YarnConfiguration()}}. This will 
automatically load the yarn-site and core-site configs.

- Why do we need to explicitly call initHadoopBinding, which is already called 
by super.init() previously?
{code}
SliderClient client = new SliderClient() {
  @Override
  public void init(org.apache.hadoop.conf.Configuration conf) {
super.init(conf);
try {
  initHadoopBinding();
} catch (SliderException e) {
  throw new RuntimeException(
  "Unable to automatically init Hadoop binding", e);
} catch (IOException e) {
  throw new RuntimeException(
  "Unable to automatically init Hadoop binding", e);
}
  }
};
{code}
- These two catch clauses are identical, and Exception extends Throwable, so we 
only need to catch Throwable, if that's desired.
{code}
} catch (Exception e) {
  logger.error("Unable to create SliderClient", e);
  throw new RuntimeException(e.getMessage(), e);
} catch (Throwable e) {
  logger.error("Unable to create SliderClient", e);
  throw new RuntimeException(e.getMessage(), e);
}
{code}
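For illustration, the two clauses collapsed into one (assuming catching Throwable is really what is wanted here):
{code}
} catch (Throwable e) {
  // Throwable covers Exception as well, so a single clause is enough.
  logger.error("Unable to create SliderClient", e);
  throw new RuntimeException(e.getMessage(), e);
}
{code}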
- This will never return null, because numberOfContainers is initialized to 1. 
You might want to check for zero?
{code}
  // container size
  if (application.getNumberOfContainers() == null) {
throw new IllegalArgumentException(ERROR_CONTAINERS_COUNT_INVALID);
  }
{code}
- The lifetime field will never be null, because it is initialized to 
"unlimited" by default.
{code}
// Application lifetime if not specified, is set to unlimited lifetime
if (application.getLifetime() == null) {
  application.setLifetime(DEFAULT_UNLIMITED_LIFETIME);
}
{code}
- IIUC, all of this code is not needed: appOptions is only used for logging, 
uniqueGlobalPropertyCache is not used logically, and Python is no longer 
required in yarn-slider.
{code}
if (application.getConfiguration() != null
&& application.getConfiguration().getProperties() != null) {
  for (Map.Entry propEntry : application.getConfiguration()
  .getProperties().entrySet()) {
if (PROPERTY_PYTHON_PATH.equals(propEntry.getKey())) {
  addOptionsIfNotPresent(appOptions, uniqueGlobalPropertyCache,
  SliderXmlConfKeys.PYTHON_EXECUTABLE_PATH, propEntry.getValue());
  continue;
}
addOptionsIfNotPresent(appOptions, uniqueGlobalPropertyCache,
propEntry.getKey(), propEntry.getValue());
  }
}
{code}
- In an agent-less world, the status command is probably not required. We need a 
different mechanism to determine container status. Let's remove this for now.
{code}
appConfOptTriples.addAll(Arrays.asList(compName, configPrefix.toLowerCase()
+ ".statusCommand", DEFAULT_STATUS_CMD));
{code}
- Remove the unused parameter globalConf in createAppConfigComponent.
- remove unused method crea

[jira] [Commented] (YARN-3981) support timeline clients not associated with an application

2016-08-30 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448925#comment-15448925
 ] 

Rohith Sharma K S commented on YARN-3981:
-

Hi [~sjlee0], thanks for the summary of the discussion.

bq. This client would launch a special YARN app under the covers whose app 
master and its associated timeline writer can serve as the proxy for timeline 
data the client may write. When this special timeline client shuts down, it 
would tear down the associated YARN app also.
I assume launching a special app is just the same as a normal app, wherein the 
YARN frameworks take care of writing all the corresponding data to ATSv2. Are 
you referring to the special app as an Uber job? Even with an Uber job, for each 
application there will be another corresponding job in the list. This doubles 
the storage as well.

Another approach is:
As part of the NM daemon, start a new service similar to TimeLineWriterWebService. 
The idea is that the NMs report all these collector addresses to the RM. 
Introduce a new API in ClientRMService to get a collector address. The address 
is given by the RM at random (this can be decided later). This address is used 
by the timeline client. TimeLineClient exposes a new constructor with a 
flowName, so system properties can be written at the flow level.
These special entities are stored in a separate table, i.e. with the key 
*clusterid!flowName!entityType!entityId*.
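
A minimal sketch of how such a row key could be assembled (separator and field order as described above; the class and method are assumptions, not the actual schema code):

{code}
// Sketch only: build the proposed row key for flow-level entities that are
// written outside of an application context.
public final class FlowEntityRowKeySketch {
  private static final String SEPARATOR = "!";

  public static String rowKey(String clusterId, String flowName,
      String entityType, String entityId) {
    return String.join(SEPARATOR, clusterId, flowName, entityType, entityId);
  }

  public static void main(String[] args) {
    // e.g. yarn-cluster!word-count-flow!FLOW_RUN!run-42
    System.out.println(rowKey("yarn-cluster", "word-count-flow",
        "FLOW_RUN", "run-42"));
  }
}
{code}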

I would appreciate it if folks could give their suggestions/comments on the new approach.

> support timeline clients not associated with an application
> ---
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.






[jira] [Created] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-08-30 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5585:
---

 Summary: [Atsv2] Add a new filter fromId in REST endpoints
 Key: YARN-5585
 URL: https://issues.apache.org/jira/browse/YARN-5585
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelinereader
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S


The TimelineReader REST APIs provide a lot of filters for retrieving 
applications. Along with those, it would be good to add a new filter, i.e. 
fromId, so that entities can be retrieved starting after the given fromId.

Example: if the applications stored in the database are app-1, app-2 ... app-10, 
*getApps?limit=5* gives app-1 to app-5, but retrieving the next 5 apps is 
difficult.

So the proposal is to have fromId in the filter, like 
*getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to app-10.

This is very useful for pagination in the web UI.
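
A minimal sketch of how a UI backend might page through apps with the proposed filter (fromId is the proposal here, not an existing parameter; the reader address and endpoint follow the URLs discussed in YARN-5561):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch only: fetch one page of apps, then pass the last id of that page as
// the proposed fromId filter to get the next page.
public class FromIdPagingSketch {
  public static void main(String[] args) throws Exception {
    String reader = "http://timelinereader:8188";   // assumed reader address
    String page1 = fetch(reader + "/ws/v2/timeline/apps?limit=5");
    // Suppose the last app id returned in page1 was app-5; ask for the next page.
    String page2 = fetch(reader + "/ws/v2/timeline/apps?limit=5&fromId=app-5");
    System.out.println(page1);
    System.out.println(page2);
  }

  private static String fetch(String address) throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(address).openConnection();
    StringBuilder out = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        out.append(line).append('\n');
      }
    }
    return out.toString();
  }
}
{code}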






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-30 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448961#comment-15448961
 ] 

Rohith Sharma K S commented on YARN-5561:
-

I created a new JIRA, YARN-5585, for adding the new *fromId* filter to the REST endpoints.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to find all the entities in an 
> application.
> These URLs are highly required for the web UI.
> The new REST URLs would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5221:
--
Attachment: YARN-5221.013.patch

[~leftnoteasy],
bq. Same thing, in YARN, evolving is treated as "stable" in most cases. Even if 
we can change it bylaw.
Aah.. Thanks for clarifying

Updating patch: changed the annotation from Evolving to Unstable. 



> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch, 
> YARN-5221.009.patch, YARN-5221.010.patch, YARN-5221.011.patch, 
> YARN-5221.012.patch, YARN-5221.013.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request for Increase and Decrease 
> of Container Resources after initial allocation.
> YARN-5085 proposes to allow an AM to request for a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5264) Use FSQueue to store queue-specific information

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449031#comment-15449031
 ] 

Daniel Templeton commented on YARN-5264:


Thanks for cleaning this mess up, [~yufeigu].  Nice job on putting together a 
clean patch.  My only comment is a macro-level one.  You did all of this work 
to move the weights and shares and such from the maps in 
{{AllocationConfiguration}} into the queue tree.  You added the information to 
the queues, but you didn't remove it from the {{AllocationConfiguration}}.  
You're now storing the data twice, though the {{AllocationConfiguration}}'s 
copy of the data is only used to initialize the queues.  Think you can get rid 
of the corresponding maps in {{AllocationConfiguration}}?
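
As a rough sketch of what that could look like (method and field names below are 
illustrative, not taken from the patch):

{code}
// Sketch: initialize the queue's own fields once from the parsed allocation
// file, so the per-queue maps in AllocationConfiguration no longer need to be
// kept around afterwards.
void reinit(AllocationConfiguration allocConf) {
  this.weights = allocConf.getQueueWeight(getName());
  this.maxShare = allocConf.getMaxResources(getName());
  // ...other per-queue settings...
  // From here on, the FSQueue itself is the single source of truth.
}
{code}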

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch, 
> YARN-5264.003.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-30 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449059#comment-15449059
 ] 

Jason Lowe commented on YARN-5549:
--

Storing launch info in ATSv2 is fine with me and sounds preferable as long as 
it's easy for the user to get to it via the UI.

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch, 
> YARN-5549.003.patch, YARN-5549.004.patch
>
>
> The command could contain sensitive information, such as keystore passwords 
> or AWS credentials or other.  Instead of logging it as INFO, we should log it 
> as DEBUG and include a property to disable logging it at all.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.
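
A minimal sketch of the kind of guard being described (the property name below is 
made up, not a decided config key):

{code}
// Sketch only: log the launch command at DEBUG, and only when an (assumed)
// opt-in property is set, so sensitive command lines are not logged by default.
if (conf.getBoolean("yarn.resourcemanager.am.log-launch-command", false)
    && LOG.isDebugEnabled()) {
  LOG.debug("Command to launch container for ApplicationMaster: " + command);
}
{code}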



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5574) TestDistributedShell sets yarn.log.dir in the configuration instead of as a system property

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449063#comment-15449063
 ] 

Daniel Templeton commented on YARN-5574:


I'm unsurprised.  Otherwise this JIRA would have been filed years ago.

I didn't do any validation of the property.  I was merely looking at bare 
yarn.* properties for YARN-5575 and noticed the inconsistent use of 
yarn.log.dir.  Maybe we should change the summary and description to either fix 
or remove the property.
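
For reference, a quick sketch of the two forms in question (assuming {{conf}} is 
the test's Configuration instance):

{code}
// How yarn.log.dir is set everywhere else:
System.setProperty("yarn.log.dir", "target");

// What TestDistributedShell#setupInternal currently does instead:
conf.set("yarn.log.dir", "target");
{code}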

> TestDistributedShell sets yarn.log.dir in the configuration instead of as a 
> system property
> ---
>
> Key: YARN-5574
> URL: https://issues.apache.org/jira/browse/YARN-5574
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>
> In the {{setupInternal()}} method, the distributed shell has this line:
> {code}
> conf.set("yarn.log.dir", "target");
> {code}
> Everywhere else that "yarn.log.dir" is used, it's set as a system property.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4793) [Umbrella] Simplified API layer for services and beyond

2016-08-30 Thread Lei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449102#comment-15449102
 ] 

Lei Guo commented on YARN-4793:
---

As this JIRA aims to build a unified interface, I'd like to share some thoughts 
related to the resource modeling part. The core of YARN is still mapping 
resources to workloads. The resource modeling proposed in YARN-3926 extends the 
current static YARN resource model into a flat resource model, and end users gain 
the ability to define and schedule their own resource types. I am wondering 
whether we should go further and make the resource model hierarchical. The use 
case I see for the future is a heterogeneous environment with different hardware 
accelerators (GPU, Intel Xeon Phi, FPGA, etc). For example, if you treat one GPU 
as a unit of a special resource, the flat resource model is good enough. But we 
are seeing cases where a GPU is shared between applications, and where an 
application even prefers to allocate a certain range of memory inside the GPU to 
avoid cache rotation issues. In such cases it is hard for the scheduler to 
handle: there are relationships between resources (just like the relationships 
between applications in Slider), and the scheduler must allocate GPU memory and 
GPU cores on the same GPU. 

If we do have the vision to cover more complicated environments with YARN, maybe 
it's time to consider further extensions to the resource model together with the 
Slider integration and the unified service API.
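
Purely as an illustration of the hierarchical idea (the structure and names below 
are made up, not a proposed API):

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch: a GPU modeled as a resource that itself has sub-resources, so the
// scheduler can be told that gpu-memory and gpu-cores must come from the
// same physical device.
Map<String, Long> gpu0 = new HashMap<>();
gpu0.put("gpu-memory-mb", 4096L);
gpu0.put("gpu-cores", 512L);

Map<String, Object> request = new HashMap<>();
request.put("memory-mb", 8192L);
request.put("vcores", 4L);
request.put("gpu/0", gpu0);  // nested: both sub-resources bound to GPU 0
{code}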

> [Umbrella] Simplified API layer for services and beyond
> ---
>
> Key: YARN-4793
> URL: https://issues.apache.org/jira/browse/YARN-4793
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Gour Saha
> Attachments: 20160603-YARN-Simplified-V1-API-Examples.adoc, 
> 20160603-YARN-Simplified-V1-API-Layer-For-Services.pdf, 
> 20160603-YARN-Simplified-V1-API-Layer-For-Services.yaml, 
> YARN-4793-yarn-native-services.001.patch
>
>
> [See overview doc at YARN-4692, modifying and copy-pasting some of the 
> relevant pieces and sub-section 3.3.2 to track the specific sub-item.]
> Bringing a new service on YARN today is not a simple experience. The APIs of 
> existing frameworks are either too low­ level (native YARN), require writing 
> new code (for frameworks with programmatic APIs ) or writing a complex spec 
> (for declarative frameworks).
> In addition to building critical building blocks inside YARN (as part of 
> other efforts at YARN-4692), we should also look to simplifying the user 
> facing story for building services. Experience of projects like Slider 
> building real-­life services like HBase, Storm, Accumulo, Solr etc gives us 
> some very good learnings on how simplified APIs for building services will 
> look like.
> To this end, we should look at a new simple-services API layer backed by REST 
> interfaces. The REST layer can act as a single point of entry for creation 
> and lifecycle management of YARN services. Services here can range from 
> simple single-­component apps to the most complex, multi­-component 
> applications needing special orchestration needs.
> We should also look at making this a unified REST based entry point for other 
> important features like resource­-profile management (YARN-3926), 
> package-definitions' lifecycle­-management and service­-discovery (YARN-913 / 
> YARN-4757). We also need to flesh out its relation to our present much ­lower 
> level REST APIs (YARN-1695) in YARN for application-­submission and 
> management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5586) Update the Resources class to consider all resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5586:
---

 Summary: Update the Resources class to consider all resource types
 Key: YARN-5586
 URL: https://issues.apache.org/jira/browse/YARN-5586
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev


The Resources class provides a bunch of useful functions like clone, addTo, 
etc. These need to be updated to consider all resource types instead of just 
memory and cpu.
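
A rough sketch of the direction (assuming Resource grows some way to enumerate 
all of its resource types; getResources()/ResourceInformation below are assumed 
accessors, not the current API):

{code}
// Sketch only: addTo iterating over every resource type instead of only
// memory and vcores.
public static Resource addTo(Resource lhs, Resource rhs) {
  for (ResourceInformation entry : rhs.getResources()) {
    long current = lhs.getResourceValue(entry.getName());
    lhs.setResourceValue(entry.getName(), current + entry.getValue());
  }
  return lhs;
}
{code}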



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5587) Add support for resource profiles

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5587:
---

 Summary: Add support for resource profiles
 Key: YARN-5587
 URL: https://issues.apache.org/jira/browse/YARN-5587
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev


Add support for resource profiles on the RM side to allow users to use 
shorthands to specify resource requirements.
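
To make the shorthand idea concrete, something along these lines (format and 
profile names are purely illustrative, not a decided spec):

{noformat}
resource-profiles:
  small:  { memory-mb: 1024, vcores: 1 }
  medium: { memory-mb: 4096, vcores: 2 }
  large:  { memory-mb: 8192, vcores: 4 }
{noformat}

An application would then just ask for, say, the "medium" profile instead of 
spelling out each resource type in the request.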



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5588) Add support for resource profiles in distributed shell and MapReduce

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5588:
---

 Summary: Add support for resource profiles in distributed shell 
and MapReduce
 Key: YARN-5588
 URL: https://issues.apache.org/jira/browse/YARN-5588
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5590) Add support for increase and decrease of container resources with resource profiles

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5590:
---

 Summary: Add support for increase and decrease of container 
resources with resource profiles
 Key: YARN-5590
 URL: https://issues.apache.org/jira/browse/YARN-5590
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5591) Update web UIs to reflect multiple resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5591:
---

 Summary: Update web UIs to reflect multiple resource types
 Key: YARN-5591
 URL: https://issues.apache.org/jira/browse/YARN-5591
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5589:
---

 Summary: Update CapacitySchedulerConfiguration minimum and maximum 
calculations to consider all resource types
 Key: YARN-5589
 URL: https://issues.apache.org/jira/browse/YARN-5589
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev
Assignee: Varun Vasudev






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5592) Add support for dynamic resource updates with multiple resource types

2016-08-30 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-5592:
---

 Summary: Add support for dynamic resource updates with multiple 
resource types
 Key: YARN-5592
 URL: https://issues.apache.org/jira/browse/YARN-5592
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Vasudev






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-08-30 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449165#comment-15449165
 ] 

Tao Jie commented on YARN-4997:
---

Thank you [~templedf] for your review!  [~kasha], could you please give it a 
review?

> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch, 
> YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, 
> YARN-4997-006.patch, YARN-4997-007.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3926) Extend the YARN resource model for easier resource-type management and profiles

2016-08-30 Thread Lei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449183#comment-15449183
 ] 

Lei Guo commented on YARN-3926:
---

[~vvasudev], some thoughts I just commented on YARN-4793.

As this JIRA aims to build a unified interface, I'd like to share some thoughts 
related to the resource modeling part. The core of YARN is still mapping 
resources to workloads. The resource modeling proposed in YARN-3926 extends the 
current static YARN resource model into a flat resource model, and end users gain 
the ability to define and schedule their own resource types. I am wondering 
whether we should go further and make the resource model hierarchical. The use 
case I see for the future is a heterogeneous environment with different hardware 
accelerators (GPU, Intel Xeon Phi, FPGA, etc). For example, if you treat one GPU 
as a unit of a special resource, the flat resource model is good enough. But we 
are seeing cases where a GPU is shared between applications, and where an 
application even prefers to allocate a certain range of memory inside the GPU to 
avoid cache rotation issues. In such cases it is hard for the scheduler to 
handle: there are relationships between resources (just like the relationships 
between applications in Slider), and the scheduler must allocate GPU memory and 
GPU cores on the same GPU.

If we do have the vision to cover more complicated environments with YARN, maybe 
it's time to consider further extensions to the resource model together with the 
Slider integration and the unified service API.

> Extend the YARN resource model for easier resource-type management and 
> profiles
> ---
>
> Key: YARN-3926
> URL: https://issues.apache.org/jira/browse/YARN-3926
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: Proposal for modifying resource model and profiles.pdf
>
>
> Currently, there are efforts to add support for various resource-types such 
> as disk(YARN-2139), network(YARN-2140), and  HDFS bandwidth(YARN-2681). These 
> efforts all aim to add support for a new resource type and are fairly 
> involved efforts. In addition, once support is added, it becomes harder for 
> users to specify the resources they need. All existing jobs have to be 
> modified, or have to use the minimum allocation.
> This ticket is a proposal to extend the YARN resource model to a more 
> flexible model which makes it easier to support additional resource-types. It 
> also considers the related aspect of “resource profiles” which allow users to 
> easily specify the various resources they need for any given container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3926) Extend the YARN resource model for easier resource-type management and profiles

2016-08-30 Thread Lei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449183#comment-15449183
 ] 

Lei Guo edited comment on YARN-3926 at 8/30/16 2:37 PM:


[~vvasudev], some thoughts I just commented on YARN-4793.

As this JIRA aims to build a unified interface, I'd like to share some thoughts 
related to the resource modeling part. The core of YARN is still mapping 
resources to workloads. The resource modeling proposed in YARN-3926 extends the 
current static YARN resource model into a flat resource model, and end users gain 
the ability to define and schedule their own resource types. I am wondering 
whether we should go further and make the resource model hierarchical. The use 
case I see for the future is a heterogeneous environment with different hardware 
accelerators (GPU, Intel Xeon Phi, FPGA, etc). For example, if you treat one GPU 
as a unit of a special resource, the flat resource model is good enough. But we 
are seeing cases where a GPU is shared between applications, and where an 
application even prefers to allocate a certain range of memory inside the GPU to 
avoid cache rotation issues. In such cases it is hard for the scheduler to 
handle: there are relationships between resources (just like the relationships 
between applications in Slider), and the scheduler must allocate GPU memory and 
GPU cores on the same GPU.

If we do have the vision to cover more complicated environments with YARN, maybe 
it's time to consider further extensions to the resource model together with the 
Slider integration and the unified service API.


was (Author: grey):
[~vvasudev],  some thoughts I just commented in Yarn-4793.

As this Jira targets to build a unified interface, I'd like to share some 
thoughts related to the resource modeling part. The core of Yarn is still to 
map the resource and workload. For the resource modeling proposed in YARN-3926, 
it extends the current Yarn static resource modeling to be a flat resource 
modeling. The end user has the potential to define/schedule their own resource. 
I am considering whether we should do further extension to make the resource 
modeling to be a hierarchy based modeling. The use case I see for future is the 
heterogenous environment with different hardware accelerators (GPU, Intel Xeon 
Phi, FPGA, etc). For example, if you treat one GPU as a unit of special 
resource, the flat resource modeling is good enough. But we are seeing cases 
that GPU to be shared between applications, even the application prefer to 
allocate certain range of memory inside GPU to avoid cache rotation issue. In 
this case, it's hard for scheduler to handle. There is relationship between 
resource (just like the relationship between applications in Slider). Scheduler 
must allocate GPU memory and GPU core on the same GPU.

If we do have vision to cover more complicate environments with Yarn, maybe 
it's time to consider further extension on the resource modeling together with 
Slider integration and unified service API.

> Extend the YARN resource model for easier resource-type management and 
> profiles
> ---
>
> Key: YARN-3926
> URL: https://issues.apache.org/jira/browse/YARN-3926
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: Proposal for modifying resource model and profiles.pdf
>
>
> Currently, there are efforts to add support for various resource-types such 
> as disk(YARN-2139), network(YARN-2140), and  HDFS bandwidth(YARN-2681). These 
> efforts all aim to add support for a new resource type and are fairly 
> involved efforts. In addition, once support is added, it becomes harder for 
> users to specify the resources they need. All existing jobs have to be 
> modified, or have to use the minimum allocation.
> This ticket is a proposal to extend the YARN resource model to a more 
> flexible model which makes it easier to support additional resource-types. It 
> also considers the related aspect of “resource profiles” which allow users to 
> easily specify the various resources they need for any given container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5503:
--
Attachment: YARN-5503-YARN-3368.0006.patch

Patch looks good to me. Attaching the same patch with the correct version number 
for tracking. Will commit once Jenkins is back.

> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch, YARN-5503-YARN-3368.0006.patch
>
>
> - Feel it might be good to have a readme file with the basic instructions.
> - Change package type to war, as ours is a web application
> - Just noticed that the hidden files that must be present in the base 
> directory of ember app, are missing. Most of them are used for configuration, 
> and when missing, the default values would be used by Ember.
> -- They include - .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5583) [YARN-3368] Fix wrong paths in .gitignore

2016-08-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5583:
--
Summary: [YARN-3368] Fix wrong paths in .gitignore  (was: [YARN-3368] Fix 
paths in .gitignore)

> [YARN-3368] Fix wrong paths in .gitignore
> -
>
> Key: YARN-5583
> URL: https://issues.apache.org/jira/browse/YARN-5583
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5583-YARN-3368-0001.patch
>
>
> The npm-debug.log and testem.log paths are specified incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5593) [Umbrella] Add support for YARN Allocation composed of multiple containers/processes

2016-08-30 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5593:
-

 Summary: [Umbrella] Add support for YARN Allocation composed of 
multiple containers/processes
 Key: YARN-5593
 URL: https://issues.apache.org/jira/browse/YARN-5593
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: Arun Suresh


Opening this to explicitly call out and track some of the ideas that were 
discussed in YARN-1040: specifically, the concept of an {{Allocation}} against 
which an AM can start multiple {{Containers}}, as long as the sum of resources 
used by all the containers {{fitsIn()}} the Resources leased to the 
{{Allocation}}.
This is especially useful for AMs that might want to target certain operations 
(like upgrade / restart) at specific containers / processes within an 
Allocation without fear of losing the allocation.
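
A small sketch of the accounting an AM could do against such a lease 
({{Allocation}} itself is the proposed concept, not an existing class; 
Resources#fitsIn and Resources#add are the existing utility methods):

{code}
// Sketch: keep starting containers under the Allocation while the running
// total of requested resources still fits in the leased amount.
Resource leased = Resource.newInstance(8192, 8);  // resources leased to the Allocation
Resource inUse  = Resource.newInstance(0, 0);

Resource next = Resource.newInstance(2048, 2);
if (Resources.fitsIn(Resources.add(inUse, next), leased)) {
  inUse = Resources.add(inUse, next);
  // safe to start another container/process within this Allocation
}
{code}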



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4868) [API] Distinguish Allocation from the container/process/task in the public YARN records

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4868:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> [API] Distinguish Allocation from the container/process/task in the public 
> YARN records
> ---
>
> Key: YARN-4868
> URL: https://issues.apache.org/jira/browse/YARN-4868
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4869) Create RMAllocation and associated state machine

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4869:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> Create RMAllocation and associated state machine
> 
>
> Key: YARN-4869
> URL: https://issues.apache.org/jira/browse/YARN-4869
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4870) Changes in RM to lease and track Allocations instead of Containers

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4870:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> Changes in RM to lease and track Allocations instead of Containers
> --
>
> Key: YARN-4870
> URL: https://issues.apache.org/jira/browse/YARN-4870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4872) [API] Extend ContainerManagementProtocol to use Allocations in addition to containers

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4872:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> [API] Extend ContainerManagementProtocol to use Allocations in addition to 
> containers
> -
>
> Key: YARN-4872
> URL: https://issues.apache.org/jira/browse/YARN-4872
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4871) [API] Extend ApplicationMasterProtocol to use Allocations in addition to Containers

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4871:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> [API] Extend ApplicationMasterProtocol to use Allocations in addition to 
> Containers
> ---
>
> Key: YARN-4871
> URL: https://issues.apache.org/jira/browse/YARN-4871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4873) Introduce AllocationManager in the NodeManager to handle Allocations

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4873:
--
Parent Issue: YARN-5593  (was: YARN-4726)

> Introduce AllocationManager in the NodeManager to handle Allocations
> 
>
> Key: YARN-4873
> URL: https://issues.apache.org/jira/browse/YARN-4873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5576) Core change to localize resource while container is running

2016-08-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449258#comment-15449258
 ] 

Jian He commented on YARN-5576:
---

Thanks for the review, Varun

bq. Can we move this logic into the ContainerImpl class?
bq.  It looks like similar logic is required in the ResourceLocalization class?
The logic is different in these two places. The first one should only allow 
localizing when the container is running, while the second should allow it if the 
container is either localizing or running. This logic is tailored to the 
localize API only; I feel adding a canLocalizeResources method to the Container 
interface makes it a bit confusing, as a container can also localize while in the 
localizing state in the normal scenario too. I prefer to keep it outside, as it 
should be clear enough to readers.

bq. I couldn't figure out where this logic was moved to - can you please 
explain?
It's moved to the ResourceSet#resourceLocalized method.
bq. We need to check the diagnostics string size - if a AM sends us too many 
failed requests, the diagnostics string will just balloon in size.
I'm not quite sure how to check, though. It doesn't seem appropriate to add a new 
config for the diagnostics size. So just skip appending the diagnostics if it is 
larger than, say, 2000 characters?
bq. We should throw an error here or at least flag this as a failed 
localization?
I don't think we should throw an error. It's mentioned in the doc that we'll skip 
updating the symlink for now. This is a valid use case if we want to replace an 
existing resource while the container is running. The symlink change, 
instead of being done here, will be done later when the container re-launches.
bq. Can you explain the logic for this? I couldn't figure out why we need this.
It's required because, in the normal scenario, when localization completes 
before the container launches, the private localizer thread is 
interrupted. When we re-localize, we need to create a new Thread object, as 
the old thread is stopped. Added code comments for this.
bq. Do we need these lines in the test code? Maybe move them to LOG.debug?
I just removed it.
bq. Can you also add a check in testLocalingResourceWhileContainerRunning to 
make sure we can’t localize for non-running containers?
Will do.
bq. Do we need these lines in the test code? Maybe move them to LOG.debug?
It's useful for debugging the test. Anyway, I just removed it.
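
For illustration, one simple way such a guard could look (the 2000-character cap 
is just the number mentioned above, not a decided value):

{code}
// Sketch only: stop appending once the diagnostics reach a fixed size so a
// misbehaving AM cannot make the string balloon.
private static final int MAX_DIAGNOSTICS_LENGTH = 2000;

private void addDiagnostics(StringBuilder diagnostics, String message) {
  if (diagnostics.length() + message.length() <= MAX_DIAGNOSTICS_LENGTH) {
    diagnostics.append(message);
  }
}
{code}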

> Core change to localize resource while container is running
> ---
>
> Key: YARN-5576
> URL: https://issues.apache.org/jira/browse/YARN-5576
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5576.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5576) Core change to localize resource while container is running

2016-08-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449258#comment-15449258
 ] 

Jian He edited comment on YARN-5576 at 8/30/16 3:05 PM:


Thanks for the review, Varun

bq. Can we move this logic into the ContainerImpl class?
bq.  It looks like similar logic is required in the ResourceLocalization class?
The logic is different in these two places. The first one should only allow 
localizing when the container is running, while the second should allow it if the 
container is either localizing or running. This logic is tailored to the 
localize API only; I feel adding a canLocalizeResources method to the Container 
interface makes it a bit confusing, as a container can also localize while in the 
localizing state in the normal scenario too. I prefer to keep it outside, as it 
should be clear enough to readers.

bq. I couldn't figure out where this logic was moved to - can you please 
explain?
It's moved to the ResourceSet#resourceLocalized method.
bq. We need to check the diagnostics string size - if a AM sends us too many 
failed requests, the diagnostics string will just balloon in size.
I'm not quite sure how to check, though. It doesn't seem appropriate to add a new 
config for the diagnostics size. So just skip appending the diagnostics if it is 
larger than, say, 2000 characters?
bq. We should throw an error here or at least flag this as a failed 
localization?
I don't think we should throw an error. It's mentioned in the doc that we'll skip 
updating the symlink for now. This is a valid use case if we want to replace an 
existing resource while the container is running. The symlink change, 
instead of being done here, will be done later when the container re-launches.
bq. Can you explain the logic for this? I couldn't figure out why we need this.
It's required because, in the normal scenario, when localization completes 
before the container launches, the private localizer thread is 
interrupted. When we re-localize, we need to create a new Thread object, as 
the old thread is stopped. Added code comments for this.
bq. Can you also add a check in testLocalingResourceWhileContainerRunning to 
make sure we can’t localize for non-running containers?
Will do.
bq. Do we need these lines in the test code? Maybe move them to LOG.debug?
It's useful for debugging the test. Anyway, I just removed it.


was (Author: jianhe):
Thanks for the review, Varun

bq. Can we move this logic into the ContainerImpl class?
bq.  It looks like similar logic is required in the ResourceLocalization class?
The logic is different for these two places. The first one should only allow 
localizing when the container is running, while the second should allow if the 
container is either localizing or running. This logic is tailored to the 
localize API only, I feel adding canLocalizeResources method to the interface 
of Container makes it a bit confusing, as container can also localize while at 
localizing state in the normal scenario too. I prefer keep it outside as it 
should be clear enough to readers

bq. I couldn't figure out where this logic was moved to - can you please 
explain?
It's moved to ResourceSet#resourceLocalized method 
bq. We need to check the diagnostics string size - if a AM sends us too many 
failed requests, the diagnostics string will just balloon in size.
Not quite sure how to check though. doesn't seem appropriate to add a new 
config for the diagnostics size. So just skip appending the diagnostics if it 
is larger than say 2000 in length?
bq. We should throw an error here or at least flag this as a failed 
localization?
I don't think we should throw error. It's mentioned in the doc that we'll skip 
updating the symlink for now. This is a valid use case if we want to replacing 
existing resource while container is running. The changing symlink part, 
instead of being done here, will be done later when container re-launches.
bq. Can you explain the logic for this? I couldn't figure out why we need this.
It's required because, in the normal scenario, when the localization completes 
before launching the container, the private localizer thread will be 
interrupted. And when we re-localize, we need to create a new Thread object as 
the old thread is stopped. Added code comments for this
bq. Do we need these lines in the test code? Maybe move them to LOG.debug?
I just removed it.
bq. Can you also add a check in testLocalingResourceWhileContainerRunning to 
make sure we can’t localize for non-running containers?
will do
bq. Do we need these lines in the test code? Maybe move them to LOG.debug?
It's useful for debugging test. Anyway, I just removed it.

> Core change to localize resource while container is running
> ---
>
> Key: YARN-5576
> URL: https://issues.apache.org/jira/browse/YARN-5576
> Pr

[jira] [Created] (YARN-5594) Handle old data format while recovering RM

2016-08-30 Thread Tatyana But (JIRA)
Tatyana But created YARN-5594:
-

 Summary: Handle old data format while recovering RM
 Key: YARN-5594
 URL: https://issues.apache.org/jira/browse/YARN-5594
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Tatyana But


We got this error after upgrading a cluster from v2.5.1 to v2.7.0.

2016-08-25 17:20:33,293 ERROR
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
load/recover state
com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
an invalid tag (zero).
at 
com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
at 
com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044

The reason for this problem is that different formats are used for the files 
/var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*
 in these Hadoop versions.

This fix handles the old data format during RM recovery if an 
InvalidProtocolBufferException occurs.
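
For illustration, the fallback could look roughly like this (readFieldsOldFormat() 
is a hypothetical method standing in for whatever reads the pre-2.7 layout; the 
stream-reset detail is also an assumption):

{code}
// Sketch only: try the current protobuf-based format first and fall back to
// the old on-disk format when the parse fails.
try {
  identifierData.readFields(in);                // current (2.7.0) format
} catch (InvalidProtocolBufferException e) {
  in.reset();                                   // rewind to re-read the bytes
  identifierData.readFieldsOldFormat(in);       // hypothetical old-format reader
}
{code}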



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449278#comment-15449278
 ] 

Hadoop QA commented on YARN-5503:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 35s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
2s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s 
{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:f62df43 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826191/YARN-5503-YARN-3368.0006.patch
 |
| JIRA Issue | YARN-5503 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 050b0755271b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / a58afc2 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12944/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12944/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch, YARN-5503-YARN-3368.0006.patch
>
>
> - Feel it might be good to have a readme file with the basic instructions.
> - Change package type to war, as ours is a web application
> - Just noticed that the hidden files that must be present in the base 
> directory of ember app

[jira] [Updated] (YARN-5594) Handle old data format while recovering RM

2016-08-30 Thread Tatyana But (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tatyana But updated YARN-5594:
--
Description: 
We got this error after upgrading a cluster from v2.5.1 to v2.7.0.
{noformat}
2016-08-25 17:20:33,293 ERROR
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
load/recover state
com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
an invalid tag (zero).
at 
com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
at 
com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044
{noformat}
The reason for this problem is that different formats are used for the files 
/var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*
 in these Hadoop versions.

This fix handles the old data format during RM recovery if an 
InvalidProtocolBufferException occurs.

  was:
We've got that error after upgrade cluster from v.2.5.1 to 2.7.0.

2016-08-25 17:20:33,293 ERROR
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
load/recover state
com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
an invalid tag (zero).
at 
com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
at 
org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
at 
com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
at 
com.google.p

[jira] [Updated] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder for deployment

2016-08-30 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5503:
--
Summary: [YARN-3368] Add missing hidden files in webapp folder for 
deployment  (was: [YARN-3368] Add missing hidden files in webapp folder)

> [YARN-3368] Add missing hidden files in webapp folder for deployment
> 
>
> Key: YARN-5503
> URL: https://issues.apache.org/jira/browse/YARN-5503
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5503-YARN-3368-0001.patch, 
> YARN-5503-YARN-3368-0001.patch, YARN-5503-YARN-3368-0002.patch, 
> YARN-5503-YARN-3368-0003.patch, YARN-5503-YARN-3368-0004.patch, 
> YARN-5503-YARN-3368.0005.patch, YARN-5503-YARN-3368.0006.patch
>
>
> - Feel it might be good to have a readme file with the basic instructions.
> - Change package type to war, as ours is a web application
> - Just noticed that the hidden files that must be present in the base 
> directory of ember app, are missing. Most of them are used for configuration, 
> and when missing, the default values would be used by Ember.
> -- They include - .bowerrc, .editorconfig, .ember-cli, .gitignore, .jshintrc, 
> .travis.yml, .watchmanconfig



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5594) Handle old data format while recovering RM

2016-08-30 Thread Tatyana But (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tatyana But updated YARN-5594:
--
Attachment: YARN-5594.001.patch

> Handle old data format while recovering RM
> --
>
> Key: YARN-5594
> URL: https://issues.apache.org/jira/browse/YARN-5594
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tatyana But
> Attachments: YARN-5594.001.patch
>
>
> We've got that error after upgrading the cluster from v.2.5.1 to 2.7.0.
> {noformat}
> 2016-08-25 17:20:33,293 ERROR
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
> load/recover state
> com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
> an invalid tag (zero).
> at 
> com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
> at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
> at 
> com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044
> {noformat}
> The reason for this problem is that we use different formats for the files 
> /var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*
>  in these Hadoop versions.
> This fix handles the old data format during RM recovery if an 
> InvalidProtocolBufferException occurs.
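For illustration only, here is a minimal sketch of the fallback approach described above (this is not the attached patch): try the current protobuf-based layout first and, on InvalidProtocolBufferException, re-read the bytes assuming the legacy layout. The method names readNewFormat/readOldFormat and the field order in the legacy branch are hypothetical placeholders.
{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class TokenStateReader {
  // Hypothetical carrier for whatever fields the token data holds.
  static class TokenData { long renewDate; byte[] identifier; }

  /** Try the current (protobuf) layout first, fall back to the legacy one. */
  static TokenData read(byte[] bytes) throws IOException {
    try {
      return readNewFormat(bytes);           // 2.7.x protobuf-based layout
    } catch (IOException e) {                // e.g. InvalidProtocolBufferException
      return readOldFormat(bytes);           // 2.5.x legacy layout
    }
  }

  // Placeholder: parse the protobuf-encoded token data (details omitted).
  static TokenData readNewFormat(byte[] bytes) throws IOException {
    throw new IOException("Protocol message contained an invalid tag (zero).");
  }

  // Placeholder: parse the legacy layout with a plain DataInputStream.
  static TokenData readOldFormat(byte[] bytes) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
    TokenData d = new TokenData();
    d.renewDate = in.readLong();             // field order is illustrative only
    d.identifier = new byte[in.available()];
    in.readFully(d.identifier);
    return d;
  }
}
{code}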



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449417#comment-15449417
 ] 

Hadoop QA commented on YARN-5221:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 29 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 51s 
{color} | {color:red} root: The patch generated 11 new + 1900 unchanged - 76 
fixed = 1911 total (was 1976) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 0 new + 123 unchanged - 2 fixed = 123 total (was 125) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 941 unchanged - 20 fixed = 941 total (was 961) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} 

[jira] [Commented] (YARN-5593) [Umbrella] Add support for YARN Allocation composed of multiple containers/processes

2016-08-30 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449450#comment-15449450
 ] 

Arun Suresh commented on YARN-5593:
---

Moved a bunch of tasks over from YARN-4726.

> [Umbrella] Add support for YARN Allocation composed of multiple 
> containers/processes
> 
>
> Key: YARN-5593
> URL: https://issues.apache.org/jira/browse/YARN-5593
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Opening this to explicitly call out and track some of the ideas that were 
> discussed in YARN-1040. Specifically, the concept of an {{Allocation}}, against 
> which an AM can start multiple {{Containers}}, as long as the sum of resources 
> used by all containers {{fitsIn()}} the Resources leased to the {{Allocation}}.
> This is especially useful for AMs that might want to target certain 
> operations (like upgrade / restart) on specific containers / processes within 
> an Allocation without fear of losing the allocation.
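As a rough illustration of the admission check implied above (using a simplified stand-in for YARN's Resource type, not the real classes), a container start against an Allocation would be accepted only while the sum of all live containers still fits in the leased Resources:
{code}
import java.util.ArrayList;
import java.util.List;

public class AllocationSketch {
  // Simplified stand-in for YARN's Resource; illustrative only.
  static class Resource {
    final int memoryMb, vcores;
    Resource(int memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }
    boolean fitsIn(Resource cap) { return memoryMb <= cap.memoryMb && vcores <= cap.vcores; }
    Resource plus(Resource o) { return new Resource(memoryMb + o.memoryMb, vcores + o.vcores); }
  }

  private final Resource leased;                       // resources leased to this Allocation
  private final List<Resource> running = new ArrayList<>();

  AllocationSketch(Resource leased) { this.leased = leased; }

  /** Accept a new container only if the running sum still fits in the lease. */
  synchronized boolean tryStartContainer(Resource ask) {
    Resource sum = ask;
    for (Resource r : running) {
      sum = sum.plus(r);
    }
    if (!sum.fitsIn(leased)) {
      return false;                                    // would exceed the Allocation
    }
    running.add(ask);
    return true;
  }

  public static void main(String[] args) {
    AllocationSketch a = new AllocationSketch(new Resource(4096, 4));
    System.out.println(a.tryStartContainer(new Resource(2048, 2)));  // true
    System.out.println(a.tryStartContainer(new Resource(3072, 2)));  // false, exceeds lease
  }
}
{code}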



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5594) Handle old data format while recovering RM

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449501#comment-15449501
 ] 

Daniel Templeton commented on YARN-5594:


Thanks for the patch, [~Tatyana But].  It seems to me like a very point-in-time 
fix.  What happens when the format changes again?

> Handle old data format while recovering RM
> --
>
> Key: YARN-5594
> URL: https://issues.apache.org/jira/browse/YARN-5594
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tatyana But
> Attachments: YARN-5594.001.patch
>
>
> We've got that error after upgrading the cluster from v.2.5.1 to 2.7.0.
> {noformat}
> 2016-08-25 17:20:33,293 ERROR
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
> load/recover state
> com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
> an invalid tag (zero).
> at 
> com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
> at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
> at 
> com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044
> {noformat}
> The reason for this problem is that we use different formats for the files 
> /var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*
>  in these Hadoop versions.
> This fix handles the old data format during RM recovery if an 
> InvalidProtocolBufferException occurs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449515#comment-15449515
 ] 

Daniel Templeton commented on YARN-5549:


I would still argue to keep the config param.  There are many reasons why one 
would want to enable debug logging, all of which can cause the leaking of 
credentials into the logs.  The point of this JIRA is to secure the application 
command contents.  Without the config param, we're only making it a bit more 
secure, sometimes.

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch, 
> YARN-5549.003.patch, YARN-5549.004.patch
>
>
> The command could contain sensitive information, such as keystore passwords, 
> AWS credentials, or other secrets.  Instead of logging it as INFO, we should log it 
> as DEBUG and include a property to disable logging it at all.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.
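A minimal sketch of the DEBUG-plus-property approach described above, assuming an SLF4J-style logger; the boolean switch stands in for a configuration property whose name is not defined here:
{code}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LaunchCommandLogging {
  private static final Logger LOG = LoggerFactory.getLogger(LaunchCommandLogging.class);

  // Hypothetical switch; a real config key would live elsewhere.
  private final boolean logCommands;

  LaunchCommandLogging(boolean logCommands) { this.logCommands = logCommands; }

  void logCommand(String appId, List<String> commands) {
    // Log the full command only at DEBUG, and only when explicitly enabled,
    // so keystore passwords or cloud credentials do not end up in INFO logs.
    if (logCommands && LOG.isDebugEnabled()) {
      LOG.debug("Command to launch container for ApplicationMaster {}: {}",
          appId, String.join(",", commands));
    } else {
      LOG.info("Launching ApplicationMaster for {} (command suppressed)", appId);
    }
  }
}
{code}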



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5594) Handle old data format while recovering RM

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449540#comment-15449540
 ] 

Hadoop QA commented on YARN-5594:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 24s 
{color} | {color:red} root: The patch generated 2 new + 49 unchanged - 0 fixed 
= 51 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 18s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 35s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826194/YARN-5594.001.patch |
| JIRA Issue | YARN-5594 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9c7b3859f583 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af50860 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12945/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12945/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-ya

[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449561#comment-15449561
 ] 

Ray Chiang commented on YARN-5567:
--

Thanks [~wilfreds] for that.  For incompatible changes, I'd prefer to leave it 
in trunk, pull it out of branch-2.8 and debate about branch-2 (effectively 2.9).


> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5451) Container localizers that hang are not cleaned up

2016-08-30 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun reassigned YARN-5451:
---

Assignee: luhuichun

> Container localizers that hang are not cleaned up
> -
>
> Key: YARN-5451
> URL: https://issues.apache.org/jira/browse/YARN-5451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: luhuichun
>
> I ran across an old, rogue process on one of our nodes.  It apparently was a 
> container localizer that somehow entered an infinite loop during startup.  
> The NM never cleaned up this broken localizer, so it happily ran forever.  
> The NM needs to do a better job of tracking localizers, including killing 
> them if they appear to be hung/broken.
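A rough sketch of what such tracking could look like, purely for illustration (all names are hypothetical, and the real NM would tie this into its existing localizer bookkeeping): register each localizer process with a start time and periodically kill any that have run longer than a timeout.
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocalizerWatchdog {
  // Hypothetical bookkeeping: localizer id -> (process handle, start time).
  static class Entry {
    final Process proc; final long startMs;
    Entry(Process proc, long startMs) { this.proc = proc; this.startMs = startMs; }
  }

  private final Map<String, Entry> localizers = new ConcurrentHashMap<>();
  private final long timeoutMs;

  LocalizerWatchdog(long timeoutMs) { this.timeoutMs = timeoutMs; }

  void register(String localizerId, Process proc) {
    localizers.put(localizerId, new Entry(proc, System.currentTimeMillis()));
  }

  void unregister(String localizerId) { localizers.remove(localizerId); }

  /** Called periodically; kills any localizer that has been running too long. */
  void sweep() {
    long now = System.currentTimeMillis();
    for (Map.Entry<String, Entry> e : localizers.entrySet()) {
      if (now - e.getValue().startMs > timeoutMs && e.getValue().proc.isAlive()) {
        e.getValue().proc.destroyForcibly();   // stop hung/broken localizers
        localizers.remove(e.getKey());
      }
    }
  }
}
{code}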



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5451) Container localizers that hang are not cleaned up

2016-08-30 Thread luhuichun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449601#comment-15449601
 ] 

luhuichun commented on YARN-5451:
-

@Varun Vasudev hi Varun, I have been reading the localizer code recently; maybe I 
can take this on.

> Container localizers that hang are not cleaned up
> -
>
> Key: YARN-5451
> URL: https://issues.apache.org/jira/browse/YARN-5451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: luhuichun
>
> I ran across an old, rogue process on one of our nodes.  It apparently was a 
> container localizer that somehow entered an infinite loop during startup.  
> The NM never cleaned up this broken localizer, so it happily ran forever.  
> The NM needs to do a better job of tracking localizers, including killing 
> them if they appear to be hung/broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5595) Update documentation and Javadoc to match change to NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-5595:


 Summary: Update documentation and Javadoc to match change to 
NodeHealthScriptRunner#reportHealthStatus
 Key: YARN-5595
 URL: https://issues.apache.org/jira/browse/YARN-5595
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ray Chiang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5595) Update documentation and Javadoc to match change to NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5595:
-
Assignee: Yufei Gu

> Update documentation and Javadoc to match change to 
> NodeHealthScriptRunner#reportHealthStatus
> -
>
> Key: YARN-5595
> URL: https://issues.apache.org/jira/browse/YARN-5595
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449735#comment-15449735
 ] 

Ray Chiang commented on YARN-5567:
--

One point of clarification.  While this *is* an incompatible change, I was 
debating the "hardness" of it.  It will only break things for broken health-checking 
scripts (assuming anyone is even using the feature).  If we want to treat this 
as a hard incompatibility, then I'd go with my earlier suggestion.  In general, 
I prefer being conservative along these lines.

If others are of the mind that this is a "softer" incompatibility, we could 
keep it in branch-2.8.

Either way, I agree the documentation and Javadoc need to be updated to match.  
I've filed YARN-5595 as a follow up.


> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5555) Scheduler UI: "% of Queue" is inaccurate if leaf queue is hierarchically nested.

2016-08-30 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reassigned YARN-:


Assignee: Eric Payne

> Scheduler UI: "% of Queue" is inaccurate if leaf queue is hierarchically 
> nested.
> 
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
> Attachments: PctOfQueueIsInaccurate.jpg
>
>
> If a leaf queue is hierarchically nested (e.g., {{root.a.a1}}, 
> {{root.a.a2}}), the values in the "*% of Queue*" column in the apps section 
> of the Scheduler UI are calculated as if the leaf queue ({{a1}}) were a direct 
> child of {{root}}.
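To make the discrepancy concrete, a small worked example with assumed capacities (the numbers are not taken from this JIRA):
{code}
public class PercentOfQueueExample {
  public static void main(String[] args) {
    // Assumed figures, purely for illustration.
    double clusterMb         = 100_000;   // 100 GB cluster
    double aCapacity         = 0.40;      // root.a    = 40% of root
    double a1CapacityWithinA = 0.50;      // root.a.a1 = 50% of root.a
    double appUsedMb         = 10_000;    // app in a1 uses 10 GB

    // Correct: measure usage against a1's absolute capacity (root.a * a1).
    double a1AbsoluteMb = clusterMb * aCapacity * a1CapacityWithinA;   // 20 GB
    System.out.printf("correct %% of Queue  = %.1f%%%n", 100 * appUsedMb / a1AbsoluteMb);

    // Behaviour described above: treat a1 as if it were a direct child of
    // root, i.e. ignore root.a's 40% share.
    double wrongAbsoluteMb = clusterMb * a1CapacityWithinA;            // 50 GB
    System.out.printf("reported %% of Queue = %.1f%%%n", 100 * appUsedMb / wrongAbsoluteMb);
  }
}
{code}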



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5221) Expose UpdateResourceRequest API to allow AM to request for change in container properties

2016-08-30 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449757#comment-15449757
 ] 

Wangda Tan commented on YARN-5221:
--

Looks good, +1, thanks [~asuresh].

And if you plan to commit this patch to branches other than trunk, it's better 
to submit those branch patches to Jenkins before committing.

> Expose UpdateResourceRequest API to allow AM to request for change in 
> container properties
> --
>
> Key: YARN-5221
> URL: https://issues.apache.org/jira/browse/YARN-5221
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5221.001.patch, YARN-5221.002.patch, 
> YARN-5221.003.patch, YARN-5221.004.patch, YARN-5221.005.patch, 
> YARN-5221.006.patch, YARN-5221.007.patch, YARN-5221.008.patch, 
> YARN-5221.009.patch, YARN-5221.010.patch, YARN-5221.011.patch, 
> YARN-5221.012.patch, YARN-5221.013.patch
>
>
> YARN-1197 introduced APIs to allow an AM to request an Increase or Decrease 
> of Container Resources after the initial allocation.
> YARN-5085 proposes to allow an AM to request a change of Container 
> ExecutionType.
> This JIRA proposes to unify both of the above into an Update Container API.
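A purely hypothetical sketch of what a unified update request could look like; the names below are illustrative and are not the API introduced by the attached patches:
{code}
public class UpdateContainerSketch {
  enum UpdateType {
    INCREASE_RESOURCE, DECREASE_RESOURCE,
    PROMOTE_EXECUTION_TYPE, DEMOTE_EXECUTION_TYPE
  }

  // Illustrative value object: one request type covering both resource and
  // execution-type changes, instead of separate increase/decrease APIs.
  static class UpdateRequest {
    final long containerId;        // stand-in for YARN's ContainerId
    final UpdateType type;
    final Integer newMemoryMb;     // null when only the execution type changes
    final Integer newVcores;
    UpdateRequest(long containerId, UpdateType type, Integer newMemoryMb, Integer newVcores) {
      this.containerId = containerId; this.type = type;
      this.newMemoryMb = newMemoryMb; this.newVcores = newVcores;
    }
  }

  public static void main(String[] args) {
    // An AM could then send both kinds of change through the same request path.
    UpdateRequest grow = new UpdateRequest(1L, UpdateType.INCREASE_RESOURCE, 4096, 2);
    UpdateRequest promote = new UpdateRequest(2L, UpdateType.PROMOTE_EXECUTION_TYPE, null, null);
    System.out.println(grow.type + " " + promote.type);
  }
}
{code}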



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-08-30 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4876:
--
Attachment: YARN-4876.003.patch

Rebasing against trunk.

> [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop
> --
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.002.patch, 
> YARN-4876.003.patch, YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container API into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from the initialization. This will allow AMs to re-start a container without 
> having to lose the allocation.
> Additionally, if the localization of the container is associated with the 
> initialize (and the cleanup with the destroy), this can also be used by 
> applications to upgrade a Container by *re-initializing* with a new 
> *ContainerLaunchContext*.
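A hypothetical interface sketch of the decoupled lifecycle described above; these signatures are illustrative only and are not the actual ContainerManagementProtocol changes from the design document:
{code}
public interface ContainerLifecycleSketch {
  /** Localize resources and set up the container without starting the process. */
  void initializeContainer(long containerId, Object launchContext);

  /** Start (or restart) the already-initialized container. */
  void startContainer(long containerId);

  /** Stop the running process but keep the allocation and localized resources. */
  void stopContainer(long containerId);

  /** Re-initialize with a new launch context, e.g. for an in-place upgrade. */
  void reInitializeContainer(long containerId, Object newLaunchContext);

  /** Tear down localized resources and release the container. */
  void destroyContainer(long containerId);
}
{code}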



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-08-30 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449804#comment-15449804
 ] 

Shane Kumpf commented on YARN-5428:
---

Sorry for the delay here.

[~aw], [~tangzhankun] - Thank you for the feedback. In stepping back, I agree 
that there are multiple needs here that require additional consideration. I'm 
attaching a design document that outlines two approaches to meet the needs of both 
administrators and users. Please let me know your thoughts.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
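As an illustration of how a configurable client directory could be passed through (the directory path below is made up, and this is not the attached patch), the docker client's global {{--config}} option takes the directory that contains config.json:
{code}
import java.util.ArrayList;
import java.util.List;

public class DockerClientConfigSketch {
  /**
   * Build a docker command that points the client at a shared configuration
   * directory (the directory holding config.json), e.g. one distributed by a
   * cluster admin instead of $HOME/.docker. The example path is hypothetical.
   */
  static List<String> buildPullCommand(String clientConfigDir, String image) {
    List<String> cmd = new ArrayList<>();
    cmd.add("docker");
    if (clientConfigDir != null && !clientConfigDir.isEmpty()) {
      cmd.add("--config");          // note: a directory, not the config file itself
      cmd.add(clientConfigDir);
    }
    cmd.add("pull");
    cmd.add(image);
    return cmd;
  }

  public static void main(String[] args) {
    System.out.println(buildPullCommand("/etc/hadoop/docker-client", "library/centos:7"));
  }
}
{code}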



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5428) Allow for specifying the docker client configuration directory

2016-08-30 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5428:
--
Attachment: 
YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created YARN-5596:
---

 Summary: TestDockerContainerRuntime fails on the mac
 Key: YARN-5596
 URL: https://issues.apache.org/jira/browse/YARN-5596
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, yarn
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana
Priority: Minor


/sys/fs/cgroup doesn't exist on the Mac. And the tests seem to fail because of 
this. 

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449822#comment-15449822
 ] 

Sidharta Seethana commented on YARN-5596:
-

I'll take a look at this. Patch coming up.

> TestDockerContainerRuntime fails on the mac
> ---
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
>
> /sys/fs/cgroup doesn't exist on the Mac. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449826#comment-15449826
 ] 

Karthik Kambatla commented on YARN-5566:


Patch makes sense to me. Minor comments:
# RMNodeImpl: Nit - the comments in the added code don't add much information. 
We should remove them or added more details so they add some information. 
{noformat}
// no running (and keeping alive) app on this node, get it
// decommissioned.
{noformat}
# TestResourceTrackerService: The second heartbeat from node1 does not need to 
indicate running containers. 
# TestRMNodeTransitions#testGracefulDecommissionWithApp: When creating 
NodeStatus, we don't need to specify ContainerStatus when creating a new 
ArrayList

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.
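A minimal sketch of the check implied above, with all names hypothetical (the real logic belongs in RMNodeImpl's transitions): when a DECOMMISSIONING node heartbeats with no running or keep-alive applications, move it straight to DECOMMISSIONED instead of waiting for the client-side timeout.
{code}
import java.util.Collections;
import java.util.Set;

public class GracefulDecomSketch {
  enum NodeState { RUNNING, DECOMMISSIONING, DECOMMISSIONED }

  // Illustrative only; not the actual RMNodeImpl transition code.
  static NodeState onHeartbeat(NodeState current, Set<String> runningApps,
                               Set<String> keepAliveApps) {
    if (current == NodeState.DECOMMISSIONING
        && runningApps.isEmpty() && keepAliveApps.isEmpty()) {
      // No work left on this node: decommission now rather than at the timeout.
      return NodeState.DECOMMISSIONED;
    }
    return current;
  }

  public static void main(String[] args) {
    System.out.println(onHeartbeat(NodeState.DECOMMISSIONING,
        Collections.emptySet(), Collections.emptySet()));   // DECOMMISSIONED
  }
}
{code}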



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on the mac

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Description: 
/sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because of 
this. 

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}

  was:
/sys/fs/cgroup doesn't exist on the Mac. And the tests seem to fail because of 
this. 

{code}
Failed tests:
  TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>
  TestDockerContainerRuntime.testDockerContainerLaunch:297 
expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
/]test_container_local...> but was:<...ET_BIND_SERVICE -v 
/[]test_container_local...>

Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
{code}


> TestDockerContainerRuntime fails on the mac
> ---
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Summary: TestDockerContainerRuntime fails on OS X  (was: 
TestDockerContainerRuntime fails on the mac)

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3671) Integrate Federation services with ResourceManager

2016-08-30 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449846#comment-15449846
 ] 

Subru Krishnan commented on YARN-3671:
--

Thanks [~jianhe] for the thoughtful review.

> Integrate Federation services with ResourceManager
> --
>
> Key: YARN-3671
> URL: https://issues.apache.org/jira/browse/YARN-3671
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Fix For: YARN-2915
>
> Attachments: YARN-3671-YARN-2915-v1.patch, 
> YARN-3671-YARN-2915-v2.patch, YARN-3671-YARN-2915-v3.patch, 
> YARN-3671-YARN-2915-v4.patch, YARN-3671-YARN-2915-v5.patch
>
>
> This JIRA proposes adding the ability to turn on Federation services like the 
> StateStore, cluster membership heartbeat, etc., in the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3665) Federation subcluster membership mechanisms

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan resolved YARN-3665.
--
  Resolution: Implemented
Hadoop Flags: Reviewed

Closing this as YARN-3671 includes this too.

> Federation subcluster membership mechanisms
> ---
>
> Key: YARN-3665
> URL: https://issues.apache.org/jira/browse/YARN-3665
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> The member YARN RMs continuously heartbeat to the state store to keep alive 
> and publish their current capability/load information. This JIRA tracks these 
> mechanisms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-5566:

Attachment: YARN-5566.004.patch

The 004 patch addresses [~kasha]'s feedback.  
(I assume by TestRMNodeTransitions#testGracefulDecommissionWithApp you meant 
TestRMNodeTransitions#testDecommissioningUnhealthy)

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch, YARN-5566.004.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449870#comment-15449870
 ] 

Robert Kanter edited comment on YARN-5566 at 8/30/16 7:15 PM:
--

Thanks [~kasha] for the review.  The 004 patch addresses your feedback.  
(I assume by TestRMNodeTransitions#testGracefulDecommissionWithApp you meant 
TestRMNodeTransitions#testDecommissioningUnhealthy)


was (Author: rkanter):
The 004 patch addresses [~kasha]'s feedback.  
(I assume by TestRMNodeTransitions#testGracefulDecommissionWithApp you meant 
TestRMNodeTransitions#testDecommissioningUnhealthy)

> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch, 
> YARN-5566.003.patch, YARN-5566.004.patch
>
>
> I was testing the client-side NM graceful decommission and noticed that it 
> was always waiting for the timeout, even if all jobs running on that node (or 
> even the cluster) had already finished.
> For example:
> # JobA is running with at least one container on NodeA
> # User runs client-side decom on NodeA at 5:00am with a timeout of 3 hours 
> --> NodeA enters DECOMMISSIONING state
> # JobA finishes at 6:00am and there are no other jobs running on NodeA
> # User's client reaches the timeout at 8:00am, and forcibly decommissions 
> NodeA
> NodeA should have decommissioned at 6:00am.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Attachment: YARN-5596.001.patch

Uploading a patch that fixes the tests. 
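One possible shape of such a fix, sketched here for illustration and not necessarily what the attached patch does: only add the read-only cgroup bind mount when the host actually has /sys/fs/cgroup, so the expected docker arguments match on every platform.
{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class CgroupMountSketch {
  /**
   * Only add the read-only cgroup bind mount when the host actually has the
   * cgroup directory (absent on OS X), so test expectations and the generated
   * docker run arguments agree on every platform. Illustrative only.
   */
  static List<String> cgroupMountArgs(String cgroupPath) {
    List<String> args = new ArrayList<>();
    if (new File(cgroupPath).exists()) {
      args.add("-v");
      args.add(cgroupPath + ":" + cgroupPath + ":ro");
    }
    return args;
  }

  public static void main(String[] args) {
    System.out.println(cgroupMountArgs("/sys/fs/cgroup"));
  }
}
{code}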

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3981) support timeline clients not associated with an application

2016-08-30 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449912#comment-15449912
 ] 

Li Lu commented on YARN-3981:
-

Thanks [~rohithsharma]! 

bq. As part of NM daemon, start new service same as TimeLineWriterWebService. 
Idea is NM reports all these collector address to RM. Introduce new API in 
clientRMservice to get collector address. Address is given by RM in random(This 
can be decided later). This address is used by timeline client. TimeLineClient 
exposes new constructor with an flowName. So system properties can be written 
at flow level.
Actually this looks a little bit similar to the current collector discovery 
mechanism, where the NM reports app-level collector information to the RM, and the RM 
distributes that information to all containers. 

The difference is we need to explicitly decide where and when to launch the 
collectors. The RM can decide where to launch collectors, but as of now, all 
collectors are associated with some concrete application's life-cycles 
(launched as aux-services). Could we launch collectors as separate processes for 
this use case? 

One concern is this will increase the load on the RM again. Not sure if this 
will be a problem on busy clusters with a lot of client connections. However, 
this is definitely better than launching a central server daemon to handle all 
client requests (which falls back to old ATS v1 architecture). 

For storing those entities posted from clients, can we put them in the entity 
table, but just leave some unknown fields empty? Will that be a concern for the 
storage API's semantics? 

> support timeline clients not associated with an application
> ---
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449935#comment-15449935
 ] 

Daniel Templeton commented on YARN-5596:


Patch looks good.  Does the path constant need to be public, or would it be 
better to give it default (package-private) visibility?

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}
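
To make the failure mode concrete, here is a minimal, purely illustrative sketch 
(not the attached patch; the constant and helper names are assumptions) of how a 
test could build the expected volume-mount fragment so that it also matches on 
hosts where /sys/fs/cgroup is absent:

{code}
// Illustrative sketch only, not the attached patch: one way the test's expected
// "docker run" command could tolerate hosts without /sys/fs/cgroup (e.g. OS X).
// CGROUPS_ROOT_DIRECTORY is assumed here to be a constant shared with the test.
import java.io.File;

class ExpectedDockerMounts {
  static final String CGROUPS_ROOT_DIRECTORY = "/sys/fs/cgroup";

  // Returns the read-only cgroup bind-mount fragment only when the directory
  // actually exists on the host; otherwise the expected command omits it.
  static String expectedCgroupMount() {
    return new File(CGROUPS_ROOT_DIRECTORY).exists()
        ? "-v " + CGROUPS_ROOT_DIRECTORY + ":" + CGROUPS_ROOT_DIRECTORY + ":ro "
        : "";
  }
}
{code}

With something along these lines, the three assertions above would expect an empty 
mount fragment on OS X and the read-only bind mount on Linux.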



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449968#comment-15449968
 ] 

Hadoop QA commented on YARN-5596:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 14s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826226/YARN-5596.001.patch |
| JIRA Issue | YARN-5596 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6c2eadf7a19f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4ee691 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12949/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12949/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12949/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YAR

[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449971#comment-15449971
 ] 

Sidharta Seethana commented on YARN-5596:
-

Thanks, [~templedf]. I'll upload a new patch with the visibility changed. 

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449978#comment-15449978
 ] 

Daniel Templeton commented on YARN-5388:


[~sidharta-s], any comments?

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.branch-2.001.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.
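
As a purely illustrative sketch of what the wildcard processing means here (this 
is not the logic from YARN-4958/YARN-5373 or the attached patches; class and 
method names are made up), a classpath entry ending in "/*" is expanded to the 
jars actually present in that directory before the launch environment is written, 
so an executor that overrides {{writeLaunchEnv()}} without equivalent handling 
passes the literal wildcard through and breaks -libjars:

{code}
// Hypothetical sketch of wildcard expansion for a classpath entry; names are
// illustrative and not taken from the actual patches.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

class WildcardExpansionSketch {
  static List<String> expand(String entry) {
    List<String> expanded = new ArrayList<>();
    if (entry.endsWith(File.separator + "*")) {
      // strip the trailing "/*" and list the jars actually present in the directory
      File dir = new File(entry.substring(0, entry.length() - 2));
      File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
      if (jars != null) {
        for (File jar : jars) {
          expanded.add(jar.getAbsolutePath());
        }
      }
    } else {
      // non-wildcard entries are passed through unchanged
      expanded.add(entry);
    }
    return expanded;
  }
}
{code}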



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449979#comment-15449979
 ] 

Yufei Gu commented on YARN-5567:


Thanks [~wilfreds] for pointing that out. Nice catch, and my bad for missing that 
part of the Javadoc. Thanks also for [~Naganarasimha]'s and [~rchiang]'s comments. 
I prefer to revert it in branch-2.8 and branch-2 for compatibility reasons, and to 
keep it in trunk with an update to the documentation. Thanks [~rchiang] for filing 
the follow-up JIRA.


> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}
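
For context, a hedged sketch of how the corrected case sits inside the exit-status 
switch in NodeHealthScriptRunner#reportHealthStatus; only the FAILED_WITH_EXIT_CODE 
case is taken from the description above, the surrounding cases and variable names 
are assumptions for illustration and not the actual Hadoop source (and, per the 
discussion above, the new behaviour may end up trunk-only):

{code}
// Hedged sketch: only the FAILED_WITH_EXIT_CODE case mirrors the fix quoted above;
// the other cases and the scriptOutput variable are illustrative assumptions.
switch (exitStatus) {
  case SUCCESS:
    setHealthStatus(true, "", now);   // script ran and reported a healthy node
    break;
  case FAILED_WITH_EXIT_CODE:
    setHealthStatus(false, "", now);  // the fix: a non-zero exit code marks the node unhealthy
    break;
  case FAILED:
    setHealthStatus(false, scriptOutput, now);  // assumed: surface the script output
    break;
}
{code}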



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5596:

Attachment: YARN-5596.002.patch

Uploaded a new patch with the proposed change. 

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15449995#comment-15449995
 ] 

Daniel Templeton commented on YARN-5596:


+1 (non-binding).  Thanks, [~sidharta-s]!

> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Mac OS X. And the tests seem to fail because 
> of this. 
> {code}
> Failed tests:
>   TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks:456 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testContainerLaunchWithNetworkingDefaults:401 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
>   TestDockerContainerRuntime.testDockerContainerLaunch:297 
> expected:<...ET_BIND_SERVICE -v /[sys/fs/cgroup:/sys/fs/cgroup:ro -v 
> /]test_container_local...> but was:<...ET_BIND_SERVICE -v 
> /[]test_container_local...>
> Tests run: 19, Failures: 3, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3657) Federation maintenance mechanisms (simple CLI and command propagation)

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3657:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Federation maintenance mechanisms (simple CLI and command propagation)
> --
>
> Key: YARN-3657
> URL: https://issues.apache.org/jira/browse/YARN-3657
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> The maintenance mechanisms provided by the RM are not sufficient in a 
> federated environment. In this JIRA we track a few extensions 
> (more to come later) to allow basic maintenance mechanisms (and command 
> propagation) for the federated components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5597) YARN Federation phase 2

2016-08-30 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5597:


 Summary: YARN Federation phase 2
 Key: YARN-5597
 URL: https://issues.apache.org/jira/browse/YARN-5597
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Subru Krishnan


This umbrella JIRA tracks a set of improvements over the YARN Federation MVP 
(YARN-2915).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3660) Federation Global Policy Generator (load balancing)

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3660:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Federation Global Policy Generator (load balancing)
> ---
>
> Key: YARN-3660
> URL: https://issues.apache.org/jira/browse/YARN-3660
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Subru Krishnan
>
> In a federated environment, local impairments of one sub-cluster might 
> unfairly affect users/queues that are mapped to that sub-cluster. A 
> centralized component (GPG) runs out-of-band and edits the policies governing 
> how users/queues are allocated to sub-clusters. This allows us to enforce 
> global invariants (by dynamically updating locally-enforced invariants).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3658) Federation "Capacity Allocation" across sub-cluster

2016-08-30 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3658:
-
Parent Issue: YARN-5597  (was: YARN-2915)

> Federation "Capacity Allocation" across sub-cluster
> ---
>
> Key: YARN-3658
> URL: https://issues.apache.org/jira/browse/YARN-3658
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> This JIRA will track mechanisms to map federation-level capacity allocations 
> to sub-cluster-level ones (possibly via reservation mechanisms).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5566) client-side NM graceful decom doesn't trigger when jobs finish

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15450025#comment-15450025
 ] 

Hadoop QA commented on YARN-5566:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 24s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826224/YARN-5566.004.patch |
| JIRA Issue | YARN-5566 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e73f2848e9f0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4ee691 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12948/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12948/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> client-side NM graceful decom doesn't trigger when jobs finish
> --
>
> Key: YARN-5566
> URL: https://issues.apache.org/jira/browse/YARN-5566
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5566.001.patch, YARN-5566.002.patch

[jira] [Commented] (YARN-5596) TestDockerContainerRuntime fails on OS X

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15450074#comment-15450074
 ] 

Hadoop QA commented on YARN-5596:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 17s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826235/YARN-5596.002.patch |
| JIRA Issue | YARN-5596 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2314f86b0bbd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4ee691 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12950/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12950/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDockerContainerRuntime fails on OS X
> 
>
> Key: YARN-5596
> URL: https://issues.apache.org/jira/browse/YARN-5596
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Minor
> Attachments: YARN-5596.001.patch, YARN-5596.002.patch
>
>
> /sys/fs/cgroup doesn't exist on Ma

[jira] [Commented] (YARN-5567) Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus

2016-08-30 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15450114#comment-15450114
 ] 

Ray Chiang commented on YARN-5567:
--

So, that's two votes for trunk only.  [~wilfreds] and [~Naganarasimha], are you 
both okay with that?

> Fix script exit code checking in NodeHealthScriptRunner#reportHealthStatus
> --
>
> Key: YARN-5567
> URL: https://issues.apache.org/jira/browse/YARN-5567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.8.1
>
> Attachments: YARN-5567.001.patch
>
>
> In case of FAILED_WITH_EXIT_CODE, health status should be false.
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(true, "", now);
> break;
> {code}
> should be 
> {code}
>   case FAILED_WITH_EXIT_CODE:
> setHealthStatus(false, "", now);
> break;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-08-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15450152#comment-15450152
 ] 

Hadoop QA commented on YARN-4876:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 29s {color} 
| {color:red} root generated 1 new + 708 unchanged - 0 fixed = 709 total (was 
708) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 50s 
{color} | {color:red} root: The patch generated 157 new + 1259 unchanged - 10 
fixed = 1416 total (was 1269) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 88 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
9 new + 125 unchanged - 0 fixed = 134 total (was 125) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 5 new + 242 unchanged - 0 fixed = 247 total (was 242) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 4 new + 157 unchanged - 0 fixed = 161 total (was 157) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 8s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 46s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color}
