[jira] [Updated] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5431:

Attachment: YARN-5431.2.patch

Updated the patch with the YARN prefix.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch, YARN-5431.2.patch
>
>
> In the yarn script, the timelinereader subcommand doesn't allow passing reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}






[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395084#comment-15395084
 ] 

Rohith Sharma K S commented on YARN-5431:
-

I think, since these are new opts being added, what Varun says makes sense to me, so that 
there is no deprecation later for timelinereader. I will update the patch with the 
YARN prefix.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In the yarn script, the timelinereader subcommand doesn't allow passing reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}






[jira] [Commented] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2016-07-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395043#comment-15395043
 ] 

Hudson commented on YARN-5195:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10161 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10161/])
YARN-5195. RM intermittently crashed with NPE while handling (wangda: rev 
d62e121ffc0239e7feccc1e23ece92c5fac685f6)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java


> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>Assignee: sandflee
> Fix For: 2.9.0
>
> Attachments: YARN-5195.01.patch, YARN-5195.02.patch, 
> YARN-5195.03.patch
>
>
> While running gridmix experiments, we once came across an incident where the 
> RM went down with the following exception:
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}
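For illustration only, the following self-contained sketch shows the general defensive pattern that avoids this kind of NPE: with asynchronous scheduling an APP_ATTEMPT_REMOVED event can race with container completion, so look-ups of the attempt must be null-checked before use. The class and names below are assumptions, not the committed YARN-5195 patch.

{code}
// Hypothetical sketch, not the committed fix: guard look-ups that can race
// with APP_ATTEMPT_REMOVED under asynchronous scheduling.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CompletedContainerGuard {
  // Stand-in for the scheduler's table of live application attempts.
  private final Map<String, Object> attempts = new ConcurrentHashMap<>();

  public void completedContainer(String appAttemptId) {
    Object attempt = attempts.get(appAttemptId);
    if (attempt == null) {
      // The attempt was already removed by a concurrent event; skip the
      // bookkeeping instead of throwing a NullPointerException.
      System.out.println("Attempt " + appAttemptId + " already removed; ignoring");
      return;
    }
    // ... normal completed-container bookkeeping on 'attempt' ...
  }
}
{code}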






[jira] [Commented] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395019#comment-15395019
 ] 

Wangda Tan commented on YARN-5342:
--

Committed to trunk/branch-2.

[~sunilg], mind updating the patch for branch-2.8? It has some conflicts.
Thanks,

> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5342.1.patch, YARN-5342.2.patch, YARN-5342.3.patch, 
> YARN-5342.4.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible only when missed-opportunity >= #cluster-nodes, and missed-opportunity 
> is reset whenever a container is allocated on any node.
> This slows down the frequency of container allocation on a non-exclusive 
> node partition: *when a non-exclusive partition=x has idle resources, we can 
> only allocate one container for this app every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get an allocation from 
> the default partition.
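To make the reset rule described above concrete, here is a small, self-contained sketch; the class and method names are illustrative and do not come from the CapacityScheduler code.

{code}
// Illustrative sketch of the missed-opportunity reset rule proposed above;
// all names are assumptions, not the actual allocator code.
public class MissedOpportunityTracker {
  private long missedOpportunities = 0;

  /** Old behaviour: any successful allocation resets the counter. */
  public void onAllocationOld() {
    missedOpportunities = 0;
  }

  /**
   * Proposed behaviour: reset only when there is still pending resource for
   * the non-exclusive partition, or when the allocation came from the default
   * partition; otherwise keep counting so non-exclusive allocation is not
   * throttled to one container per cluster-wide heartbeat round.
   */
  public void onAllocationNew(long pendingOnNonExclusivePartition,
                              boolean allocatedFromDefaultPartition) {
    if (pendingOnNonExclusivePartition > 0 || allocatedFromDefaultPartition) {
      missedOpportunities = 0;
    }
  }

  public void onMissedOpportunity() {
    missedOpportunities++;
  }

  /** A non-exclusive allocation is attempted only after enough misses. */
  public boolean canTryNonExclusiveAllocation(int clusterNodes) {
    return missedOpportunities >= clusterNodes;
  }
}
{code}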






[jira] [Commented] (YARN-4091) Add REST API to retrieve scheduler activity

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395017#comment-15395017
 ] 

Hadoop QA commented on YARN-4091:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 73 
new + 368 unchanged - 1 fixed = 441 total (was 369) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 27s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 39s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRM |
|   | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRM |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation
 |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25

[jira] [Commented] (YARN-3662) Federation Membership State Store internal APIs

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395009#comment-15395009
 ] 

Hadoop QA commented on YARN-3662:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 16m 50s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
40s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 33s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 9 
new + 26 unchanged - 2 fixed = 35 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 43s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820351/YARN-3662-YARN-2915-v5.patch
 |
| JIRA Issue | YARN-3662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux f49ebfa3af1c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 85eda58 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12519/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |

[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394995#comment-15394995
 ] 

Hadoop QA commented on YARN-5113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 7 
new + 367 unchanged - 45 fixed = 374 total (was 412) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 0 new + 159 unchanged - 4 fixed = 159 total (was 163) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 248 unchanged - 7 fixed = 248 total (was 255) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} 

[jira] [Commented] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394987#comment-15394987
 ] 

Hadoop QA commented on YARN-5327:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820348/YARN-5327.001.patch |
| JIRA Issue | YARN-5327 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 4818caec02e6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2d8d183 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12518/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN

[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-26 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394986#comment-15394986
 ] 

Ying Zhang commented on YARN-5287:
--

Thanks Naganarasimha and Rohith for the review. I've updated the patch to 
address Rohith's comment. Please have a look.
The test failure is not related; it is a separate code path, and I see YARN-5425 
has already been filed against it. Interestingly, though, the test failure seems 
similar to what we are trying to resolve here.

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch, 
> YARN-5287.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g. umask 077. The job fails 
> with the following error:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750
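As background for the umask issue described above, the following self-contained snippet illustrates the usual remedy of setting the required mode explicitly after creating a directory instead of relying on the process umask. The path and target permissions are examples only; this is not the LinuxContainerExecutor patch.

{code}
// Hypothetical illustration, not the actual fix: a directory created under
// umask 077 ends up as 700, so the expected mode (e.g. 750) must be set
// explicitly rather than inherited from the umask.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public class UsercacheDirSetup {
  public static void main(String[] args) throws IOException {
    Path dir = Paths.get("/tmp/example-appcache-dir");
    Files.createDirectories(dir); // resulting mode is subject to the process umask
    // Explicitly set the permissions expected by the NodeManager (750 here).
    Files.setPosixFilePermissions(dir, PosixFilePermissions.fromString("rwxr-x---"));
    System.out.println("Permissions now: "
        + PosixFilePermissions.toString(Files.getPosixFilePermissions(dir)));
  }
}
{code}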






[jira] [Commented] (YARN-5079) [Umbrella] Native YARN framework layer for services and beyond

2016-07-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394979#comment-15394979
 ] 

Vinod Kumar Vavilapalli commented on YARN-5079:
---

bq. I'm going to go ahead with creating a branch and starting the port 
activities.
Created 'yarn-native-services' branch based off trunk.

> [Umbrella] Native YARN framework layer for services and beyond
> --
>
> Key: YARN-5079
> URL: https://issues.apache.org/jira/browse/YARN-5079
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> (See overview doc at YARN-4692, modifying and copy-pasting some of the 
> relevant pieces and sub-section 3.3.1 to track the specific sub-item.)
> (This is a companion to YARN-4793 in our effort to simplify the entire story, 
> but focusing on APIs)
> So far, YARN by design has restricted itself to having a very low-level API 
> that can support any type of application. Frameworks like Apache Hadoop 
> MapReduce, Apache Tez, Apache Spark, Apache REEF, Apache Twill, Apache Helix 
> and others ended up exposing higher level APIs that end-users can directly 
> leverage to build their applications on top of YARN. On the services side, 
> Apache Slider has done something similar.
> With our current attention on making services first-class and simplified, 
> it's time to take a fresh look at how we can make Apache Hadoop YARN support 
> services well out of the box. Beyond the functionality that I outlined in the 
> previous sections in the doc on how NodeManagers can be enhanced to help 
> services, the biggest missing piece is the framework itself. There is a lot 
> of very important functionality that a services' framework can own together 
> with YARN in executing services end-to-end.
> In this JIRA I propose we look at having a native Apache Hadoop framework for 
> running services natively on YARN.






[jira] [Updated] (YARN-3662) Federation Membership State Store internal APIs

2016-07-26 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3662:
-
Attachment: YARN-3662-YARN-2915-v5.patch

Thanks [~vvasudev] for the clarifications. I have addressed your feedback in v5 
of the patch. 

Quite a few changes (:)) so I'll list only the important ones:
  * Moved the {{FederationStore}} interfaces from _API_ to _store_ sub-package.
  * Added request/response objects for all methods
  * Renamed request/response objects to Get/Set... to align with YARN 
convention.
  * Other renames which you suggested.

I have left the _capability_ as a string since it is more than a _resource_ - we 
need the nodes in the cluster and their utilization, which is why we currently use 
the serialized string representation of _ClusterMetricsInfo_. We can update it 
later if we find a better option.

I'll also update YARN-5307/YARN-3664 similarly and post the patches tomorrow. 
This JIRA is more important as it blocks both.
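For readers following the renames, here is a minimal, hypothetical sketch of the Get-style request/response convention mentioned above; the type names and fields are illustrative and are not the committed YARN-2915 interfaces.

{code}
// Hypothetical sketch of the request/response style API discussed above;
// names and fields are assumptions, not the actual Federation store classes.
public interface FederationMembershipStore {
  GetSubClusterInfoResponse getSubClusterInfo(GetSubClusterInfoRequest request);
}

class GetSubClusterInfoRequest {
  private final String subClusterId;
  GetSubClusterInfoRequest(String subClusterId) { this.subClusterId = subClusterId; }
  String getSubClusterId() { return subClusterId; }
}

class GetSubClusterInfoResponse {
  // Capability is carried as a serialized string (e.g. ClusterMetricsInfo),
  // as explained in the comment above.
  private final String capability;
  GetSubClusterInfoResponse(String capability) { this.capability = capability; }
  String getCapability() { return capability; }
}
{code}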

> Federation Membership State Store internal APIs
> ---
>
> Key: YARN-3662
> URL: https://issues.apache.org/jira/browse/YARN-3662
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3662-YARN-2915-v1.1.patch, 
> YARN-3662-YARN-2915-v1.patch, YARN-3662-YARN-2915-v2.patch, 
> YARN-3662-YARN-2915-v3.01.patch, YARN-3662-YARN-2915-v3.patch, 
> YARN-3662-YARN-2915-v4.patch, YARN-3662-YARN-2915-v5.patch
>
>
> The Federation Application State encapsulates the information about the 
> active RM of each sub-cluster that is participating in Federation. The 
> information includes addresses for ClientRM, ApplicationMaster and Admin 
> services along with the sub_cluster _capability_ which is currently defined 
> by *ClusterMetricsInfo*. Please refer to the design doc in parent JIRA for 
> further details.






[jira] [Updated] (YARN-5312) Parameter 'size' in the webservices "/containerlogs/$containerid/$filename" and in AHSWebServices is semantically confusing

2016-07-26 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5312:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-4904

> Parameter 'size' in the webservices "/containerlogs/$containerid/$filename" 
> and in AHSWebServices is semantically confusing
> ---
>
> Key: YARN-5312
> URL: https://issues.apache.org/jira/browse/YARN-5312
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> This got added in YARN-5088 and I found this while reviewing YARN-5224.
> bq. Also, the parameter 'size' in the API 
> "/containerlogs/$containerid/$filename" and similarly in AHSWebServices has 
> confusing semantics. I think we are better off with an offset and a size.
> An offset (in bytes, +ve to indicate from the start and -ve to indicate from 
> the end) together with a size (in bytes) indicating how much to read from the 
> offset is a better combination - this is how most file-system APIs look, for 
> comparison.
> I can also imagine the number of lines being a better unit than bytes for 
> offset and size - perhaps yet another ticket.
> /cc [~vvasudev].
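To make the proposed offset/size semantics concrete, here is a small, self-contained helper; it is only an illustration of the suggestion above, not existing web-service code.

{code}
// Hypothetical helper for the semantics proposed above: a positive offset
// counts from the start of the log, a negative offset from the end, and size
// bounds how many bytes are read from that point.
public final class LogRange {
  public final long start;
  public final long length;

  private LogRange(long start, long length) {
    this.start = start;
    this.length = length;
  }

  public static LogRange resolve(long fileLength, long offset, long size) {
    long start = offset >= 0 ? offset : fileLength + offset; // negative => from the end
    start = Math.max(0, Math.min(start, fileLength));
    long length = Math.min(size, fileLength - start);
    return new LogRange(start, length);
  }

  public static void main(String[] args) {
    // Example: read 1 KB starting 4 KB before the end of a 1 MB log file.
    LogRange r = LogRange.resolve(1L << 20, -4096, 1024);
    System.out.println("start=" + r.start + " length=" + r.length);
  }
}
{code}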






[jira] [Commented] (YARN-5081) Replace RPC calls with WebService calls in LogsCLI

2016-07-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394958#comment-15394958
 ] 

Vinod Kumar Vavilapalli commented on YARN-5081:
---

bq. The goal is to remove 
"yarn.timeline-service.generic-application-history.enabled" dependency. 
Can we achieve this without completely replacing the RPC calls?

> Replace RPC calls with WebService calls in LogsCLI
> --
>
> Key: YARN-5081
> URL: https://issues.apache.org/jira/browse/YARN-5081
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5081.1.patch
>
>
> Currently in LogsCLI, we still use YarnClient to get the ContainerReport. We 
> expect users to enable 
> yarn.timeline-service.generic-application-history.enabled to get the finished 
> container report, which is not ideal. We can replace all RPC calls with 
> WebService calls, so users do not need to change their configuration (to 
> enable generic-application-history).
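As a rough sketch of what "replace RPC with a WebService call" means in practice (plain HTTP + JSON instead of a YarnClient RPC), the snippet below fetches from a generic RM web-service URL. The endpoint in main() is a placeholder; the actual container-report endpoint and response shape are not spelled out in this thread.

{code}
// Schematic sketch only: fetch information over HTTP instead of an RPC client.
// The URL used in main() is a placeholder, not the container-report endpoint.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class WebServiceFetch {
  public static String fetchJson(String endpoint) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      StringBuilder body = new StringBuilder();
      String line;
      while ((line = in.readLine()) != null) {
        body.append(line);
      }
      return body.toString();
    }
  }

  public static void main(String[] args) throws Exception {
    // Placeholder host/port/path; substitute the appropriate web-service URL.
    System.out.println(fetchJson("http://rm-host:8088/ws/v1/cluster/info"));
  }
}
{code}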






[jira] [Commented] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394959#comment-15394959
 ] 

Hadoop QA commented on YARN-5203:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 123 unchanged - 7 fixed = 124 total (was 130) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 3s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820343/YARN-5203.v4.patch |
| JIRA Issue | YARN-5203 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 225ca6587977 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 49969b1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12516/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12516/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12516/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12516/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Return ResourceRequest JAXB object in ResourceMana

[jira] [Updated] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-26 Thread Sangeetha Abdu Jyothi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangeetha Abdu Jyothi updated YARN-5327:

Attachment: YARN-5327.001.patch

> API changes required to support recurring reservations in the YARN 
> ReservationSystem
> 
>
> Key: YARN-5327
> URL: https://issues.apache.org/jira/browse/YARN-5327
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
> Attachments: YARN-5327.001.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes needed 
> in ApplicationClientProtocol to accomplish it. Please refer to the design doc 
> in the parent JIRA for details.






[jira] [Commented] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerRequestKey

2016-07-26 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394949#comment-15394949
 ] 

Subru Krishnan commented on YARN-5392:
--

Thanks [~asuresh] for driving this and [~leftnoteasy]/[~kasha] for the reviews. 
I'll rebase YARN-4888 shortly and post an updated patch.

> Replace use of Priority in the Scheduling infrastructure with an opaque 
> ShedulerRequestKey
> --
>
> Key: YARN-5392
> URL: https://issues.apache.org/jira/browse/YARN-5392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.9.0
>
> Attachments: YARN-5392.001.patch, YARN-5392.002.patch, 
> YARN-5392.003.patch, YARN-5392.004.patch, YARN-5392.005.patch, 
> YARN-5392.006.patch, YARN-5392.007.patch, YARN-5392.008.patch, 
> YARN-5392.009.patch, YARN-5392.010.patch
>
>
> Based on discussions in YARN-4888, this JIRA proposes to replace the use of 
> {{Priority}} in the Scheduler infrastructure (Scheduler, Queues, SchedulerApp 
> / Node etc.) with a more opaque and extensible {{SchedulerKey}}.
> Note: Even though {{SchedulerKey}} will be used by the internal scheduling 
> infrastructure, it will not be exposed to the Client or the AM. The 
> SchedulerKey is meant to be an internal construct that is derived from 
> attributes of the ResourceRequest / ApplicationSubmissionContext / Scheduler 
> Configuration etc.
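As a rough illustration of the opaque-key idea (the fields chosen here are assumptions, not the actual class), such a key wraps request attributes and stays comparable so the scheduler's ordered collections keep working:

{code}
// Illustrative sketch of an opaque, comparable scheduler key derived from
// request attributes; the field choices are assumptions, not the real class.
public final class SchedulerRequestKey implements Comparable<SchedulerRequestKey> {
  private final int priority;
  private final long allocationRequestId;

  public SchedulerRequestKey(int priority, long allocationRequestId) {
    this.priority = priority;
    this.allocationRequestId = allocationRequestId;
  }

  @Override
  public int compareTo(SchedulerRequestKey other) {
    int cmp = Integer.compare(priority, other.priority);
    return cmp != 0 ? cmp : Long.compare(allocationRequestId, other.allocationRequestId);
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SchedulerRequestKey)) {
      return false;
    }
    SchedulerRequestKey k = (SchedulerRequestKey) o;
    return priority == k.priority && allocationRequestId == k.allocationRequestId;
  }

  @Override
  public int hashCode() {
    return 31 * priority + Long.hashCode(allocationRequestId);
  }
}
{code}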






[jira] [Commented] (YARN-5351) ResourceRequest should take ExecutionType into account during comparison

2016-07-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394950#comment-15394950
 ] 

Hudson commented on YARN-5351:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10160 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10160/])
YARN-5351. ResourceRequest should take ExecutionType into account during (arun 
suresh: rev 2d8d183b1992b82c4d8dd3d6b41a1964685d909e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ExecutionTypeRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ExecutionTypeRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java


> ResourceRequest should take ExecutionType into account during comparison
> 
>
> Key: YARN-5351
> URL: https://issues.apache.org/jira/browse/YARN-5351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0
>
> Attachments: YARN-5351.001.patch, YARN-5351.002.patch
>
>
> {{ExecutionTypeRequest}} should be taken into account in the {{compareTo}} 
> method of {{ResourceRequest}}.
> Otherwise, in the {{ask}} list of the {{AMRMClientImpl}} we may incorrectly 
> add pending container requests in the presence of both GUARANTEED and 
> OPPORTUNISTIC containers.
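A simplified stand-in for the comparison change being described (the real ResourceRequest compares more fields, so treat this purely as an illustration): including the execution type in compareTo keeps GUARANTEED and OPPORTUNISTIC asks as distinct entries.

{code}
// Simplified illustration of the compareTo change described above; this is a
// stand-in type, not the actual ResourceRequest implementation.
enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

class SimpleResourceRequest implements Comparable<SimpleResourceRequest> {
  int priority;
  String resourceName;
  int memoryMb;
  ExecutionType executionType;

  @Override
  public int compareTo(SimpleResourceRequest other) {
    int cmp = Integer.compare(priority, other.priority);
    if (cmp != 0) {
      return cmp;
    }
    cmp = resourceName.compareTo(other.resourceName);
    if (cmp != 0) {
      return cmp;
    }
    cmp = Integer.compare(memoryMb, other.memoryMb);
    if (cmp != 0) {
      return cmp;
    }
    // New: compare the execution type as well, so GUARANTEED and OPPORTUNISTIC
    // requests are not collapsed into a single pending entry.
    return executionType.compareTo(other.executionType);
  }
}
{code}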






[jira] [Commented] (YARN-5351) ResourceRequest should take ExecutionType into account during comparison

2016-07-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394930#comment-15394930
 ] 

Arun Suresh commented on YARN-5351:
---

+1, Committing this shortly

> ResourceRequest should take ExecutionType into account during comparison
> 
>
> Key: YARN-5351
> URL: https://issues.apache.org/jira/browse/YARN-5351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5351.001.patch, YARN-5351.002.patch
>
>
> {{ExecutionTypeRequest}} should be taken into account in the {{compareTo}} 
> method of {{ResourceRequest}}.
> Otherwise, in the {{ask}} list of the {{AMRMClientImpl}} we may incorrectly 
> add pending container requests in the presence of both GUARANTEED and 
> OPPORTUNISTIC containers.






[jira] [Commented] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394916#comment-15394916
 ] 

Hudson commented on YARN-5342:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10159 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10159/])
YARN-5342. Improve non-exclusive node partition resource allocation in (wangda: 
rev 49969b16cdba0f251b9f8bf3d8df9906e38b5c61)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriority.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestNodeLabelContainerAllocation.java


> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5342.1.patch, YARN-5342.2.patch, YARN-5342.3.patch, 
> YARN-5342.4.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible only when missed-opportunity >= #cluster-nodes, and missed-opportunity 
> is reset whenever a container is allocated on any node.
> This slows down the frequency of container allocation on a non-exclusive 
> node partition: *when a non-exclusive partition=x has idle resources, we can 
> only allocate one container for this app every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get an allocation from 
> the default partition.






[jira] [Updated] (YARN-4091) Add REST API to retrieve scheduler activity

2016-07-26 Thread Chen Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Ge updated YARN-4091:
--
Attachment: YARN-4091.4.patch

Updated with a new version.

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> YARN-4091-design-doc-v1.pdf, YARN-4091.1.patch, YARN-4091.2.patch, 
> YARN-4091.3.patch, YARN-4091.4.patch, YARN-4091.preliminary.1.patch, 
> app_activities.json, node_activities.json
>
>
> As schedulers are improved with various new capabilities, more configurations 
> that tune the schedulers start to take actions such as limiting container 
> assignment to an application or introducing a delay before allocating a 
> container. There is no clear information passed down from the scheduler to the 
> outside world in these various scenarios, which makes debugging much tougher.
> This ticket is an effort to introduce more defined states at the various points 
> in the scheduler where it skips/rejects a container assignment, activates an 
> application, etc. Such information will help users know what is happening in 
> the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.






[jira] [Updated] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-07-26 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5203:

Attachment: YARN-5203.v4.patch

Thanks [~subru] for the feedback.

 - Add ExecutionTypeRequestInfo object
 - Change list of ResourceRequestInfo in AppInfo to private
 - Rename testUnmarshalApp to testUnmarshalAppInfo

There is still one checkstyle error from too many arguments to 
verifyResourceRequestsGeneric in TestRMWebServicesApps, but I am inclined to 
ignore it since verifyAppInfoGeneric does the same thing.

> Return ResourceRequest JAXB object in ResourceManager Cluster Applications 
> REST API
> ---
>
> Key: YARN-5203
> URL: https://issues.apache.org/jira/browse/YARN-5203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>  Labels: api-breaking, bug, incompatible
> Attachments: YARN-5203.v0.patch, YARN-5203.v1.patch, 
> YARN-5203.v2.patch, YARN-5203.v3.patch, YARN-5203.v4.patch
>
>
> The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} 
> as a String rather than a JAXB object. This prevents downstream tools like the 
> Federation Router (YARN-3659), which depend on the REST API, from unmarshalling 
> the {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB 
> version of the {{ResourceRequest}}.
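For illustration, a JAXB-annotated DAO along these lines (hypothetical fields, not the committed ResourceRequestInfo class) is what lets REST clients unmarshal the request instead of parsing a pre-rendered string:

{code}
// Hypothetical JAXB DAO illustrating the change described above; the fields
// are assumptions, not the actual ResourceRequestInfo class.
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "resourceRequest")
@XmlAccessorType(XmlAccessType.FIELD)
public class ResourceRequestInfo {
  private int priority;
  private String resourceName;
  private int numContainers;
  private long memory;
  private int vCores;

  // No-arg constructor required by JAXB.
  public ResourceRequestInfo() {
  }

  public int getPriority() {
    return priority;
  }

  public String getResourceName() {
    return resourceName;
  }
}
{code}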






[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-07-26 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Attachment: YARN-5113.007.patch

Fixing unit test failures, checkstyle, whitespace and javadoc issues.

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch, YARN-5113.004.patch, YARN-5113.005.patch, 
> YARN-5113.006.patch, YARN-5113.007.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling






[jira] [Commented] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394891#comment-15394891
 ] 

Wangda Tan commented on YARN-5342:
--

Committing this.

> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5342.1.patch, YARN-5342.2.patch, YARN-5342.3.patch, 
> YARN-5342.4.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible only when missed-opportunity >= #cluster-nodes, and missed-opportunity 
> is reset whenever a container is allocated on any node.
> This slows down the frequency of container allocation on a non-exclusive 
> node partition: *when a non-exclusive partition=x has idle resources, we can 
> only allocate one container for this app every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get an allocation from 
> the default partition.






[jira] [Commented] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394851#comment-15394851
 ] 

Hadoop QA commented on YARN-5432:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820323/YARN-5432-trunk.003.patch
 |
| JIRA Issue | YARN-5432 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 6775b38127f9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d84ab8a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12514/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12514/console |
| 

[jira] [Commented] (YARN-4091) Add REST API to retrieve scheduler activity

2016-07-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394834#comment-15394834
 ] 

Wangda Tan commented on YARN-4091:
--

Created a new parent JIRA (YARN-5437) and moved all original children of this 
JIRA to YARN-5437.

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> YARN-4091-design-doc-v1.pdf, YARN-4091.1.patch, YARN-4091.2.patch, 
> YARN-4091.3.patch, YARN-4091.preliminary.1.patch, app_activities.json, 
> node_activities.json
>
>
> As schedulers gain new capabilities, more configurations that tune them start 
> to take actions such as limiting the containers assigned to an application or 
> introducing a delay before allocating a container. No clear information is 
> passed from the scheduler to the outside world in these scenarios, which 
> makes debugging much harder.
> This ticket is an effort to introduce better-defined states at the points 
> where the scheduler skips/rejects container assignment, activates an 
> application, etc. Such information will help users understand what is 
> happening in the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.
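
As a sketch of how such a diagnostics endpoint could be consumed once it 
exists (the URL path below is an assumption for illustration only; the actual 
path and JSON shape are defined by the patch and the attached 
app_activities.json/node_activities.json):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical client for a scheduler-activities REST endpoint.
public class SchedulerActivityProbe {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/scheduler/activities");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        // Per-node allocation attempts, skip/reject reasons, etc.
        System.out.println(line);
      }
    }
  }
}
{code}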



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4091) Add REST API to retrieve scheduler activity

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4091:
-
Summary: Add REST API to retrieve scheduler activity  (was: Improvement: 
Introduce more debug/diagnostics information to detail out scheduler activity)

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> YARN-4091-design-doc-v1.pdf, YARN-4091.1.patch, YARN-4091.2.patch, 
> YARN-4091.3.patch, YARN-4091.preliminary.1.patch, app_activities.json, 
> node_activities.json
>
>
> As schedulers gain new capabilities, more configurations that tune them start 
> to take actions such as limiting the containers assigned to an application or 
> introducing a delay before allocating a container. No clear information is 
> passed from the scheduler to the outside world in these scenarios, which 
> makes debugging much harder.
> This ticket is an effort to introduce better-defined states at the points 
> where the scheduler skips/rejects container assignment, activates an 
> application, etc. Such information will help users understand what is 
> happening in the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4091) Improvement: Introduce more debug/diagnostics information to detail out scheduler activity

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4091:
-
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-5437

> Improvement: Introduce more debug/diagnostics information to detail out 
> scheduler activity
> --
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> YARN-4091-design-doc-v1.pdf, YARN-4091.1.patch, YARN-4091.2.patch, 
> YARN-4091.3.patch, YARN-4091.preliminary.1.patch, app_activities.json, 
> node_activities.json
>
>
> As schedulers gain new capabilities, more configurations that tune them start 
> to take actions such as limiting the containers assigned to an application or 
> introducing a delay before allocating a container. No clear information is 
> passed from the scheduler to the outside world in these scenarios, which 
> makes debugging much harder.
> This ticket is an effort to introduce better-defined states at the points 
> where the scheduler skips/rejects container assignment, activates an 
> application, etc. Such information will help users understand what is 
> happening in the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4329:
-
Parent Issue: YARN-5437  (was: YARN-4091)

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>
> Similar to YARN-3946, it would be useful to capture the possible reason why 
> an application is in the ACCEPTED state in the FairScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4490) RM restart the finished app shows wrong Diagnostics status

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4490:
-
Parent Issue: YARN-5437  (was: YARN-4091)

> RM restart the finished app shows wrong Diagnostics status
> --
>
> Key: YARN-4490
> URL: https://issues.apache.org/jira/browse/YARN-4490
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Reporter: Mohammad Shahid Khan
>Assignee: Mohammad Shahid Khan
>
> After an RM restart, a finished app shows the wrong diagnostics status.
> Preconditions:
> RM recovery is enabled (true).
> Steps:
> 1. Run an application and wait until it finishes.
> 2. Restart the RM.
> 3. Check the application status in the RM web UI.
> Issue:
> The diagnostic message reads: "Attempt recovered after RM restart".
> Expected:
> The diagnostic message should be shown only for applications still waiting 
> for allocation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4192) Add YARN metric logging periodically to a seperate file

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4192:
-
Parent Issue: YARN-5437  (was: YARN-4091)

> Add YARN metric logging periodically to a seperate file
> ---
>
> Key: YARN-4192
> URL: https://issues.apache.org/jira/browse/YARN-4192
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
>
> HDFS-8880 added a framework for logging metrics at a given interval.
> This can be added to YARN as well.
> Any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4104) dryrun of schedule for diagnostic and tenant's complain

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4104:
-
Parent Issue: YARN-5437  (was: YARN-4091)

> dryrun of schedule for diagnostic and tenant's complain
> ---
>
> Key: YARN-4104
> URL: https://issues.apache.org/jira/browse/YARN-4104
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>Priority: Minor
>
> We have more than a thousand queues and several hundred tenants in a busy 
> cluster. We get a lot of complaints/questions from queue owners/operators 
> along the lines of "Why hasn't my queue/app gotten resources for a long 
> while?"
> Such questions are really hard to answer.
> So we added a diagnostic REST endpoint 
> "/ws/v1/cluster/schedule/dryrun/{parentQueueName}" which returns the sorted 
> list of a parent queue's children according to its 
> SchedulingPolicy.getComparator(). All scheduling parameters of the children 
> are also displayed, such as minShare, usage, demand, weight, priority, etc.
> Usually we just call "/ws/v1/cluster/schedule/dryrun/root", and the result 
> answers the question by itself.
> I feel it's really useful for multi-tenant clusters, and hope it could be 
> merged into the mainline.
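
As a toy model of the dryrun idea (illustrative only; the real endpoint sorts 
FairScheduler queues with SchedulingPolicy.getComparator() and reports 
minShare, usage, demand, weight, priority, etc.):

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy model: sort a parent queue's children with the same comparator the
// scheduler would use and print their scheduling parameters.
public class DryrunSketch {

  static final class ChildQueue {
    final String name;
    final double weight;
    final long usage;
    final long demand;
    ChildQueue(String name, double weight, long usage, long demand) {
      this.name = name;
      this.weight = weight;
      this.usage = usage;
      this.demand = demand;
    }
  }

  public static void main(String[] args) {
    List<ChildQueue> children = new ArrayList<>();
    children.add(new ChildQueue("root.tenantA", 1.0, 800, 1000));
    children.add(new ChildQueue("root.tenantB", 2.0, 200, 5000));

    // Stand-in for the policy comparator: smaller usage/weight ratio first.
    Comparator<ChildQueue> policy =
        Comparator.comparingDouble(q -> q.usage / q.weight);
    children.sort(policy);

    // The dryrun response would list children in this order, making it clear
    // which queue is next in line and why.
    for (ChildQueue q : children) {
      System.out.printf("%s weight=%.1f usage=%d demand=%d%n",
          q.name, q.weight, q.usage, q.demand);
    }
  }
}
{code}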



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3946) Update exact reason as to why a submitted app is in ACCEPTED state to app's diagnostic message

2016-07-26 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3946:
-
Parent Issue: YARN-5437  (was: YARN-4091)

> Update exact reason as to why a submitted app is in ACCEPTED state to app's 
> diagnostic message
> --
>
> Key: YARN-3946
> URL: https://issues.apache.org/jira/browse/YARN-3946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.6.0
>Reporter: Sumit Nigam
>Assignee: Naganarasimha G R
> Fix For: 2.8.0
>
> Attachments: 3946WebImages.zip, YARN-3946.v1.001.patch, 
> YARN-3946.v1.002.patch, YARN-3946.v1.003.Images.zip, YARN-3946.v1.003.patch, 
> YARN-3946.v1.004.patch, YARN-3946.v1.005.patch, YARN-3946.v1.006.patch, 
> YARN-3946.v1.007.patch, YARN-3946.v1.008.patch
>
>
> Currently there is no direct way to get the exact reason why a submitted app 
> is still in the ACCEPTED state. It should be possible to know through the RM 
> REST API which aspect is not being met - say, queue limits being reached, 
> core/memory requirements not being met, or the AM limit being reached, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5437) [Umbrella] Add more debug/diagnostic messages to scheduler

2016-07-26 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5437:


 Summary: [Umbrella] Add more debug/diagnostic messages to scheduler
 Key: YARN-5437
 URL: https://issues.apache.org/jira/browse/YARN-5437
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Wangda Tan






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5432:

Attachment: YARN-5432-trunk.003.patch

Fix checkstyle issues. 

> Lock already held by another process while LevelDB cache store creation for 
> dag
> ---
>
> Key: YARN-5432
> URL: https://issues.apache.org/jira/browse/YARN-5432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Li Lu
> Attachments: YARN-5432-trunk.001.patch, YARN-5432-trunk.002.patch, 
> YARN-5432-trunk.003.patch
>
>
> While running ATS stress tests, we issue 15 concurrent ATS reads (python 
> threads calling ws/v1/time/TEZ_DAG_ID, 
> ws/v1/time/TEZ_VERTEX_DI?primaryFilter=TEZ_DAG_ID:, etc.).
> Note: the summary store for ATSv1.5 is RLD, but for each dag/application 
> ATS also creates a LevelDB cache when vertex/task/taskattempt information is 
> queried from ATS.
>  
> The following type of exception appears very frequently in the ATS logs :- 
> 2016-07-23 00:01:56,089 [1517798697@qtp-1198158701-850] INFO 
> org.apache.hadoop.service.AbstractService: Service 
> LeveldbCache.timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832
>  failed in state INITED; cause: 
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> at 
> org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.serviceInit(LevelDBCacheTimelineStore.java:108)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:113)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1021)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:936)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:989)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1041)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138)
> at 
> org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117)
> at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicati

[jira] [Commented] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394808#comment-15394808
 ] 

Hadoop QA commented on YARN-5432:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 8 
new + 6 unchanged - 0 fixed = 14 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820313/YARN-5432-trunk.002.patch
 |
| JIRA Issue | YARN-5432 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 877fcdc395d0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d84ab8a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12513/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12513/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-co

[jira] [Updated] (YARN-5436) Race in AsyncDispatcher can cause random test failures in Tez(probably YARN also )

2016-07-26 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang updated YARN-5436:
---
Description: In YARN-2264, a race in DrainDispatcher was fixed. 
Unfortunately, the same race also exists in AsyncDispatcher but wasn't noticed 
there. In YARN-2991, another DrainDispatcher bug was fixed by letting 
DrainDispatcher extend AsyncDispatcher, because AsyncDispatcher doesn't have 
that issue. However, this shadows the YARN-2264 fix, and now a similar race 
reappears in Tez unit tests (and probably in YARN unit tests as well).  (was: 
In YARN-2264, a race in DrainedDispatcher was fixed. Unfortunately, it also 
exists in AsyncDispatcher but wasn't found. In YARN-2991, another 
DrainedDispatcher bug was fixed by letting DrainedDispatcher extend 
AsyncDispatcher because AsyncDispatcher doesn't have such issue. However, this 
shadows YARN-2264, and now similar race reappears in Tez unit tests (probably 
also YARN unit tests also).)

> Race in AsyncDispatcher can cause random test failures in Tez(probably YARN 
> also )
> --
>
> Key: YARN-5436
> URL: https://issues.apache.org/jira/browse/YARN-5436
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>
> In YARN-2264, a race in DrainDispatcher was fixed. Unfortunately, the same 
> race also exists in AsyncDispatcher but wasn't noticed there. In YARN-2991, 
> another DrainDispatcher bug was fixed by letting DrainDispatcher extend 
> AsyncDispatcher, because AsyncDispatcher doesn't have that issue. However, 
> this shadows the YARN-2264 fix, and now a similar race reappears in Tez unit 
> tests (and probably in YARN unit tests as well).
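
For readers unfamiliar with the issue, here is a standalone sketch of the 
general shape of such a race between a drained flag and an event queue. It is 
only an illustration of the pattern, not the actual AsyncDispatcher or 
DrainDispatcher code.

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: a "drained" flag refreshed from a queue snapshot can
// overwrite a concurrent "drained = false" written by a producer, so a waiter
// can observe drained == true while an event is still queued or being handled.
public class DrainRaceSketch {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  private volatile boolean drained = true;

  void dispatchLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      boolean empty = queue.isEmpty(); // (1) snapshot of the queue
      drained = empty;                 // (2) may overwrite a concurrent
                                       //     "drained = false" from post()
      Runnable event = queue.take();
      event.run();
    }
  }

  void post(Runnable event) {
    queue.add(event);
    drained = false;                   // lost if it lands between (1) and (2)
  }

  boolean awaitDrained() {             // a waiter can then see true even though
    return drained;                    // an event is still queued or running
  }
}
{code}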



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5432:

Attachment: YARN-5432-trunk.002.patch

Had some offline discussion with [~djp]. It seems we're on track to 
overcomplicate the problem. The issue reported in YARN-4987 only occurs when 
the app cache is too small, and we limit the size of the app cache to control 
the upper bound on the ATS reader's memory usage. So, instead of using 
refcounts and handling corner cases like the one reported here, another 
solution is to rely on an adequately sized app cache. We also make the 
potential effect of setting a cache size that is too small clear in the 
config's description. 
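
If that route is taken, the mitigation on a deployment boils down to sizing 
the cache generously. A sketch, assuming the app-cache-size property name 
shown below (please verify the exact key and default for your release in 
yarn-default.xml / EntityGroupFSTimelineStore):

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch only: a larger app-level cache raises the ATS reader's memory bound
// but reduces cache churn, and with it the concurrent re-open of the same
// per-dag LevelDB cache store.
public class AtsCacheSizeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Property name is an assumption for illustration.
    conf.setInt("yarn.timeline-service.entity-group-fs-store.app-cache-size", 64);
    System.out.println(
        conf.get("yarn.timeline-service.entity-group-fs-store.app-cache-size"));
  }
}
{code}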

> Lock already held by another process while LevelDB cache store creation for 
> dag
> ---
>
> Key: YARN-5432
> URL: https://issues.apache.org/jira/browse/YARN-5432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Li Lu
> Attachments: YARN-5432-trunk.001.patch, YARN-5432-trunk.002.patch
>
>
> While running ATS stress tests, we issue 15 concurrent ATS reads (python 
> threads calling ws/v1/time/TEZ_DAG_ID, 
> ws/v1/time/TEZ_VERTEX_DI?primaryFilter=TEZ_DAG_ID:, etc.).
> Note: the summary store for ATSv1.5 is RLD, but for each dag/application 
> ATS also creates a LevelDB cache when vertex/task/taskattempt information is 
> queried from ATS.
>  
> The following type of exception appears very frequently in the ATS logs :- 
> 2016-07-23 00:01:56,089 [1517798697@qtp-1198158701-850] INFO 
> org.apache.hadoop.service.AbstractService: Service 
> LeveldbCache.timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832
>  failed in state INITED; cause: 
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> at 
> org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.serviceInit(LevelDBCacheTimelineStore.java:108)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:113)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1021)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:936)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:989)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1041)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138)
> at 
> org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117)
> at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(Rig

[jira] [Created] (YARN-5436) Race in AsyncDispatcher can cause random test failures in Tez(probably YARN also )

2016-07-26 Thread Zhiyuan Yang (JIRA)
Zhiyuan Yang created YARN-5436:
--

 Summary: Race in AsyncDispatcher can cause random test failures in 
Tez(probably YARN also )
 Key: YARN-5436
 URL: https://issues.apache.org/jira/browse/YARN-5436
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhiyuan Yang
Assignee: Zhiyuan Yang


In YARN-2264, a race in DrainDispatcher was fixed. Unfortunately, the same 
race also exists in AsyncDispatcher but wasn't noticed there. In YARN-2991, 
another DrainDispatcher bug was fixed by letting DrainDispatcher extend 
AsyncDispatcher, because AsyncDispatcher doesn't have that issue. However, 
this shadows the YARN-2264 fix, and now a similar race reappears in Tez unit 
tests (and probably in YARN unit tests as well).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5435) User AM resource limit does not get updated for ReservationQueue after execution of plan-follower.

2016-07-26 Thread Sean Po (JIRA)
Sean Po created YARN-5435:
-

 Summary:  User AM resource limit does not get updated for 
ReservationQueue after execution of plan-follower.
 Key: YARN-5435
 URL: https://issues.apache.org/jira/browse/YARN-5435
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, resourcemanager
Affects Versions: 2.8.0
Reporter: Sean Po
Assignee: Sean Po


After a reservation queue is allocated and the plan is synchronized by the 
plan follower, we expect the user AM resource limit to reflect this change if 
the appropriate configuration is set.

Instead, the user AM resource limit stays the same. As a result, multiple AMs 
cannot run in parallel for small reservations.

To reproduce this issue, create a reservation and submit multiple applications 
against the new reservation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394708#comment-15394708
 ] 

Robert Kanter commented on YARN-5434:
-

The remaining checkstyle warnings are in line with the current formatting of 
the file. The failing test is unrelated (looks like YARN-5389).

> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: YARN-5343.001.patch, YARN-5343.002.patch
>
>
> We should add a {{-client|server}} argument to allow the user to specify 
> whether they want to use the client-side graceful decom tracking or the 
> server-side tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added. In 
> 2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394699#comment-15394699
 ] 

Jason Lowe commented on YARN-5382:
--

Note that if the trunk patch applies as-is to branch-2 (as I suspect it will) 
then there's no need for a separate patch.

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application. It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394696#comment-15394696
 ] 

Vrushali C commented on YARN-5382:
--

Thanks [~jlowe]! I will make the changes and upload an updated patch for trunk 
as well as branch-2. 

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application. It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394678#comment-15394678
 ] 

Jason Lowe commented on YARN-5382:
--

Thanks for the update, Vrushali!

I should have said this earlier: normally we review patches against trunk 
first, since that's where the change must go before it goes anywhere else.  We 
can only put this into 2.7 once it's also in trunk, branch-2, and branch-2.8, 
otherwise we risk releasing a fix/feature that disappears in later releases.

createSuccessLog should build on other versions rather than replicate the code. 
 For example, now that we have an overload that takes the IP address as a 
string, the one that doesn't take an IP address should get the IP address as a 
string and call the other version.
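
A generic sketch of that delegation pattern (signatures are illustrative, not 
the actual RMAuditLogger methods):

{code}
// Illustrative only: the overload without an address resolves it and forwards,
// so the log-formatting logic lives in exactly one place.
final class AuditLogSketch {

  static String createSuccessLog(String user, String operation, String target,
                                 String appId, String ip) {
    return String.format("user=%s operation=%s target=%s appId=%s ip=%s",
        user, operation, target, appId, ip);
  }

  // The no-IP overload builds on the other version instead of duplicating it.
  static String createSuccessLog(String user, String operation, String target,
                                 String appId) {
    return createSuccessLog(user, operation, target, appId, resolveRemoteAddress());
  }

  private static String resolveRemoteAddress() {
    return "127.0.0.1";  // stand-in for the real remote-address lookup
  }
}
{code}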

Nit: ClientRMService#forceKillApplication calls Server.getRemoteAddress three 
times, which is wasteful; we should just cache it in a local.

Would RMAppKillByClientEvent be a more meaningful name than RMAppKillLogEvent?

Should we be passing around an InetAddress in the event and audit log overloads 
instead of a String for improved type safety?  Not a must-fix, just wondering 
if it would be clearer.

I'm not sure we should log empty strings for values that aren't provided.  It 
would probably be better to log "null" which is what it will already do for 
user names, for example, if we just let the null pass through.  Not sure if 
audit log parsers will properly parse if there isn't some kind of corresponding 
token listed for a key.

Existing audit log code will not log a key for an IP address if it can't obtain 
it.  This code will log an IP key with no value.  May be a reason to pass the 
InetAddress through and let the audit logger decide whether to add the key if 
the value is non-null and it can obtain the address.

In the tests, why do we need a fooUser and a testUGI?  I think we only need one 
of these.  fooUser is created and then only used to get information to create 
testUGI, and I don't see why we can't just create testUGI directly.  Also that 
UGI could be a static final "constant" in the test class rather than having 
each test method replicate the code.

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application. It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerRequestKey

2016-07-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394662#comment-15394662
 ] 

Hudson commented on YARN-5392:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10156 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10156/])
YARN-5392. Replace use of Priority in the Scheduling infrastructure with (arun 
suresh: rev 5aace38b748ba71aaadd2c4d64eba8dc1f816828)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerRequestKey.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Priority.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerReservedEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerPreemption.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestNodeLabelContainerAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/IncreaseContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSSchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSAppAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test

[jira] [Commented] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394640#comment-15394640
 ] 

Hadoop QA commented on YARN-5434:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The 
patch generated 2 new + 118 unchanged - 10 fixed = 120 total (was 128) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 34s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 6s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820287/YARN-5343.002.patch |
| JIRA Issue | YARN-5434 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f8a53024c4ee 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d2cf8b5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12512/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12512/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12512/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12512/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12512/console |
| Powered by | Apach

[jira] [Updated] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerRequestKey

2016-07-26 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5392:
--
Summary: Replace use of Priority in the Scheduling infrastructure with an 
opaque ShedulerRequestKey  (was: Replace use of Priority in the Scheduling 
infrastructure with an opaque ShedulerKey)

> Replace use of Priority in the Scheduling infrastructure with an opaque 
> ShedulerRequestKey
> --
>
> Key: YARN-5392
> URL: https://issues.apache.org/jira/browse/YARN-5392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5392.001.patch, YARN-5392.002.patch, 
> YARN-5392.003.patch, YARN-5392.004.patch, YARN-5392.005.patch, 
> YARN-5392.006.patch, YARN-5392.007.patch, YARN-5392.008.patch, 
> YARN-5392.009.patch, YARN-5392.010.patch
>
>
> Based on discussions in YARN-4888, this jira proposes to replace the use of 
> {{Priority}} in the Scheduler infrastructure (Scheduler, Queues, SchedulerApp 
> / Node etc.) with a more opaque and extensible {{SchedulerKey}}.
> Note: Even though {{SchedulerKey}} will be used by the internal scheduling 
> infrastructure, it will not be exposed to the Client or the AM. The 
> SchedulerKey is meant to be an internal construct that is derived from 
> attributes of the ResourceRequest / ApplicationSubmissionContext / Scheduler 
> Configuration etc.
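
As a toy illustration of what "opaque key" means here (the fields are chosen 
only for the example; the real key is derived inside the scheduler):

{code}
import java.util.Objects;

// Toy opaque scheduler key: the scheduler only compares, hashes and equals the
// key; it does not care which request attributes the key was derived from.
public final class SchedulerKeySketch implements Comparable<SchedulerKeySketch> {
  private final int priority;
  private final long allocationRequestId;

  public SchedulerKeySketch(int priority, long allocationRequestId) {
    this.priority = priority;
    this.allocationRequestId = allocationRequestId;
  }

  @Override
  public int compareTo(SchedulerKeySketch other) {
    int cmp = Integer.compare(priority, other.priority);
    return cmp != 0 ? cmp
        : Long.compare(allocationRequestId, other.allocationRequestId);
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SchedulerKeySketch)) {
      return false;
    }
    SchedulerKeySketch k = (SchedulerKeySketch) o;
    return priority == k.priority && allocationRequestId == k.allocationRequestId;
  }

  @Override
  public int hashCode() {
    return Objects.hash(priority, allocationRequestId);
  }
}
{code}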



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5351) ResourceRequest should take ExecutionType into account during comparison

2016-07-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394595#comment-15394595
 ] 

Konstantinos Karanasos commented on YARN-5351:
--

The three test methods of {{TestYarnClient}} that fail seem unrelated.
Moreover, they ran successfully for me locally.

> ResourceRequest should take ExecutionType into account during comparison
> 
>
> Key: YARN-5351
> URL: https://issues.apache.org/jira/browse/YARN-5351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5351.001.patch, YARN-5351.002.patch
>
>
> {{ExecutionTypeRequest}} should be taken into account in the {{compareTo}} 
> method of {{ResourceRequest}}.
> Otherwise, in the {{ask}} list of the {{AMRMClientImpl}} we may incorrectly 
> add pending container requests in the presence of both GUARANTEED and 
> OPPORTUNISTIC containers.
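
As an illustration of the idea (not the actual patch), a comparison that also looks at the execution type keeps otherwise identical GUARANTEED and OPPORTUNISTIC asks distinct; the types below are simplified stand-ins, not the real YARN records classes:

{code}
// Simplified stand-ins for illustration; not the real org.apache.hadoop.yarn records.
enum ExecTypeSketch { GUARANTEED, OPPORTUNISTIC }

public class RequestComparisonSketch {
  static int compare(int prioA, String locA, ExecTypeSketch typeA,
                     int prioB, String locB, ExecTypeSketch typeB) {
    int cmp = Integer.compare(prioA, prioB);
    if (cmp != 0) {
      return cmp;
    }
    cmp = locA.compareTo(locB);
    if (cmp != 0) {
      return cmp;
    }
    // The missing step described above: otherwise identical asks with different
    // execution types no longer compare as equal, so both stay in the ask list.
    return typeA.compareTo(typeB);
  }

  public static void main(String[] args) {
    System.out.println(compare(1, "*", ExecTypeSketch.GUARANTEED,
        1, "*", ExecTypeSketch.OPPORTUNISTIC) != 0); // prints true
  }
}
{code}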



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerKey

2016-07-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394594#comment-15394594
 ] 

Karthik Kambatla commented on YARN-5392:


+1. Thanks for your patience through the reviews, Arun. 

> Replace use of Priority in the Scheduling infrastructure with an opaque 
> ShedulerKey
> ---
>
> Key: YARN-5392
> URL: https://issues.apache.org/jira/browse/YARN-5392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5392.001.patch, YARN-5392.002.patch, 
> YARN-5392.003.patch, YARN-5392.004.patch, YARN-5392.005.patch, 
> YARN-5392.006.patch, YARN-5392.007.patch, YARN-5392.008.patch, 
> YARN-5392.009.patch, YARN-5392.010.patch
>
>
> Based on discussions in YARN-4888, this jira proposes to replace the use of 
> {{Priority}} in the Scheduler infrastructure (Scheduler, Queues, SchedulerApp 
> / Node etc.) with a more opaque and extensible {{SchedulerKey}}.
> Note: Even though {{SchedulerKey}} will be used by the internal scheduling 
> infrastructure, It will not be exposed to the Client or the AM. The 
> SchdulerKey is meant to be an internal construct that is derived from 
> attributes of the ResourceRequest / ApplicationSubmissionContext / Scheduler 
> Configuration etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-5434:

Attachment: YARN-5343.002.patch

The 002 patch fixes some of the checkstyle warnings.  The remaining warnings 
are due to the existing formatting in the file.



> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: YARN-5343.001.patch, YARN-5343.002.patch
>
>
> We should add {{-client|server}} argument to allow the user to specify if 
> they want to use the client-side graceful decom tracking, or the server-side 
> tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
> 2.8, using {{-server}} would just throw an Exception.
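
A minimal sketch of the proposed behaviour, assuming hypothetical method and message names (this is not the actual CLI change):

{code}
// Hypothetical sketch of the proposed flag handling; not the actual CLI code.
public class DecomTrackingFlagSketch {
  static boolean useServerSideTracking(String flag) {
    if ("-client".equals(flag)) {
      return false;                      // client-side tracking, available today
    }
    if ("-server".equals(flag)) {
      // In a 2.8-style build without the server-side tracking from YARN-4676,
      // this branch would simply fail.
      throw new UnsupportedOperationException(
          "Server-side graceful decommission tracking is not available in this release");
    }
    throw new IllegalArgumentException("Expected -client or -server, got: " + flag);
  }

  public static void main(String[] args) {
    System.out.println(useServerSideTracking("-client")); // prints false
  }
}
{code}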



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394591#comment-15394591
 ] 

Shane Kumpf commented on YARN-5428:
---

As previously mentioned, the checkstyle failures are because two methods were 
already at 150 lines, and my additions put them over. We should refactor these 
in another issue. The unit test failure appears unrelated. I believe this is 
ready for review.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
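
For illustration, the docker client's global {{--config}} option takes a directory containing {{config.json}}; a launch command could pass a centralized directory such as the made-up path below, rather than relying on the submitting user's home directory:

{code}
import java.util.Arrays;
import java.util.List;

// Illustrative only: shows that the directory, not the config file itself, is what
// the docker client expects; the path here is a made-up example, not a YARN default.
public class DockerClientConfigSketch {
  public static void main(String[] args) {
    String clientConfigDir = "/etc/hadoop/docker-client";   // hypothetical directory
    List<String> pullCommand = Arrays.asList(
        "docker", "--config", clientConfigDir, "pull", "library/centos:latest");
    System.out.println(String.join(" ", pullCommand));
  }
}
{code}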



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394582#comment-15394582
 ] 

Karthik Kambatla commented on YARN-5434:


Marking this a blocker for 2.8.0 to ensure we maintain compatibility on the CLI 
for graceful decommission. 

> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: YARN-5343.001.patch
>
>
> We should add {{-client|server}} argument to allow the user to specify if 
> they want to use the client-side graceful decom tracking, or the server-side 
> tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
> 2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5434:
---
Priority: Blocker  (was: Major)

> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: YARN-5343.001.patch
>
>
> We should add {{-client|server}} argument to allow the user to specify if 
> they want to use the client-side graceful decom tracking, or the server-side 
> tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
> 2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerKey

2016-07-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394580#comment-15394580
 ] 

Arun Suresh commented on YARN-5392:
---

The test failure is unrelated

> Replace use of Priority in the Scheduling infrastructure with an opaque 
> ShedulerKey
> ---
>
> Key: YARN-5392
> URL: https://issues.apache.org/jira/browse/YARN-5392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5392.001.patch, YARN-5392.002.patch, 
> YARN-5392.003.patch, YARN-5392.004.patch, YARN-5392.005.patch, 
> YARN-5392.006.patch, YARN-5392.007.patch, YARN-5392.008.patch, 
> YARN-5392.009.patch, YARN-5392.010.patch
>
>
> Based on discussions in YARN-4888, this jira proposes to replace the use of 
> {{Priority}} in the Scheduler infrastructure (Scheduler, Queues, SchedulerApp 
> / Node etc.) with a more opaque and extensible {{SchedulerKey}}.
> Note: Even though {{SchedulerKey}} will be used by the internal scheduling 
> infrastructure, It will not be exposed to the Client or the AM. The 
> SchdulerKey is meant to be an internal construct that is derived from 
> attributes of the ResourceRequest / ApplicationSubmissionContext / Scheduler 
> Configuration etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394556#comment-15394556
 ] 

Hadoop QA commented on YARN-5434:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The 
patch generated 25 new + 118 unchanged - 10 fixed = 143 total (was 128) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 22s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 7s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820276/YARN-5343.001.patch |
| JIRA Issue | YARN-5434 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 674ca6e8e3c0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d2cf8b5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12510/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12510/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12510/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12510/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12510/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |

[jira] [Commented] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerKey

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394551#comment-15394551
 ] 

Hadoop QA commented on YARN-5392:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 50s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 18 new + 2041 unchanged - 81 fixed = 2059 total (was 2122) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 963 unchanged - 26 fixed = 963 total (was 989) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestLeaderElectorService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820266/YARN-5392.010.patch |
| JIRA Issue | YARN-5392 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a9bd3bffe73 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d2cf8b5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12509/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12509/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12509/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| 

[jira] [Commented] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394550#comment-15394550
 ] 

Hadoop QA commented on YARN-5404:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
17s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820278/YARN-5404-YARN-4757.002.patch
 |
| JIRA Issue | YARN-5404 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c01e0a99f713 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4757 / 552b7cc |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12511/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12511/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.002.patch, YARN-5404.001.patch
>
>
> In some environments, the 

[jira] [Updated] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-26 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5404:
--
Attachment: YARN-5404-YARN-4757.002.patch

> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.002.patch, YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (ie the YARN nodemanager host IPs may also be part of the 
> larger subnet). 
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating (as the total IP count is greater 
> than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.
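
As a worked illustration of the arithmetic only (not the Registry DNS implementation), the example /21 above splits into eight /24-sized reverse zones:

{code}
// Illustrative arithmetic only; not the actual YARN Registry DNS code.
public class ReverseZoneSplitSketch {
  public static void main(String[] args) {
    // Example from the description: network 172.27.0.0, mask 255.255.248.0 (a /21).
    int firstOctet = 172, secondOctet = 27, baseThirdOctet = 0;
    int prefixLength = 21;
    // A /21 spans 2^(24 - 21) = 8 contiguous /24 blocks, i.e. 8 reverse zones.
    int zones = 1 << (24 - prefixLength);
    for (int i = 0; i < zones; i++) {
      System.out.println((baseThirdOctet + i) + "." + secondOctet + "." + firstOctet
          + ".in-addr.arpa");
    }
    // Prints 0.27.172.in-addr.arpa through 7.27.172.in-addr.arpa, matching the
    // per-/24 zones a forwarder would typically carry for this range.
  }
}
{code}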



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-5434:

Attachment: YARN-5343.001.patch

[~djp]

> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-5343.001.patch
>
>
> We should add {{-client|server}} argument to allow the user to specify if 
> they want to use the client-side graceful decom tracking, or the server-side 
> tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
> 2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394512#comment-15394512
 ] 

Robert Kanter commented on YARN-5434:
-

I also noticed that the optional timeout isn't actually optional in the code, 
so I'll fix that while I'm touching this code anyway.

> Add -client|server argument for graceful decom
> --
>
> Key: YARN-5434
> URL: https://issues.apache.org/jira/browse/YARN-5434
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> We should add {{-client|server}} argument to allow the user to specify if 
> they want to use the client-side graceful decom tracking, or the server-side 
> tracking (YARN-4676).
> Even though the server-side tracking won't go into 2.8, we should add the 
> arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
> 2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5434) Add -client|server argument for graceful decom

2016-07-26 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-5434:
---

 Summary: Add -client|server argument for graceful decom
 Key: YARN-5434
 URL: https://issues.apache.org/jira/browse/YARN-5434
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: graceful
Affects Versions: 2.8.0
Reporter: Robert Kanter
Assignee: Robert Kanter


We should add {{-client|server}} argument to allow the user to specify if they 
want to use the client-side graceful decom tracking, or the server-side 
tracking (YARN-4676).

Even though the server-side tracking won't go into 2.8, we should add the 
arguments to 2.8 for compatibility between 2.8 and 2.9, when it's added.  In 
2.8, using {{-server}} would just throw an Exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5425) TestDirectoryCollection.testCreateDirectories failed

2016-07-26 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-5425:
--

Assignee: Yufei Gu

> TestDirectoryCollection.testCreateDirectories failed
> 
>
> Key: YARN-5425
> URL: https://issues.apache.org/jira/browse/YARN-5425
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> {code}
> java.lang.AssertionError: local dir parent not created with proper 
> permissions expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.TestDirectoryCollection.testCreateDirectories(TestDirectoryCollection.java:113)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerKey

2016-07-26 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5392:
--
Attachment: YARN-5392.010.patch

Hmm.. since it is a test class, I did not want to modify any more of it than 
necessary. In any case, updating the patch with your suggested change...

> Replace use of Priority in the Scheduling infrastructure with an opaque 
> ShedulerKey
> ---
>
> Key: YARN-5392
> URL: https://issues.apache.org/jira/browse/YARN-5392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5392.001.patch, YARN-5392.002.patch, 
> YARN-5392.003.patch, YARN-5392.004.patch, YARN-5392.005.patch, 
> YARN-5392.006.patch, YARN-5392.007.patch, YARN-5392.008.patch, 
> YARN-5392.009.patch, YARN-5392.010.patch
>
>
> Based on discussions in YARN-4888, this jira proposes to replace the use of 
> {{Priority}} in the Scheduler infrastructure (Scheduler, Queues, SchedulerApp 
> / Node etc.) with a more opaque and extensible {{SchedulerKey}}.
> Note: Even though {{SchedulerKey}} will be used by the internal scheduling 
> infrastructure, It will not be exposed to the Client or the AM. The 
> SchdulerKey is meant to be an internal construct that is derived from 
> attributes of the ResourceRequest / ApplicationSubmissionContext / Scheduler 
> Configuration etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5137) Make DiskChecker pluggable in NodeManager

2016-07-26 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394456#comment-15394456
 ] 

Ray Chiang commented on YARN-5137:
--

Yeah, I think the fix to LinuxContainerExecutor and 
TestLinuxContainerExecutorWithMocks is sufficiently unrelated that it should go 
into its own JIRA.

[~vvasudev], it looks like YARN-4253 was the most recent modification to this 
code. Would you agree that this fix matches the intended interaction between 
containerSchedPriorityIsSet and LinuxContainerExecutor#setConf()? And should it 
go into a separate JIRA?


> Make DiskChecker pluggable in NodeManager
> -
>
> Key: YARN-5137
> URL: https://issues.apache.org/jira/browse/YARN-5137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Attachments: YARN-5137.001.patch, YARN-5137.002.patch, 
> YARN-5137.003.patch, YARN-5137.004.patch, YARN-5137.005.patch
>
>
> It would be nice to have the option for a DiskChecker that has more 
> sophisticated checking capabilities.  In order to do this, we would first 
> need DiskChecker to be pluggable.
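
As a rough sketch of what "pluggable" could mean here (the interface, classes and loading below are assumptions, not the YARN-5137 design):

{code}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of a pluggable disk checker; the interface and factory are
// illustrative only and not the actual YARN-5137 design.
interface DiskCheckerPluginSketch {
  void checkDir(File dir) throws IOException;   // throw if the directory is unhealthy
}

class BasicDiskCheckerPluginSketch implements DiskCheckerPluginSketch {
  @Override
  public void checkDir(File dir) throws IOException {
    if (!dir.isDirectory() || !dir.canRead() || !dir.canWrite() || !dir.canExecute()) {
      throw new IOException("Directory failed basic health check: " + dir);
    }
  }
}

public class PluggableDiskCheckerSketch {
  // A NodeManager-style factory could load whatever checker class the configuration names.
  static DiskCheckerPluginSketch load(String className) throws Exception {
    return (DiskCheckerPluginSketch) Class.forName(className)
        .getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    DiskCheckerPluginSketch checker = load(BasicDiskCheckerPluginSketch.class.getName());
    checker.checkDir(new File(System.getProperty("java.io.tmpdir")));
    System.out.println("tmpdir passed the basic check");
  }
}
{code}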



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5392) Replace use of Priority in the Scheduling infrastructure with an opaque ShedulerKey

2016-07-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394449#comment-15394449
 ] 

Karthik Kambatla commented on YARN-5392:


In my comments above, 2.1 still applies. Otherwise, looks good. 

> Replace use of Priority in the Scheduling infrastructure with an opaque 
> ShedulerKey
> ---
>
> Key: YARN-5392
> URL: https://issues.apache.org/jira/browse/YARN-5392
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5392.001.patch, YARN-5392.002.patch, 
> YARN-5392.003.patch, YARN-5392.004.patch, YARN-5392.005.patch, 
> YARN-5392.006.patch, YARN-5392.007.patch, YARN-5392.008.patch, 
> YARN-5392.009.patch
>
>
> Based on discussions in YARN-4888, this jira proposes to replace the use of 
> {{Priority}} in the Scheduler infrastructure (Scheduler, Queues, SchedulerApp 
> / Node etc.) with a more opaque and extensible {{SchedulerKey}}.
> Note: Even though {{SchedulerKey}} will be used by the internal scheduling 
> infrastructure, It will not be exposed to the Client or the AM. The 
> SchdulerKey is meant to be an internal construct that is derived from 
> attributes of the ResourceRequest / ApplicationSubmissionContext / Scheduler 
> Configuration etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394441#comment-15394441
 ] 

Shane Kumpf commented on YARN-5404:
---

Uploading a new patch that addresses the comments above. Please let me know if 
I missed any or if you have additional suggestions.

> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (ie the YARN nodemanager host IPs may also be part of the 
> larger subnet). 
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating (as the total IP count is greater 
> than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394358#comment-15394358
 ] 

Hadoop QA commented on YARN-5429:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: 
The patch generated 0 new + 100 unchanged - 1 fixed = 100 total (was 101) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 0 new + 125 unchanged - 31 fixed = 125 total (was 156) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820232/YARN-5429.02.patch |
| JIRA Issue | YARN-5429 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f89e035852c2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / da6adf5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12508/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12508/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task

[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394356#comment-15394356
 ] 

Allen Wittenauer commented on YARN-5431:


FWIW, I'm planning on making this all consistent in trunk with HADOOP-13341.  

HDFS stuff will be:

HDFS_namenode_OPTS, HDFS_datanode_OPTS,etc

YARN stuff will be:

YARN_resourcemanager_OPTS, YARN_nodemanager_OPTS, etc

Effectively:

(capital command)_(subcommand)_OPTS

This means one could do HADOOP_distcp_OPTS, MAPRED_streaming_OPTS,...  all in a 
consistent way without having custom env var handling all over the place.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394340#comment-15394340
 ] 

Hadoop QA commented on YARN-5432:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820226/YARN-5432-trunk.001.patch
 |
| JIRA Issue | YARN-5432 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1975846e70c5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / da6adf5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12507/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12507/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Lock already held by another process while LevelDB cache store creation for 
> dag
> ---
>
> Key: YARN-5432
> URL: https://issues.apache.org/jira/browse/YARN-5432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Li Lu
>

[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394338#comment-15394338
 ] 

Vrushali C commented on YARN-5382:
--

Thanks [~sunilg]!

[~jlowe] Would appreciate your feedback as well.. Thanks!

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.
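
A minimal sketch of the missing step, with hypothetical method and operation names (this is not the actual ClientRMService/RMAuditLogger code):

{code}
// Hypothetical sketch only; the method and operation names are illustrative.
public class KillAuditSketch {
  static void forceKillApplication(String user, String appId, boolean alreadyFinished) {
    if (alreadyFinished) {
      auditSuccess(user, "Kill Application Request", appId);   // existing behaviour
      return;
    }
    // The gap described above: also record the request for an application that is
    // still active, before handing the kill event to the RM dispatcher.
    auditSuccess(user, "Kill Application Request", appId);
    // ... dispatch the kill event here ...
  }

  private static void auditSuccess(String user, String operation, String appId) {
    System.out.println("AUDIT success user=" + user + " op=" + operation + " app=" + appId);
  }

  public static void main(String[] args) {
    forceKillApplication("alice", "application_1469090881194_0001", false);
  }
}
{code}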



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-26 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5429:
-
Attachment: YARN-5429.02.patch

Thanks [~varun_saxena], yes, I fixed the minor checkstyle issue and am uploading 
patch v02. Appreciate your review!


> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5432:

Attachment: YARN-5432-trunk.001.patch

Uploading a patch to fix this issue. It addresses the problem on two fronts: 
1. use a creation timestamp in the leveldb cache store's db name, so that caches 
pointing to the same group id have unique names even if the old cache has been 
evicted; 2. perform a precondition check on the cache size so that the ATS v1.5 
store cannot run with 0 caches. 
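
A minimal sketch of the two fixes described above, with hypothetical names and formats (not the actual YARN-5432 patch):

{code}
// Illustrative sketch only; names and formats are hypothetical.
public class CacheStoreNamingSketch {
  static String cacheDbName(String groupId, long cacheItemCreationTimeMs) {
    // Including the creation time keeps the db name unique even when a new cache
    // item is built for the same group id after the old one was evicted but is
    // still held open by an in-flight reader.
    return groupId + "_" + cacheItemCreationTimeMs + "-timeline-cache.ldb";
  }

  static void checkCacheSize(int appCacheMaxSize) {
    if (appCacheMaxSize <= 0) {
      throw new IllegalArgumentException(
          "ATS v1.5 cache size must be positive, got " + appCacheMaxSize);
    }
  }

  public static void main(String[] args) {
    checkCacheSize(10);
    System.out.println(cacheDbName("timelineEntityGroupId_1469090881194_4832",
        System.currentTimeMillis()));
  }
}
{code}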

> Lock already held by another process while LevelDB cache store creation for 
> dag
> ---
>
> Key: YARN-5432
> URL: https://issues.apache.org/jira/browse/YARN-5432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Li Lu
> Attachments: YARN-5432-trunk.001.patch
>
>
> While running ATS stress tests, 15 concurrent ATS read calls are issued (python 
> threads fetching ws/v1/time/TEZ_DAG_ID, 
> ws/v1/time/TEZ_VERTEX_DI?primaryFilter=TEZ_DAG_ID: etc).
> Note: the summary store for ATSv1.5 is RLD, but for each dag/application 
> ATS also creates a leveldb cache when vertex/task/taskattempt information is 
> queried from ATS.
>  
> Getting the following type of exception very frequently in the ATS logs :- 
> 2016-07-23 00:01:56,089 [1517798697@qtp-1198158701-850] INFO 
> org.apache.hadoop.service.AbstractService: Service 
> LeveldbCache.timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832
>  failed in state INITED; cause: 
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> at 
> org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.serviceInit(LevelDBCacheTimelineStore.java:108)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:113)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1021)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:936)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:989)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1041)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138)
> at 
> org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117)
> at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationI

[jira] [Commented] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394287#comment-15394287
 ] 

Li Lu commented on YARN-5432:
-

Thanks for reporting this issue [~karams]! 

The main cause of this issue is that, after the concurrency changes in YARN-4987, it 
is possible for readers to keep a cache item from being released. If another read 
request for the same entity group id arrives during this period, the storage will try 
to create a new cache at the same file location, which causes the locking issue on the 
leveldb. This also explains why the problem is more severe when the cache size is small 
and reader contention is high: with smaller cache sizes, cache evictions are more 
frequent, and higher reader contention increases the chance that readers are still 
"holding" a cache storage.
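
As a rough illustration of the trigger (the host, port and endpoint below are 
placeholders taken from the description, not an exact reproduction recipe), many 
concurrent reads against the same entity group force repeated cache creation on the 
same leveldb path once evictions start:

{code}
# fire 15 concurrent timeline reads at the same entity type, similar to the
# stress test described in this issue; repeating this in a loop drives cache churn
for i in $(seq 1 15); do
  curl -s "http://ats-host:8188/ws/v1/timeline/TEZ_DAG_ID" > /dev/null &
done
wait
{code}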

> Lock already held by another process while LevelDB cache store creation for 
> dag
> ---
>
> Key: YARN-5432
> URL: https://issues.apache.org/jira/browse/YARN-5432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Li Lu
>
> While running ATS stress tests, 15 concurrent ATS reads are made (python 
> threads issuing ws/v1/time/TEZ_DAG_ID, 
> ws/v1/time/TEZ_VERTEX_DI?primaryFilter=TEZ_DAG_ID: etc. calls).
> Note: the summary store for ATSv1.5 is RLD, but for each dag/application ATS 
> also creates a leveldb cache when vertex/task/taskattempt information is 
> queried from ATS.
>  
> Getting the following type of exception very frequently in the ATS logs:
> 2016-07-23 00:01:56,089 [1517798697@qtp-1198158701-850] INFO 
> org.apache.hadoop.service.AbstractService: Service 
> LeveldbCache.timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832
>  failed in state INITED; cause: 
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> at 
> org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.serviceInit(LevelDBCacheTimelineStore.java:108)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:113)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1021)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:936)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:989)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1041)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138)
> at 
> org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117)
> at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>  

[jira] [Commented] (YARN-5137) Make DiskChecker pluggable in NodeManager

2016-07-26 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394278#comment-15394278
 ] 

Yufei Gu commented on YARN-5137:


The failed tests are unrelated.

> Make DiskChecker pluggable in NodeManager
> -
>
> Key: YARN-5137
> URL: https://issues.apache.org/jira/browse/YARN-5137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Attachments: YARN-5137.001.patch, YARN-5137.002.patch, 
> YARN-5137.003.patch, YARN-5137.004.patch, YARN-5137.005.patch
>
>
> It would be nice to have the option for a DiskChecker that has more 
> sophisticated checking capabilities.  In order to do this, we would first 
> need DiskChecker to be pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5264) Use FSQueue to store queue-specific information

2016-07-26 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5264:
---
Attachment: YARN-5264.002.patch

Uploaded patch 002 to solve the test failures.

> Use FSQueue to store queue-specific information
> ---
>
> Key: YARN-5264
> URL: https://issues.apache.org/jira/browse/YARN-5264
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5264.001.patch, YARN-5264.002.patch
>
>
> Use FSQueue to store queue-specific information instead of querying 
> AllocationConfiguration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394191#comment-15394191
 ] 

Varun Saxena commented on YARN-5431:


I am not sure why the changes for all the daemons were not made as part of 
YARN-1400, but I think opts starting with YARN make more sense. I suggested 
changing it to YARN_TIMELINEREADER_OPTS here itself because this one is new.

Let us see what others think.
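
For illustration only, a rough sketch of the shape such a change could take in the 
yarn script (this is not the attached patch, and the exact hook the hadoop shell 
framework uses for per-daemon opts may differ):

{code}
timelinereader)
  HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
  HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
  # sketch: pick up reader-specific opts, with YARN_TIMELINEREADER_OPTS being
  # the variable name proposed in this discussion (documented in yarn-env.sh)
  HADOOP_OPTS="${HADOOP_OPTS} ${YARN_TIMELINEREADER_OPTS}"
;;
{code}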

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394151#comment-15394151
 ] 

Rohith Sharma K S commented on YARN-5431:
-

Changing from {{HADOOP_TIMELINEREADER_OPTS}} to {{YARN_TIMELINEREADER_OPTS}} 
can be done, but the other opts start with HADOOP, and it would be nice to keep 
this one consistent with them. I am also not sure why the other opts start with 
the HADOOP prefix, since that was not handled as part of YARN-1400.

I am open to incorporating this in the patch if it is agreed upon. Otherwise, 
it can be tracked in a separate JIRA.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394125#comment-15394125
 ] 

Varun Saxena commented on YARN-5431:


I think the last point should be fine, because we capture which daemon to run 
in the ATSv2 documentation.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394115#comment-15394115
 ] 

Varun Saxena edited comment on YARN-5431 at 7/26/16 5:18 PM:
-

[~rohithsharma], I would prefer to name the opts as 
{{YARN_TIMELINEREADER_OPTS}} instead of {{HADOOP_TIMELINEREADER_OPTS}} along 
the lines of YARN_RESOURCEMANAGER_OPTS.

Ideally HADOOP_NODEMANAGER_OPTS should also be named as YARN_NODEMANAGER_OPTS 
and so on. I see in YARN-1400 only RM opts were changed but not others.
Also should we have better description for these OPTS (for WINDOWS script) 
along the lines of {{yarn-env.sh}} ?
But these issues are not related to this JIRA and a separate JIRA can be filed 
for it...

Moreover, timelinereader command is meant only for ATSv2 and timelineserver 
only for ATSv1/1.5 (i.e. depending on timeline service version you are running)
This isn't very clear from the help. Maybe we can handle this here itself. Do 
we need to ? cc [~sjlee0]



was (Author: varun_saxena):
[~rohithsharma], I would prefer to name the opts as 
{{YARN_TIMELINEREADER_OPTS}} instead of {{HADOOP_TIMELINEREADER_OPTS}} along 
the lines of YARN_RESOURCEMANAGER_OPTS.

Ideally HADOOP_NODEMANAGER_OPTS should also be named as YARN_NODEMANAGER_OPTS 
and so on. I see in YARN-1400 only RM opts were changed but not others.
Also should we have better description for these OPTS (for WINDOWS script) 
along the lines of {{yarn-env.sh}} ?
But these issues are not related to this JIRA and a separate JIRA can be filed 
for it...

Moreover, timelinereader command is meant only for ATSv2 and timelineserver 
only for ATSv1/1.5 (i.e. depending on timeline service version you are running)
This isn't very clear from the help. Maybe we can handle this here itself. cc 
[~sjlee0]


> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394115#comment-15394115
 ] 

Varun Saxena edited comment on YARN-5431 at 7/26/16 5:16 PM:
-

[~rohithsharma], I would prefer to name the opts as 
{{YARN_TIMELINEREADER_OPTS}} instead of {{HADOOP_TIMELINEREADER_OPTS}} along 
the lines of YARN_RESOURCEMANAGER_OPTS.

Ideally HADOOP_NODEMANAGER_OPTS should also be named as YARN_NODEMANAGER_OPTS 
and so on. I see in YARN-1400 only RM opts were changed but not others.
Also should we have better description for these OPTS (for WINDOWS script) 
along the lines of {{yarn-env.sh}} ?
But these issues are not related to this JIRA and a separate JIRA can be filed 
for it...

Moreover, timelinereader command is meant only for ATSv2 and timelineserver 
only for ATSv1/1.5 (i.e. depending on timeline service version you are running)
This isn't very clear from the help. Maybe we can handle this here itself. cc 
[~sjlee0]



was (Author: varun_saxena):
[~rohithsharma], I would prefer to name the opts as 
{{YARN_TIMELINEREADER_OPTS}} instead of {{HADOOP_TIMELINEREADER_OPTS}} along 
the lines of YARN_RESOURCEMANAGER_OPTS.

Ideally HADOOP_NODEMANAGER_OPTS should also be named as YARN_NODEMANAGER_OPTS 
and so on. I see in YARN-1400 only RM opts were changed but not others.
Also should we have better description for these OPTS along the lines of 
{{yarn-env.sh}} ?
But these issues are not related to this JIRA and a separate JIRA can be filed 
for it...

Moreover, timelinereader command is meant only for ATSv2 and timelineserver 
only for ATSv1/1.5 (i.e. depending on timeline service version you are running)
This isn't very clear from the help. Maybe we can handle this here itself. cc 
[~sjlee0]


> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394115#comment-15394115
 ] 

Varun Saxena commented on YARN-5431:


[~rohithsharma], I would prefer to name the opts as 
{{YARN_TIMELINEREADER_OPTS}} instead of {{HADOOP_TIMELINEREADER_OPTS}} along 
the lines of YARN_RESOURCEMANAGER_OPTS.

Ideally HADOOP_NODEMANAGER_OPTS should also be named as YARN_NODEMANAGER_OPTS 
and so on. I see in YARN-1400 only RM opts were changed but not others.
Also should we have better description for these OPTS along the lines of 
{{yarn-env.sh}} ?
But these issues are not related to this JIRA and a separate JIRA can be filed 
for it...

Moreover, timelinereader command is meant only for ATSv2 and timelineserver 
only for ATSv1/1.5 (i.e. depending on timeline service version you are running)
This isn't very clear from the help. Maybe we can handle this here itself. cc 
[~sjlee0]


> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5431:

Attachment: YARN-5431.1.patch

Thanks [~aw] for pointing out all the files to be modified :-)
Updated the patch to fix the review comments, and fixed a typo in the 
yarn-env.sh file too.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch, YARN-5431.1.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5422) ContainerLocalizer log should be logged in separate log file.

2016-07-26 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5422:
---
Assignee: Surendra Singh Lilhore

> ContainerLocalizer log should be logged in separate log file.
> -
>
> Key: YARN-5422
> URL: https://issues.apache.org/jira/browse/YARN-5422
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>
> We should set up log4j for the ContainerLocalizer JVM. Currently it uses the 
> NM's log4j configuration and writes its logs to the NM's hadoop.log file.
> If the NM user and the application user are different, then the 
> ContainerLocalizer will not be able to write to the hadoop.log file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393988#comment-15393988
 ] 

Hudson commented on YARN-5431:
--

FAILURE: Integrated in Hadoop-trunk-Commit #10154 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10154/])
Revert "YARN-5431. TimelineReader daemon start should allow to pass its 
(varunsaxena: rev da6adf5151bb73511f7fb3c95118b91e0aec772a)
* hadoop-yarn-project/hadoop-yarn/bin/yarn


> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-07-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393978#comment-15393978
 ] 

Sean Busbey commented on YARN-5433:
---

Fixing the findbugs annotations jar is easy; we just need to exclude it. We 
already have the clean-room ALv2 reimplementation present.
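
To see where the LGPL artifact enters the dependency tree before adding the 
exclusion, something along these lines can be run from the affected module (a quick 
check, not part of the eventual fix):

{code}
# show which dependency pulls in the findbugs annotations jar
mvn dependency:tree -Dincludes=net.sourceforge.findbugs:annotations
{code}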

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Priority: Blocker
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393974#comment-15393974
 ] 

Varun Saxena commented on YARN-5431:


I have reverted the commit.
[~rohithsharma], kindly make the necessary change.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393975#comment-15393975
 ] 

Allen Wittenauer commented on YARN-5431:


It also needs to get added to yarn.cmd.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reopened YARN-5431:


> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5433) Audit dependencies for Category-X

2016-07-26 Thread Sean Busbey (JIRA)
Sean Busbey created YARN-5433:
-

 Summary: Audit dependencies for Category-X
 Key: YARN-5433
 URL: https://issues.apache.org/jira/browse/YARN-5433
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 3.0.0-alpha1
Reporter: Sean Busbey
Priority: Blocker


Recently phoenix has found some category-x dependencies in their build 
(PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
(HBASE-16260).

Since the Timeline Server work brought in both of these as dependencies, we 
should make sure we don't have any cat-x dependencies either. From what I've 
seen in those projects, our choice of HBase version shouldn't be impacted but 
our Phoenix one is.

Grepping our current dependency list for the timeline server component shows 
some LGPL:

{code}
...
[INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
...
{code}

I haven't checked the rest of the dependencies that have changed since 
HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393965#comment-15393965
 ] 

Varun Saxena commented on YARN-5431:


Oops, sorry, I did not know about that.

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393959#comment-15393959
 ] 

Varun Saxena commented on YARN-5431:


Committed to trunk. Thanks Rohith for your contribution

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393960#comment-15393960
 ] 

Allen Wittenauer commented on YARN-5431:


Where's the yarn-env.sh documentation?

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393955#comment-15393955
 ] 

Hudson commented on YARN-5431:
--

FAILURE: Integrated in Hadoop-trunk-Commit #10153 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10153/])
YARN-5431. TimelineReader daemon start should allow to pass its own 
(varunsaxena: rev 8e2614592ddbdb3a1067652bf9dc4a0929d7be16)
* hadoop-yarn-project/hadoop-yarn/bin/yarn


> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393945#comment-15393945
 ] 

Hadoop QA commented on YARN-5287:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 8s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 6s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820174/YARN-5287.004.patch |
| JIRA Issue | YARN-5287 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 6693c1fa18bf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f1a4863 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12504/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12504/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12504/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12504/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch, 
> YARN-5287.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been 

[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393940#comment-15393940
 ] 

Hadoop QA commented on YARN-5428:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 236 unchanged - 1 fixed = 238 total (was 237) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 58s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820173/YARN-5428.002.patch |
| JIRA Issue | YARN-5428 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 387e729fdb8c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f1a4863 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12505/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12505/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.

[jira] [Updated] (YARN-5431) TimeLineReader daemon start should allow to pass its own reader opts

2016-07-26 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5431:
---
Fix Version/s: (was: 3.0.0-alpha1)
   3.0.0-alpha2

> TimeLineReader daemon start should allow to pass its own reader opts
> 
>
> Key: YARN-5431
> URL: https://issues.apache.org/jira/browse/YARN-5431
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scripts, timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5431.0.patch
>
>
> In yarn script , timelinereader doesn't allow to pass reader_opts.
> {code}
> timelinereader)
> HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"  
> HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5404) Add the ability to split reverse zone subnets

2016-07-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393932#comment-15393932
 ] 

Shane Kumpf commented on YARN-5404:
---

Thanks for the review [~vvasudev]! I'll get these fixed up and a new patch 
submitted.

> Add the ability to split reverse zone subnets
> -
>
> Key: YARN-5404
> URL: https://issues.apache.org/jira/browse/YARN-5404
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5404-YARN-4757.001.patch, 
> YARN-5404-YARN-4757.001.patch, YARN-5404-YARN-4757.001.patch, 
> YARN-5404.001.patch
>
>
> In some environments, the entire container subnet may not be used exclusively 
> by containers (i.e. the YARN nodemanager host IPs may also be part of the 
> larger subnet). 
> As a result, the reverse lookup zones created by the YARN Registry DNS server 
> may not match those created on the forwarders.
> For example:
> Network: 172.27.0.0
> Subnet: 255.255.248.0
> Hosts:
> 0.27.172.in-addr.arpa
> 1.27.172.in-addr.arpa
> 2.27.172.in-addr.arpa
> 3.27.172.in-addr.arpa
> Containers
> 4.27.172.in-addr.arpa
> 5.27.172.in-addr.arpa
> 6.27.172.in-addr.arpa
> 7.27.172.in-addr.arpa
> YARN Registry DNS only allows for creating (as the total IP count is greater 
> than 256):
> 27.172.in-addr.arpa
> Provide configuration to further subdivide the subnets.
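
As a quick illustration of the subdivision being requested (using the addresses from 
the example above), splitting the /21 so that only the container /24 ranges are 
delegated to the registry DNS would yield zones like these:

{code}
# reverse zones for the container portion of 172.27.0.0/21 (third octets 4-7)
for octet in 4 5 6 7; do
  echo "${octet}.27.172.in-addr.arpa"
done
{code}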



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5402) Fix NoSuchMethodError in ClusterMetricsInfo

2016-07-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393923#comment-15393923
 ] 

Weiwei Yang commented on YARN-5402:
---

Hello [~sunilg]

I have not tried it on trunk yet; I can give it a try tomorrow to see if it also 
happens there.

> Fix NoSuchMethodError in ClusterMetricsInfo
> ---
>
> Key: YARN-5402
> URL: https://issues.apache.org/jira/browse/YARN-5402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: YARN-3368
>Reporter: Weiwei Yang
> Attachments: YARN-5402.YARN-3368.001.patch
>
>
> When trying out new UI on a cluster, the index page failed to load because of 
> error {code}java.lang.NoSuchMethodError: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getReservedMB()J{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5432) Lock already held by another process while LevelDB cache store creation for dag

2016-07-26 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-5432:
-
Assignee: Li Lu

> Lock already held by another process while LevelDB cache store creation for 
> dag
> ---
>
> Key: YARN-5432
> URL: https://issues.apache.org/jira/browse/YARN-5432
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Li Lu
>
> While running ATS stress tests, 15 concurrent ATS reads are made (python 
> threads issuing ws/v1/time/TEZ_DAG_ID, 
> ws/v1/time/TEZ_VERTEX_DI?primaryFilter=TEZ_DAG_ID: etc. calls).
> Note: the summary store for ATSv1.5 is RLD, but for each dag/application ATS 
> also creates a leveldb cache when vertex/task/taskattempt information is 
> queried from ATS.
>  
> Getting the following type of exception very frequently in the ATS logs:
> 2016-07-23 00:01:56,089 [1517798697@qtp-1198158701-850] INFO 
> org.apache.hadoop.service.AbstractService: Service 
> LeveldbCache.timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832
>  failed in state INITED; cause: 
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: lock 
> /grid/4/yarn_ats/atsv15_rld/timelineEntityGroupId_1469090881194_4832_application_1469090881194_4832-timeline-cache.ldb/LOCK:
>  already held by process
> at 
> org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
> at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
> at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.LevelDBCacheTimelineStore.serviceInit(LevelDBCacheTimelineStore.java:108)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityCacheItem.refreshCache(EntityCacheItem.java:113)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getCachedStore(EntityGroupFSTimelineStore.java:1021)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresFromCacheIds(EntityGroupFSTimelineStore.java:936)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getTimelineStoresForRead(EntityGroupFSTimelineStore.java:989)
> at 
> org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore.getEntities(EntityGroupFSTimelineStore.java:1041)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.doGetEntities(TimelineDataManager.java:168)
> at 
> org.apache.hadoop.yarn.server.timeline.TimelineDataManager.getEntities(TimelineDataManager.java:138)
> at 
> org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntities(TimelineWebServices.java:117)
> at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>

[jira] [Updated] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-26 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-5287:
-
Attachment: YARN-5287.004.patch

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch, 
> YARN-5287.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g. umask 077. The job failed 
> for the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750
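
A quick way to see the effect described above, independent of YARN (plain shell on a 
Linux box; the path is just a scratch directory):

{code}
# with a restrictive umask, newly created directories come out as 700,
# while the NM expects 750 on the usercache/appcache path
umask 077
mkdir /tmp/usercache-demo
stat -c '%a' /tmp/usercache-demo   # prints 700
chmod 750 /tmp/usercache-demo      # the permission the executor needs to set explicitly
{code}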



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5428) Allow for specifying the docker client configuration directory

2016-07-26 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5428:
--
Attachment: YARN-5428.002.patch

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
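
For reference, the docker client flag in question takes a directory rather than a 
file (the paths and image below are placeholders):

{code}
# point the docker client at a centrally managed client config directory
# containing config.json (e.g. registry credentials) instead of $HOME/.docker
docker --config /etc/hadoop/docker-client pull library/centos:7
{code}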



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


