[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-19 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: YARN-7169-branch-2.0002.patch


Uploading another patch, YARN-7169-branch-2.0002.patch, to check against the 
latest branch-2. 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: FlowRunDetails_Sleepjob.png, Metrics_Yarn_UI.png, 
> YARN-7169-YARN-3368_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, YARN-7169-branch-2.0001.patch, 
> YARN-7169-branch-2.0002.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added into Timeline Service v2's branch2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-10-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7117:
---
Attachment: YARN-7117.poc.1.patch

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf, 
> YARN-7117.poc.1.patch, YARN-7117.poc.patch
>
>
> Currently, Capacity Scheduler doesn't support auto creation of queues when 
> doing queue mapping. We have seen more and more use cases with complex queue 
> mapping policies configured to map applications to queues. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, {{capacity-scheduler.xml}} must be updated 
> and {{RMAdmin:refreshQueues}} invoked whenever a new user/group onboards. One 
> option to solve this problem is to automatically create queues when a new 
> user/group arrives.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-10-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7117:
---
Attachment: YARN-7117.poc.1.patch

Fixed minor issues with checking available capacity during activation of leaf 
queues under a parent queue, and added a check to disable auto creation beyond 
a configured {{parentQueue-prefix.auto-create-child-queue.max-queues}} limit, 
which defaults to 1000. 
Also added a configuration to disable auto queue creation when 
{{auto-create-child-queue.fail-on-exceeding-parent-capacity}} is set. This 
disables auto queue creation when the sum of child queue capacities >= the 
parent's guaranteed capacity.
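
For illustration only, a rough sketch of setting these knobs through the Hadoop 
{{Configuration}} API. The {{yarn.scheduler.capacity.<queue-path>}} prefix and 
the parent queue name {{root.users}} are assumptions based on this comment, not 
the final property keys in the patch:
{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Cap the number of auto-created leaf queues under the parent (default 1000).
conf.setInt(
    "yarn.scheduler.capacity.root.users.auto-create-child-queue.max-queues",
    1000);
// Disable further auto creation once the sum of child queue capacities
// reaches the parent's guaranteed capacity.
conf.setBoolean(
    "yarn.scheduler.capacity.root.users."
        + "auto-create-child-queue.fail-on-exceeding-parent-capacity",
    true);
{code}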

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf, 
> YARN-7117.poc.patch
>
>
> Currently, Capacity Scheduler doesn't support auto creation of queues when 
> doing queue mapping. We have seen more and more use cases with complex queue 
> mapping policies configured to map applications to queues. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, {{capacity-scheduler.xml}} must be updated 
> and {{RMAdmin:refreshQueues}} invoked whenever a new user/group onboards. One 
> option to solve this problem is to automatically create queues when a new 
> user/group arrives.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-10-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7117:
---
Attachment: (was: YARN-7117.poc.1.patch)

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf, 
> YARN-7117.poc.patch
>
>
> Currently, Capacity Scheduler doesn't support auto creation of queues when 
> doing queue mapping. We have seen more and more use cases with complex queue 
> mapping policies configured to map applications to queues. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, {{capacity-scheduler.xml}} must be updated 
> and {{RMAdmin:refreshQueues}} invoked whenever a new user/group onboards. One 
> option to solve this problem is to automatically create queues when a new 
> user/group arrives.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7243) Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager

2017-10-19 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7243:
---
Attachment: YARN-7243.006.patch

Submitted patch 006!

> Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager
> ---
>
> Key: YARN-7243
> URL: https://issues.apache.org/jira/browse/YARN-7243
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-7243.001.patch, YARN-7243.002.patch, 
> YARN-7243.003.patch, YARN-7243.004.patch, YARN-7243.005.patch, 
> YARN-7243.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212189#comment-16212189
 ] 

Xiao Chen commented on YARN-7261:
-

Thanks for revving, Yufei!

One minor comment:
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Skip downloading resource: " + key + " since it isn't"
      + " ready or it has been downloaded.");
}
{code}
Let's also add {{rsrc.getState()}} to this message. We can say {{"since it's in 
state: " + rsrc.getState()}}.

+1 pending that and pre-commit.

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch, YARN-7261.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7170) Improve bower dependencies for YARN UI v2

2017-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212177#comment-16212177
 ] 

Hudson commented on YARN-7170:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13115 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13115/])
YARN-7170. Improve bower dependencies for YARN UI v2. (Sunil G via (wangda: rev 
4afd308b62d2335f31064c05bfefaf2294d874b0)
* (edit) hadoop-project/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc


> Improve bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: YARN-7170.001.patch, YARN-7170.002.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and speed up 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212171#comment-16212171
 ] 

Haibo Chen commented on YARN-4511:
--

Uploaded a new patch now that the YARN-1011 branch is rebased on top of the 
latest trunk, which has YARN-7112.

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch, YARN-4511-YARN-1011.04.patch, 
> YARN-4511-YARN-1011.05.patch, YARN-4511-YARN-1011.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-4511:
-
Attachment: YARN-4511-YARN-1011.06.patch

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch, YARN-4511-YARN-1011.04.patch, 
> YARN-4511-YARN-1011.05.patch, YARN-4511-YARN-1011.06.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7170) Improve bower dependencies for YARN UI v2

2017-10-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7170:
-
Summary: Improve bower dependencies for YARN UI v2  (was: Investigate bower 
dependencies for YARN UI v2)

> Improve bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch, YARN-7170.002.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies to reduce the download size and speed up 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7351) High CPU usage issue in RegistryDNS

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212157#comment-16212157
 ] 

Hadoop QA commented on YARN-7351:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
24s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | YARN-7351 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893166/YARN-7351.yarn-native-services.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9845678599ce 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 16ecb9c |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18045/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18045/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212156#comment-16212156
 ] 

Yufei Gu commented on YARN-7261:


Thanks for the review, [~xiaochen]. Uploaded patch v2 to address your comment. 
It is probably a good idea. The only downside is that the log would be very 
verbose in some cases. That shouldn't be a big issue, though.

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch, YARN-7261.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7261:
---
Attachment: YARN-7261.002.patch

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch, YARN-7261.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212150#comment-16212150
 ] 

Haibo Chen commented on YARN-4511:
--

TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable() is a 
well-known failure.
The sls test failures are likely due to YARN-7112. Will rebase the YARN-1011 
branch to see if they go away.

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch, YARN-4511-YARN-1011.04.patch, 
> YARN-4511-YARN-1011.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7276) Federation Router Web Service fixes

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212148#comment-16212148
 ] 

Hadoop QA commented on YARN-7276:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 20 new + 4 unchanged - 0 fixed = 24 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | YARN-7276 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893168/YARN-7276.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f38bd23b6f47 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ce7cf66 |
| Default Java | 1.8.0_131 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18044/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18044/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18044/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212124#comment-16212124
 ] 

Hadoop QA commented on YARN-4511:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 6s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
21s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
19s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} root: The patch generated 0 new + 535 unchanged - 2 
fixed = 535 total (was 537) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 49s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Increment of volatile field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.numGuaranteedContainers
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.allocateContainer(RMContainer,
 boolean)  At SchedulerNode.java:in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.allocateContainer(RMContainer,
 boolean)  At SchedulerNode.java:[line 177] |
|  |  Increment of volatile field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.numOpportunisticContainers
 

[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212121#comment-16212121
 ] 

Hadoop QA commented on YARN-4511:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
42s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} root: The patch generated 0 new + 534 unchanged - 2 
fixed = 534 total (was 536) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 46s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Increment of volatile field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.numGuaranteedContainers
 in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.allocateContainer(RMContainer,
 boolean)  At SchedulerNode.java:in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.allocateContainer(RMContainer,
 boolean)  At SchedulerNode.java:[line 177] |
|  |  Increment of volatile field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode.numOpportunisticContainers
 

[jira] [Updated] (YARN-7357) Several methods in TestZKRMStateStore.TestZKRMStateStoreTester.TestZKRMStateStoreInternal should have @Override annotations

2017-10-19 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao updated YARN-7357:
---
Attachment: YARN-7357.001.patch

Hi, [~templedf]. I submitted a patch to fix it. Please review it. Thanks!

> Several methods in 
> TestZKRMStateStore.TestZKRMStateStoreTester.TestZKRMStateStoreInternal should 
> have @Override annotations
> ---
>
> Key: YARN-7357
> URL: https://issues.apache.org/jira/browse/YARN-7357
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-7357.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7276) Federation Router Web Service fixes

2017-10-19 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-7276:
--
Attachment: YARN-7276.001.patch

> Federation Router Web Service fixes
> ---
>
> Key: YARN-7276
> URL: https://issues.apache.org/jira/browse/YARN-7276
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7276.000.patch, YARN-7276.001.patch
>
>
> While testing YARN-3661, I found a few issues with the REST interface in the 
> Router:
> * No support for empty content (error 204)
> * Media type support
> * Attributes in {{FederationInterceptorREST}}
> * Support for empty states and labels
> * DefaultMetricsSystem initialization is missing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7351) High CPU usage issue in RegistryDNS

2017-10-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7351:
--
Attachment: YARN-7351.yarn-native-services.03.patch

> High CPU usage issue in RegistryDNS
> ---
>
> Key: YARN-7351
> URL: https://issues.apache.org/jira/browse/YARN-7351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7351.yarn-native-services.01.patch, 
> YARN-7351.yarn-native-services.02.patch, 
> YARN-7351.yarn-native-services.03.patch, 
> YARN-7351.yarn-native-services.03.patch
>
>
> Thanks [~aw] for finding this issue.
> The current RegistryDNS implementation is always running on high CPU and 
> pretty much eats one core. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7276) Federation Router Web Service fixes

2017-10-19 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-7276:
--
Attachment: YARN-7276.000.patch

> Federation Router Web Service fixes
> ---
>
> Key: YARN-7276
> URL: https://issues.apache.org/jira/browse/YARN-7276
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-7276.000.patch
>
>
> While testing YARN-3661, I found a few issues with the REST interface in the 
> Router:
> * No support for empty content (error 204)
> * Media type support
> * Attributes in {{FederationInterceptorREST}}
> * Support for empty states and labels
> * DefaultMetricsSystem initialization is missing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212095#comment-16212095
 ] 

Xiao Chen commented on YARN-7261:
-

Thanks [~yufeigu] for creating the jira and providing a patch.

For context, Yufei and I have seen an intermittent issue where localization 
took very long. It is suspected that the copying from HDFS took long, but the 
HDFS metrics/logs don't show any smoking guns. We'd like to use this jira to 
add more debugging information.

The log we collected currently looks like:
{noformat}
2017-09-15 10:55:50,738 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Created localizer for container_e70_1505214525894_75227_01_14
2017-09-15 10:55:50,738 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Downloading public rsrc:{ 
hdfs://nameservice1/cached/pub/deviceDetailsQuery_1505472717000.xml, 
1505472808731, FILE, null }
...
2017-09-15 10:58:38,760 DEBUG org.apache.hadoop.yarn.util.FSDownload: Changing 
permissions for path 
file:/var/hdfs/5/yarn/nm/filecache/7363_tmp/deviceDetailsQuery_1505472717000.xml
 to perm r-xr-xr-x
2017-09-15 10:58:38,775 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_e70_1505214525894_75227_01_14 transitioned from 
LOCALIZING to LOCALIZED
{noformat}
But no details on what happened in the 3 minutes.

The patch LGTM. One question:
Do you think it would be helpful to add a debug message to 
{{ResourceLocalizationService#addResource}} to indicate when the following 
conditions 1 & 2 are false? (A sketch follows the quoted comment below.)
{code}
  /*
   * Here multiple containers may request the same resource. So we need
   * to start downloading only when
   * 1) ResourceState == DOWNLOADING
   * 2) We are able to acquire non blocking semaphore lock.
   * If not we will skip this resource as either it is getting downloaded
   * or it FAILED / LOCALIZED.
   */
{code}
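If so, here is a minimal sketch of what those messages might look like, 
assuming {{rsrc}} is the {{LocalizedResource}} being considered and 
{{tryAcquire()}} is the non-blocking semaphore check; these names are 
assumptions, not the actual locals in {{addResource}}:
{code}
// Hypothetical sketch for ResourceLocalizationService#addResource: log why
// the resource is skipped when condition 1) or 2) above does not hold.
if (rsrc.getState() != ResourceState.DOWNLOADING) {
  LOG.debug("Skip downloading resource: " + key + " since it's in state: "
      + rsrc.getState());
} else if (!rsrc.tryAcquire()) {
  LOG.debug("Skip downloading resource: " + key
      + " since another localizer is already downloading it.");
}
{code}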

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7243) Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212092#comment-16212092
 ] 

Hadoop QA commented on YARN-7243:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-7243 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7243 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893157/YARN-7243.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18043/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager
> ---
>
> Key: YARN-7243
> URL: https://issues.apache.org/jira/browse/YARN-7243
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-7243.001.patch, YARN-7243.002.patch, 
> YARN-7243.003.patch, YARN-7243.004.patch, YARN-7243.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7243) Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager

2017-10-19 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7243:
---
Attachment: YARN-7243.005.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager
> ---
>
> Key: YARN-7243
> URL: https://issues.apache.org/jira/browse/YARN-7243
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-7243.001.patch, YARN-7243.002.patch, 
> YARN-7243.003.patch, YARN-7243.004.patch, YARN-7243.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7243) Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager

2017-10-19 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-7243:
---
Attachment: (was: YARN-7243.005.patch)

> Moving logging APIs over to slf4j in hadoop-yarn-server-resourcemanager
> ---
>
> Key: YARN-7243
> URL: https://issues.apache.org/jira/browse/YARN-7243
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-7243.001.patch, YARN-7243.002.patch, 
> YARN-7243.003.patch, YARN-7243.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212072#comment-16212072
 ] 

Hadoop QA commented on YARN-7372:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 12 new + 3 unchanged - 0 fixed = 15 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  1s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | YARN-7372 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893140/YARN-7372.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c404ba66c075 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ce7cf66 |
| Default Java | 1.8.0_131 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18041/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18041/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18041/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212062#comment-16212062
 ] 

Hadoop QA commented on YARN-7372:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 12 new + 3 unchanged - 0 fixed = 15 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 43s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | YARN-7372 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893140/YARN-7372.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5da215806607 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0f1c037 |
| Default Java | 1.8.0_131 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18039/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18039/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18039/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212045#comment-16212045
 ] 

Hadoop QA commented on YARN-7372:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 12 new + 3 unchanged - 0 fixed = 15 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 53s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | YARN-7372 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893140/YARN-7372.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8f2253da5eea 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7b4b018 |
| Default Java | 1.8.0_131 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18038/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18038/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-4511:
-
Attachment: YARN-4511-YARN-1011.05.patch

Updated patch based on Miklos' review.

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch, YARN-4511-YARN-1011.04.patch, 
> YARN-4511-YARN-1011.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7372:
-
Attachment: YARN-7372.00.patch

I threw in a quick fix that waits up to a bounded time for the container type 
update. Feel free to take it over.
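
For illustration, the waiting approach might look roughly like the sketch below 
(not the attached patch; fetchStatus() is a hypothetical stand-in for the 
test's container-status lookup):

{code}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ExecutionType;

// Poll until the asynchronous update becomes visible instead of asserting
// right after sending the update request; fetchStatus() is a hypothetical
// helper for the test's container-status lookup.
void waitForOpportunistic(ContainerId id)
    throws TimeoutException, InterruptedException {
  GenericTestUtils.waitFor(
      () -> fetchStatus(id).getExecutionType() == ExecutionType.OPPORTUNISTIC,
      100,       // re-check every 100 ms
      10_000);   // only fail the test after 10 s without the update
}
{code}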

> TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic
>  is flaky 
> 
>
> Key: YARN-7372
> URL: https://issues.apache.org/jira/browse/YARN-7372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unit-test
> Attachments: YARN-7372.00.patch
>
>
> testContainerUpdateExecTypeGuaranteedToOpportunistic waits for the container 
> to be running before it sends the container update request.
> The container update is handled asynchronously in the node manager, and it 
> does not trigger a visible state transition. If the node manager event 
> dispatch thread is slow, the unit test can fail at the assertion 
> {code} Assert.assertEquals(ExecutionType.OPPORTUNISTIC, 
> status.getExecutionType());{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7359) TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic

2017-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211969#comment-16211969
 ] 

Hudson commented on YARN-7359:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13112 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13112/])
YARN-7359. TestAppManager.testQueueSubmitWithNoPermission() should be (yufei: 
rev 7b4b0187806601e33f5a88d48991e7c12ee4419f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java


> TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic
> -
>
> Key: YARN-7359
> URL: https://issues.apache.org/jira/browse/YARN-7359
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-7359.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211960#comment-16211960
 ] 

Arun Suresh commented on YARN-7372:
---

Thanks for raising this, [~haibochen]. [~kartheek], can you take a look?

> TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic
>  is flaky 
> 
>
> Key: YARN-7372
> URL: https://issues.apache.org/jira/browse/YARN-7372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unit-test
>
> testContainerUpdateExecTypeGuaranteedToOpportunistic waits for the container 
> to be running before it sends the container update request.
> The container update is handled asynchronously in the node manager, and it 
> does not trigger a visible state transition. If the node manager event 
> dispatch thread is slow, the unit test can fail at the assertion 
> {code} Assert.assertEquals(ExecutionType.OPPORTUNISTIC, 
> status.getExecutionType());{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7372:
-
Description: 
testContainerUpdateExecTypeGuaranteedToOpportunistic waits for the container to 
be running before it sends the container update request.
The container update is handled asynchronously in the node manager, and it does 
not trigger a visible state transition. If the node manager event dispatch 
thread is slow, the unit test can fail at the assertion 
{code} Assert.assertEquals(ExecutionType.OPPORTUNISTIC, 
status.getExecutionType());{code}

> TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic
>  is flaky 
> 
>
> Key: YARN-7372
> URL: https://issues.apache.org/jira/browse/YARN-7372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unit-test
>
> testContainerUpdateExecTypeGuaranteedToOpportunistic waits for the container 
> to be running before it sends the container update request.
> The container update is handled asynchronously in the node manager, and it 
> does not trigger a visible state transition. If the node manager event 
> dispatch thread is slow, the unit test can fail at the assertion 
> {code} Assert.assertEquals(ExecutionType.OPPORTUNISTIC, 
> status.getExecutionType());{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211956#comment-16211956
 ] 

Haibo Chen commented on YARN-7372:
--

cc [~asuresh]

> TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic
>  is flaky 
> 
>
> Key: YARN-7372
> URL: https://issues.apache.org/jira/browse/YARN-7372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: unit-test
>
> testContainerUpdateExecTypeGuaranteedToOpportunistic waits for the container 
> to be running before it sends the container update request.
> The container update is handled asynchronously in the node manager, and it 
> does not trigger a visible state transition. If the node manager event 
> dispatch thread is slow, the unit test can fail at the assertion 
> {code} Assert.assertEquals(ExecutionType.OPPORTUNISTIC, 
> status.getExecutionType());{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7372) TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic is flaky

2017-10-19 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-7372:


 Summary: 
TestContainerSchedulerQueuing.testContainerUpdateExecTypeGuaranteedToOpportunistic
 is flaky 
 Key: YARN-7372
 URL: https://issues.apache.org/jira/browse/YARN-7372
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0-alpha3
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7217:

Description: 
The API service for deploying and managing YARN services has several 
limitations.

The {{updateService}} API provides multiple functions:

# Stop a service.
# Start a service.
# Increase or decrease the number of containers.  (This was removed in YARN-7323.)

This overloading is buggy, depending on how the configuration should be applied.

h4. Scenario 1
A user retrieves the Service object from the getService call, and the Service 
object contains state: STARTED.  The user would like to increase the number of 
containers for the deployed service.  The JSON has been updated to increase the 
container count.  The PUT method does not actually increase the container count.

h4. Scenario 2
A user retrieves the Service object from the getService call, and the Service 
object contains state: STOPPED.  The user would like to make an environment 
configuration change.  The configuration does not get updated after the PUT 
method.

It is possible to address this by rearranging the START/STOP logic after a 
configuration update.  However, other potential combinations can still break 
the PUT method.  For example, a user may want to make configuration changes but 
not restart the service until a later time.

h4. Scenario 3
There is no API to list all deployed applications by the same user.

h4. Scenario 4
Desired state (spec) and current state are represented by the same Service 
object.  There is no easy way to tell whether "state" is the desired state to 
reach or the current state of the service.  It would be nice to be able to 
retrieve both the desired state and the current state through separate entry 
points.  Implementing /spec and /state would resolve this problem.

h4. Scenario 5
Listing all services deployed by the same user can trigger a directory listing 
operation on the namenode if HDFS is used as metadata storage.  When hundreds 
of users use the Service UI to view or deploy applications, this can amount to 
a denial-of-service attack on the namenode.  The sparse, small metadata files 
also reduce the efficiency of namenode memory usage.  Hence, a cache layer for 
service metadata can reduce namenode stress.

h3. Proposed change

ApiService can split the PUT method into two PUT methods, one for configuration 
changes and one for operation changes.  The new API could look like:

{code}
@PUT
/ws/v1/services/[service_name]/spec

Request Data:
{
  "name": "amp",
  "components": [
{
  "name": "mysql",
  "number_of_containers": 2,
  "artifact": {
"id": "centos/mysql-57-centos7:latest",
"type": "DOCKER"
  },
  "run_privileged_container": false,
  "launch_command": "",
  "resource": {
"cpus": 1,
"memory": "2048"
  },
  "configuration": {
"env": {
  "MYSQL_USER":"${USER}",
  "MYSQL_PASSWORD":"password"
}
  }
 }
  ],
  "quicklinks": {
"Apache Document Root": 
"http://httpd.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/;,
"PHP MyAdmin": "http://phpmyadmin.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/;
  }
}
{code}

{code}
@PUT
/ws/v1/services/[service_name]/state

Request data:
{
  "name": "amp",
  "components": [
{
  "name": "mysql",
  "state": "STOPPED"
 }
  ]
}
{code}

SOLR can be used to cache the Yarnfile to improve lookup performance and reduce 
the namenode stress from small files and high-frequency lookups.  SOLR is 
chosen for caching metadata because its indexing feature can also be used to 
build full-text search for an application catalog.

For a service that requires configuration changes to increase or decrease node 
count, the calling sequence is:

{code}
# GET /ws/v1/services/{service_name}/spec
# Change number_of_containers to desired number.
# PUT /ws/v1/services/{service_name}/spec to update the spec.
# PUT /ws/v1/services/{service_name}/state to stop existing service.
# PUT /ws/v1/services/{service_name}/state to start service.
{code}

For components that can increase node count without rewriting configuration:

{code}
# GET /ws/v1/services/{service_name}/spec
# Change number_of_containers to desired number.
# PUT /ws/v1/services/{service_name}/spec to update the spec.
# PUT /ws/v1/services/{service_name}/component/{component_name} to change node 
count.
{code}
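
For illustration, a client driving the stop/start steps of the first sequence 
might look like the sketch below (host, port, and the exact payloads are 
assumptions based on the examples above, not a finalized API):

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ServiceStateClient {
  static int put(String endpoint, String json) throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(endpoint).openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(json.getBytes(StandardCharsets.UTF_8));
    }
    return conn.getResponseCode();  // e.g. 200/202 on success
  }

  public static void main(String[] args) throws Exception {
    String base = "http://rm-host:8088/ws/v1/services/amp";  // illustrative
    // Stop, then start, via the proposed /state endpoint.
    put(base + "/state", "{\"name\":\"amp\",\"components\":"
        + "[{\"name\":\"mysql\",\"state\":\"STOPPED\"}]}");
    put(base + "/state", "{\"name\":\"amp\",\"components\":"
        + "[{\"name\":\"mysql\",\"state\":\"STARTED\"}]}");
  }
}
{code}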


  was:
API service for deploy, and manage YARN services have several limitations.

{{updateService}} API provides multiple functions:

# Stopping a service.
# Start a service.
# Increase or decrease number of containers.  (This was removed in YARN-7323).

The overloading is buggy depending on how the configuration should be applied.

h4. Scenario 1
A user retrieves Service object from getService call, and the Service object 
contains state: STARTED.  The user would like to increase number of containers 
for the deployed service.  The JSON has been updated to increase container 
count.  The PUT 

[jira] [Commented] (YARN-7359) TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic

2017-10-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211944#comment-16211944
 ] 

Yufei Gu commented on YARN-7359:


Committed to trunk, branch-3.0 and branch-2. Thanks for working on this, 
[~haibo.chen].

> TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic
> -
>
> Key: YARN-7359
> URL: https://issues.apache.org/jira/browse/YARN-7359
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-7359.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-10-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6921:
-
Summary: Allow resource request to opt out of oversubscription in Fair 
Scheduler  (was: Allow resource request to opt out of oversubscription)

> Allow resource request to opt out of oversubscription in Fair Scheduler
> ---
>
> Key: YARN-6921
> URL: https://issues.apache.org/jira/browse/YARN-6921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> Guaranteed container requests, whether their enforce tag is true or not, are 
> by default eligible for oversubscription, and thus can get OPPORTUNISTIC 
> container allocations. We should allow them to opt out when their enforce tag 
> is set to true.
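
For reference, on the request side such an opt-out would be expressed roughly 
as below (a sketch only; making Fair Scheduler honor the flag is what this 
JIRA proposes):

{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class OptOutRequestSketch {
  static ResourceRequest guaranteedOnly() {
    // enforceExecutionType=true asks the scheduler not to substitute the
    // execution type, i.e. this GUARANTEED request opts out of
    // oversubscription.
    return ResourceRequest.newInstance(
        Priority.newInstance(1),
        ResourceRequest.ANY,
        Resource.newInstance(2048, 2),  // 2048 MB, 2 vcores
        1,                              // one container
        true,                           // relax locality
        null,                           // no node-label expression
        ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, true));
  }
}
{code}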



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7359) TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic

2017-10-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211940#comment-16211940
 ] 

Yufei Gu commented on YARN-7359:


+1. 

> TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic
> -
>
> Key: YARN-7359
> URL: https://issues.apache.org/jira/browse/YARN-7359
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: YARN-7359.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211933#comment-16211933
 ] 

Hadoop QA commented on YARN-7261:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 13s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0de40f0 |
| JIRA Issue | YARN-7261 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893136/YARN-7261.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4cfcfdc79748 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c1b08ba |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18037/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18037/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add debug message in class FSDownload for better download latency monitoring
> 

[jira] [Commented] (YARN-7294) TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with Fair scheduler

2017-10-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211931#comment-16211931
 ] 

Yufei Gu commented on YARN-7294:


Committed to trunk, branch-3.0 and branch-2. Thanks for working on this, 
[~miklos.szeg...@cloudera.com].

> TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with 
> Fair scheduler
> --
>
> Key: YARN-7294
> URL: https://issues.apache.org/jira/browse/YARN-7294
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-7294.000.patch
>
>
> This issue exists because FS needs an update after allocation, plus 
> additional node updates, before all the requests are fulfilled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-10-19 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211895#comment-16211895
 ] 

Botong Huang commented on YARN-7102:


The unit test failures for YARN-7102-branch-2.8.v10.patch seem unrelated. I 
cannot repro them locally either...

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102-branch-2.8.v10.patch, 
> YARN-7102-branch-2.8.v9.patch, YARN-7102-branch-2.v9.patch, 
> YARN-7102.v1.patch, YARN-7102.v2.patch, YARN-7102.v3.patch, 
> YARN-7102.v4.patch, YARN-7102.v5.patch, YARN-7102.v6.patch, 
> YARN-7102.v7.patch, YARN-7102.v8.patch, YARN-7102.v9.patch
>
>
> ResponseId overflow problem in the NM-RM heartbeat. This is the same as the 
> AM-RM heartbeat issue in YARN-6640; please refer to YARN-6640 for details.
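
For context, the wrap-around handling at the heart of the fix can be sketched 
as below (an illustration of the idea from YARN-6640, not the patch itself):

{code}
public class ResponseIdSketch {
  // Heartbeats carry an increasing responseId and the receiver expects
  // lastResponseId + 1; wrap to 0 at MAX_INT instead of overflowing to a
  // negative value, so the check keeps working after 2^31 heartbeats.
  static int getNextResponseId(int responseId) {
    return (responseId + 1) & Integer.MAX_VALUE;  // clears the sign bit
  }
}
{code}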



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7261:
---
Attachment: (was: YARN-7261.001.patch)

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7261:
---
Attachment: YARN-7261.001.patch

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7261) Add debug message in class FSDownload for better download latency monitoring

2017-10-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7261:
---
Attachment: YARN-7261.001.patch

> Add debug message in class FSDownload for better download latency monitoring
> 
>
> Key: YARN-7261
> URL: https://issues.apache.org/jira/browse/YARN-7261
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7261.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5516) Add REST API for periodicity

2017-10-19 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211890#comment-16211890
 ] 

Subru Krishnan commented on YARN-5516:
--

Thanks [~seanpo03] for the patch. It looks mostly good except for some minor 
comments:
* You'll have to rebase now that YARN-7311 is in.
* Can you also update the REST API documentation to include the recurrence 
expression?
* I see you have added a test, but would it be possible to run the existing 
tests with both recurring and non-recurring reservations? In the former case, 
the reservation repeats periodically, which can be checked by wrapping the 
assertions in a loop over the periodicity.

Additionally, I agree that we have to update the {{ReservationDefinition}} API 
documentation with the constraints you observed:
# Recurrence expression as a long must be greater than the duration (deadline – 
arrival).
# MAX_PERIOD must be divisible by the recurrence expression.

We have already covered (1) in {{ReservationInputValidator}}. Can you check 
that (2) is also there? If not, please add it (along with 
{{TestReservationInputValidator}}).
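
A sketch of what the two checks could look like (names are illustrative; the 
real check belongs in {{ReservationInputValidator}} as noted):

{code}
public class RecurrenceValidationSketch {
  // Assumed stand-in for the system's maximum periodicity constant.
  static final long MAX_PERIOD = 86_400_000L;  // e.g. one day in ms

  static void validate(long recurrence, long arrival, long deadline) {
    long duration = deadline - arrival;
    if (recurrence <= duration) {        // constraint (1), already covered
      throw new IllegalArgumentException(
          "Recurrence " + recurrence + " must exceed duration " + duration);
    }
    if (MAX_PERIOD % recurrence != 0) {  // constraint (2), to be added
      throw new IllegalArgumentException(
          "MAX_PERIOD must be divisible by recurrence " + recurrence);
    }
  }
}
{code}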

Thanks!




> Add REST API for periodicity
> 
>
> Key: YARN-5516
> URL: https://issues.apache.org/jira/browse/YARN-5516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sangeetha Abdu Jyothi
>Assignee: Sean Po
> Attachments: YARN-5516.v001.patch, YARN-5516.v002.patch
>
>
> YARN-5516 changing REST API of the reservation system to support periodicity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7013) merge related work for YARN-3926 branch

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211888#comment-16211888
 ] 

Hadoop QA commented on YARN-7013:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
16s{color} | {color:red} Docker failed to build yetus/hadoop:24ac7c6. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7013 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893134/YARN-7013.branch-3.0.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18036/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> merge related work for YARN-3926 branch
> ---
>
> Key: YARN-7013
> URL: https://issues.apache.org/jira/browse/YARN-7013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Daniel Templeton
> Attachments: YARN-7013.001.patch, YARN-7013.002.patch, 
> YARN-7013.003.patch, YARN-7013.004.patch, YARN-7013.005.patch, 
> YARN-7013.006.patch, YARN-7013.008.patch, YARN-7013.branch-3.0.000.patch, 
> YARN-7013.branch-3.0.001.patch, YARN-7013.branch-3.0.002.patch, 
> YARN-7013.branch-3.0.003.patch
>
>
> To run jenkins for whole branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7013) merge related work for YARN-3926 branch

2017-10-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7013:
---
Attachment: YARN-7013.branch-3.0.003.patch

Rebased to latest branch-3.0.

> merge related work for YARN-3926 branch
> ---
>
> Key: YARN-7013
> URL: https://issues.apache.org/jira/browse/YARN-7013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Daniel Templeton
> Attachments: YARN-7013.001.patch, YARN-7013.002.patch, 
> YARN-7013.003.patch, YARN-7013.004.patch, YARN-7013.005.patch, 
> YARN-7013.006.patch, YARN-7013.008.patch, YARN-7013.branch-3.0.000.patch, 
> YARN-7013.branch-3.0.001.patch, YARN-7013.branch-3.0.002.patch, 
> YARN-7013.branch-3.0.003.patch
>
>
> To run jenkins for whole branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7353) Docker permitted volumes don't properly check for directories

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211874#comment-16211874
 ] 

Hadoop QA commented on YARN-7353:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
15s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7353 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893133/YARN-7353.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18035/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Docker permitted volumes don't properly check for directories
> -
>
> Key: YARN-7353
> URL: https://issues.apache.org/jira/browse/YARN-7353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7353.001.patch, YARN-7353.002.patch, 
> YARN-7353.003.patch
>
>
> {noformat:title=docker-util.c:check_mount_permitted()}
> // directory check
> permitted_mount_len = strlen(permitted_mounts[i]);
> if (permitted_mount_len > 0
> && permitted_mounts[i][permitted_mount_len - 1] == '/') {
>   if (strncmp(normalized_path, permitted_mounts[i], permitted_mount_len) 
> == 0) {
> ret = 1;
> break;
>   }
> }
> {noformat}
> This code will treat "/home/" as a directory, but not "/home"
> {noformat}
> [  FAILED  ] 3 tests, listed below:
> [  FAILED  ] TestDockerUtil.test_check_mount_permitted
> [  FAILED  ] TestDockerUtil.test_normalize_mounts
> [  FAILED  ] TestDockerUtil.test_add_rw_mounts
> {noformat}
> Additionally, YARN-6623 introduced new test failures in the C++ 
> container-executor test "cetest"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7294) TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with Fair scheduler

2017-10-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211870#comment-16211870
 ] 

Yufei Gu commented on YARN-7294:


OK. +1.

> TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with 
> Fair scheduler
> --
>
> Key: YARN-7294
> URL: https://issues.apache.org/jira/browse/YARN-7294
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7294.000.patch
>
>
> This issue exists because FS needs an update after allocation, plus 
> additional node updates, before all the requests are fulfilled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211832#comment-16211832
 ] 

Eric Yang edited comment on YARN-7217 at 10/19/17 10:29 PM:


Hi [~billie.rinaldi], thank you for the review.

# Does the YARN resource manager have privileges to all users' home 
directories?  In a non-secure cluster, the answer is yes.  However, this might 
not be true in a secure cluster.  This is one of the reasons that I did not 
implement getServicesList on fs.  If we want getServiceList on fs implemented 
for non-secure clusters, it can be included.
# I will fix actionBuild.
# I think it's best to add the configuration to YarnCommon; there are unit 
tests that validate configuration additions, and it reduces the chance of 
duplicated names.
# I will fix the Solr version in hadoop-project/pom.xml.
# YARN-7193 covers the global application definition from your last point.  It 
provides a set of APIs to register a Yarnfile, along with additional metadata 
like organization, description, icon, and download, into Solr.  I couldn't get 
anyone to review that JIRA; hence, I broke the functionality down into small 
pieces that can be consumed.  The first step is to offload the storing of the 
Yarnfile from HDFS to SOLR as an improvement for this issue.  An application 
catalog can be built on top of what is proposed here.  Once the application 
catalog is built, we can start to think about how the exchange of money and 
software would take place for an app store.  The app-store idea might not get 
traction in Apache because it is a non-profit organization.  We can think about 
that idea later.


was (Author: eyang):
Hi [~billie.rinaldi], thank you for the review.

# Does YARN resource manager have privileges to all users's home directory?  In 
non-secure cluster, the answer is yes.  However, this might not be true in 
secure cluster.  This is one of the reasons that I did not implement 
getServicesList on fs.  If we want getServiceList on fs implemented for 
non-secure cluster, it can be included.
# actionBuild is validating and persist application on HDFS.  There is no 
external caller to actionBuild from REST API.  Perhaps, make actionBuild a 
private method?  actionCreate is the API that external callers uses.
# I think it's best to add configuration to YarnCommon, there are unit tests 
that valid the configuration addition, and reduce chance of using duplicated 
names.
# I will fix Solr version to hadoop-project/pom.xml
# YARN-7193 includes definition for global application definition in your last 
point.  It provides a set of API to register Yarnfile with additional metadata 
like organization, description, icon, and download into Solr.  I couldn't get 
anyone to review that JIRA, hence, I break down the functionality into small 
bits that can be consumed.  The first step is to offload storing of Yarnfile 
from HDFS to SOLR as improvement for this issue.  Application catalog can be 
built on top of what is proposed here.  Once application catalog is built, then 
we can start to think about how the money and software exchange take place for 
appstore.  Appstore idea might not get traction in Apache because its a 
non-profit organization.  We can think about that idea later.

> Improve API service usability for updating service spec and state
> -
>
> Key: YARN-7217
> URL: https://issues.apache.org/jira/browse/YARN-7217
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7217.yarn-native-services.001.patch, 
> YARN-7217.yarn-native-services.002.patch, 
> YARN-7217.yarn-native-services.003.patch, 
> YARN-7217.yarn-native-services.004.patch
>
>
> The API service for deploying and managing YARN services has several 
> limitations.
> The {{updateService}} API provides multiple functions:
> # Stop a service.
> # Start a service.
> # Increase or decrease the number of containers.  (This was removed in 
> YARN-7323.)
> This overloading is buggy, depending on how the configuration should be 
> applied.
> h4. Scenario 1
> A user retrieves the Service object from the getService call, and the Service 
> object contains state: STARTED.  The user would like to increase the number 
> of containers for the deployed service.  The JSON has been updated to 
> increase the container count.  The PUT method does not actually increase the 
> container count.
> h4. Scenario 2
> A user retrieves the Service object from the getService call, and the Service 
> object contains state: STOPPED.  The user would like to make an environment 
> configuration change.  The configuration does not get updated after the PUT 
> method.
> It is possible to address this by rearranging the START/STOP logic after a 
> configuration update.  However, other potential combinations can still break 
> the PUT 

[jira] [Updated] (YARN-7353) Docker permitted volumes don't properly check for directories

2017-10-19 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7353:
--
Attachment: YARN-7353.003.patch

Attaching a patch that fixes the docker permitted-volumes directory check and 
also removes the usage of "/bin" binaries from all of the tests, to get rid of 
the problems with symlinks on CentOS/RHEL.

[~vvasudev], I had already started working on a patch before you put up your 
comment, so I didn't use your changes. But I believe the changes are similar.

> Docker permitted volumes don't properly check for directories
> -
>
> Key: YARN-7353
> URL: https://issues.apache.org/jira/browse/YARN-7353
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7353.001.patch, YARN-7353.002.patch, 
> YARN-7353.003.patch
>
>
> {noformat:title=docker-util.c:check_mount_permitted()}
> // directory check
> permitted_mount_len = strlen(permitted_mounts[i]);
> if (permitted_mount_len > 0
> && permitted_mounts[i][permitted_mount_len - 1] == '/') {
>   if (strncmp(normalized_path, permitted_mounts[i], permitted_mount_len) 
> == 0) {
> ret = 1;
> break;
>   }
> }
> {noformat}
> This code will treat "/home/" as a directory, but not "/home"
> {noformat}
> [  FAILED  ] 3 tests, listed below:
> [  FAILED  ] TestDockerUtil.test_check_mount_permitted
> [  FAILED  ] TestDockerUtil.test_normalize_mounts
> [  FAILED  ] TestDockerUtil.test_add_rw_mounts
> {noformat}
> Additionally, YARN-6623 introduced new test failures in the C++ 
> container-executor test "cetest"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7371) NPE in ServiceMaster after RM is restarted and then the ServiceMaster is killed

2017-10-19 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-7371:
---

 Summary: NPE in ServiceMaster after RM is restarted and then the 
ServiceMaster is killed
 Key: YARN-7371
 URL: https://issues.apache.org/jira/browse/YARN-7371
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chandni Singh


java.lang.NullPointerException
at 
org.apache.hadoop.yarn.service.ServiceScheduler.recoverComponents(ServiceScheduler.java:313)
at 
org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:265)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:150)

Steps:
1. Stopped RM and then started it
2. Application was still running
3. Killed the ServiceMaster to check if it recovers
4. Next attempt failed with the above exception



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7338) Support same origin policy for cross site scripting prevention.

2017-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211838#comment-16211838
 ] 

Hudson commented on YARN-7338:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13109 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13109/])
YARN-7338. Support same origin policy for cross site scripting (wangda: rev 
298b174f663a06e67098f7b5cd645769c1a98a80)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java


> Support same origin policy for cross site scripting prevention.
> ---
>
> Key: YARN-7338
> URL: https://issues.apache.org/jira/browse/YARN-7338
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vrushali C
>Assignee: Sunil G
> Fix For: 3.0.0, 3.1.0
>
> Attachments: YARN-7338.001.patch
>
>
> Opening this JIRA as suggested by [~eyang] on the thread for merging 
> YARN-3368 (new web UI) to branch2:  
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201610.mbox/%3ccad++ecmvvqnzqz9ynkvkcxaczdkg50yiofxktgk3mmms9sh...@mail.gmail.com%3E
> --
> ui2 does not seem to support the same-origin policy for cross-site scripting 
> prevention.
> The following parameters have no effect for /ui2:
> hadoop.http.cross-origin.enabled = true
> yarn.resourcemanager.webapp.cross-origin.enabled = true
> This is because ui2 is designed as a separate web application.  The 
> WebFilters set up for the existing resource manager don't apply to the new 
> web application.
> Please open a JIRA to track the security issue and resolve the problem prior 
> to backporting this to branch-2.
> This would minimize the risk of opening up a security hole in branch-2.
> --



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211832#comment-16211832
 ] 

Eric Yang commented on YARN-7217:
-

Hi [~billie.rinaldi], thank you for the review.

# Does the YARN resource manager have privileges to all users' home 
directories?  In a non-secure cluster, the answer is yes.  However, this might 
not be true in a secure cluster.  This is one of the reasons that I did not 
implement getServicesList on fs.  If we want getServiceList on fs implemented 
for non-secure clusters, it can be included.
# actionBuild validates and persists the application on HDFS.  There is no 
external caller to actionBuild from the REST API.  Perhaps make actionBuild a 
private method?  actionCreate is the API that external callers use.
# I think it's best to add the configuration to YarnCommon; there are unit 
tests that validate configuration additions, and it reduces the chance of 
duplicated names.
# I will fix the Solr version in hadoop-project/pom.xml.
# YARN-7193 covers the global application definition from your last point.  It 
provides a set of APIs to register a Yarnfile, along with additional metadata 
like organization, description, icon, and download, into Solr.  I couldn't get 
anyone to review that JIRA; hence, I broke the functionality down into small 
pieces that can be consumed.  The first step is to offload the storing of the 
Yarnfile from HDFS to SOLR as an improvement for this issue.  An application 
catalog can be built on top of what is proposed here.  Once the application 
catalog is built, we can start to think about how the exchange of money and 
software would take place for an app store.  The app-store idea might not get 
traction in Apache because it is a non-profit organization.  We can think about 
that idea later.

> Improve API service usability for updating service spec and state
> -
>
> Key: YARN-7217
> URL: https://issues.apache.org/jira/browse/YARN-7217
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7217.yarn-native-services.001.patch, 
> YARN-7217.yarn-native-services.002.patch, 
> YARN-7217.yarn-native-services.003.patch, 
> YARN-7217.yarn-native-services.004.patch
>
>
> The API service for deploying and managing YARN services has several 
> limitations.
> The {{updateService}} API provides multiple functions:
> # Stop a service.
> # Start a service.
> # Increase or decrease the number of containers.  (This was removed in 
> YARN-7323.)
> This overloading is buggy, depending on how the configuration should be 
> applied.
> h4. Scenario 1
> A user retrieves the Service object from the getService call, and the Service 
> object contains state: STARTED.  The user would like to increase the number 
> of containers for the deployed service.  The JSON has been updated to 
> increase the container count.  The PUT method does not actually increase the 
> container count.
> h4. Scenario 2
> A user retrieves the Service object from the getService call, and the Service 
> object contains state: STOPPED.  The user would like to make an environment 
> configuration change.  The configuration does not get updated after the PUT 
> method.
> It is possible to address this by rearranging the START/STOP logic after a 
> configuration update.  However, other potential combinations can still break 
> the PUT method.  For example, a user may want to make configuration changes 
> but not restart the service until a later time.
> h4. Scenario 3
> There is no API to list all deployed applications by the same user.
> h4. Scenario 4
> Desired state (spec) and current state are represented by the same Service 
> object.  There is no easy way to tell whether "state" is the desired state to 
> reach or the current state of the service.  It would be nice to be able to 
> retrieve both the desired state and the current state through separate entry 
> points.  Implementing /spec and /state would resolve this problem.
> h4. Scenario 5
> Listing all services deployed by the same user can trigger a directory 
> listing operation on the namenode if HDFS is used as metadata storage.  When 
> hundreds of users use the Service UI to view or deploy applications, this can 
> amount to a denial-of-service attack on the namenode.  The sparse, small 
> metadata files also reduce the efficiency of namenode memory usage.  Hence, a 
> cache layer for service metadata can reduce namenode stress.
> h3. Proposed change
> ApiService can split the PUT method into two PUT methods, one for 
> configuration changes and one for operation changes.  The new API could look 
> like:
> {code}
> @PUT
> /ws/v1/services/[service_name]/spec
> Request Data:
> {
>   "name": "amp",
>   "components": [
> {
>   "name": "mysql",
>   "number_of_containers": 2,
>   "artifact": {
> "id": "centos/mysql-57-centos7:latest",
> 

[jira] [Commented] (YARN-7345) GPU Isolation: Incorrect minor device numbers written to devices.deny file

2017-10-19 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211821#comment-16211821
 ] 

Jonathan Hung commented on YARN-7345:
-

Thanks!

> GPU Isolation: Incorrect minor device numbers written to devices.deny file
> --
>
> Key: YARN-7345
> URL: https://issues.apache.org/jira/browse/YARN-7345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 3.0.0, 3.1.0
>
> Attachments: YARN-7345.001.patch
>
>
> Currently the minor numbers written to the devices.deny file are 0 -> (num 
> devices to block - 1), but the blocked devices are not necessarily sequential 
> starting from 0.
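
For illustration, a fix along the lines described would write one deny entry 
per actual blocked minor number (a sketch only; the cgroup path, the major 
number 195 for NVIDIA devices, and the method names are assumptions here, not 
the patch's code):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.Collection;

public class GpuDenySketch {
  // Deny exactly the blocked minors, not 0..(numToBlock - 1).
  static void denyGpus(Path cgroupDir, Collection<Integer> blockedMinors)
      throws IOException {
    Path denyFile = cgroupDir.resolve("devices.deny");
    for (int minor : blockedMinors) {
      // e.g. "c 195:3 rwm" denies read/write/mknod on /dev/nvidia3
      Files.write(denyFile, ("c 195:" + minor + " rwm\n").getBytes(),
          StandardOpenOption.WRITE);
    }
  }

  public static void main(String[] args) throws IOException {
    // GPUs 1 and 3 belong to another container: deny those, not 0 and 1.
    denyGpus(Paths.get("/sys/fs/cgroup/devices/yarn/container_1"),
        Arrays.asList(1, 3));
  }
}
{code}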



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7307) Revisit resource-types.xml loading behaviors

2017-10-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211799#comment-16211799
 ] 

Wangda Tan commented on YARN-7307:
--

Thanks [~sunilg] for updating the patch.

I'm fine with the last patch, except that 
DEFAULT_YARN_CLIENT_LOAD_RESOURCETYPES_FROM_SERVER should be false instead of 
true. Since applications need to update their implementation in order to use 
the feature, it's safer to opt out of the resource-type update behavior in 
YarnClientImpl by default.

Could you also check the UT failures / javadoc warnings?
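
For illustration, the opt-out default being asked for could look like the 
following (a sketch; the property key string is an assumption here, only the 
constant name comes from this discussion):

{code}
// Sketch only: the key string is assumed, not verified against the patch.
public final class ClientResourceTypesDefaults {
  public static final String YARN_CLIENT_LOAD_RESOURCETYPES_FROM_SERVER =
      "yarn.client.load.resource-types.from-server";
  // Opt-out by default: a client must explicitly enable fetching resource
  // types from the RM, since using them requires application code changes.
  public static final boolean
      DEFAULT_YARN_CLIENT_LOAD_RESOURCETYPES_FROM_SERVER = false;

  private ClientResourceTypesDefaults() {
  }
}
{code}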

> Revisit resource-types.xml loading behaviors
> 
>
> Key: YARN-7307
> URL: https://issues.apache.org/jira/browse/YARN-7307
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-7307.001.patch, YARN-7307.002.patch, 
> YARN-7307.003.patch
>
>
> The existing feature requires every client to have a resource-types.xml in 
> order to use multiple resource types. Should we allow clients/AMs to update 
> the supported resource types via YARN APIs?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7345) GPU Isolation: Incorrect minor device numbers written to devices.deny file

2017-10-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7345:
-
Summary: GPU Isolation: Incorrect minor device numbers written to 
devices.deny file  (was: Incorrect minor device numbers written to devices.deny 
file)

> GPU Isolation: Incorrect minor device numbers written to devices.deny file
> --
>
> Key: YARN-7345
> URL: https://issues.apache.org/jira/browse/YARN-7345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-7345.001.patch
>
>
> Currently the minor numbers written to the devices.deny file are 0 -> (num 
> devices to block - 1), but the blocked devices are not necessarily sequential 
> starting from 0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7338) Support same origin policy for cross site scripting prevention.

2017-10-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-7338:


Assignee: Sunil G

> Support same origin policy for cross site scripting prevention.
> ---
>
> Key: YARN-7338
> URL: https://issues.apache.org/jira/browse/YARN-7338
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vrushali C
>Assignee: Sunil G
> Attachments: YARN-7338.001.patch
>
>
> Opening jira as suggested by [~eyang] on the thread for merging YARN-3368 (new 
> web UI) to branch2:
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201610.mbox/%3ccad++ecmvvqnzqz9ynkvkcxaczdkg50yiofxktgk3mmms9sh...@mail.gmail.com%3E
> --
> Ui2 does not seem to support the same-origin policy for cross-site scripting 
> prevention.
> The following parameters have no effect for /ui2:
> hadoop.http.cross-origin.enabled = true
> yarn.resourcemanager.webapp.cross-origin.enabled = true
> This is because ui2 is designed as a separate web application.  The WebFilters 
> set up for the existing resource manager don't apply to the new web 
> application.
> Please open a JIRA to track the security issue and resolve the problem prior 
> to backporting this to branch-2.
> This would minimize the risk of opening up a security hole in branch-2.
> --



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7370) Intra-queue preemption properties should be refreshable

2017-10-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211790#comment-16211790
 ] 

Wangda Tan commented on YARN-7370:
--

[~eepayne], actually I think all preemption parameters should be refreshable, 
including the overall preemption enable/disable switch.
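
As a rough sketch of what refreshable means here, the policy would re-read the 
knobs from the current configuration on every invocation instead of caching 
them at construction time (property keys abbreviated; the class and method 
names below are hypothetical):

{code}
import org.apache.hadoop.conf.Configuration;

class RefreshablePreemptionPolicySketch {
  private final Configuration conf;

  RefreshablePreemptionPolicySketch(Configuration conf) {
    this.conf = conf;
  }

  // Called periodically; a refreshQueues that swaps in new values is then
  // picked up on the next pass, with no RM restart needed.
  void editSchedule() {
    boolean intraQueueEnabled =
        conf.getBoolean("...intra-queue-preemption.enabled", false);
    float maxAllowableLimit =
        conf.getFloat("...intra-queue-preemption.max-allowable-limit", 0.2f);
    // ... drive this preemption round with the freshly read values ...
  }
}
{code}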

> Intra-queue preemption properties should be refreshable
> ---
>
> Key: YARN-7370
> URL: https://issues.apache.org/jira/browse/YARN-7370
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.8.0, 3.0.0-alpha3
>Reporter: Eric Payne
>
> At least the properties for {{max-allowable-limit}} and {{minimum-threshold}} 
> should be refreshable. It would also be nice to make 
> {{intra-queue-preemption.enabled}} and {{preemption-order-policy}} 
> refreshable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7364) Queue dash board in new YARN UI has incorrect values

2017-10-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211780#comment-16211780
 ] 

Wangda Tan commented on YARN-7364:
--

[~vrushalic], sounds good! Thanks.

> Queue dash board in new YARN UI has incorrect values
> 
>
> Key: YARN-7364
> URL: https://issues.apache.org/jira/browse/YARN-7364
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7364.001.patch
>
>
> The queue dashboard on the cluster overview page does not show queue metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7370) Intra-queue preemption properties should be refreshable

2017-10-19 Thread Eric Payne (JIRA)
Eric Payne created YARN-7370:


 Summary: Intra-queue preemption properties should be refreshable
 Key: YARN-7370
 URL: https://issues.apache.org/jira/browse/YARN-7370
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, scheduler preemption
Affects Versions: 3.0.0-alpha3, 2.8.0
Reporter: Eric Payne


At least the properties for {{max-allowable-limit}} and {{minimum-threshold}} 
should be refreshable. It would also be nice to make 
{{intra-queue-preemption.enabled}} and {{preemption-order-policy}} refreshable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7365) ResourceLocalization cache cleanup thread stuck

2017-10-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211770#comment-16211770
 ] 

Jason Lowe commented on YARN-7365:
--

Thanks for the report!

YARN-4655 apparently only went into 2.9.  Is this really a problem in 2.8 as 
well?


> ResourceLocalization cache cleanup thread stuck
> ---
>
> Key: YARN-7365
> URL: https://issues.apache.org/jira/browse/YARN-7365
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
>
> {code}
> "ResourceLocalizationService Cache Cleanup" #36 prio=5 os_prio=0 
> tid=0x7f943562a000 nid=0x1017 waiting on condition [0x7f9419bd7000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc21103f8> (a 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:47)
> at 
> org.apache.hadoop.util.concurrent.HadoopScheduledThreadPoolExecutor.afterExecute(HadoopScheduledThreadPoolExecutor.java:69)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1150)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The ResourceLocalization cache clean-up thread waits on {{FutureTask.get()}} 
> indefinitely after its first execution.
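
A minimal standalone repro of the hang (an illustration, not NM code): 
{{get()}} on a *periodic* ScheduledFuture never returns, because a periodic 
task never reaches a terminal state, so an {{afterExecute}} that calls 
{{get()}} unconditionally blocks its worker thread after the first run:

{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PeriodicGetHang {
  public static void main(String[] args) {
    ScheduledThreadPoolExecutor pool = new ScheduledThreadPoolExecutor(1) {
      @Override
      protected void afterExecute(Runnable r, Throwable t) {
        if (t == null && r instanceof Future<?>) {
          try {
            ((Future<?>) r).get(); // never returns for a periodic task
          } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
          }
        }
      }
    };
    pool.scheduleAtFixedRate(() -> System.out.println("cleanup pass"),
        0, 1, TimeUnit.SECONDS);
    // Only the first pass ever prints; afterExecute blocks the worker.
  }
}
{code}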



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7217:

Description: 
The API service for deploying and managing YARN services has several limitations.

{{updateService}} API provides multiple functions:

# Stopping a service.
# Start a service.
# Increase or decrease number of containers.  (This was removed in YARN-7323).

The overloading is buggy depending on how the configuration should be applied.

h4. Scenario 1
A user retrieves the Service object from the getService call, and the Service 
object contains state: STARTED.  The user would like to increase the number of 
containers for the deployed service.  The JSON has been updated to increase the 
container count.  The PUT method does not actually increase the container 
count.

h4. Scenario 2
A user retrieves the Service object from the getService call, and the Service 
object contains state: STOPPED.  The user would like to make an environment 
configuration change.  The configuration does not get updated after the PUT 
method.

This is possible to address by rearranging the logic of START/STOP after the 
configuration update.  However, there are other potential combinations that can 
break the PUT method.  For example, a user may want to make configuration 
changes but not restart the service until a later time.

h4. Scenario 3
There is no API to list all deployed applications by the same user.

h4. Scenario 4
Desired state (spec) and current state are represented by the same Service 
object.  There is no easy way to tell whether "state" is the desired state to 
reach or the current state of the service.  It would be nice to be able to 
retrieve both the desired state and the current state through separate entry 
points.  Implementing /spec and /state resolves this problem.

h4. Scenario 5
Listing all services deployed by the same user can trigger a directory listing 
operation on the NameNode if hdfs is used as storage for metadata.  When 
hundreds of users use the Service UI to view or deploy applications, this 
amounts to a denial-of-service attack on the NameNode.  The many sparse, small 
metadata files also reduce the efficiency of NameNode memory usage.  Hence, a 
cache layer for storing service metadata can reduce NameNode stress.

h3. Proposed change

ApiService can separate the PUT method into two PUT methods, one for 
configuration changes and one for operation changes.  The new API could look 
like:

{code}
@PUT
/ws/v1/services/[service_name]/spec

Request Data:
{
  "name": "amp",
  "components": [
{
  "name": "mysql",
  "number_of_containers": 2,
  "artifact": {
"id": "centos/mysql-57-centos7:latest",
"type": "DOCKER"
  },
  "run_privileged_container": false,
  "launch_command": "",
  "resource": {
"cpus": 1,
"memory": "2048"
  },
  "configuration": {
"env": {
  "MYSQL_USER":"${USER}",
  "MYSQL_PASSWORD":"password"
}
  }
 }
  ],
  "quicklinks": {
"Apache Document Root": 
"http://httpd.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/;,
"PHP MyAdmin": "http://phpmyadmin.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/;
  }
}
{code}

{code}
@PUT
/ws/v1/services/[service_name]/state

Request data:
{
  "name": "amp",
  "components": [
{
  "name": "mysql",
  "state": "STOPPED"
 }
  ]
}
{code}

SOLR can be used to cache Yarnfiles to improve lookup performance and relieve 
the NameNode of the small-file and high-frequency-lookup problems.  SOLR is 
chosen for caching metadata because its indexing feature can also be used to 
build full-text search for the application catalog.

For a service that requires a configuration change to increase or decrease the 
node count, the calling sequence is:

{code}
# GET /ws/v1/services/{service_name}/spec
# Change number_of_containers to desired number.
# PUT /ws/v1/services/{service_name}/spec to update the spec.
# PUT /ws/v1/services/{service_name}/state to stop existing service.
# PUT /ws/v1/services/{service_name}/state to start service.
{code}

For components that can change node count without rewriting configuration:

{code}
# GET /ws/v1/services/{service_name}/spec
# Change number_of_containers to desired number.
# PUT /ws/v1/services/{service_name}/spec to update the spec.
# PUT /ws/v1/services/{service_name}/component/{component_name} to change node 
count.
{code}
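
For illustration, one step of that sequence driven from a Java client might 
look like this (a sketch; the host, port, service name, and payload shape are 
placeholders, not the API service's exact defaults):

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StopServiceSketch {
  public static void main(String[] args) throws Exception {
    // PUT /ws/v1/services/{service_name}/state with the desired state.
    URL url = new URL("http://rm-host:8088/ws/v1/services/amp/state");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    String body = "{\"name\": \"amp\", \"state\": \"STOPPED\"}";
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}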


  was:
API service for deploy, and manage YARN services have several limitations.

{{updateService}} API provides multiple functions:

# Stopping a service.
# Start a service.
# Increase or decrease number of containers.  (This was removed in YARN-7323).

The overloading is buggy depending on how the configuration should be applied.

h4. Scenario 1
A user retrieves Service object from getService call, and the Service object 
contains state: STARTED.  The user would like to increase number of containers 
for the deployed service.  The JSON has been updated to increase container 
count.  The PUT method does 

[jira] [Commented] (YARN-7364) Queue dash board in new YARN UI has incorrect values

2017-10-19 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211743#comment-16211743
 ] 

Vrushali C commented on YARN-7364:
--

Hi [~wangda] [~sunil.gov...@gmail.com] 
Just checking, looks like this will go into trunk, is that correct? 

If so, I would like to go ahead with the branch2 merge once YARN-7338 is 
committed. I will backport this once this is closed. 

> Queue dash board in new YARN UI has incorrect values
> 
>
> Key: YARN-7364
> URL: https://issues.apache.org/jira/browse/YARN-7364
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7364.001.patch
>
>
> The queue dashboard on the cluster overview page does not show queue metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7338) Support same origin policy for cross site scripting prevention.

2017-10-19 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211741#comment-16211741
 ] 

Vrushali C commented on YARN-7338:
--

Thanks [~eyang] and [~wangda] for the updates. 

> Support same origin policy for cross site scripting prevention.
> ---
>
> Key: YARN-7338
> URL: https://issues.apache.org/jira/browse/YARN-7338
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vrushali C
> Attachments: YARN-7338.001.patch
>
>
> Opening jira as suggested by [~eyang] on the thread for merging YARN-3368 (new 
> web UI) to branch2:
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201610.mbox/%3ccad++ecmvvqnzqz9ynkvkcxaczdkg50yiofxktgk3mmms9sh...@mail.gmail.com%3E
> --
> Ui2 does not seem to support the same-origin policy for cross-site scripting 
> prevention.
> The following parameters have no effect for /ui2:
> hadoop.http.cross-origin.enabled = true
> yarn.resourcemanager.webapp.cross-origin.enabled = true
> This is because ui2 is designed as a separate web application.  The WebFilters 
> set up for the existing resource manager don't apply to the new web 
> application.
> Please open a JIRA to track the security issue and resolve the problem prior 
> to backporting this to branch-2.
> This would minimize the risk of opening up a security hole in branch-2.
> --



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7217:

Description: 
The API service for deploying and managing YARN services has several limitations.

{{updateService}} API provides multiple functions:

# Stopping a service.
# Start a service.
# Increase or decrease number of containers.  (This was removed in YARN-7323).

The overloading is buggy depending on how the configuration should be applied.

h4. Scenario 1
A user retrieves the Service object from the getService call, and the Service 
object contains state: STARTED.  The user would like to increase the number of 
containers for the deployed service.  The JSON has been updated to increase the 
container count.  The PUT method does not actually increase the container 
count.

h4. Scenario 2
A user retrieves the Service object from the getService call, and the Service 
object contains state: STOPPED.  The user would like to make an environment 
configuration change.  The configuration does not get updated after the PUT 
method.

This is possible to address by rearranging the logic of START/STOP after the 
configuration update.  However, there are other potential combinations that can 
break the PUT method.  For example, a user may want to make configuration 
changes but not restart the service until a later time.

h4. Scenario 3
There is no API to list all deployed applications by the same user.

h4. Scenario 4
Desired state (spec) and current state are represented by the same Service 
object.  There is no easy way to tell whether "state" is the desired state to 
reach or the current state of the service.  It would be nice to be able to 
retrieve both the desired state and the current state through separate entry 
points.  Implementing /spec and /state resolves this problem.

h4. Scenario 5
Listing all services deployed by the same user can trigger a directory listing 
operation on the NameNode if hdfs is used as storage for metadata.  When 
hundreds of users use the Service UI to view or deploy applications, this 
amounts to a denial-of-service attack on the NameNode.  The many sparse, small 
metadata files also reduce the efficiency of NameNode memory usage.  Hence, a 
cache layer for storing service metadata can reduce NameNode stress.

h3. Proposed change

ApiService can separate the PUT method into two PUT methods, one for 
configuration changes and one for operation changes.  The new API could look 
like:

{code}
@PUT
/ws/v1/services/[service_name]/spec

Request Data:
{
  "name": "amp",
  "components": [
{
  "name": "mysql",
  "number_of_containers": 2,
  "artifact": {
"id": "centos/mysql-57-centos7:latest",
"type": "DOCKER"
  },
  "run_privileged_container": false,
  "launch_command": "",
  "resource": {
"cpus": 1,
"memory": "2048"
  },
  "configuration": {
"env": {
  "MYSQL_USER":"${USER}",
  "MYSQL_PASSWORD":"password"
}
  }
 }
  ],
  "quicklinks": {
"Apache Document Root": 
"http://httpd.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/;,
"PHP MyAdmin": "http://phpmyadmin.${SERVICE_NAME}.${USER}.${DOMAIN}:8080/;
  }
}
{code}

{code}
@PUT
/ws/v1/services/[service_name]/state

Request data:
{
  "name": "amp",
  "components": [
{
  "name": "mysql",
  "state": "STOPPED"
 }
  ]
}
{code}

SOLR can be used to cache Yarnfiles to improve lookup performance and relieve 
the NameNode of the small-file and high-frequency-lookup problems.  SOLR is 
chosen for caching metadata because its indexing feature can also be used to 
build full-text search for the application catalog.

  was:
The API service for deploying and managing YARN services has several limitations.

The update service API provides multiple functions:

# Stopping a service.
# Start a service.
# Increase or decrease number of containers.

The overloading is buggy depending on how the configuration should be applied.

Scenario 1
A user retrieves the Service object from the getService call, and the Service 
object contains state: STARTED.  The user would like to increase the number of 
containers for the deployed service.  The JSON has been updated to increase the 
container count.  The PUT method does not actually increase the container 
count.

Scenario 2
A user retrieves the Service object from the getService call, and the Service 
object contains state: STOPPED.  The user would like to make an environment 
configuration change.  The configuration does not get updated after the PUT 
method.

This is possible to address by rearranging the logic of START/STOP after the 
configuration update.  However, there are other potential combinations that can 
break the PUT method.  For example, a user may want to make configuration 
changes but not restart the service until a later time.

The alternative is to separate the PUT method into one PUT method for 
configuration and one for status.  This increases the number of actions that 
can be performed.  The new API could look like:

{code}
@PUT

[jira] [Commented] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211730#comment-16211730
 ] 

Billie Rinaldi commented on YARN-7217:
--

Some comments on patch 004:
* the fs / solr implementations should be two implementations of a pluggable 
interface (unless we change how the solr instance is being used, as discussed 
below); a rough sketch of such an interface follows at the end of this comment
* related: getServicesList has no fs implementation
* actionBuild needs a solr implementation
* we have been putting service configuration properties in YarnServiceConf, not 
in YarnConfiguration
* solr version should not be specified in services poms; instead all 
dependencies should be added to hadoop-project/pom.xml
* flexing issue mentioned previously
* the user handling is confusing. Yarn solr client appears to ignore the user 
for some methods such as findAppEntry and deleteApp, but not others. With this 
implementation, can different users use the same app from the app store 
(assuming this solr instance is meant to be the app store)? Also, a user 
updating their own instance of an app should probably not change the app spec 
in the app store. I’m not sure apps should be stored per-user (or per-instance) 
in the solr store. Maybe there should be a global version of an app spec, and a 
normal user can create an instance of that global version and make changes to 
their instance but not to the global version. We should have a way to create an 
app instance from an app spec stored in the app store, as well. Basically, I 
don’t think the app specs in the app store should represent instances. It would 
probably be more useful to store specs, and then a list of instances 
(user/appName pairs) that are currently running (or stopped) that were started 
from that spec.
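
For the first bullet, the pluggable interface might look roughly like this (a 
sketch; all names below are hypothetical, not from the patch):

{code}
import java.io.IOException;
import java.util.List;

interface ServiceSpecStore {
  void saveSpec(String user, String serviceName, String specJson)
      throws IOException;
  String getSpec(String user, String serviceName) throws IOException;
  // Noted above: patch 004 has no fs implementation of the listing.
  List<String> listServices(String user) throws IOException;
  void deleteSpec(String user, String serviceName) throws IOException;
}

// An HDFS-backed store and a SOLR-backed cache would both implement the
// interface, keeping the ApiService code agnostic of the backend.
abstract class FsServiceSpecStore implements ServiceSpecStore { }
abstract class SolrServiceSpecStore implements ServiceSpecStore { }
{code}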

> Improve API service usability for updating service spec and state
> -
>
> Key: YARN-7217
> URL: https://issues.apache.org/jira/browse/YARN-7217
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7217.yarn-native-services.001.patch, 
> YARN-7217.yarn-native-services.002.patch, 
> YARN-7217.yarn-native-services.003.patch, 
> YARN-7217.yarn-native-services.004.patch
>
>
> The API service for deploying and managing YARN services has several limitations.
> The update service API provides multiple functions:
> # Stopping a service.
> # Start a service.
> # Increase or decrease number of containers.
> The overloading is buggy depending on how the configuration should be applied.
> Scenario 1
> A user retrieves the Service object from the getService call, and the Service 
> object contains state: STARTED.  The user would like to increase the number 
> of containers for the deployed service.  The JSON has been updated to 
> increase the container count.  The PUT method does not actually increase the 
> container count.
> Scenario 2
> A user retrieves the Service object from the getService call, and the Service 
> object contains state: STOPPED.  The user would like to make an environment 
> configuration change.  The configuration does not get updated after the PUT 
> method.
> This is possible to address by rearranging the logic of START/STOP after the 
> configuration update.  However, there are other potential combinations that 
> can break the PUT method.  For example, a user may want to make configuration 
> changes but not restart the service until a later time.
> The alternative is to separate the PUT method into one PUT method for 
> configuration and one for status.  This increases the number of actions that 
> can be performed.  The new API could look like:
> {code}
> @PUT
> /ws/v1/services/[service_name]/spec
> Request Data:
> {
>   "name":"[service_name]",
>   "number_of_containers": 5
> }
> {code}
> {code}
> @PUT
> /ws/v1/services/[service_name]/state
> Request data:
> {
>   "name": "[service_name]",
>   "state": "STOPPED|STARTED"
> }
> {code}
> Scenario 3
> There is no API to list all deployed applications by the same user.
> Scenario 4
> Desired state (spec) and current state are represented by the same Service 
> object.  There is no easy way to tell whether "state" is the desired state to 
> reach or the current state of the service.  It would be nice to be able to 
> retrieve both the desired state and the current state through separate entry 
> points.  Implementing /spec and /state resolves this problem.
> Scenario 5
> Listing all services deployed by the same user can trigger a directory 
> listing operation on the NameNode if hdfs is used as storage for metadata.  
> When hundreds of users use the Service UI to view or deploy applications, 
> this amounts to a denial-of-service attack on the NameNode.  The many sparse, 
> small metadata files also reduce the efficiency of NameNode memory usage.  
> Hence, a cache layer for storing service metadata would be nice.




[jira] [Updated] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-10-19 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7117:
---
Attachment: YARN-7117.poc.patch

Attaching a POC patch for auto leaf-queue creation and capacity management of 
auto-created queues. The capacity management policy activates leaf queues that 
have schedulable applications and deactivates them when no applications are 
pending.
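
Roughly, the activation/deactivation loop described above could look like this 
(a sketch with hypothetical names, not the POC code):

{code}
class AutoCreatedQueueManagementSketch {
  interface LeafQueueState {
    boolean hasPendingApps();
    boolean hasRunningApps();
    boolean isActive();
    float templateCapacity();
    void activate();
    void deactivate();
  }

  // One pass of the capacity-management policy over auto-created leaves.
  void manage(Iterable<LeafQueueState> autoCreatedQueues, float headroom) {
    for (LeafQueueState q : autoCreatedQueues) {
      if (!q.isActive() && q.hasPendingApps()
          && headroom >= q.templateCapacity()) {
        q.activate();                     // give it the template capacity
        headroom -= q.templateCapacity();
      } else if (q.isActive() && !q.hasPendingApps() && !q.hasRunningApps()) {
        q.deactivate();                   // return capacity to the parent
        headroom += q.templateCapacity();
      }
    }
  }
}
{code}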

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: 
> YARN-7117.Capacity.Scheduler.Support.Auto.Creation.Of.Leaf.Queue.pdf, 
> YARN-7117.poc.patch
>
>
> Currently Capacity Scheduler doesn't support auto creation of queues when 
> doing queue mapping. We saw more and more use cases which has complex queue 
> mapping policies configured to handle application to queues mapping. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However update {{capacity-scheduler.xml}} and 
> {{RMAdmin:refreshQueues}} needs to be done when new user/group onboard. One 
> of the option to solve the problem is automatically create queues when new 
> user/group arrives.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7369) Improve the resource types docs

2017-10-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7369:
---
Summary: Improve the resource types docs  (was: Improve the docs)

> Improve the resource types docs
> ---
>
> Key: YARN-7369
> URL: https://issues.apache.org/jira/browse/YARN-7369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-7369.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7369) Improve the docs

2017-10-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7369:
---
Attachment: YARN-7369.001.patch

> Improve the docs
> 
>
> Key: YARN-7369
> URL: https://issues.apache.org/jira/browse/YARN-7369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-7369.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7369) Improve the docs

2017-10-19 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-7369:
--

 Summary: Improve the docs
 Key: YARN-7369
 URL: https://issues.apache.org/jira/browse/YARN-7369
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: docs
Affects Versions: 3.1.0
Reporter: Daniel Templeton
Assignee: Daniel Templeton






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211651#comment-16211651
 ] 

Hadoop QA commented on YARN-7102:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
28s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  9s{color} | {color:orange} root: The patch generated 1 new + 142 unchanged 
- 10 fixed = 143 total (was 152) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c2d96dd |
| JIRA Issue | YARN-7102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893076/YARN-7102-branch-2.8.v10.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 81d50fa181e4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / a83f87e |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18031/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Updated] (YARN-7368) Yarn Work-Preserving Better Handling Failed Disk

2017-10-19 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated YARN-7368:
--
Affects Version/s: 3.0.0

> Yarn Work-Preserving Better Handling Failed Disk
> 
>
> Key: YARN-7368
> URL: https://issues.apache.org/jira/browse/YARN-7368
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1, 3.0.0
>Reporter: BELUGA BEHR
>
> If the drive that hosts the {{yarn.nodemanager.recovery.dir}} is broken, then 
> the entire NodeManager will not start.  Please improve this so that if the 
> directory cannot be created or accessed, the recovery portion of the NM is 
> simply skipped and the NM continues to operate as normal.
> It may also be beneficial to be able to define multiple directories, like 
> YARN logging directories, so that if one drive fails, not all of the recovery 
> data is lost.
> https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerRestart.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7368) Yarn Work-Preserving Better Handling Failed Disk

2017-10-19 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created YARN-7368:
-

 Summary: Yarn Work-Preserving Better Handling Failed Disk
 Key: YARN-7368
 URL: https://issues.apache.org/jira/browse/YARN-7368
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, yarn
Affects Versions: 2.8.1
Reporter: BELUGA BEHR


If the drive that hosts the {{yarn.nodemanager.recovery.dir}} is broken, then 
the entire NodeManager will not start.  Please improve this so that if the 
directory cannot be created or accessed, the recovery portion of the NM is 
simply skipped and the NM continues to operate as normal.

It may also be beneficial to be able to define multiple directories, like YARN 
logging directories, so that if one drive fails, not all of the recovery data 
is lost.


https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerRestart.html
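
A sketch of the requested behavior (all names hypothetical): treat a failure 
to open the recovery directory as "run without recovery" rather than a fatal 
startup error:

{code}
import java.io.IOException;

class NMRecoverySketch {
  interface StateStore {
    static StateStore open(String dir) throws IOException {
      return new StateStore() { };
    }
    static StateStore nullStore() {
      return new StateStore() { };  // no-op store: recovery disabled
    }
  }

  StateStore initStateStore(String recoveryDir) {
    try {
      return StateStore.open(recoveryDir);
    } catch (IOException e) {
      System.err.println("Recovery dir " + recoveryDir
          + " unusable, disabling work-preserving restart: " + e);
      return StateStore.nullStore();  // NM keeps running, no recovery
    }
  }
}
{code}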



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211631#comment-16211631
 ] 

Haibo Chen commented on YARN-4511:
--

Thanks [~miklos.szeg...@cloudera.com] for the review! I will address your 
comments in the following patch, but will wait until HADOOP-14816 is resolved 
so that Jenkins can give some feedback.

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch, YARN-4511-YARN-1011.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7367) ResourceInformation lacks stability and audience annotations

2017-10-19 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-7367:
--

 Summary: ResourceInformation lacks stability and audience 
annotations
 Key: YARN-7367
 URL: https://issues.apache.org/jira/browse/YARN-7367
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.1.0
Reporter: Daniel Templeton






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6707) [ATSv2] Update HBase version to 1.2.6

2017-10-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6707:
---
Fix Version/s: 2.9.0

> [ATSv2] Update HBase version to 1.2.6
> -
>
> Key: YARN-6707
> URL: https://issues.apache.org/jira/browse/YARN-6707
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Vrushali C
>  Labels: atsv2-hbase
> Fix For: 2.9.0, YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6707-YARN-5355.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2928) YARN Timeline Service v.2: alpha 1

2017-10-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2928:
---
Fix Version/s: 2.9.0

> YARN Timeline Service v.2: alpha 1
> --
>
> Key: YARN-2928
> URL: https://issues.apache.org/jira/browse/YARN-2928
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: ATSv2.rev1.pdf, ATSv2.rev2.pdf, 
> ATSv2BackendHBaseSchemaproposal.pdf, Data model proposal v1.pdf, The YARN 
> Timeline Service v.2 Documentation.pdf, Timeline Service Next Gen - Planning 
> - ppt.pptx, TimelineServiceStoragePerformanceTestSummaryYARN-2928.pdf, 
> YARN-2928.01.patch, YARN-2928.02.patch, YARN-2928.03.patch, 
> timeline_service_v2_next_milestones.pdf
>
>
> We have the application timeline server implemented in yarn per YARN-1530 and 
> YARN-321. Although it is a great feature, we have recognized several critical 
> issues and features that need to be addressed.
> This JIRA proposes the design and implementation changes to address those. 
> This is phase 1 of this effort.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7364) Queue dash board in new YARN UI has incorrect values

2017-10-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211523#comment-16211523
 ] 

Wangda Tan commented on YARN-7364:
--

[~sunilg], I'm not sure if this patch could break fair scheduler UI. Could you 
add more details of the fix?

> Queue dash board in new YARN UI has incorrect values
> 
>
> Key: YARN-7364
> URL: https://issues.apache.org/jira/browse/YARN-7364
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7364.001.patch
>
>
> The queue dashboard on the cluster overview page does not show queue metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5355) YARN Timeline Service v.2: alpha 2

2017-10-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5355:
---
Fix Version/s: 2.9.0

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: Documentation - The YARN Timeline Service v2.pdf, 
> Timeline Service v2_ Ideas for Next Steps.pdf, YARN-5355-branch-2.01.patch, 
> YARN-5355.01.patch, YARN-5355.02.patch, YARN-5355.03.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7063) TestTimelineReaderWebServicesHBaseStorage fails with NoClassDefFoundError on TSv2 branch2

2017-10-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-7063:
---
Fix Version/s: 2.9.0

> TestTimelineReaderWebServicesHBaseStorage fails with NoClassDefFoundError on 
> TSv2 branch2
> -
>
> Key: YARN-7063
> URL: https://issues.apache.org/jira/browse/YARN-7063
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
> Fix For: 2.9.0, YARN-5335_branch2
>
> Attachments: YARN-7063-YARN-5355_branch2.01.patch, 
> YARN-7063-YARN-5355_branch2.02.patch
>
>
> Seeing a NoClassDefFoundError on branch2 at runtime.
> Stack trace:
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/security/AuthenticationWithProxyUserFilter
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderAuthenticationFilterInitializer.initFilter(TimelineReaderAuthenticationFilterInitializer.java:49)
>   at 
> org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:393)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:344)
>   at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:104)
>   at 
> org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:292)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer.startTimelineReaderWebApp(TimelineReaderServer.java:181)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderServer.serviceStart(TimelineReaderServer.java:124)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.reader.AbstractTimelineReaderHBaseTestBase.initialize(AbstractTimelineReaderHBaseTestBase.java:91)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage.setupBeforeClass(TestTimelineReaderWebServicesHBaseStorage.java:79)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-7190:
---
Fix Version/s: 2.9.0

> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
> Fix For: 2.9.0, YARN-5355_branch2
>
> Attachments: YARN-7190-YARN-5355_branch2.01.patch, 
> YARN-7190-YARN-5355_branch2.02.patch, YARN-7190-YARN-5355_branch2.03.patch
>
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of the HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is used in 2.x, the hbase-related jars should go only onto the 
> NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-19 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211480#comment-16211480
 ] 

Miklos Szegedi commented on YARN-4511:
--

Thank you, [~haibochen] for the patch. I have a few comments.
{code}
112   public synchronized void updateTotalResource(Resource resource){
{code}
I think we need to clone resource before assigning (see the sketch at the end 
of this comment).
{code}
308   public RMContainer swapContainer(RMContainer tempRMContainer,
{code}
The two node updates should be atomic, and I would release the resources first 
and assign second.
{code}
177   allocatedContainers.put(
178   container.getId(),
179   new ContainerInfo(rmContainer, launchedOnNode));
{code}
I think this can be pulled outside the if.
{code}
203   public synchronized void guaranteedContainerResourceAllocated(
{code}
It might be helpful to throw if the unallocated resource is less than the 
request.
{code}
419 getNumOpportunisticContainers()+ " available=" +
{code}
There is a missing space; also, the row above says "containers" where it means 
guaranteed containers.
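
For the first point, the defensive copy might look like this (a sketch against 
the patch's context, assuming {{org.apache.hadoop.yarn.util.resource.Resources}}):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

class SchedulerNodeSketch {
  private Resource totalResource;

  public synchronized void updateTotalResource(Resource resource) {
    // Clone before assigning so later mutations of the caller's object
    // cannot silently corrupt the node's bookkeeping.
    this.totalResource = Resources.clone(resource);
  }
}
{code}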

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch, YARN-4511-YARN-1011.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7339) LocalityMulticastAMRMProxyPolicy should handle cancel request properly

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211461#comment-16211461
 ] 

Hadoop QA commented on YARN-7339:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
11s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7339 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893084/YARN-7339-v5.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18034/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LocalityMulticastAMRMProxyPolicy should handle cancel request properly
> --
>
> Key: YARN-7339
> URL: https://issues.apache.org/jira/browse/YARN-7339
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7339-v1.patch, YARN-7339-v2.patch, 
> YARN-7339-v3.patch, YARN-7339-v4.patch, YARN-7339-v5.patch
>
>
> Currently inside AMRMProxy, LocalityMulticastAMRMProxyPolicy is not handling 
> and splitting cancel requests from the AM properly: 
> # For a node cancel request, we should not treat it as a localized resource 
> request. Otherwise it can lead to an all-zero-weights issue when computing 
> localized resource weights. 
> # For an ANY cancel, we should broadcast to all known subclusters, not just 
> the ones associated with localized resources.
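
Sketched in code, the two rules amount to the following (hypothetical method 
names; a cancel is a ResourceRequest asking for zero containers):

{code}
import org.apache.hadoop.yarn.api.records.ResourceRequest;

class CancelSplitSketch {
  void splitRequest(ResourceRequest rr) {
    boolean isCancel = rr.getNumContainers() == 0;
    boolean isAny = ResourceRequest.ANY.equals(rr.getResourceName());
    if (isCancel && isAny) {
      broadcastToAllKnownSubclusters(rr);  // rule 2: every subcluster
    } else if (isCancel) {
      routeWithoutUpdatingWeights(rr);     // rule 1: skip locality weights
    } else {
      updateLocalityWeightsAndRoute(rr);
    }
  }

  void broadcastToAllKnownSubclusters(ResourceRequest rr) { }
  void routeWithoutUpdatingWeights(ResourceRequest rr) { }
  void updateLocalityWeightsAndRoute(ResourceRequest rr) { }
}
{code}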



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7351) High CPU usage issue in RegistryDNS

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211453#comment-16211453
 ] 

Hadoop QA commented on YARN-7351:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
10s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7351 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892911/YARN-7351.yarn-native-services.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18033/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> High CPU usage issue in RegistryDNS
> ---
>
> Key: YARN-7351
> URL: https://issues.apache.org/jira/browse/YARN-7351
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7351.yarn-native-services.01.patch, 
> YARN-7351.yarn-native-services.02.patch, 
> YARN-7351.yarn-native-services.03.patch
>
>
> Thanks [~aw] for finding this issue.
> The current RegistryDNS implementation is always running on high CPU and 
> pretty much eats one core. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-7366) YarnClientImpl.getRootQueueInfos() should not do a recursive call to rmClient.getQueueInfo()

2017-10-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton moved HADOOP-14968 to YARN-7366:
-

Affects Version/s: (was: 3.0.0-beta1)
   3.0.0-beta1
  Key: YARN-7366  (was: HADOOP-14968)
  Project: Hadoop YARN  (was: Hadoop Common)

> YarnClientImpl.getRootQueueInfos() should not do a recursive call to 
> rmClient.getQueueInfo()
> 
>
> Key: YARN-7366
> URL: https://issues.apache.org/jira/browse/YARN-7366
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Daniel Templeton
>Priority: Minor
>
> {code}
> QueueInfo rootQueue =
> rmClient.getQueueInfo(getQueueInfoRequest(ROOT, false, true, true))
>   .getQueueInfo();
> getChildQueues(rootQueue, queues, false);
> {code}
> The final parameter to {{getQueueInfoRequest()}} should match the final 
> parameter to {{getChildQueues()}}.  They should both be false in this case.
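
The fix would then be the one-argument change below (shown against the quoted 
snippet; {{rmClient}}, {{getQueueInfoRequest}} and {{ROOT}} are the existing 
YarnClientImpl members):

{code}
// Last argument (recursive) now matches getChildQueues(..., false), so the
// RM no longer computes the full recursive queue tree unnecessarily.
QueueInfo rootQueue =
    rmClient.getQueueInfo(getQueueInfoRequest(ROOT, false, true, false))
      .getQueueInfo();
getChildQueues(rootQueue, queues, false);
{code}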



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7339) LocalityMulticastAMRMProxyPolicy should handle cancel request properly

2017-10-19 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7339:
---
Attachment: YARN-7339-v5.patch

> LocalityMulticastAMRMProxyPolicy should handle cancel request properly
> --
>
> Key: YARN-7339
> URL: https://issues.apache.org/jira/browse/YARN-7339
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7339-v1.patch, YARN-7339-v2.patch, 
> YARN-7339-v3.patch, YARN-7339-v4.patch, YARN-7339-v5.patch
>
>
> Currently inside AMRMProxy, LocalityMulticastAMRMProxyPolicy is not handling 
> and splitting cancel requests from AM properly: 
> # For node cancel request, we should not treat it as a localized resource 
> request. Otherwise it can lead to all weight zero issue when computing 
> localized resource weight. 
> # For ANY cancel, we should broadcast to all known subclusters, not just the 
> ones associated with localized resources. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7217) Improve API service usability for updating service spec and state

2017-10-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211440#comment-16211440
 ] 

Jian He commented on YARN-7217:
---

In description: 
bq. The update service API provides multiple functions:
bq. Stopping a service.
bq. Start a service.
bq. Increase or decrease number of containers.
I meant there are only stop and start in current code,  Increase or decrease  
is already removed.  anyway, I can check the code what it does exactly. 

> Improve API service usability for updating service spec and state
> -
>
> Key: YARN-7217
> URL: https://issues.apache.org/jira/browse/YARN-7217
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7217.yarn-native-services.001.patch, 
> YARN-7217.yarn-native-services.002.patch, 
> YARN-7217.yarn-native-services.003.patch, 
> YARN-7217.yarn-native-services.004.patch
>
>
> API service for deploy, and manage YARN services have several limitations.
> The update service API provides multiple functions:
> # Stopping a service.
> # Start a service.
> # Increase or decrease number of containers.
> The overloading is buggy depending on how the configuration should be applied.
> Scenario 1
> A user retrieves Service object from getService call, and the Service object 
> contains state: STARTED.  The user would like to increase number of 
> containers for the deployed service.  The JSON has been updated to increase 
> container count.  The PUT method does not actually increase container count.
> Scenario 2
> A user retrieves Service object from getService call, and the Service object 
> contains state: STOPPED.  The user would like to make a environment 
> configuration change.  The configuration does not get updated after PUT 
> method.
> This is possible to address by rearranging the logic of START/STOP after 
> configuration update.  However, there are other potential combinations that 
> can break PUT method.  For example, user like to make configuration changes, 
> but not yet restart the service until a later time.
> The alternative is to separate the PUT method into PUT method for 
> configuration vs status.  This increase the number of action that can be 
> performed.  New API could look like:
> {code}
> @PUT
> /ws/v1/services/[service_name]/spec
> Request Data:
> {
>   "name":"[service_name]",
>   "number_of_containers": 5
> }
> {code}
> {code}
> @PUT
> /ws/v1/services/[service_name]/state
> Request data:
> {
>   "name": "[service_name]",
>   "state": "STOPPED|STARTED"
> }
> {code}
> Scenario 3
> There is no API to list all deployed applications by the same user.
> Scenario 4
> Desired state (spec) and current state are represented by the same Service 
> object.  There is no easy way to identify "state" is desired state to reach 
> or, the current state of the service.  It would be nice to have ability to 
> retrieve both desired state, and current state with separated entry points.  
> By implementing /spec and /state, it can resolve this problem.
> Scenario 5
> List all services deploy by the same user can trigger a directory listing 
> operation on namenode if hdfs is used as storage for metadata.  When hundred 
> of users use Service UI to view or deploy applications, this will trigger 
> denial of services attack on namenode.  The sparse small metadata files also 
> reduce efficiency of Namenode memory usage.  Hence, a cache layer for storing 
> service metadata would be nice.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7338) Support same origin policy for cross site scripting prevention.

2017-10-19 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211421#comment-16211421
 ] 

Wangda Tan commented on YARN-7338:
--

Thanks [~eyang]!

> Support same origin policy for cross site scripting prevention.
> ---
>
> Key: YARN-7338
> URL: https://issues.apache.org/jira/browse/YARN-7338
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vrushali C
> Attachments: YARN-7338.001.patch
>
>
> Opening jira as suggested b [~eyang] on the thread for merging YARN-3368 (new 
> web UI) to branch2  
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201610.mbox/%3ccad++ecmvvqnzqz9ynkvkcxaczdkg50yiofxktgk3mmms9sh...@mail.gmail.com%3E
> --
> Ui2 does not seem to support same origin policy for cross site scripting 
> prevention.
> The following parameters has no effect for /ui2:
> hadoop.http.cross-origin.enabled = true
> yarn.resourcemanager.webapp.cross-origin.enabled = true
> This is because ui2 is designed as a separate web application.  WebFilters 
> setup for existing resource manager doesn’t apply to the new web application.
> Please open JIRA to track the security issue and resolve the problem prior to 
> backporting this to branch-2.
> This would minimize the risk to open up security hole in branch-2.
> --



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211416#comment-16211416
 ] 

Hadoop QA commented on YARN-7289:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
12s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893077/YARN-7289.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18032/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch, YARN-7289.001.patch, 
> YARN-7289.002.patch, YARN-7289.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-19 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211407#comment-16211407
 ] 

Miklos Szegedi commented on YARN-7289:
--

Thank you, [~rohithsharma] and [~templedf] for the reviews.
Yes, I ran with Fair scheduler hardcoded manually. I updated the patch, so that 
we run as fully parametrized with two schedulers. The patch should address all 
comments.

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch, YARN-7289.001.patch, 
> YARN-7289.002.patch, YARN-7289.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-19 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7289:
-
Attachment: YARN-7289.003.patch

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch, YARN-7289.001.patch, 
> YARN-7289.002.patch, YARN-7289.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7102) NM heartbeat stuck when responseId overflows MAX_INT

2017-10-19 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7102:
---
Attachment: YARN-7102-branch-2.8.v10.patch

> NM heartbeat stuck when responseId overflows MAX_INT
> 
>
> Key: YARN-7102
> URL: https://issues.apache.org/jira/browse/YARN-7102
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Critical
> Attachments: YARN-7102-branch-2.8.v10.patch, 
> YARN-7102-branch-2.8.v9.patch, YARN-7102-branch-2.v9.patch, 
> YARN-7102.v1.patch, YARN-7102.v2.patch, YARN-7102.v3.patch, 
> YARN-7102.v4.patch, YARN-7102.v5.patch, YARN-7102.v6.patch, 
> YARN-7102.v7.patch, YARN-7102.v8.patch, YARN-7102.v9.patch
>
>
> ResponseId overflow problem in NM-RM heartbeat. This is same as AM-RM 
> heartbeat in YARN-6640, please refer to YARN-6640 for details. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7361) Improve the docker container runtime documentation

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211383#comment-16211383
 ] 

Hadoop QA commented on YARN-7361:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
59s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893074/YARN-7361.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18030/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve the docker container runtime documentation
> --
>
> Key: YARN-7361
> URL: https://issues.apache.org/jira/browse/YARN-7361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-7361.001.patch
>
>
> During review of YARN-7230, it was found that 
> yarn.nodemanager.runtime.linux.docker.capabilities is missing from the docker 
> containers documentation in most of the active branches. We can also improve 
> the warning that was introduced in YARN-6622.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7361) Improve the docker container runtime documentation

2017-10-19 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7361:
--
Attachment: (was: YARN-7361.001.patch)

> Improve the docker container runtime documentation
> --
>
> Key: YARN-7361
> URL: https://issues.apache.org/jira/browse/YARN-7361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-7361.001.patch
>
>
> During review of YARN-7230, it was found that 
> yarn.nodemanager.runtime.linux.docker.capabilities is missing from the docker 
> containers documentation in most of the active branches. We can also improve 
> the warning that was introduced in YARN-6622.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7361) Improve the docker container runtime documentation

2017-10-19 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7361:
--
Attachment: YARN-7361.001.patch

> Improve the docker container runtime documentation
> --
>
> Key: YARN-7361
> URL: https://issues.apache.org/jira/browse/YARN-7361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-7361.001.patch
>
>
> During review of YARN-7230, it was found that 
> yarn.nodemanager.runtime.linux.docker.capabilities is missing from the docker 
> containers documentation in most of the active branches. We can also improve 
> the warning that was introduced in YARN-6622.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7338) Support same origin policy for cross site scripting prevention.

2017-10-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211362#comment-16211362
 ] 

Eric Yang commented on YARN-7338:
-

[~wangda] HADOOP-14967 is opened for standard jetty CORS solution.  Discussion 
thread updated.

> Support same origin policy for cross site scripting prevention.
> ---
>
> Key: YARN-7338
> URL: https://issues.apache.org/jira/browse/YARN-7338
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vrushali C
> Attachments: YARN-7338.001.patch
>
>
> Opening jira as suggested b [~eyang] on the thread for merging YARN-3368 (new 
> web UI) to branch2  
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201610.mbox/%3ccad++ecmvvqnzqz9ynkvkcxaczdkg50yiofxktgk3mmms9sh...@mail.gmail.com%3E
> --
> Ui2 does not seem to support same origin policy for cross site scripting 
> prevention.
> The following parameters has no effect for /ui2:
> hadoop.http.cross-origin.enabled = true
> yarn.resourcemanager.webapp.cross-origin.enabled = true
> This is because ui2 is designed as a separate web application.  WebFilters 
> setup for existing resource manager doesn’t apply to the new web application.
> Please open JIRA to track the security issue and resolve the problem prior to 
> backporting this to branch-2.
> This would minimize the risk to open up security hole in branch-2.
> --



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-10-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211339#comment-16211339
 ] 

Haibo Chen commented on YARN-7346:
--

Thanks [~varun_saxena] for the info!

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7326) Some issues in RegistryDNS

2017-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16211337#comment-16211337
 ] 

Hadoop QA commented on YARN-7326:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
14s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893070/YARN-7326.yarn-native-services.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18029/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Some issues in RegistryDNS
> --
>
> Key: YARN-7326
> URL: https://issues.apache.org/jira/browse/YARN-7326
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7326.yarn-native-services.001.patch, 
> YARN-7326.yarn-native-services.002.patch
>
>
> [~aw] helped to identify these issues: 
> Now some general bad news, not related to this patch:
> Ran a few queries, but this one is a bit concerning:
> {code}
> root@ubuntu:/hadoop/logs# dig @localhost -p 54 .
> ;; Warning: query response not set
> ; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost -p 54 .
> ; (2 servers found)
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOTAUTH, id: 47794
> ;; flags: rd ad; QUERY: 0, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
> ;; WARNING: recursion requested but not available
> ;; Query time: 0 msec
> ;; SERVER: 127.0.0.1#54(127.0.0.1)
> ;; WHEN: Thu Oct 12 16:04:54 PDT 2017
> ;; MSG SIZE  rcvd: 12
> root@ubuntu:/hadoop/logs# dig @localhost -p 54 axfr .
> ;; Connection to ::1#54(::1) for . failed: connection refused.
> ;; communications error to 127.0.0.1#54: end of file
> root@ubuntu:/hadoop/logs# 
> {code}
> It looks like it effectively fails when asked about a root zone, which is bad.
> It's also kind of interesting in what it does and doesn't log. Probably 
> should be configured to rotate logs based on size not date.
> The real showstopper though: RegistryDNS basically eats a core. It is running 
> with 100% cpu utilization with and without jsvc. On my laptop, this is 
> triggering my fan.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



  1   2   >