[jira] [Updated] (YARN-8974) Improve the assertion message in TestGPUResourceHandler

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8974:
---
Attachment: YARN-8974-trunk.001.patch

> Improve the assertion message in TestGPUResourceHandler
> ---
>
> Key: YARN-8974
> URL: https://issues.apache.org/jira/browse/YARN-8974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Trivial
> Attachments: YARN-8974-trunk.001.patch
>
>
> In the test case "testRecoverResourceAllocation": when recovering scheduler 
> state and a device id is already assigned to another container, the original 
> assertion message is 
> {code:java}
> "Should fail since requested device Id is not in allowed list"{code}
> but it should be
> {code:java}
> "Should fail since requested device Id is already assigned"{code}






[jira] [Assigned] (YARN-8974) Improve the assertion message in TestGPUResourceHandler

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang reassigned YARN-8974:
--

Assignee: Zhankun Tang

> Improve the assertion message in TestGPUResourceHandler
> ---
>
> Key: YARN-8974
> URL: https://issues.apache.org/jira/browse/YARN-8974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Trivial
>
> In the test case "testRecoverResourceAllocation": when recovering scheduler 
> state and a device id is already assigned to another container, the original 
> assertion message is 
> {code:java}
> "Should fail since requested device Id is not in allowed list"{code}
> but it should be
> {code:java}
> "Should fail since requested device Id is already assigned"{code}






[jira] [Created] (YARN-8974) Improve the assertion message in TestGPUResourceHandler

2018-11-05 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-8974:
--

 Summary: Improve the assertion message in TestGPUResourceHandler
 Key: YARN-8974
 URL: https://issues.apache.org/jira/browse/YARN-8974
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Zhankun Tang


In the test case "testRecoverResourceAllocation": when recovering scheduler
state and a device id is already assigned to another container, the original
assertion message is
{code:java}
"Should fail since requested device Id is not in allowed list"{code}
but it should be
{code:java}
"Should fail since requested device Id is already assigned"{code}






[jira] [Commented] (YARN-8877) Extend service spec to allow setting resource attributes

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676284#comment-16676284
 ] 

Weiwei Yang commented on YARN-8877:
---

Thanks [~leftnoteasy] for the review, I updated the doc a bit. Since we are 
probably not going to expose this as a first-class user interface (see more in 
YARN-8940), I think it is sufficient for now. Thanks.

> Extend service spec to allow setting resource attributes
> 
>
> Key: YARN-8877
> URL: https://issues.apache.org/jira/browse/YARN-8877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8877.001.patch, YARN-8877.002.patch
>
>
> Extend yarn native service spec to support setting resource attributes in the 
> spec file.






[jira] [Updated] (YARN-8877) Extend service spec to allow setting resource attributes

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8877:
--
Attachment: YARN-8877.002.patch

> Extend service spec to allow setting resource attributes
> 
>
> Key: YARN-8877
> URL: https://issues.apache.org/jira/browse/YARN-8877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8877.001.patch, YARN-8877.002.patch
>
>
> Extend yarn native service spec to support setting resource attributes in the 
> spec file.






[jira] [Commented] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676279#comment-16676279
 ] 

Zhankun Tang commented on YARN-8970:


Thanks! [~cheersyang]

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}
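The fix, reduced to a sketch (illustrative only; the real change lives in 
CapacityScheduler#allocateContainerOnSingleNode and may be shaped differently):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch of the revised DEBUG message, guarded the way Hadoop code usually
// guards debug logging.
public class DebugMessageSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugMessageSketch.class);

  public static void main(String[] args) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("This node or node partition doesn't have available or"
          + " preemptible resource");
    }
  }
}
{code}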






[jira] [Updated] (YARN-8957) Add Serializable interface to ComponentContainers

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8957:
---
Attachment: (was: YARN-8957-trunk.001.patch)

> Add Serializable interface to ComponentContainers
> -
>
> Key: YARN-8957
> URL: https://issues.apache.org/jira/browse/YARN-8957
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Attachments: YARN-8957-trunk.001.patch
>
>
> In the YARN service API:
> public class ComponentContainers
> { private static final long serialVersionUID = -1456748479118874991L; ... }
> it seems this should be
> public class ComponentContainers implements Serializable
> { private static final long serialVersionUID = -1456748479118874991L; ... }
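A minimal, self-contained demonstration of why the interface matters (a 
stand-in class, not the YARN source): serialVersionUID is only honored once the 
class implements Serializable; without the interface, serialization fails 
outright.
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for ComponentContainers; with "implements Serializable" removed,
// writeObject below throws java.io.NotSerializableException.
class ComponentContainersSketch implements Serializable {
  private static final long serialVersionUID = -1456748479118874991L;
}

public class SerializableDemo {
  public static void main(String[] args) throws Exception {
    try (ObjectOutputStream oos =
        new ObjectOutputStream(new ByteArrayOutputStream())) {
      oos.writeObject(new ComponentContainersSketch());
    }
    System.out.println("serialized OK");
  }
}
{code}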






[jira] [Updated] (YARN-8957) Add Serializable interface to ComponentContainers

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8957:
---
Attachment: YARN-8957-trunk.001.patch

> Add Serializable interface to ComponentContainers
> -
>
> Key: YARN-8957
> URL: https://issues.apache.org/jira/browse/YARN-8957
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Minor
> Attachments: YARN-8957-trunk.001.patch, YARN-8957-trunk.001.patch
>
>
> In the YARN service API:
> public class ComponentContainers
> { private static final long serialVersionUID = -1456748479118874991L; ... }
> it seems this should be
> public class ComponentContainers implements Serializable
> { private static final long serialVersionUID = -1456748479118874991L; ... }






[jira] [Commented] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-05 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676270#comment-16676270
 ] 

Zhankun Tang commented on YARN-8714:


Attached a WIP patch. Please review, [~sunilg], [~leftnoteasy].

The usage of the localization is like this:

 
{code:java}
yarn jar ... job run \
  --localization hdfs:///user/yarn/script1.py->algorithm1.py 
/opt/script2.py->script2.py \
  --...
{code}
 

The "File->LocalFileName" pairs indicates Submarine to localize the "File" into 
worker's working directory.

The "File" can be a local one or HDFS. And archieve type including "jar, tgz" 
.etc. is also supported.
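A rough sketch of how such a pair could be split (my illustration, not the 
Submarine implementation):
{code:java}
// Splits a --localization argument of the form "remoteUri->localName".
// Uses lastIndexOf so a "->" inside the URI path does not confuse the split.
public class LocalizationArgSketch {
  public static void main(String[] args) {
    String arg = "hdfs:///user/yarn/script1.py->algorithm1.py";
    int sep = arg.lastIndexOf("->");
    String file = arg.substring(0, sep);        // local path or HDFS URI
    String localName = arg.substring(sep + 2);  // name in the working dir
    System.out.println(file + " => " + localName);
  }
}
{code}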

 

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Updated] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8714:
---
Attachment: YARN-8714-WIP1-trunk-001.patch

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Updated] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8714:
---
Attachment: (was: YARN-8714-WIP1-trunk-001.patch)

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Commented] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676262#comment-16676262
 ] 

Hudson commented on YARN-8970:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15366 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15366/])
YARN-8970. Improve the debug message in (wwei: rev 
5d6554c722f08f79bce904e021243605ee75bae3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java


> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8714:
---
Attachment: YARN-8714-WIP1-trunk-001.patch

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch
>
>
> See 
> https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7,
>  {{job run --localizations ...}}






[jira] [Commented] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676254#comment-16676254
 ] 

Weiwei Yang commented on YARN-8970:
---

Thanks [~tangzhankun] for the contribution, I've pushed this to all 3.x streams.

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8970:
--
Fix Version/s: 3.0.4

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8970:
--
Fix Version/s: 3.1.2

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8970:
--
Fix Version/s: 3.2.1

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Commented] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676235#comment-16676235
 ] 

Hadoop QA commented on YARN-8970:
-

| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 34s | trunk passed |
| +1 | compile | 0m 48s | trunk passed |
| +1 | checkstyle | 0m 38s | trunk passed |
| +1 | mvnsite | 0m 48s | trunk passed |
| +1 | shadedclient | 13m 30s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 13s | trunk passed |
| +1 | javadoc | 0m 28s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 0m 39s | the patch passed |
| +1 | javac | 0m 39s | the patch passed |
| +1 | checkstyle | 0m 34s | the patch passed |
| +1 | mvnsite | 0m 43s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 22s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 17s | the patch passed |
| +1 | javadoc | 0m 26s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 108m 3s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 165m 15s | |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8970 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12947005/YARN-8970-trunk.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f6abdd0ece62 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f329650 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/22423/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/22423/testReport/ |
| Max. process+thread count | 957 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U:

[jira] [Commented] (YARN-8858) CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used.

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676233#comment-16676233
 ] 

Weiwei Yang commented on YARN-8858:
---

Thanks [~ajisakaa] for the help!

> CapacityScheduler should respect maximum node resource when per-queue 
> maximum-allocation is being used.
> ---
>
> Key: YARN-8858
> URL: https://issues.apache.org/jira/browse/YARN-8858
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2, 3.3.0, 2.8.6
>
> Attachments: YARN-8858-branch-2.8.001.patch, 
> YARN-8858-branch-2.8.002.patch, YARN-8858.001.patch, YARN-8858.002.patch
>
>
> This issue happens after YARN-8720.
> Before that, AMS used scheduler.getMaximumAllocation to do the normalization; 
> after that, AMS uses LeafQueue.getMaximumAllocation. The scheduler version 
> uses nodeTracker.getMaximumAllocation, but the LeafQueue one doesn't. 
> We should use scheduler.getMaximumAllocation to cap each queue's 
> maximum-allocation every time.
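The idea of the fix, reduced to a memory-only sketch (the real patch works on 
Resource objects and CapacityScheduler internals):
{code:java}
// Cap a queue's configured maximum-allocation by the scheduler-wide maximum,
// which follows the largest node the nodeTracker has registered.
public class MaxAllocationCapSketch {
  static long capQueueMax(long queueMaxMb, long schedulerMaxMb) {
    return Math.min(queueMaxMb, schedulerMaxMb);
  }

  public static void main(String[] args) {
    long queueMax = 32768;     // per-queue maximum-allocation-mb
    long schedulerMax = 16384; // largest registered node's memory
    System.out.println(capQueueMax(queueMax, schedulerMax)); // prints 16384
  }
}
{code}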






[jira] [Commented] (YARN-8858) CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used.

2018-11-05 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676232#comment-16676232
 ] 

Wangda Tan commented on YARN-8858:
--

Thanks [~cheersyang] / [~ajisakaa] for rebasing and committing the patch!

> CapacityScheduler should respect maximum node resource when per-queue 
> maximum-allocation is being used.
> ---
>
> Key: YARN-8858
> URL: https://issues.apache.org/jira/browse/YARN-8858
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2, 3.3.0, 2.8.6
>
> Attachments: YARN-8858-branch-2.8.001.patch, 
> YARN-8858-branch-2.8.002.patch, YARN-8858.001.patch, YARN-8858.002.patch
>
>
> This issue happens after YARN-8720.
> Before that, AMS used scheduler.getMaximumAllocation to do the normalization; 
> after that, AMS uses LeafQueue.getMaximumAllocation. The scheduler version 
> uses nodeTracker.getMaximumAllocation, but the LeafQueue one doesn't. 
> We should use scheduler.getMaximumAllocation to cap each queue's 
> maximum-allocation every time.






[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676227#comment-16676227
 ] 

Sunil Govindan commented on YARN-8108:
--

Updated the fixed versions. [~eyang], could you please confirm whether this is correct?

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Assignee: Sunil Govindan
>Priority: Blocker
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: YARN-8108.001.patch, YARN-8108.002.patch
>
>
> The test is trying to pull metrics data from SHS after kinit'ing as 
> 'test_user'. It throws a GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxy server on the RM can't be supported for a 
> Kerberos-enabled cluster because AuthenticationFilter is applied twice in the 
> Hadoop code (once in HttpServer2 for the RM, and another instance from 
> AmFilterInitializer for the proxy server). This will require code changes to 
> the hadoop-yarn-server-web-proxy project.
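A toy model of the failure mode (purely illustrative; Hadoop's 
AuthenticationFilter and the GSS layer are far more involved): a SPNEGO token 
is single-use, so when one request passes through two authentication filters, 
the second validation sees the same token again and reports a replay.
{code:java}
import java.util.HashSet;
import java.util.Set;

public class DoubleFilterSketch {
  // Toy replay cache standing in for GSS-level replay detection.
  private final Set<String> seenTokens = new HashSet<>();

  void authenticate(String negotiateToken) {
    if (!seenTokens.add(negotiateToken)) {
      throw new SecurityException(
          "GSSException: ... Request is a replay (34)"); // as in the test log
    }
  }

  public static void main(String[] args) {
    DoubleFilterSketch gss = new DoubleFilterSketch();
    String token = "YII..."; // one SPNEGO token sent by the client
    gss.authenticate(token); // first filter instance: accepted
    try {
      gss.authenticate(token); // second filter instance sees the same token
    } catch (SecurityException e) {
      System.out.println(e.getMessage()); // replay error, as in the test
    }
  }
}
{code}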






[jira] [Updated] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8108:
-
Fix Version/s: 3.3.0
   3.1.2
   3.2.0

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Assignee: Sunil Govindan
>Priority: Blocker
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: YARN-8108.001.patch, YARN-8108.002.patch
>
>
> The test is trying to pull metrics data from SHS after kinit'ing as 
> 'test_user'. It throws a GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxy server on the RM can't be supported for a 
> Kerberos-enabled cluster because AuthenticationFilter is applied twice in the 
> Hadoop code (once in HttpServer2 for the RM, and another instance from 
> AmFilterInitializer for the proxy server). This will require code changes to 
> the hadoop-yarn-server-web-proxy project.






[jira] [Commented] (YARN-8858) CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used.

2018-11-05 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676212#comment-16676212
 ] 

Akira Ajisaka commented on YARN-8858:
-

The failed tests are not related to the patch. Committing this.

> CapacityScheduler should respect maximum node resource when per-queue 
> maximum-allocation is being used.
> ---
>
> Key: YARN-8858
> URL: https://issues.apache.org/jira/browse/YARN-8858
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: YARN-8858-branch-2.8.001.patch, 
> YARN-8858-branch-2.8.002.patch, YARN-8858.001.patch, YARN-8858.002.patch
>
>
> This issue happens after YARN-8720.
> Before that, AMS used scheduler.getMaximumAllocation to do the normalization; 
> after that, AMS uses LeafQueue.getMaximumAllocation. The scheduler version 
> uses nodeTracker.getMaximumAllocation, but the LeafQueue one doesn't. 
> We should use scheduler.getMaximumAllocation to cap each queue's 
> maximum-allocation every time.






[jira] [Commented] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676195#comment-16676195
 ] 

Weiwei Yang commented on YARN-8969:
---

Pushed to trunk, branch-3.2, branch-3.1, branch-3.0 and branch-2.9. Thanks 
[~jiwq] for the contribution, thanks [~yufeigu], [~eepayne] for the review.

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3
>
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}
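A compact sketch of the warning and the fix (hypothetical classes; the real 
ones are AbstractYarnScheduler and ClusterNodeTracker): a raw-typed getter 
erases the element type of every list derived from the tracker, while a generic 
return type keeps call sites warning-free.
{code:java}
import java.util.ArrayList;
import java.util.List;

class TrackerSketch<N> {
  List<N> getNodesByResourceName(String name) {
    return new ArrayList<>(); // empty list for the demo
  }
}

class SchedulerSketch<N> {
  private final TrackerSketch<N> nodeTracker = new TrackerSketch<>();

  // Before the patch the getter returned the raw TrackerSketch, forcing an
  // unchecked assignment at every call site; the generic return avoids it.
  TrackerSketch<N> getNodeTracker() {
    return nodeTracker;
  }
}

public class GenericReturnDemo {
  public static void main(String[] args) {
    SchedulerSketch<String> scheduler = new SchedulerSketch<>();
    List<String> nodes =
        scheduler.getNodeTracker().getNodesByResourceName("*"); // no warning
    System.out.println(nodes.size());
  }
}
{code}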






[jira] [Updated] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8969:
--
Fix Version/s: 2.9.3

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3
>
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Updated] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8969:
--
Fix Version/s: 3.0.4

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Updated] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8969:
--
Fix Version/s: 3.1.2

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Commented] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676162#comment-16676162
 ] 

Hudson commented on YARN-8969:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15364 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15364/])
YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic (wwei: 
rev c7fcca0d7ec9e31d43ef3040ecd576ec808f1f8b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java


> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Updated] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8969:
--
Fix Version/s: 3.2.1

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Commented] (YARN-8969) Change the return type to generic type of AbstractYarnScheduler#getNodeTracker

2018-11-05 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676148#comment-16676148
 ] 

Wanqiang Ji commented on YARN-8969:
---

Thanks [~eepayne] [~yufeigu] for watching this and for the work.

Besides, AbstractYarnScheduler is a @Private @Unstable class and the instance 
named nodeTracker is of type ClusterNodeTracker, so I think it will work fine 
with this patch.
{quote}protected final ClusterNodeTracker nodeTracker =
 new ClusterNodeTracker<>();{quote}
Thanks [~ajisakaa] for the work.

> Change the return type to generic type of AbstractYarnScheduler#getNodeTracker
> --
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Updated] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8969:
--
Summary: AbstractYarnScheduler#getNodeTracker should return generic type to 
avoid type casting  (was: Change the return type to generic type of 
AbstractYarnScheduler#getNodeTracker)

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Comment Edited] (YARN-8969) AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting

2018-11-05 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676148#comment-16676148
 ] 

Wanqiang Ji edited comment on YARN-8969 at 11/6/18 5:14 AM:


Thanks [~eepayne] [~yufeigu] for watching this and for the work.

Besides, AbstractYarnScheduler is a @Private @Unstable class and the instance 
named nodeTracker is of type ClusterNodeTracker, so I think it will work fine 
with this patch.
{quote}protected final ClusterNodeTracker nodeTracker =
 new ClusterNodeTracker<>();
{quote}
Thanks [~ajisakaa] [~cheersyang] for the work.


was (Author: jiwq):
Thanks [~eepayne] [~yufeigu] for watching this and for the work.

Besides, AbstractYarnScheduler is a @Private @Unstable class and the instance 
named nodeTracker is of type ClusterNodeTracker, so I think it will work fine 
with this patch.
{quote}protected final ClusterNodeTracker nodeTracker =
 new ClusterNodeTracker<>();{quote}
Thanks [~ajisakaa] for the work.

> AbstractYarnScheduler#getNodeTracker should return generic type to avoid type 
> casting
> -
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Commented] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676143#comment-16676143
 ] 

Bibin A Chundatt commented on YARN-8972:


Thank you [~giovanni.fumarola] for the patch.

I was under the impression the ZK-level issue was fixed by YARN-5006. 
Interceptor-level control guards against all store limitations.

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch, YARN-8972.v2.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with an oversized ASC.
> This avoids causing the YARN cluster to fail over.
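A minimal sketch of the interceptor's core check (my guess at the shape, with 
an assumed limit; not the actual YARN-8972 patch):
{code:java}
public class AscSizeCheckSketch {
  // Assumed limit for illustration; the real limit would be configurable.
  static final int MAX_ASC_BYTES = 1024 * 1024;

  static void checkSize(byte[] serializedAsc) {
    if (serializedAsc.length > MAX_ASC_BYTES) {
      throw new IllegalArgumentException("ApplicationSubmissionContext is "
          + serializedAsc.length + " bytes, over the " + MAX_ASC_BYTES
          + "-byte limit; rejecting submission");
    }
  }

  public static void main(String[] args) {
    checkSize(new byte[512]); // small context: accepted
    System.out.println("ASC size ok");
  }
}
{code}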






[jira] [Commented] (YARN-8969) Change the return type to generic type of AbstractYarnScheduler#getNodeTracker

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676137#comment-16676137
 ] 

Weiwei Yang commented on YARN-8969:
---

Thanks [~eepayne], [~yufeigu]. I also think this is fine and better to fix. 
Committing now.

> Change the return type to generic type of AbstractYarnScheduler#getNodeTracker
> --
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Updated] (YARN-8969) Change the return type to generic type of AbstractYarnScheduler#getNodeTracker

2018-11-05 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-8969:

Target Version/s: 3.2.0, 3.2.1, 2.9.3  (was: 3.2.0, 2.9.2, 3.2.1, 2.9.3)

Moved non-blocker/critical issues from 2.9.2 to 2.9.3.

> Change the return type to generic type of AbstractYarnScheduler#getNodeTracker
> --
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Warnings like the following are reported:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}






[jira] [Commented] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676125#comment-16676125
 ] 

Weiwei Yang commented on YARN-8970:
---

LGTM, pending on jenkins.

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate container due to insufficient resource, 
> following DEBUG message is printed,
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> this message should be revised to
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8191:
-
Release Note: To support the queue deletion without RM restart feature, the 
AllocationFileLoaderService constructor signature was changed. YARN-8390 
corrected this and made it a compatible change.

> Fair scheduler: queue deletion without RM restart
> -
>
> Key: YARN-8191
> URL: https://issues.apache.org/jira/browse/YARN-8191
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.1
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: Queue Deletion in Fair Scheduler.pdf, 
> YARN-8191.000.patch, YARN-8191.001.patch, YARN-8191.002.patch, 
> YARN-8191.003.patch, YARN-8191.004.patch, YARN-8191.005.patch, 
> YARN-8191.006.patch, YARN-8191.007.patch, YARN-8191.008.patch, 
> YARN-8191.009.patch, YARN-8191.010.patch, YARN-8191.011.patch, 
> YARN-8191.012.patch, YARN-8191.013.patch, YARN-8191.014.patch, 
> YARN-8191.015.patch, YARN-8191.016.patch, YARN-8191.017.patch
>
>
> The Fair Scheduler never cleans up queues even if they are deleted in the 
> allocation file, or were dynamically created and are never going to be used 
> again. Queues always remain in memory, which leads to the two following issues.
>  # Steady fairshares aren’t calculated correctly due to remaining queues
>  # WebUI shows deleted queues, which is confusing for users (YARN-4022).
> We want to support proper queue deletion without restarting the Resource 
> Manager:
>  # Static queues without any entries that are removed from fair-scheduler.xml 
> should be deleted from memory.
>  # Dynamic queues without any entries should be deleted.
>  # RM Web UI should only show the queues defined in the scheduler at that 
> point in time.
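Reduced to a sketch, the cleanup rule could read like this (illustrative logic 
only, not the FairScheduler patch):
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class QueueCleanupSketch {
  public static void main(String[] args) {
    // queue name -> number of remaining applications (toy in-memory state)
    Map<String, Integer> queueApps = new HashMap<>();
    queueApps.put("root.etl", 3);
    queueApps.put("root.adhoc", 0); // dropped from fair-scheduler.xml, empty

    Set<String> configured = new HashSet<>();
    configured.add("root.etl"); // queues still present in the allocation file

    // Delete queues that are no longer configured and hold no applications.
    queueApps.entrySet().removeIf(
        e -> !configured.contains(e.getKey()) && e.getValue() == 0);

    System.out.println(queueApps.keySet()); // prints [root.etl]
  }
}
{code}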






[jira] [Commented] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676100#comment-16676100
 ] 

Zhankun Tang commented on YARN-8970:


[~cheersyang], attached a patch. Please review.

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate container due to insufficient resource, 
> following DEBUG message is printed,
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> this message should be revised to
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8970) Improve the debug message in CS#allocateContainerOnSingleNode

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8970:
---
Attachment: YARN-8970-trunk.001.patch

> Improve the debug message in CS#allocateContainerOnSingleNode
> -
>
> Key: YARN-8970
> URL: https://issues.apache.org/jira/browse/YARN-8970
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Zhankun Tang
>Priority: Trivial
> Attachments: YARN-8970-trunk.001.patch
>
>
> When a node is unable to allocate a container due to insufficient resources, 
> the following DEBUG message is printed:
> {noformat}
> 2018-11-06 00:05:03,657 DEBUG [AsyncDispatcher event handler] 
> capacity.CapacityScheduler 
> (CapacityScheduler.java:allocateContainerOnSingleNode(1618)) - This node or 
> this node partition doesn't have available orkillable resource
> {noformat}
> This message should be revised to:
> {noformat}
> This node or node partition doesn't have available or preemptible resource
> {noformat}






[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Description: Since quite a few devices use a FIFO policy to assign devices to 
containers, we use a shared device manager to handle all types of 
devices.  (was: Since quite a few devices use a FIFO policy to assign devices to 
containers, we use a shared scheduler to handle all types of devices.)

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> Since quite a few devices use a FIFO policy to assign devices to containers, 
> we use a shared device manager to handle all types of devices.
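
A minimal sketch of the FIFO idea with hypothetical names (the real 
DeviceMappingManager in the patches may differ):
{code:java}
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Assign "count" devices to a container in FIFO order: walk the allowed
// set in insertion order and take the first devices not yet in use.
final class FifoDeviceAssigner {
  static Set<String> assign(Set<String> allowedDevices,
      Map<String, String> usedDeviceToContainer,
      int count, String containerId) {
    Set<String> assigned = new LinkedHashSet<>();
    for (String device : allowedDevices) {
      if (assigned.size() == count) {
        break;  // got enough devices for this request
      }
      if (!usedDeviceToContainer.containsKey(device)) {
        usedDeviceToContainer.put(device, containerId);
        assigned.add(device);
      }
    }
    return assigned;
  }
}
{code}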



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8882) Add a shared device mapping manager for device plugin to use

2018-11-05 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8882:
---
Summary: Add a shared device mapping manager for device plugin to use  
(was: Add a shared device local scheduler for device plugin to use)

> Add a shared device mapping manager for device plugin to use
> 
>
> Key: YARN-8882
> URL: https://issues.apache.org/jira/browse/YARN-8882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> Since quite a few devices use a FIFO policy to assign devices to containers, 
> we use a shared scheduler to handle all types of devices.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8851) [Umbrella] A new pluggable device plugin framework to ease vendor plugin development

2018-11-05 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676074#comment-16676074
 ] 

Zhankun Tang edited comment on YARN-8851 at 11/6/18 3:30 AM:
-

{quote}1) Regarding the 
NM_PLUGGABLE_DEVICE_FRAMEWORK_PREFER_CUSTOMIZED_SCHEDULER, should we just use 
the default scheduler if a device plugin doesn't provide its own customized 
scheduler? We should assume that a loaded device plugin runs "trusted" code, 
so we may not need to add extra protection here.
{quote}
Zhankun–> Agree.
{quote}2) DeviceSchedulerManager, it sounds like it "manages schedulers"; 
however, it handles how to map devices to containers, and the scheduler is 
just an implementation detail. How about calling it DeviceMappingManager?
{quote}
-

 
{quote}internalAssignDevices should be private, and it is a bit long; it might 
be better for future maintenance if you can break it down into multiple 
methods.
{quote}
Zhankun -> Good idea. Will do that.
{quote}I think we could move to break this POC into sub-tasks and get them 
done piece by piece. It would be helpful if you can highlight the required 
subtasks.
{quote}
Zhankun-> YARN-8880, YARN-8881, YARN-8882, YARN-8883 and YARN-8885 are our 
highlighted Phase 1 subtasks. They'll be marked "In Progress" to highlight 
them.

Thanks for the review! [~leftnoteasy]


was (Author: tangzhankun):
{quote}1) Regarding the 
NM_PLUGGABLE_DEVICE_FRAMEWORK_PREFER_CUSTOMIZED_SCHEDULER, should we just use 
the default scheduler if a device plugin doesn't provide its own customized 
scheduler? We should assume that a loaded device plugin runs "trusted" code, 
so we may not need to add extra protection here.
{quote}
Zhankun–> Agree.
{quote}2) DeviceSchedulerManager, it sounds like it "manages schedulers"; 
however, it handles how to map devices to containers, and the scheduler is 
just an implementation detail. How about calling it DeviceMappingManager?
{quote}
-

 
{quote}internalAssignDevices should be private, and it is a bit long; it might 
be better for future maintenance if you can break it down into multiple 
methods.
{quote}
Zhankun -> Good idea. Will do that.
{quote}I think we could move to break this POC into sub-tasks and get them 
done piece by piece. It would be helpful if you can highlight the required 
subtasks.
{quote}
Zhankun-> YARN-8880, YARN-8881, YARN-8882, YARN-8883 and YARN-8885 are our 
highlighted Phase 1 subtasks.

Thanks for the review! [~leftnoteasy]

> [Umbrella] A new pluggable device plugin framework to ease vendor plugin 
> development
> 
>
> Key: YARN-8851
> URL: https://issues.apache.org/jira/browse/YARN-8851
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8851-WIP2-trunk.001.patch, 
> YARN-8851-WIP3-trunk.001.patch, YARN-8851-WIP4-trunk.001.patch, 
> YARN-8851-WIP5-trunk.001.patch, YARN-8851-WIP6-trunk.001.patch, 
> YARN-8851-WIP7-trunk.001.patch, YARN-8851-WIP8-trunk.001.patch, 
> YARN-8851-WIP9-trunk.001.patch, YARN-8851-trunk.001.patch, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-3.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-4.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal.pdf
>
>
> At present, we support GPU/FPGA devices in YARN in a native, tightly coupled 
> way. But it's difficult for a vendor to implement such a device plugin 
> because the developer needs deep knowledge of YARN internals. And this puts a 
> burden on the community to maintain both YARN core and vendor-specific code.
> Here we propose a new device plugin framework to ease vendor device plugin 
> development and provide a more flexible way to integrate with YARN NM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8867) Retrieve the status of resource localization

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676079#comment-16676079
 ] 

Hadoop QA commented on YARN-8867:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} root: The patch generated 0 new + 480 unchanged - 1 
fixed = 480 total (was 481) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
14s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
25s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (YARN-8851) [Umbrella] A new pluggable device plugin framework to ease vendor plugin development

2018-11-05 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676074#comment-16676074
 ] 

Zhankun Tang commented on YARN-8851:


{quote}1) Regarding the 
NM_PLUGGABLE_DEVICE_FRAMEWORK_PREFER_CUSTOMIZED_SCHEDULER, should we just use 
the default scheduler if a device plugin doesn't provide its own customized 
scheduler? We should assume that a loaded device plugin runs "trusted" code, 
so we may not need to add extra protection here.
{quote}
Zhankun–> Agree.
{quote}2) DeviceSchedulerManager, it sounds like it "manages schedulers"; 
however, it handles how to map devices to containers, and the scheduler is 
just an implementation detail. How about calling it DeviceMappingManager?
{quote}
-

 
{quote}internalAssignDevices should be private, and it is a bit long; it might 
be better for future maintenance if you can break it down into multiple 
methods.
{quote}
Zhankun -> Good idea. Will do that.
{quote}I think we could move to break this POC into sub-tasks and get them 
done piece by piece. It would be helpful if you can highlight the required 
subtasks.
{quote}
Zhankun-> YARN-8880, YARN-8881, YARN-8882, YARN-8883 and YARN-8885 are our 
highlighted Phase 1 subtasks.

Thanks for the review! [~leftnoteasy]

> [Umbrella] A new pluggable device plugin framework to ease vendor plugin 
> development
> 
>
> Key: YARN-8851
> URL: https://issues.apache.org/jira/browse/YARN-8851
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8851-WIP2-trunk.001.patch, 
> YARN-8851-WIP3-trunk.001.patch, YARN-8851-WIP4-trunk.001.patch, 
> YARN-8851-WIP5-trunk.001.patch, YARN-8851-WIP6-trunk.001.patch, 
> YARN-8851-WIP7-trunk.001.patch, YARN-8851-WIP8-trunk.001.patch, 
> YARN-8851-WIP9-trunk.001.patch, YARN-8851-trunk.001.patch, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-3.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-4.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal.pdf
>
>
> At present, we support GPU/FPGA devices in YARN in a native, tightly coupled 
> way. But it's difficult for a vendor to implement such a device plugin 
> because the developer needs deep knowledge of YARN internals. And this puts a 
> burden on the community to maintain both YARN core and vendor-specific code.
> Here we propose a new device plugin framework to ease vendor device plugin 
> development and provide a more flexible way to integrate with YARN NM.
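
A hedged sketch of the vendor-facing shape such a framework could expose 
(hypothetical interface; the actual API is defined by the attached design 
proposals and patches):
{code:java}
import java.util.Set;

// A vendor implements only device discovery plus allocation hooks, while
// scheduling, isolation and recovery stay inside the YARN NM framework.
interface DevicePluginSketch {
  Set<String> getDevices();                        // discover vendor devices
  void onDevicesAllocated(Set<String> deviceIds);  // container launch hook
  void onDevicesReleased(Set<String> deviceIds);   // container finish hook
}
{code}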



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-7512:
-
Release Note: 
Further Functionality support for Long Running Services in YARN includes:
1. In-place service (and containers) upgrade
2. Option to cancel an ongoing upgrade.

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN_v1.pdf, _In-Place Upgrade of Long-Running Applications in YARN_v2.pdf, 
> _In-Place Upgrade of Long-Running Applications in YARN_v3.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-7512:
-
Release Note: 
Further functionality support for Long Running Services in YARN includes:
1. In-place service (and containers) upgrade
2. Option to cancel an ongoing upgrade.

  was:
Further Functionality support for Long Running Services in YARN includes:
1. In-place service (and containers) upgrade
2. Option to cancel an ongoing upgrade.


> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN_v1.pdf, _In-Place Upgrade of Long-Running Applications in YARN_v2.pdf, 
> _In-Place Upgrade of Long-Running Applications in YARN_v3.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8902) Add volume manager that manages CSI volume lifecycle

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676053#comment-16676053
 ] 

Weiwei Yang edited comment on YARN-8902 at 11/6/18 2:37 AM:


Hi [~leftnoteasy]

Thanks for the review comments. Please see my responses below.
{quote}CsiAdaptorClient is not implemented, does this patch work end to end?
{quote}
This task focuses on the RM side changes; the adaptor will be deployed on the 
NM and will be implemented in YARN-8953. The interface is added here because 
I've created some fake impls, used by the UTs, which can test the volume 
manager functions.
{quote}How do we handle a client asking for volumes in every allocate request 
(let's say with the same volume id)? What will the expectation be for users: 
should they expect failures for the allocate() call, or will a duplicated 
volume id simply be ignored?
{quote}
The volume manager tracks all known volume states, see more in the 
{{VolumeStates}} class. If a client asks for the same volume (by specifying 
the same pre-provisioned volume ID), we just ensure the volume is transitioned 
to the desired state, a.k.a. the {{NODE_READY}} state (which means the 
controller publish is already done). So if the volume is new, the volume 
manager will do validation and then the publish operation; if the volume is 
already published, no operation is needed.
{quote}How do we handle the RM recovery case for volumes: are we going to 
recover volume states? Or do we need to do that at all?
{quote}
Not necessarily; I think we can do this in a stateless manner. According to 
the CSI spec, e.g.
{noformat}
ControllerPublishVolume

This operation MUST be idempotent. If the volume corresponding to the 
{{volume_id}} has already been published at the node corresponding to the 
{{node_id}}, and is compatible with the specified {{volume_capability}} and 
{{readonly}} flag, the Plugin MUST reply {{0 OK}}.

{noformat}
That means we are allowed to call e.g. {{ControllerPublishVolume}} multiple 
times even when a volume is already published. Most of the APIs are defined as 
idempotent. So in recovery, we could just reset the volume to new and start 
all over again, and the driver should respond OK.

Thanks


was (Author: cheersyang):
Hi [~leftnoteasy]

Thanks for the review comments. Please see my responses below.
{quote}CsiAdaptorClient is not implemented, does this patch work end to end?
{quote}
This task focuses on the RM side changes; the adaptor will be deployed on the 
NM and will be implemented in YARN-8953. The interface is added here because 
I've created some fake impls, used by the UTs, which can test the volume 
manager functions.
{quote}How do we handle a client asking for volumes in every allocate request 
(let's say with the same volume id)? What will the expectation be for users: 
should they expect failures for the allocate() call, or will a duplicated 
volume id simply be ignored?
{quote}
The volume manager tracks all known volume states, see more in the 
{{VolumeStates}} class. If a client asks for the same volume (by specifying 
the same pre-provisioned volume ID), we just ensure the volume is transitioned 
to the desired state, a.k.a. the {{NODE_READY}} state (which means the 
controller publish is already done). So if the volume is new, the volume 
manager will do validation and then the publish operation; if the volume is 
already published, no operation is needed.
{quote}How do we handle the RM recovery case for volumes: are we going to 
recover volume states? Or do we need to do that at all?
{quote}
Not necessarily; I think we can do this in a stateless manner. According to 
the CSI spec, e.g.

{noformat}

ControllerPublishVolume

This operation MUST be idempotent. If the volume corresponding to the 
{{volume_id}} has already been published at the node corresponding to the 
{{node_id}}, and is compatible with the specified {{volume_capability}} and 
{{readonly}} flag, the Plugin MUST reply {{0 OK}}.

{noformat}

That means we are allowed to call ControllerPublishVolume multiple times even 
when a volume is already published. Most of the APIs are defined as 
idempotent. So in recovery, we could just reset the volume to new and start 
all over again, and the driver should respond OK.

Thanks

 

 

 

 

> Add volume manager that manages CSI volume lifecycle
> 
>
> Key: YARN-8902
> URL: https://issues.apache.org/jira/browse/YARN-8902
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8902.001.patch, YARN-8902.002.patch, 
> YARN-8902.003.patch, YARN-8902.004.patch, YARN-8902.005.patch, 
> YARN-8902.006.patch, YARN-8902.007.patch
>
>
> The CSI volume manager is a service running in the RM process that manages 
> the lifecycle of all CSI volumes. The details about a volume's lifecycle 
> states can be found in the [CSI 
> spec|https://github.com/container-storage-interface/spec/blob/master/spec.md].
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (YARN-8902) Add volume manager that manages CSI volume lifecycle

2018-11-05 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676053#comment-16676053
 ] 

Weiwei Yang commented on YARN-8902:
---

Hi [~leftnoteasy]

Thanks for the review comments. Please see my responses below.
{quote}CsiAdaptorClient is not implemented, does this patch work end to end?
{quote}
This task focuses on the RM side changes; the adaptor will be deployed on the 
NM and will be implemented in YARN-8953. The interface is added here because 
I've created some fake impls, used by the UTs, which can test the volume 
manager functions.
{quote}How do we handle a client asking for volumes in every allocate request 
(let's say with the same volume id)? What will the expectation be for users: 
should they expect failures for the allocate() call, or will a duplicated 
volume id simply be ignored?
{quote}
The volume manager tracks all known volume states, see more in the 
{{VolumeStates}} class. If a client asks for the same volume (by specifying 
the same pre-provisioned volume ID), we just ensure the volume is transitioned 
to the desired state, a.k.a. the {{NODE_READY}} state (which means the 
controller publish is already done). So if the volume is new, the volume 
manager will do validation and then the publish operation; if the volume is 
already published, no operation is needed.
{quote}How do we handle the RM recovery case for volumes: are we going to 
recover volume states? Or do we need to do that at all?
{quote}
Not necessarily; I think we can do this in a stateless manner. According to 
the CSI spec, e.g.

{noformat}

ControllerPublishVolume

This operation MUST be idempotent. If the volume corresponding to the 
{{volume_id}} has already been published at the node corresponding to the 
{{node_id}}, and is compatible with the specified {{volume_capability}} and 
{{readonly}} flag, the Plugin MUST reply {{0 OK}}.

{noformat}

That means we are allowed to call ControllerPublishVolume multiple times even 
when a volume is already published. Most of the APIs are defined as 
idempotent. So in recovery, we could just reset the volume to new and start 
all over again, and the driver should respond OK.

Thanks

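A small self-contained sketch of the idempotent handling described above 
(hypothetical names; the actual {{VolumeStates}} implementation may differ):
{code:java}
// Asking for the same pre-provisioned volume id twice just drives the
// volume to NODE_READY; repeating the publish is safe because the CSI
// ControllerPublishVolume call MUST be idempotent.
enum VolumeState { NEW, NODE_READY }

final class VolumeSketch {
  private VolumeState state = VolumeState.NEW;

  synchronized void ensureNodeReady(Runnable validate, Runnable publish) {
    if (state == VolumeState.NODE_READY) {
      return;  // already published: no operation needed
    }
    validate.run();  // validation for a new volume
    publish.run();   // controller publish; idempotent per the CSI spec
    state = VolumeState.NODE_READY;
  }
}
{code}
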
> Add volume manager that manages CSI volume lifecycle
> 
>
> Key: YARN-8902
> URL: https://issues.apache.org/jira/browse/YARN-8902
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8902.001.patch, YARN-8902.002.patch, 
> YARN-8902.003.patch, YARN-8902.004.patch, YARN-8902.005.patch, 
> YARN-8902.006.patch, YARN-8902.007.patch
>
>
> The CSI volume manager is a service running in the RM process that manages 
> the lifecycle of all CSI volumes. The details about a volume's lifecycle 
> states can be found in the [CSI 
> spec|https://github.com/container-storage-interface/spec/blob/master/spec.md].
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7812) Improvements to Rich Placement Constraints in YARN

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7812:
--
Release Note: 
Further functionality and usability improvements over placement constraints, 
which include:
1. Support both intra-app and inter-app placement constraints
2. Support composite placement constraints, such as AND/OR expressions
3. Integrate placement constraint checks into Capacity Scheduler

  was:
Continual functionality and usability improvements over placement constraints, 
which include:
1. Support both intra-app and inter-app placement constraints
2. Support composite placement constraints, such as AND/OR expressions
3. Integrate placement constraint checks into Capacity Scheduler


> Improvements to Rich Placement Constraints in YARN
> --
>
> Key: YARN-7812
> URL: https://issues.apache.org/jira/browse/YARN-7812
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun Suresh
>Priority: Major
>
> This umbrella tracks the efforts to support the following features:
> # Inter-app placement constraints
> # Composite placement constraints, such as AND/OR expressions (see the sketch below)
> # Support placement constraints in Capacity Scheduler
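
A hedged example of a composite (AND) constraint written with the public 
PlacementConstraints builder API; the tag names are illustrative:
{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.*;

// Place the container on a node that has no "hbase-m" container AND at
// most three "spark" containers (anti-affinity combined with cardinality).
PlacementConstraint constraint = and(
    targetNotIn(NODE, allocationTag("hbase-m")),
    maxCardinality(NODE, 3, "spark")).build();
{code}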



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8858) CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used.

2018-11-05 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676007#comment-16676007
 ] 

Akira Ajisaka commented on YARN-8858:
-

The new UT looks good. +1, thanks [~cheersyang].

> CapacityScheduler should respect maximum node resource when per-queue 
> maximum-allocation is being used.
> ---
>
> Key: YARN-8858
> URL: https://issues.apache.org/jira/browse/YARN-8858
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: YARN-8858-branch-2.8.001.patch, 
> YARN-8858-branch-2.8.002.patch, YARN-8858.001.patch, YARN-8858.002.patch
>
>
> This issue happens after YARN-8720.
> Before that, the AMS used scheduler.getMaximumAllocation to do the 
> normalization. After that, the AMS uses LeafQueue.getMaximumAllocation. The 
> scheduler one uses nodeTracker.getMaximumAllocation, but 
> LeafQueue.getMaximumAllocation doesn't. 
> We should use scheduler.getMaximumAllocation to cap each queue's 
> maximum-allocation every time.
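
A hedged sketch of the intended capping (illustrative, not the committed 
patch):
{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

// Cap the per-queue maximum-allocation by the scheduler-wide maximum
// (which reflects nodeTracker.getMaximumAllocation()) by taking the
// component-wise minimum of memory and vcores.
Resource capped = Resources.componentwiseMin(
    leafQueue.getMaximumAllocation(),
    scheduler.getMaximumResourceCapability());
{code}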



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7812) Improvements to Rich Placement Constraints in YARN

2018-11-05 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7812:
--
Release Note: 
Continual functionality and usability improvements over placement constraints, 
which include:
1. Support both intra-app and inter-app placement constraints
2. Support composite placement constraints, such as AND/OR expressions
3. Integrate placement constraint checks into Capacity Scheduler

> Improvements to Rich Placement Constraints in YARN
> --
>
> Key: YARN-7812
> URL: https://issues.apache.org/jira/browse/YARN-7812
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun Suresh
>Priority: Major
>
> This umbrella tracks the efforts to support the following features:
> # Inter-app placement constraints
> # Composite placement constraints, such as AND/OR expressions
> # Support placement constraints in Capacity Scheduler



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8969) Change the return type to generic type of AbstractYarnScheduler#getNodeTracker

2018-11-05 Thread Yufei Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675958#comment-16675958
 ] 

Yufei Gu commented on YARN-8969:


[~eepayne], it is probably fine in this case. {{AbstractYarnScheduler}} is a 
@Private @Unstable class.
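
A minimal sketch of the proposed change, assuming the scheduler's existing 
type parameter {{N extends SchedulerNode}}:
{code:java}
// Before: a raw return type erases the node type for callers.
// public ClusterNodeTracker getNodeTracker() { return nodeTracker; }

// After: propagate the type parameter so that, e.g.,
// getNodeTracker().getNodesByResourceName(...) keeps its element type
// and no unchecked-assignment warning is emitted.
public ClusterNodeTracker<N> getNodeTracker() {
  return nodeTracker;
}
{code}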

> Change the return type to generic type of AbstractYarnScheduler#getNodeTracker
> --
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Some warning problems like:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8973) [Router] Add missing methods in RMWebProtocol

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675950#comment-16675950
 ] 

Hadoop QA commented on YARN-8973:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 31s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946967/YARN-8973.v1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9a44925f8982 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f3f5e7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675915#comment-16675915
 ] 

Hadoop QA commented on YARN-8972:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 13m  
3s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
57s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946979/YARN-8972.v2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f02e71b7758b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f3f5e7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22422/artifact/out/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675902#comment-16675902
 ] 

Hadoop QA commented on YARN-8972:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 13m  
7s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
15s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946978/YARN-8972.v2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 515ec01351ac 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f3f5e7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22421/artifact/out/whitespace-eol.txt
 |
| unit | 

[jira] [Comment Edited] (YARN-8973) [Router] Add missing methods in RMWebProtocol

2018-11-05 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675864#comment-16675864
 ] 

Íñigo Goiri edited comment on YARN-8973 at 11/5/18 11:58 PM:
-

Thanks [~elgoiri].
* YARN-8559 and YARN-5952 added 2 REST methods in RMWebService without 
inserting them in the RMWebServiceProtocol. This jira tracks the effort to 
make sure everything is in place.
* I will remove it.
* Let me add it in the Router.


was (Author: giovanni.fumarola):
Thanks [~elgoiri].
*YARN-8559 and YARN-5952 added 2 REST methods in RMWebService without inserting 
them in the RMWebServiceProtocol. This jira tracks the effort to make sure 
everything is in place.
* I will remove it.
* Let me add it in the Router.

> [Router] Add missing methods in RMWebProtocol
> -
>
> Key: YARN-8973
> URL: https://issues.apache.org/jira/browse/YARN-8973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8973.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8973) [Router] Add missing methods in RMWebProtocol

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675864#comment-16675864
 ] 

Giovanni Matteo Fumarola commented on YARN-8973:


Thanks [~elgoiri].
*YARN-8559 and YARN-5952 added 2 REST methods in RMWebService without inserting 
them in the RMWebServiceProtocol. This jira tracks the effort to make sure 
everything is in place.
* I will remove it.
* Let me add it in the Router.

> [Router] Add missing methods in RMWebProtocol
> -
>
> Key: YARN-8973
> URL: https://issues.apache.org/jira/browse/YARN-8973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8973.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675862#comment-16675862
 ] 

Giovanni Matteo Fumarola commented on YARN-8972:


Thanks [~elgoiri] for the comments.
* I fixed the checkstyles.
* PassThroughClientRequestInterceptor was created under the test package. 
ApplicationSubmissionContextInterceptor just implements 1 method; without 
PassThroughClientRequestInterceptor in the main package, we would have to 
duplicate a bunch of that code.
* No, there is no need to add options in yarn-default.
* I fixed it.
* It is a comment to explain what is inside the CLC and what we should check.
* I fixed it.

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch, YARN-8972.v2.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with an oversized ASC.
> This avoids YARN cluster failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8972:
---
Attachment: YARN-8972.v2.patch

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch, YARN-8972.v2.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with an oversized ASC.
> This avoids YARN cluster failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8972:
---
Attachment: (was: YARN-8972.v2.patch)

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch, YARN-8972.v2.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with an oversized ASC.
> This avoids YARN cluster failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8972:
---
Attachment: YARN-8972.v2.patch

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch, YARN-8972.v2.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with an oversized ASC.
> This avoids YARN cluster failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8838) Add security check for container user is same as websocket user

2018-11-05 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8838:

Attachment: YARN-8838.004.patch

> Add security check for container user is same as websocket user
> ---
>
> Key: YARN-8838
> URL: https://issues.apache.org/jira/browse/YARN-8838
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: docker
> Attachments: YARN-8838.001.patch, YARN-8838.002.patch, 
> YARN-8838.003.patch, YARN-8838.004.patch
>
>
> When a user is authenticated via the SPNEGO entry point, the node manager 
> must verify that the remote user is the same as the container user before 
> starting the web socket session. One possible solution is to verify that the 
> web request user matches the yarn container local directory owner during 
> onWebSocketConnect.
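
A hedged, self-contained sketch of that check (illustrative names, not the 
attached patches):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class WebSocketUserCheck {
  // Allow the session only when the SPNEGO-authenticated remote user owns
  // the container's local directory.
  static boolean sameUser(String remoteUser, Path containerLocalDir)
      throws IOException {
    return Files.getOwner(containerLocalDir).getName().equals(remoteUser);
  }
}
{code}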



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8973) [Router] Add missing methods in RMWebProtocol

2018-11-05 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675819#comment-16675819
 ] 

Íñigo Goiri commented on YARN-8973:
---

Thanks [~giovanni.fumarola] for  [^YARN-8973.v1.patch].
* Where do the new methods come from? Is there any interface we can leverage?
* Avoid changing FederationInterceptorREST line 1286.
* Any tests?


> [Router] Add missing methods in RMWebProtocol
> -
>
> Key: YARN-8973
> URL: https://issues.apache.org/jira/browse/YARN-8973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8973.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675817#comment-16675817
 ] 

Íñigo Goiri commented on YARN-8972:
---

Thanks [~giovanni.fumarola] for  [^YARN-8972.v1.patch].
A few comments:
* We can fix the checkstyles.
* What's the story behind the old 
{{test/java/org/apache/hadoop/yarn/server/router/clientrm/PassThroughClientRequestInterceptor.java}}?
 It still looks like a mock but is now in main.
* Do you need to document this new option in yarn-default.xml or some other 
documentation?
* The javadoc for {{RouterServerUtil#checkAppSubmissionContext}} is confusing 
because the name of the method is generic. In general, the prevention of DoS is 
understood as handling many requests; I would put that as a comment inside the 
method (see the sketch after this list).
* There is a bunch of commented-out code in logContainerLaunchContext.
* The line-break style is correct in 
ApplicationSubmissionContextInterceptor#submitApplication but it is hard to 
read. Maybe extracting a variable would make it easier.
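
A hedged sketch of such a size guard; the helper name comes from the patch 
discussion above, while the signature and the config key are assumptions:
{code:java}
// Reject a submission whose serialized ASC exceeds a configured limit
// before it reaches the RM, so an oversized context cannot overload the
// state store and trigger a failover.
public static void checkAppSubmissionContext(
    ApplicationSubmissionContextPBImpl appContext, Configuration conf)
    throws YarnException {
  int maxSize = conf.getInt(
      "yarn.router.asc-interceptor-max-size",  // hypothetical config key
      1024 * 1024);                            // hypothetical 1 MB default
  int ascSize = appContext.getProto().getSerializedSize();
  if (ascSize > maxSize) {
    throw new YarnException("ApplicationSubmissionContext size (" + ascSize
        + " bytes) exceeds the configured limit of " + maxSize + " bytes");
  }
}
{code}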

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with an oversized ASC.
> This avoids YARN cluster failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8867) Retrieve the status of resource localization

2018-11-05 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675810#comment-16675810
 ] 

Chandni Singh commented on YARN-8867:
-

Undid the changes to import statements in patch 3.

> Retrieve the status of resource localization
> 
>
> Key: YARN-8867
> URL: https://issues.apache.org/jira/browse/YARN-8867
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8867.001.patch, YARN-8867.002.patch, 
> YARN-8867.003.patch, YARN-8867.wip.patch
>
>
> Refer YARN-3854.
> Currently NM does not have an API to retrieve the status of localization. 
> Unless the client can know when the localization of a resource is complete 
> irrespective of the type of the resource, it cannot take any appropriate 
> action. 
> We need an API in {{ContainerManagementProtocol}} to retrieve the status on 
> the localization. 
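
As a rough sketch, the protocol addition could take a shape like the following; 
the record names are illustrative, so see the attached patches for the actual 
API:
{code:java}
// Hypothetical shape of the addition to ContainerManagementProtocol: the
// client asks the NM for the localization status of a container's resources.
public interface ContainerManagementProtocol {
  // ... existing methods: startContainers, stopContainers, getContainerStatuses

  /** Returns the localization statuses of resources for the given containers. */
  GetLocalizationStatusesResponse getLocalizationStatuses(
      GetLocalizationStatusesRequest request) throws YarnException, IOException;
}
{code}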



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8867) Retrieve the status of resource localization

2018-11-05 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8867:

Attachment: YARN-8867.003.patch

> Retrieve the status of resource localization
> 
>
> Key: YARN-8867
> URL: https://issues.apache.org/jira/browse/YARN-8867
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8867.001.patch, YARN-8867.002.patch, 
> YARN-8867.003.patch, YARN-8867.wip.patch
>
>
> Refer YARN-3854.
> Currently NM does not have an API to retrieve the status of localization. 
> Unless the client can know when the localization of a resource is complete 
> irrespective of the type of the resource, it cannot take any appropriate 
> action. 
> We need an API in {{ContainerManagementProtocol}} to retrieve the status on 
> the localization. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8867) Retrieve the status of resource localization

2018-11-05 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8867:

Attachment: (was: YARN-8867.003.patch)

> Retrieve the status of resource localization
> 
>
> Key: YARN-8867
> URL: https://issues.apache.org/jira/browse/YARN-8867
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8867.001.patch, YARN-8867.002.patch, 
> YARN-8867.wip.patch
>
>
> Refer YARN-3854.
> Currently NM does not have an API to retrieve the status of localization. 
> Unless the client can know when the localization of a resource is complete 
> irrespective of the type of the resource, it cannot take any appropriate 
> action. 
> We need an API in {{ContainerManagementProtocol}} to retrieve the status on 
> the localization. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8867) Retrieve the status of resource localization

2018-11-05 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8867:

Attachment: YARN-8867.003.patch

> Retrieve the status of resource localization
> 
>
> Key: YARN-8867
> URL: https://issues.apache.org/jira/browse/YARN-8867
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8867.001.patch, YARN-8867.002.patch, 
> YARN-8867.003.patch, YARN-8867.wip.patch
>
>
> Refer YARN-3854.
> Currently NM does not have an API to retrieve the status of localization. 
> Unless the client can know when the localization of a resource is complete 
> irrespective of the type of the resource, it cannot take any appropriate 
> action. 
> We need an API in {{ContainerManagementProtocol}} to retrieve the status on 
> the localization. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8973) [Router] Add missing methods in RMWebProtocol

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8973:
---
Attachment: YARN-8973.v1.patch

> [Router] Add missing methods in RMWebProtocol
> -
>
> Key: YARN-8973
> URL: https://issues.apache.org/jira/browse/YARN-8973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8973.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8973) [Router] Add missing methods in RMWebProtocol

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-8973:
--

 Summary: [Router] Add missing methods in RMWebProtocol
 Key: YARN-8973
 URL: https://issues.apache.org/jira/browse/YARN-8973
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola
Assignee: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675774#comment-16675774
 ] 

Hadoop QA commented on YARN-8972:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
37s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 27s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 213 unchanged - 0 fixed = 215 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
54s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946960/YARN-8972.v1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ee5ac9ae4c28 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f3f5e7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2018-11-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675768#comment-16675768
 ] 

Eric Yang commented on YARN-8672:
-

[~csingh] Patch 005 changes the filename of the token file, but the 
runLocalization method still expects the old filename pattern.  I think this 
will prevent the deprecated runLocalization method from reading the token 
file.

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8672.001.patch, YARN-8672.002.patch, 
> YARN-8672.003.patch, YARN-8672.004.patch, YARN-8672.005.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8672) TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out

2018-11-05 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675754#comment-16675754
 ] 

Chandni Singh commented on YARN-8672:
-

[~jlowe] could you please review patch 5?

> TestContainerManager#testLocalingResourceWhileContainerRunning occasionally 
> times out
> -
>
> Key: YARN-8672
> URL: https://issues.apache.org/jira/browse/YARN-8672
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Jason Lowe
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8672.001.patch, YARN-8672.002.patch, 
> YARN-8672.003.patch, YARN-8672.004.patch, YARN-8672.005.patch
>
>
> Precommit builds have been failing in 
> TestContainerManager#testLocalingResourceWhileContainerRunning.  I have been 
> able to reproduce the problem without any patch applied if I run the test 
> enough times.  It looks like something is removing container tokens from the 
> nmPrivate area just as a new localizer starts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8858) CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used.

2018-11-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675713#comment-16675713
 ] 

Hadoop QA commented on YARN-8858:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
42s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ae3769f |
| JIRA Issue | YARN-8858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946925/YARN-8858-branch-2.8.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2af3c1dbaa4c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 3d76d47 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22417/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22417/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/22417/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 634 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22417/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Updated] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8972:
---
Description: 
This jira tracks the effort to add a new interceptor in the Router to prevent 
users from submitting applications with oversized ASCs.
This avoids YARN cluster failover.

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch
>
>
> This jira tracks the effort to add a new interceptor in the Router to prevent 
> users from submitting applications with oversized ASCs.
> This avoids YARN cluster failover.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8972:
---
Attachment: YARN-8972.v1.patch

> [Router] Add support to prevent DoS attack over ApplicationSubmissionContext 
> size
> -
>
> Key: YARN-8972
> URL: https://issues.apache.org/jira/browse/YARN-8972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: YARN-8972.v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8972) [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size

2018-11-05 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-8972:
--

 Summary: [Router] Add support to prevent DoS attack over 
ApplicationSubmissionContext size
 Key: YARN-8972
 URL: https://issues.apache.org/jira/browse/YARN-8972
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola
Assignee: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8867) Retrieve the status of resource localization

2018-11-05 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675664#comment-16675664
 ] 

Wangda Tan commented on YARN-8867:
--

Thanks [~csingh] for working on this; from a high level I think the patch looks 
good. I may not have the bandwidth to review all the details, so I will let 
others continue reviewing the patch.

Regarding percentage vs. diagnostic, I think diagnostic should be good enough 
for now. Once we can support percentage progress (or absolute value progress) 
from the backend, we can think more about how to add them to the protocol to 
avoid unnecessary protocol changes.

And one misc: could you please update your IDE preferences to not add import 
...* for packages? We typically discourage that to avoid backport conflicts.

> Retrieve the status of resource localization
> 
>
> Key: YARN-8867
> URL: https://issues.apache.org/jira/browse/YARN-8867
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8867.001.patch, YARN-8867.002.patch, 
> YARN-8867.wip.patch
>
>
> Refer YARN-3854.
> Currently NM does not have an API to retrieve the status of localization. 
> Unless the client can know when the localization of a resource is complete 
> irrespective of the type of the resource, it cannot take any appropriate 
> action. 
> We need an API in {{ContainerManagementProtocol}} to retrieve the status on 
> the localization. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8969) Change the return type to generic type of AbstractYarnScheduler#getNodeTracker

2018-11-05 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675661#comment-16675661
 ] 

Eric Payne commented on YARN-8969:
--

Is it wise to change the API of a public method in an abstract class? Wouldn't 
it be better to create a new method with the desired return type?

> Change the return type to generic type of AbstractYarnScheduler#getNodeTracker
> --
>
> Key: YARN-8969
> URL: https://issues.apache.org/jira/browse/YARN-8969
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-8969.001.patch
>
>
> Some warning problems like:
> {quote}Unchecked assignment: 'java.util.List' to 
> 'java.util.List'.
>  Reason: 'scheduler.getNodeTracker()' has raw type, so result of 
> getNodesByResourceName is erased{quote}
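
For illustration, the change amounts to returning the tracker with its type 
parameter instead of the raw type (a sketch; the compatibility concern raised 
in the comment above still applies):
{code:java}
// AbstractYarnScheduler is already parameterized on N extends SchedulerNode,
// so the getter can expose ClusterNodeTracker<N> instead of the raw type:
public ClusterNodeTracker<N> getNodeTracker() {
  return nodeTracker;
}

// Callers then get a typed List back without unchecked-assignment warnings:
List<N> nodes = scheduler.getNodeTracker().getNodesByResourceName(resourceName);
{code}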



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8933) [AMRMProxy] Fix potential empty AvailableResource and NumClusterNode in allocation response

2018-11-05 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675623#comment-16675623
 ] 

Botong Huang commented on YARN-8933:


Ah good catch, and thx for reviewing! 

> [AMRMProxy] Fix potential empty AvailableResource and NumClusterNode in 
> allocation response
> ---
>
> Key: YARN-8933
> URL: https://issues.apache.org/jira/browse/YARN-8933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, federation
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8933.v1.patch, YARN-8933.v2.patch
>
>
> After YARN-8696, the allocate response from FederationInterceptor is merged 
> from the responses of a random subset of all sub-clusters, depending on the 
> async heartbeat timing. As a result, cluster-wide information fields in the 
> response, e.g. AvailableResources and NumClusterNodes, are not consistent at 
> all. They can even be null/zero when a specific response is merged from an 
> empty set of sub-cluster responses. 
> In this patch, we let FederationInterceptor remember the last allocate 
> response from all known sub-clusters, and always construct the cluster-wide 
> info fields from all of them. We also moved sub-cluster timeout from 
> LocalityMulticastAMRMProxyPolicy to FederationInterceptor, so that 
> sub-clusters that expired (haven't had a successful allocate response for a 
> while) won't be included in the computation.
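
A minimal sketch of the merge described above (the map and method names are 
illustrative, not the actual patch):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
import org.apache.hadoop.yarn.util.resource.Resources;

// Remember the last allocate response per sub-cluster, then rebuild the
// cluster-wide fields from all non-expired ones on every merge.
private final Map<SubClusterId, AllocateResponse> lastResponse =
    new ConcurrentHashMap<>();

private void mergeClusterWideInfo(AllocateResponse merged) {
  Resource available = Resource.newInstance(0, 0);
  int numNodes = 0;
  for (AllocateResponse r : lastResponse.values()) {
    if (r.getAvailableResources() != null) {
      Resources.addTo(available, r.getAvailableResources());
    }
    numNodes += r.getNumClusterNodes();
  }
  merged.setAvailableResources(available);
  merged.setNumClusterNodes(numNodes);
}
{code}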



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8898) Fix FederationInterceptor#allocate to set application priority in allocateResponse

2018-11-05 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675609#comment-16675609
 ] 

Botong Huang commented on YARN-8898:


bq. Better option could be pushing along with ApplicationHomeSubCluster the 
application Submission Context too. And let interceptor query when AM 
registration happens.
If necessary, yes, I agree this works. But if you are talking about 
ApplicationPriority alone, the change seems big (Router, StateStore, 
AMRMProxy). Down the line we might need to deal with two sources of truth 
(the StateStore vs. the RM allocate response) as well. On the other hand, the 
existing priority value is in the AllocateResponse, and thus we are relying on 
the RM version rather than the AM version. We can cherry-pick YARN-4170 to 2.7 
if needed. For old RM versions where this value is not fed in, I guess we can 
leave the UAM at the default priority. What do you think? 

> Fix FederationInterceptor#allocate to set application priority in 
> allocateResponse
> --
>
> Key: YARN-8898
> URL: https://issues.apache.org/jira/browse/YARN-8898
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Major
>
> In case of FederationInterceptor#mergeAllocateResponses skips 
> application_priority in response returned



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8851) [Umbrella] A new pluggable device plugin framework to ease vendor plugin development

2018-11-05 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675606#comment-16675606
 ] 

Wangda Tan commented on YARN-8851:
--

Thanks [~tangzhankun],

1) Regarding the NM_PLUGGABLE_DEVICE_FRAMEWORK_PREFER_CUSTOMIZED_SCHEDULER, 
should we just use the default scheduler if the device plugin doesn't provide a 
customized scheduler? We should assume that a loaded device plugin runs 
"trusted" code, so we may not need to add extra protection here (see the 
sketch after this comment).


2) DeviceSchedulerManager sounds like it "manages schedulers"; however, it 
handles how to map devices to containers, and the scheduler is just an 
implementation detail. How about calling it DeviceMappingManager?
- internalAssignDevices should be private, and it is a bit long; it might be 
easier to maintain in the future if you break it down into multiple methods.

I think we could move on to splitting this POC into sub-tasks and getting them 
done piece by piece. It would be helpful if you can highlight the required 
subtasks.
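
A sketch of the fallback suggested in (1); all names here are illustrative, 
not the framework API:
{code:java}
// Use the plugin's customized scheduler when it provides one; otherwise
// fall back to the framework's default device-to-container assignment.
DevicePluginScheduler pluginScheduler = plugin.getCustomizedScheduler();
Set<Device> assigned = (pluginScheduler != null)
    ? pluginScheduler.allocateDevices(availableDevices, count)
    : defaultAssignDevices(availableDevices, count);
{code}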

> [Umbrella] A new pluggable device plugin framework to ease vendor plugin 
> development
> 
>
> Key: YARN-8851
> URL: https://issues.apache.org/jira/browse/YARN-8851
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8851-WIP2-trunk.001.patch, 
> YARN-8851-WIP3-trunk.001.patch, YARN-8851-WIP4-trunk.001.patch, 
> YARN-8851-WIP5-trunk.001.patch, YARN-8851-WIP6-trunk.001.patch, 
> YARN-8851-WIP7-trunk.001.patch, YARN-8851-WIP8-trunk.001.patch, 
> YARN-8851-WIP9-trunk.001.patch, YARN-8851-trunk.001.patch, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-3.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal-4.pdf, [YARN-8851] 
> YARN_New_Device_Plugin_Framework_Design_Proposal.pdf
>
>
> At present, we support GPU/FPGA devices in YARN in a native, tightly coupled 
> way. But it's difficult for a vendor to implement such a device plugin 
> because the developer needs deep knowledge of YARN internals. This also 
> places a burden on the community to maintain both YARN core and 
> vendor-specific code.
> Here we propose a new device plugin framework to ease vendor device plugin 
> development and provide a more flexible way to integrate with the YARN NM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8858) CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used.

2018-11-05 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675572#comment-16675572
 ] 

Wangda Tan commented on YARN-8858:
--

Triggered Jenkins build to find flaky tests. 

> CapacityScheduler should respect maximum node resource when per-queue 
> maximum-allocation is being used.
> ---
>
> Key: YARN-8858
> URL: https://issues.apache.org/jira/browse/YARN-8858
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: YARN-8858-branch-2.8.001.patch, 
> YARN-8858-branch-2.8.002.patch, YARN-8858.001.patch, YARN-8858.002.patch
>
>
> This issue happens after YARN-8720.
> Before that, AMS uses scheduler.getMaximumAllocation to do the normalization. 
> After that, AMS uses LeafQueue.getMaximumAllocation. The scheduler one uses 
> nodeTracker.getMaximumAllocation, but LeafQueue.getMaximumAllocation doesn't. 
> We should use scheduler.getMaximumAllocation to cap the per-queue 
> maximum-allocation every time.
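
In code terms, the fix amounts to something like the following sketch; 
{{queueMaxAllocation}} and {{scheduler}} are assumed fields, not the committed 
patch:
{code:java}
// Cap the per-queue maximum-allocation by the scheduler-wide maximum,
// which is node-tracker aware, every time the value is read.
public Resource getMaximumAllocation() {
  return Resources.componentwiseMin(queueMaxAllocation,
      scheduler.getMaximumResourceCapability());
}
{code}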



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7631) ResourceRequest with different Capacity (Resource) overrides each other in RM and thus lost

2018-11-05 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675566#comment-16675566
 ] 

Botong Huang commented on YARN-7631:


Please consider directly using _ResourceRequestSetKey_ to replace 
_SchedulerRequestKey_ for this, thx!

> ResourceRequest with different Capacity (Resource) overrides each other in RM 
> and thus lost
> ---
>
> Key: YARN-7631
> URL: https://issues.apache.org/jira/browse/YARN-7631
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: resourcebug.patch
>
>
> Today in AMRMClientImpl, the ResourceRequests (RR) are kept as: RequestId -> 
> Priority -> ResourceName -> ExecutionType -> Resource (Capacity) -> 
> ResourceRequestInfo (the actual RR). This means that only RRs with the same 
> (requestId, priority, resourcename, executionType, resource) will be grouped 
> and aggregated together. 
> While in RM side, the mapping is SchedulerRequestKey (RequestId, priority) -> 
> LocalityAppPlacementAllocator (ResourceName -> RR). 
> The issue is that in RM side Resource is not in the key to the RR at all. 
> (Note that executionType is also not in the RM side, but it is fine because 
> RM handles it separately as container update requests.) This means that under 
> the same value of (requestId, priority, resourcename), RRs with different 
> Resource values will be grouped together and override each other in RM. As a 
> result, some of the container requests are lost and will never be allocated. 
> Furthermore, since the two RRs are kept under different keys on the AMRMClient 
> side, allocation of RR1 will only trigger a cancel for RR1; the pending RR2 
> will not get resent either. 
> I’ve attached a unit test (resourcebug.patch), which fails in trunk, to 
> illustrate this issue. 
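
To make the collision concrete, a small illustration using the public record 
APIs:
{code:java}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

// Two requests that differ only in capability:
ResourceRequest rr1 = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(1024, 1), 1);
ResourceRequest rr2 = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(2048, 2), 1);
// AMRMClient keys include the Resource, so rr1 and rr2 are tracked
// separately; the RM keys only on (requestId, priority) -> resourceName,
// so rr2 overrides rr1 there and rr1's containers are never allocated.
{code}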



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675542#comment-16675542
 ] 

Sunil Govindan commented on YARN-6091:
--

Removing Fix versions, as YARN-7654 contains this patch and was committed to 
the branches matching its Fix version field. Please let me know if there are 
any issues. Thanks.

> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, I found that the 
> AppMaster's registration to the ResourceManager failed. But this didn't 
> happen on other servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is running 
> normally. Some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns nonzero, 
> YARN removes the AMRMToken, and then the AppMaster registration fails 
> because the ResourceManager has removed this application's token. 
> In container-executor.c, the judgement condition is whether the return code 
> is zero. But according to the pclose man page, only a return of -1 indicates 
> an error. So I changed the judgement condition, which solves this 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-6091:
-
Fix Version/s: (was: 3.1.1)
   (was: 3.2.0)

> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, I found that the 
> AppMaster's registration to the ResourceManager failed. But this didn't 
> happen on other servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is running 
> normally. Some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns nonzero, 
> YARN removes the AMRMToken, and then the AppMaster registration fails 
> because the ResourceManager has removed this application's token. 
> In container-executor.c, the judgement condition is whether the return code 
> is zero. But according to the pclose man page, only a return of -1 indicates 
> an error. So I changed the judgement condition, which solves this 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675535#comment-16675535
 ] 

Sunil Govindan commented on YARN-6989:
--

Corrected the fix version to 3.2.1, as this patch did not land in branch-3.2.0.

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.1.2, 3.2.1
>
> Attachments: YARN-6989.001.patch, YARN-6989.002.patch
>
>
> As noticed during discussions in YARN-6820, the web services in timeline 
> service v2 get the UGI created from the user obtained by invoking 
> getRemoteUser on the HttpServletRequest. 
> It would be good to use getUserPrincipal instead of invoking getRemoteUser on 
> the HttpServletRequest. 
> Filing this jira to update the code. 
> Per the Java EE documentation for versions 6 and 7, the behavior of 
> getRemoteUser and getUserPrincipal is described at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}
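
A minimal sketch of the consistent pattern the description suggests (the 
helper name is illustrative):
{code:java}
import java.security.Principal;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

// Derive the caller's UGI from getUserPrincipal() instead of getRemoteUser().
public static UserGroupInformation getCallerUgi(HttpServletRequest req) {
  Principal principal = req.getUserPrincipal();
  if (principal == null) {
    return null; // no authenticated user
  }
  return UserGroupInformation.createRemoteUser(principal.getName());
}
{code}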



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-6989:
-
Fix Version/s: (was: 3.2.0)
   3.2.1

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.1.2, 3.2.1
>
> Attachments: YARN-6989.001.patch, YARN-6989.002.patch
>
>
> As noticed during discussions in YARN-6820, the web services in timeline 
> service v2 get the UGI created from the user obtained by invoking 
> getRemoteUser on the HttpServletRequest. 
> It would be good to use getUserPrincipal instead of invoking getRemoteUser on 
> the HttpServletRequest. 
> Filing this jira to update the code. 
> Per the Java EE documentation for versions 6 and 7, the behavior of 
> getRemoteUser and getUserPrincipal is described at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8902) Add volume manager that manages CSI volume lifecycle

2018-11-05 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675532#comment-16675532
 ] 

Wangda Tan commented on YARN-8902:
--

Thanks [~cheersyang] , a couple of high-level questions and miscs: 

CsiAdaptorClient is not implemented; does this patch work end to end?

How do we handle a client asking for volumes in every allocate request (say, 
with the same volume id)? What should users expect: failures for the 
allocate() call, or will a duplicated volume id simply be ignored?

How do we handle the RM recovery case for volumes: are we going to recover 
volume states, and do we need to do that?

*Miscs:* 
- org.apache.hadoop.yarn.server.resourcemanager.volume => ..rmvolume?

> Add volume manager that manages CSI volume lifecycle
> 
>
> Key: YARN-8902
> URL: https://issues.apache.org/jira/browse/YARN-8902
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8902.001.patch, YARN-8902.002.patch, 
> YARN-8902.003.patch, YARN-8902.004.patch, YARN-8902.005.patch, 
> YARN-8902.006.patch, YARN-8902.007.patch
>
>
> The CSI volume manager is a service running in RM process, that manages all 
> CSI volumes' lifecycle. The details about volume's lifecycle states can be 
> found in [CSI 
> spec|https://github.com/container-storage-interface/spec/blob/master/spec.md].
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3409) Support Node Attribute functionality

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-3409:
-
Release Note: With this feature, Node Attributes are supported in YARN, which 
helps users use resources effectively and assign them to applications based on 
the characteristics of each node in the cluster.  (was: Node Attribute 
Feature has been added as part of this jira.)

> Support Node Attribute functionality
> 
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, client, RM
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: 3409-apiChanges_v2.pdf (4).pdf, 
> Constraint-Node-Labels-Requirements-Design-doc_v1.pdf, 
> Node-Attributes-Requirements-Design-doc_v2.pdf, YARN-3409.WIP.001.patch
>
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a special set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (only the market team has 
> priority to use the partition).
> - Percentages of capacity can apply to a partition (the market team has 40% 
> minimum capacity and the dev team has 60% minimum capacity of the partition).
> Attributes are orthogonal to partitions; they describe features of a node’s 
> hardware/software just for affinity. Some examples of attributes:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that have (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7031) Support distributed node attributes

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan resolved YARN-7031.
--
Resolution: Not A Problem

> Support distributed node attributes
> ---
>
> Key: YARN-7031
> URL: https://issues.apache.org/jira/browse/YARN-7031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: Distributed node attributes v1.pdf, 
> YARN-7031-YARN-3409.001.patch
>
>
> Allow nodemanagers to push its attributes to RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7031) Support distributed node attributes

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reopened YARN-7031:
--

Reopening to close this Jira correctly

> Support distributed node attributes
> ---
>
> Key: YARN-7031
> URL: https://issues.apache.org/jira/browse/YARN-7031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: Distributed node attributes v1.pdf, 
> YARN-7031-YARN-3409.001.patch
>
>
> Allow nodemanagers to push its attributes to RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7031) Support distributed node attributes

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-7031:
-
Fix Version/s: (was: 3.2.0)

> Support distributed node attributes
> ---
>
> Key: YARN-7031
> URL: https://issues.apache.org/jira/browse/YARN-7031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: Distributed node attributes v1.pdf, 
> YARN-7031-YARN-3409.001.patch
>
>
> Allow nodemanagers to push its attributes to RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-7213:
-
Fix Version/s: (was: 3.2.0)

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations. They are also 
> getting ready for the Hadoop beta release so that HBase can release versions 
> compatible with Hadoop beta. So, this JIRA is to keep track of HBase-2.0 
> integration issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675525#comment-16675525
 ] 

Sunil Govindan commented on YARN-7213:
--

Removing fix versions as this work spanned multiple versions.

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations. They are also 
> getting ready for the Hadoop beta release so that HBase can release versions 
> compatible with Hadoop beta. So, this JIRA is to keep track of HBase-2.0 
> integration issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675521#comment-16675521
 ] 

Sunil Govindan commented on YARN-7512:
--

Hi [~csingh] [~eyang]

Could you please give a short write-up for the Release Notes that says a bit 
about this feature? You could edit this jira and update the Release Notes 
section accordingly. Thank you.

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN_v1.pdf, _In-Place Upgrade of Long-Running Applications in YARN_v2.pdf, 
> _In-Place Upgrade of Long-Running Applications in YARN_v3.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7812) Improvements to Rich Placement Constraints in YARN

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675523#comment-16675523
 ] 

Sunil Govindan commented on YARN-7812:
--

Hi [~cheersyang]

Could you please give a short write-up for the Release Notes that says a bit 
about this feature? You could edit this jira and update the Release Notes 
section accordingly. Thank you.

> Improvements to Rich Placement Constraints in YARN
> --
>
> Key: YARN-7812
> URL: https://issues.apache.org/jira/browse/YARN-7812
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun Suresh
>Priority: Major
>
> This umbrella tracks the efforts for supporting following features
> # Inter-app placement constraints
> # Composite placement constraints, such as AND/OR expressions
> # Support placement constraints in Capacity Scheduler



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7990) Node attribute prefix definition and validation

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan resolved YARN-7990.
--
  Resolution: Won't Fix
Target Version/s:   (was: YARN-3409)

> Node attribute prefix definition and validation 
> 
>
> Key: YARN-7990
> URL: https://issues.apache.org/jira/browse/YARN-7990
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Node Attribute Prefix Definition v1.pdf
>
>
> Summary:
> # Centralized: rm.yarn.io
> # Distributed: nm.yarn.io
> # System: *.yarn.io (the yarn.io suffix is reserved for other YARN-system-set 
> node attributes)
> # User-Defined: other forms of prefixes except the reserved ones
> See details in the design doc  [^Node Attribute Prefix Definition v1.pdf] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7990) Node attribute prefix definition and validation

2018-11-05 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan reopened YARN-7990:
--

Done is not the correct status. Reopening to close the jira correctly.

> Node attribute prefix definition and validation 
> 
>
> Key: YARN-7990
> URL: https://issues.apache.org/jira/browse/YARN-7990
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Node Attribute Prefix Definition v1.pdf
>
>
> Summary:
> # Centralized: rm.yarn.io
> # Distributed: nm.yarn.io
> # System: *.yarn.io (the yarn.io suffix is reserved for other YARN-system-set 
> node attributes)
> # User-Defined: other forms of prefixes except the reserved ones
> See details in the design doc  [^Node Attribute Prefix Definition v1.pdf] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7512) Support service upgrade via YARN Service API and CLI

2018-11-05 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675510#comment-16675510
 ] 

Sunil Govindan commented on YARN-7512:
--

No fix version is set, given the tasks spanned multiple releases.

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Gour Saha
>Assignee: Chandni Singh
>Priority: Major
> Attachments: _In-Place Upgrade of Long-Running Applications in 
> YARN_v1.pdf, _In-Place Upgrade of Long-Running Applications in YARN_v2.pdf, 
> _In-Place Upgrade of Long-Running Applications in YARN_v3.pdf
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


