[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368126#comment-16368126
 ] 

Rohith Sharma K S commented on YARN-7919:
-

[~haibo.chen] With a fresh deployment of the 04 patch, I see an error while 
creating the flowrun table. I am guessing it is some issue with the fat jar 
generation. 
{noformat}
2018-02-17 12:34:55,458 INFO  [main] flow.FlowRunTableRW: 
CoprocessorJarPath=file:/Users/rsharmaks/Cluster/atsv2/hbase/lib/hadoop-yarn-server-timelineservice-hbase-coprocessor-3.2.0-SNAPSHOT.jar
2018-02-17 12:34:55,541 WARN  [main] storage.TimelineSchemaCreator: Skip and 
continue on: org.apache.hadoop.hbase.DoNotRetryIOException: Class 
org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
cannot be loaded Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks
at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:464)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)
{noformat}
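
For context, the flowrun table registers its coprocessor by jar path at 
table-creation time, so a fat jar that does not actually contain the class 
fails the master's sanity check exactly like this. A minimal sketch of that 
registration against the HBase 1.x client API (table name and setup are 
illustrative, not the actual YARN code):

{code}
import java.util.Collections;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class CoprocessorAttachSketch {
  // HMaster verifies the named class is loadable from jarPath before it
  // creates the table; a fat jar missing the class fails createTable as above.
  public static HTableDescriptor flowRunDescriptor(Path jarPath) throws Exception {
    HTableDescriptor desc =
        new HTableDescriptor(TableName.valueOf("prod.timelineservice.flowrun"));
    desc.addCoprocessor(
        "org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor",
        jarPath, Coprocessor.PRIORITY_USER,
        Collections.<String, String>emptyMap());
    return desc;
  }
}
{code}

Listing the generated fat jar's contents for the FlowRunCoprocessor class 
would confirm whether packaging is the culprit.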

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>







[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368107#comment-16368107
 ] 

Rohith Sharma K S commented on YARN-7919:
-

thanks [~vrushalic]! I am doing another round of testing with this patch. I 
will commit it once I finish sanity testing!

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>







[jira] [Commented] (YARN-7328) ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores

2018-02-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368104#comment-16368104
 ] 

Hudson commented on YARN-7328:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13675 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13675/])
YARN-7328. ResourceUtils allows (wangda: rev 
ca1043ab9030339d7cdd3275c3f8f4713b8bff59)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/resources/resource-types/resource-types-error-2.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/resources/resource-types/node-resources-2.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/resources/resource-types/node-resources-error-1.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java


> ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to 
> override yarn.nodemanager.resource.memory-mb and .cpu-vcores
> --
>
> Key: YARN-7328
> URL: https://issues.apache.org/jira/browse/YARN-7328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Daniel Templeton
>Assignee: lovekesh bansal
>Priority: Critical
> Fix For: 3.1.0
>
> Attachments: YARN-7328_trunk.001.patch, YARN-7328_trunk.002.patch
>
>
> We will throw an exception if yarn.nodemanager.resource-types.memory is 
> configured, but not if .memory-mb or .vcores is configured.  We should be 
> consistent.  We should not allow resource types to redefine something for 
> which we already have a property to set. 
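
For illustration, the consistency check being asked for could look roughly 
like this (method and field names below are hypothetical, not the actual 
ResourceUtils code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;

public final class ResourceTypeChecks {
  // resource-types keys that would shadow properties YARN already defines
  private static final String[] DISALLOWED = {
      "yarn.nodemanager.resource-types.memory",
      "yarn.nodemanager.resource-types.memory-mb",
      "yarn.nodemanager.resource-types.vcores",
  };

  // Reject configurations that redefine memory/vcores via resource-types.
  public static void checkDisallowedKeys(Configuration conf) {
    for (String key : DISALLOWED) {
      if (conf.get(key) != null) {
        throw new YarnRuntimeException(key
            + " is not allowed; use yarn.nodemanager.resource.memory-mb"
            + " or yarn.nodemanager.resource.cpu-vcores instead.");
      }
    }
  }

  private ResourceTypeChecks() {
  }
}
{code}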






[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368099#comment-16368099
 ] 

Vrushali C commented on YARN-7919:
--

+1 to the overall approach. Unfortunately I haven't had time to look through 
it in as much detail as I would have liked, but so far it looks good to me. 

thanks [~haibochen] and [~rohithsharma] 

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>







[jira] [Updated] (YARN-7893) Document the FPGA isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7893:
-
Target Version/s: 3.1.0

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893.pdf, YARN-7893-trunk-001.patch, 
> YARN-7893-trunk-002.patch
>
>







[jira] [Commented] (YARN-7893) Document the FPGA isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368098#comment-16368098
 ] 

Wangda Tan commented on YARN-7893:
--

Boosting the priority to Blocker and setting the target version to 3.1.0; 
please update the doc if you get a chance (ideally by next Monday).

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893.pdf, YARN-7893-trunk-001.patch, 
> YARN-7893-trunk-002.patch
>
>







[jira] [Updated] (YARN-7893) Document the FPGA isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7893:
-
Fix Version/s: 3.1.0

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893.pdf, YARN-7893-trunk-001.patch, 
> YARN-7893-trunk-002.patch
>
>







[jira] [Updated] (YARN-7893) Document the FPGA isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7893:
-
Priority: Blocker  (was: Major)

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893.pdf, YARN-7893-trunk-001.patch, 
> YARN-7893-trunk-002.patch
>
>







[jira] [Updated] (YARN-7893) Document the FPGA isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7893:
-
Fix Version/s: (was: 3.1.0)

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Blocker
> Attachments: FPGA-doc-YARN-7893.pdf, YARN-7893-trunk-001.patch, 
> YARN-7893-trunk-002.patch
>
>







[jira] [Commented] (YARN-7893) Document the FPGA isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368097#comment-16368097
 ] 

Wangda Tan commented on YARN-7893:
--

[~tangzhankun], apologies for my late response; somehow this dropped off my 
radar.

I suggest adding the following changes; I put similar changes into the GPU doc 
as well: https://issues.apache.org/jira/browse/YARN-7223 (Ver.2 doc)

1) DominantResourceCalculator.

2) yarn.nodemanager.linux-container-executor.cgroups.mount

3) The distributed shell example can use the newly added syntax for specifying 
multiple resource values instead of using a resource profile.

4)
{code} 
184 ## YARN native services + FPGA
185 
186 TODO
187 
188 ## Other applications use FPGA.
189 
190 TODO
{code} 
 
These TODO sections can be removed from the doc. 
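
For reference, items 1) and 2) correspond to settings that would appear in 
capacity-scheduler.xml and yarn-site.xml roughly as follows (shown as plain 
key=value pairs; the value for 2) is illustrative):

{noformat}
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
yarn.nodemanager.linux-container-executor.cgroups.mount=true
{noformat}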

> Document the FPGA isolation feature
> ---
>
> Key: YARN-7893
> URL: https://issues.apache.org/jira/browse/YARN-7893
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: FPGA-doc-YARN-7893.pdf, YARN-7893-trunk-001.patch, 
> YARN-7893-trunk-002.patch
>
>







[jira] [Commented] (YARN-7223) Document GPU isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368096#comment-16368096
 ] 

Wangda Tan commented on YARN-7223:
--

Attached ver.2 patch and PDF for review.

[~sunilg], [~Zian Chen], [~ssath...@hortonworks.com], please take a look if 
you get a chance; let's try to finish it by next Monday since it is a 3.1.0 
blocker.

 

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.002.pdf, 
> YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Updated] (YARN-7223) Document GPU isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Attachment: YARN-7223.002.pdf

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.002.pdf, 
> YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Updated] (YARN-7223) Document GPU isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Attachment: YARN-7223.002.patch

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.002.patch, YARN-7223.wip.001.patch, 
> YARN-7223.wip.001.pdf
>
>







[jira] [Updated] (YARN-7223) Document GPU isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Fix Version/s: (was: 3.1.0)

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Updated] (YARN-7223) Document GPU isolation feature

2018-02-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7223:
-
Target Version/s: 3.1.0

> Document GPU isolation feature
> --
>
> Key: YARN-7223
> URL: https://issues.apache.org/jira/browse/YARN-7223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7223.wip.001.patch, YARN-7223.wip.001.pdf
>
>







[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-16 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368062#comment-16368062
 ] 

Carlo Curino commented on YARN-7934:


[~subru] thanks for the review. I agree the test seems unrelated; it passed in 
v3 (which differs from v4 only in comments), so it is likely just a flaky one. 

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch, 
> YARN-7934.v3.patch, YARN-7934.v4.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  
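
As a generic illustration of the kind of change described (class and method 
names are hypothetical, not the actual patch):

{code}
// Widening a hook from private to protected lets a subclass override one
// step of the preemption math without forking the whole calculator.
class CapacityPreemptionCalculator {
  protected float idealShareFor(String queuePath) {
    // ... existing capacity-scheduler preemption logic ...
    return 1.0f;
  }
}

// A federation-global policy (e.g. for YARN-7403) then plugs in by override.
class FederationGlobalCalculator extends CapacityPreemptionCalculator {
  @Override
  protected float idealShareFor(String queuePath) {
    // combine sub-cluster views before delegating to the base math
    return super.idealShareFor(queuePath);
  }
}
{code}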






[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-16 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368055#comment-16368055
 ] 

Subru Krishnan commented on YARN-7934:
--

Thanks [~curino] for updating the patch; the latest rev (v4) LGTM. The test 
failure looks unrelated; can you confirm?

[~leftnoteasy], do you want to take a quick look before we commit?

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch, 
> YARN-7934.v3.patch, YARN-7934.v4.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  






[jira] [Commented] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368054#comment-16368054
 ] 

genericqa commented on YARN-7934:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 34 unchanged - 2 fixed = 34 total (was 36) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7934 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910985/YARN-7934.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 096d8bdd4206 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0898ff4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19732/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results 

[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-02-16 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368027#comment-16368027
 ] 

Zian Chen commented on YARN-7626:
-

Hi [~miklos.szeg...@cloudera.com], I really appreciate your comments on the 
latest patch. I agree with most of the suggestions. I just have two quick 
questions about conceptual issue No. 2:
 # Since the current trunk code base uses the ^$ pair as the criterion for 
recognizing a regex, could you give an example of a regex that is not wrapped 
in a ^$ pair but would still be useful to us in the docker volume mount 
scenario?
 # I'm also concerned that using a regex: prefix instead of the ^$ pair may 
introduce security holes. What if an attacker supplies a user mount with 
something like regex+ as a prefix?

Could you please advise? Thank you so much!
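
For readers following along, the two candidate syntaxes under discussion would 
look roughly like this in container-executor.cfg (values illustrative only; 
the exact marker is what is being debated):

{noformat}
# current trunk proposal: a value wrapped in ^...$ is treated as a regex
docker.allowed.ro-mounts=/var/lib,^/dev/nvidia[0-9]+$
# suggested alternative: an explicit marker prefix
docker.allowed.ro-mounts=/var/lib,regex:/dev/nvidia[0-9]+
{noformat}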

 

 

> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch, 
> YARN-7626.006.patch, YARN-7626.007.patch, YARN-7626.008.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.






[jira] [Updated] (YARN-7934) [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos

2018-02-16 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7934:
---
Attachment: YARN-7934.v4.patch

> [GQ] Refactor preemption calculators to allow overriding for Federation 
> Global Algos
> 
>
> Key: YARN-7934
> URL: https://issues.apache.org/jira/browse/YARN-7934
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7934.v1.patch, YARN-7934.v2.patch, 
> YARN-7934.v3.patch, YARN-7934.v4.patch
>
>
> This Jira tracks minimal changes in the capacity scheduler preemption 
> mechanics that allow for sub-classing and overriding of certain behaviors, 
> which we use to implement federation global algorithms, e.g., in YARN-7403.
>  






[jira] [Updated] (YARN-7944) Remove master node link from headers of application pages

2018-02-16 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7944:
-
Attachment: YARN-7944.001.patch

> Remove master node link from headers of application pages
> -
>
> Key: YARN-7944
> URL: https://issues.apache.org/jira/browse/YARN-7944
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7944.001.patch
>
>
> RM UI2 has links for the master container log and the master node. 
> These links are published on the application and service pages. They are not 
> required on all pages because the AM container node link and container log 
> link are already present in the Application view. 






[jira] [Updated] (YARN-7944) Remove master node link from headers of application pages

2018-02-16 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7944:
-
Fix Version/s: 3.1.0

> Remove master node link from headers of application pages
> -
>
> Key: YARN-7944
> URL: https://issues.apache.org/jira/browse/YARN-7944
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7944.001.patch
>
>
> RM UI2 has links for the master container log and the master node. 
> These links are published on the application and service pages. They are not 
> required on all pages because the AM container node link and container log 
> link are already present in the Application view. 






[jira] [Assigned] (YARN-7944) Remove master node link from headers of application pages

2018-02-16 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora reassigned YARN-7944:


Assignee: Yesha Vora

> Remove master node link from headers of application pages
> -
>
> Key: YARN-7944
> URL: https://issues.apache.org/jira/browse/YARN-7944
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
>
> RM UI2 has links for the master container log and the master node. 
> These links are published on the application and service pages. They are not 
> required on all pages because the AM container node link and container log 
> link are already present in the Application view. 






[jira] [Created] (YARN-7944) Remove master node link from headers of application pages

2018-02-16 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7944:


 Summary: Remove master node link from headers of application pages
 Key: YARN-7944
 URL: https://issues.apache.org/jira/browse/YARN-7944
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.1.0
Reporter: Yesha Vora


RM UI2 has links for the master container log and the master node. 

These links are published on the application and service pages. They are not 
required on all pages because the AM container node link and container log link 
are already present in the Application view. 






[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367921#comment-16367921
 ] 

genericqa commented on YARN-7935:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 51 unchanged - 0 fixed = 56 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
12s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910967/YARN-7935.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 87c0fbcdedbe 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4c2119f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-

[jira] [Commented] (YARN-7521) Add some missing @VisibleForTesting annotations

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367894#comment-16367894
 ] 

genericqa commented on YARN-7521:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7521 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898082/YARN-7521.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 980ef96f3baa 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4c2119f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19730/testReport/ |
| Max. process+thread count | 821 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19730/console |
| Powered by | Apache Yetus 0.

[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-16 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367867#comment-16367867
 ] 

Eric Payne commented on YARN-7813:
--

{{YARN-7813.005.patch}}
 - This patch backports to 3.1 cleanly
 - {{TestAMRMClientPlacementConstraints}} failure was not caused by this patch. 
It fails in trunk without the patch

{{YARN-7813.005.branch-3.0.patch}}
 - This patch backports to branch-2 and branch-2.9 with one minor merge 
conflict.
 - {{TestNodeLabelContainerAllocation}} failure was not caused by this patch. 
It fails in branch-3.0 without the patch

{{YARN-7813.005.branch-2.8.patch}}
 - The only unit test that fails for me in my local repo's build is 
{{TestRMWebServiceAppsNodelabel}}, which also fails in branch-2.8 without the 
patch

> Capacity Scheduler Intra-queue Preemption should be configurable for each 
> queue
> ---
>
> Key: YARN-7813
> URL: https://issues.apache.org/jira/browse/YARN-7813
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7813.001.patch, YARN-7813.002.branch-3.0.patch, 
> YARN-7813.002.patch, YARN-7813.003.branch-2.patch, 
> YARN-7813.003.branch-3.0.patch, YARN-7813.004.patch, 
> YARN-7813.005.branch-2.8.patch, YARN-7813.005.branch-3.0.patch, 
> YARN-7813.005.patch
>
>
> Just as inter-queue (a.k.a. cross-queue) preemption is configurable per 
> queue, intra-queue (a.k.a. in-queue) preemption should be configurable per 
> queue. If a queue does not have a setting for intra-queue preemption, it 
> should inherit its parent's value.






[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367864#comment-16367864
 ] 

genericqa commented on YARN-5028:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 76 unchanged - 0 fixed = 78 total (was 76) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910901/YARN-5028.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76f1396301fe 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4c2119f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19729/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19729/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn

[jira] [Commented] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-02-16 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367838#comment-16367838
 ] 

Botong Huang commented on YARN-7900:


Hi [~asuresh], yes I understand. I don't think I am breaking this requirement 
though. After the flattening, everything in {{List 
schedulingRequests}} will still be put into the {{allocate}} call each time. I 
made this change because it simplifies the code a bit, and the code pattern 
becomes more consistent with the other channels, i.e. release, updates, 
blacklists. Please let me know what you think. 

> [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
> 
>
> Key: YARN-7900
> URL: https://issues.apache.org/jira/browse/YARN-7900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7900.v1.patch, YARN-7900.v2.patch
>
>
> Inside stateful FederationInterceptor (YARN-7899), we need a component 
> similar to AMRMClient that remembers all pending (outstands) requests we've 
> sent to YarnRM, auto re-register and do full pending resend when YarnRM fails 
> over and throws ApplicationMasterNotRegisteredException back. This JIRA adds 
> this component as AMRMClientRelayer.
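
A minimal sketch of the relayer pattern the description outlines, using the 
standard AM protocol types (the real AMRMClientRelayer tracks and trims much 
more state):

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;

public class RelayerSketch {
  // Everything sent but not yet known to be satisfied by YarnRM.
  private final List<ResourceRequest> pendingAsks = new ArrayList<>();
  private final List<ContainerId> pendingReleases = new ArrayList<>();

  public synchronized AllocateResponse allocate(ApplicationMasterProtocol rm,
      AllocateRequest request) throws Exception {
    pendingAsks.addAll(request.getAskList());
    pendingReleases.addAll(request.getReleaseList());
    try {
      // (satisfied asks/releases would be trimmed from pending state here)
      return rm.allocate(request);
    } catch (ApplicationMasterNotRegisteredException e) {
      // YarnRM failed over: re-register, then resend the full pending state.
      rm.registerApplicationMaster(
          RegisterApplicationMasterRequest.newInstance("", -1, ""));
      return rm.allocate(AllocateRequest.newInstance(request.getResponseId(),
          request.getProgress(), pendingAsks, pendingReleases, null));
    }
  }
}
{code}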






[jira] [Commented] (YARN-7943) testCSReservationWithRootUnblocked() and testCSQueueBlocked() in TestCapacityScheduler are flaky

2018-02-16 Thread Tarun Parimi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367829#comment-16367829
 ] 

Tarun Parimi commented on YARN-7943:


Attached a trivial patch which increases 
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms to 10s.

Alternatively, calling nodeUpdate() on the nodes instead of 
CapacityScheduler.schedule() should also work.
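
A sketch of the two options in test-setup form (assumed shape, not the 
attached patch itself):

{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class FlakyTestFixSketch {
  static YarnConfiguration confForTest() {
    // Option 1 (the patch's approach): widen the heartbeat interval so a
    // schedule() call that runs more than a second after node registration
    // still sees the node as recently heartbeated.
    YarnConfiguration conf = new YarnConfiguration();
    conf.setLong(YarnConfiguration.RM_NM_HEARTBEAT_INTERVAL_MS, 10000L);
    return conf;
    // Option 2 (alternative): drive scheduling deterministically with an
    // explicit node heartbeat instead of CapacityScheduler.schedule(), e.g.
    //   scheduler.handle(new NodeUpdateSchedulerEvent(rmNode));
  }
}
{code}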

> testCSReservationWithRootUnblocked() and testCSQueueBlocked() in 
> TestCapacityScheduler are flaky
> 
>
> Key: YARN-7943
> URL: https://issues.apache.org/jira/browse/YARN-7943
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler, test
>Reporter: Tarun Parimi
>Priority: Major
> Attachments: YARN-7943.001.patch
>
>
> testCSReservationWithRootUnblocked() and testCSQueueBlocked() test for the 
> expected value of Used Resources in different queues after calling allocate() 
> and CapacityScheduler.schedule(). 
> If CapacityScheduler.schedule() gets executed more than a second after 
> NodeAddedSchedulerEvent was handled, then the node will be skipped from 
> scheduling. This results in test failure as the node resources will not be 
> allocated and so the expected Used Resource value differs.






[jira] [Updated] (YARN-7943) testCSReservationWithRootUnblocked() and testCSQueueBlocked() in TestCapacityScheduler are flaky

2018-02-16 Thread Tarun Parimi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarun Parimi updated YARN-7943:
---
Attachment: YARN-7943.001.patch

> testCSReservationWithRootUnblocked() and testCSQueueBlocked() in 
> TestCapacityScheduler are flaky
> 
>
> Key: YARN-7943
> URL: https://issues.apache.org/jira/browse/YARN-7943
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler, test
>Reporter: Tarun Parimi
>Priority: Major
> Attachments: YARN-7943.001.patch
>
>
> testCSReservationWithRootUnblocked() and testCSQueueBlocked() test for the 
> expected value of Used Resources in different queues after calling allocate() 
> and CapacityScheduler.schedule(). 
> If CapacityScheduler.schedule() gets executed more than a second after 
> NodeAddedSchedulerEvent was handled, then the node will be skipped from 
> scheduling. This results in test failure as the node resources will not be 
> allocated and so the expected Used Resource value differs.






[jira] [Created] (YARN-7943) testCSReservationWithRootUnblocked() and testCSQueueBlocked() in TestCapacityScheduler are flaky

2018-02-16 Thread Tarun Parimi (JIRA)
Tarun Parimi created YARN-7943:
--

 Summary: testCSReservationWithRootUnblocked() and 
testCSQueueBlocked() in TestCapacityScheduler are flaky
 Key: YARN-7943
 URL: https://issues.apache.org/jira/browse/YARN-7943
 Project: Hadoop YARN
  Issue Type: Task
  Components: capacity scheduler, test
Reporter: Tarun Parimi


testCSReservationWithRootUnblocked() and testCSQueueBlocked() test for the 
expected value of Used Resources in different queues after calling allocate() 
and CapacityScheduler.schedule(). 

If CapacityScheduler.schedule() gets executed more than a second after 
NodeAddedSchedulerEvent was handled, then the node will be skipped from 
scheduling. This results in test failure as the node resources will not be 
allocated and so the expected Used Resource value differs.






[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367808#comment-16367808
 ] 

Rohith Sharma K S commented on YARN-7919:
-

thanks [~vrushalic] for pitching in. As per your suggestion I tried to build 
for HBase 2 and found the jcoding issue :-)
Let us know your feedback on the 04 patch apart from the jcoding issue; there 
are a couple of nits, but those can be handled in the YARN-7346 JIRA. I will 
commit the 04 patch this weekend if there are no more objections from you. 

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>







[jira] [Updated] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-02-16 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7935:
---
Attachment: YARN-7935.2.patch

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname,
> which, when launched through the Docker runtime, differs from the
> NodeManager's hostname (NM_HOST, which is available as an env var during
> container launch). The container's hostname can be exposed to applications
> via an env var CONTAINER_HOSTNAME. Another potential candidate is the
> container's IP, but this can be addressed in a separate jira.
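
As a rough illustration, an application inside the container could consume the
proposed variable like this (a hedged sketch; the fallback logic is an
assumption for non-Docker runtimes, not part of the patch):
{code:java}
public class ContainerHostnameSketch {
  public static void main(String[] args) {
    // CONTAINER_HOSTNAME is the variable proposed in this JIRA; NM_HOST is
    // the existing NodeManager hostname variable mentioned above.
    String host = System.getenv("CONTAINER_HOSTNAME");
    if (host == null || host.isEmpty()) {
      host = System.getenv("NM_HOST"); // assumed fallback outside Docker
    }
    System.out.println("binding to " + host);
  }
}
{code}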



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367772#comment-16367772
 ] 

Vrushali C commented on YARN-7919:
--

Thanks [~haibochen] for working on this patch! This will help clear the path
for working with two HBase versions going forward. Appreciate the effort.

As I suggested in the weekly call yesterday, I am trying out the HBase
compilation with 2.x. I see that Rohith has already reported the jcodings
issue.

For completeness, I will also paste in the build error for the coprocessor.

I think we can deal with this version issue later, during the conditional
compilation work, via profiles.

 

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367769#comment-16367769
 ] 

Haibo Chen commented on YARN-7919:
--

Yes, doing the jcoding fix in YARN-7346 sounds good to me.

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7521) Add some missing @VisibleForTesting annotations

2018-02-16 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-7521:
-
Summary: Add some missing @VisibleForTesting annotations   (was: Add some 
misisng @VisibleForTesting annotations )

> Add some missing @VisibleForTesting annotations 
> 
>
> Key: YARN-7521
> URL: https://issues.apache.org/jira/browse/YARN-7521
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-7521.001.patch
>
>
> While reviewing some other code, I ran into a few places where the 
> @VisibleForTesting annotation should be placed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367748#comment-16367748
 ] 

genericqa commented on YARN-7813:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} branch-2.8 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 8 new + 221 unchanged - 0 fixed = 229 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 49s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-yarn-server-resourcemanager:2 |
|   | hadoop-yarn-client:4 |
| Failed junit test

[jira] [Created] (YARN-7942) Yarn ServiceClient does not delete znode from secure ZooKeeper

2018-02-16 Thread Eric Yang (JIRA)
Eric Yang created YARN-7942:
---

 Summary: Yarn ServiceClient does not delete znode from secure 
ZooKeeper
 Key: YARN-7942
 URL: https://issues.apache.org/jira/browse/YARN-7942
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
Affects Versions: 3.1.0
Reporter: Eric Yang


Even with sasl:rm:cdrwa set on the znode (via the registry system accounts
property), the RM fails to remove the node with the error below. The destroy
call nevertheless reports success.

{code}
2018-02-16 15:49:29,691 WARN  client.ServiceClient 
(ServiceClient.java:actionDestroy(470)) - Error deleting registry entry 
/users/hbase/services/yarn-service/hbase-app-test
org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException: 
`/registry/users/hbase/services/yarn-service/hbase-app-test': Not authorized to 
access path; ACLs: [null ACL]: KeeperErrorCode = NoAuth for 
/registry/users/hbase/services/yarn-service/hbase-app-test
at 
org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:412)
at 
org.apache.hadoop.registry.client.impl.zk.CuratorService.operationFailure(CuratorService.java:390)
at 
org.apache.hadoop.registry.client.impl.zk.CuratorService.zkDelete(CuratorService.java:722)
at 
org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.delete(RegistryOperationsService.java:162)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.actionDestroy(ServiceClient.java:462)
at 
org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:253)
at 
org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
at 
org.apache.hadoop.yarn.service.webapp.ApiServer.stopService(ApiServer.java:243)
at 
org.apache.hadoop.yarn.service.webapp.ApiServer.deleteService(ApiServer.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:89)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
at 
com.googl

[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367727#comment-16367727
 ] 

Rohith Sharma K S commented on YARN-7919:
-

[~haibo.chen] Given that this issue only shows up for conditional compilation
against HBase-2, and that the patch is quite large, would you recommend
committing the current patch first, which we have verified on hbase-1.2.6?
Later, while doing YARN-7346, we can worry more about this jcodings dependency
issue.

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367718#comment-16367718
 ] 

genericqa commented on YARN-6523:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 
25s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910944/YARN-6523.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux f266db04ae2e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/pre

[jira] [Commented] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367681#comment-16367681
 ] 

Rohith Sharma K S commented on YARN-7941:
-

cc: [~jianhe] [~billie.rinaldi]

> Transitive dependencies for component are not resolved 
> ---
>
> Key: YARN-7941
> URL: https://issues.apache.org/jira/browse/YARN-7941
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Priority: Major
>
> It is observed that transitive dependencies are not resolved; as a result,
> one of the components is started earlier than its dependencies.
> Ex: In an HBase app,
> master is an independent component,
> regionserver depends on master,
> hbaseclient depends on regionserver,
> but I always see that HBaseClient is launched before regionserver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7941:

Issue Type: Sub-task  (was: Bug)
Parent: YARN-7054

> Transitive dependencies for component are not resolved 
> ---
>
> Key: YARN-7941
> URL: https://issues.apache.org/jira/browse/YARN-7941
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Priority: Major
>
> It is observed that transitive dependencies are not resolved; as a result,
> one of the components is started earlier than its dependencies.
> Ex: In an HBase app,
> master is an independent component,
> regionserver depends on master,
> hbaseclient depends on regionserver,
> but I always see that HBaseClient is launched before regionserver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367680#comment-16367680
 ] 

Rohith Sharma K S commented on YARN-7941:
-

Sorry for the confusion. It is a totally different issue in native services. I
will make this a sub-task of native services, i.e. YARN-7054.

> Transitive dependencies for component are not resolved 
> ---
>
> Key: YARN-7941
> URL: https://issues.apache.org/jira/browse/YARN-7941
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
>
> It is observed that transitive dependencies are not resolved; as a result,
> one of the components is started earlier than its dependencies.
> Ex: In an HBase app,
> master is an independent component,
> regionserver depends on master,
> hbaseclient depends on regionserver,
> but I always see that HBaseClient is launched before regionserver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367678#comment-16367678
 ] 

Haibo Chen commented on YARN-7941:
--

[~rohithsharma] Is this an HBase issue or a YARN issue? How is it related to
transitive dependencies?

> Transitive dependencies for component are not resolved 
> ---
>
> Key: YARN-7941
> URL: https://issues.apache.org/jira/browse/YARN-7941
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
>
> It is observed that transitive dependencies are not resolved; as a result,
> one of the components is started earlier than its dependencies.
> Ex: In an HBase app,
> master is an independent component,
> regionserver depends on master,
> hbaseclient depends on regionserver,
> but I always see that HBaseClient is launched before regionserver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367669#comment-16367669
 ] 

Haibo Chen commented on YARN-7919:
--

YARN currently does not have jcodings as a direct dependency. Copying from
Rohith's comment above, this is what the dependency checker complains about:
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ 
hadoop-yarn-server-timelineservice-hbase-client ---
[WARNING]
Dependency convergence error for org.jruby.jcodings:jcodings:1.0.18 paths to 
dependency are:
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
+-org.jruby.jcodings:jcodings:1.0.18
and
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
+-org.jruby.joni:joni:2.1.11
  +-org.jruby.jcodings:jcodings:1.0.13
So Maven does see different versions. I suspect this is because the checker
plugin only analyzes the pom tree without applying Maven's dependency
mediation mechanism.
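
For reference, the mediated result (what Maven would actually resolve) can be
inspected with the dependency plugin; an illustrative invocation run from the
affected module, where {{-Dverbose}} also prints the conflicting paths Maven
omits:
{noformat}
mvn dependency:tree -Dverbose -Dincludes=org.jruby.jcodings
{noformat}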

 

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367669#comment-16367669
 ] 

Haibo Chen edited comment on YARN-7919 at 2/16/18 6:02 PM:
---

YARN currently does not have jcodings as a direct dependency. Copying from
Rohith's comment above, this is what the dependency checker complains about:
{code:java}
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-yarn-server-timelineservice-hbase-client ---
[WARNING]
Dependency convergence error for org.jruby.jcodings:jcodings:1.0.18 paths to dependency are:
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
    +-org.jruby.jcodings:jcodings:1.0.18
and
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
    +-org.jruby.joni:joni:2.1.11
      +-org.jruby.jcodings:jcodings:1.0.13{code}
So Maven does see different versions. I suspect this is because the checker
plugin only analyzes the pom tree without applying Maven's dependency
mediation mechanism.

 


was (Author: haibochen):
YARN currently does not have jcodings as a direct dependency. Copying from 
Rohith's comment above, this is what the dependency checker complains about
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ 
hadoop-yarn-server-timelineservice-hbase-client ---
[WARNING]
Dependency convergence error for org.jruby.jcodings:jcodings:1.0.18 paths to 
dependency are:
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
+-org.jruby.jcodings:jcodings:1.0.18
and
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
+-org.jruby.joni:joni:2.1.11
  +-org.jruby.jcodings:jcodings:1.0.13
So maven does see different versions. I suspect this is because the checker 
plugin only analyzes the pom tree without turning to maven dependency mediation 
mechanism.

 

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-16 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367656#comment-16367656
 ] 

Vinod Kumar Vavilapalli commented on YARN-7940:
---

What a great catch, [~billie.rinaldi]!

> Service AM gets NoAuth with secure ZK
> -
>
> Key: YARN-7940
> URL: https://issues.apache.org/jira/browse/YARN-7940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7940.01.patch
>
>
> There is a bug in the RegistrySecurity utility class that is causing the ZK 
> sasl client to be misconfigured. This results in NoAuth after the Service AM 
> successfully creates the first node with sasl ACLs for the user.
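
For context, the sasl ACLs at play have roughly the following shape (a minimal
illustrative sketch; the identities shown are examples, not taken from the
patch):
{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.zookeeper.ZooDefs.Perms;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class SaslAclSketch {
  public static void main(String[] args) {
    // "cdrwa" (create, delete, read, write, admin) corresponds to Perms.ALL;
    // "rm" and "user" below are example sasl identities.
    List<ACL> acls = Arrays.asList(
        new ACL(Perms.ALL, new Id("sasl", "rm")),
        new ACL(Perms.ALL, new Id("sasl", "user")));
    System.out.println(acls);
  }
}
{code}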



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7941:

Environment: (was: It is observed that transitive dependencies are not 
resolved as a result one of the component is started earlier. 
Ex : In HBase app, 
master is independent component, 
regionserver is depends on master.  
hbaseclient depends on regionserver, 
but I always see that HBaseClient is launched before regionserver.)

> Transitive dependencies for component are not resolved 
> ---
>
> Key: YARN-7941
> URL: https://issues.apache.org/jira/browse/YARN-7941
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7941:

Description: 
It is observed that transitive dependencies are not resolved; as a result,
one of the components is started earlier than its dependencies.
Ex: In an HBase app,
master is an independent component,
regionserver depends on master,
hbaseclient depends on regionserver,
but I always see that HBaseClient is launched before regionserver.
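
For illustration, the expected resolution is simply a topological order over
this dependency graph; a minimal sketch of that order (hypothetical code, not
the service AM's actual implementation):
{code:java}
import java.util.*;

public class ComponentOrderSketch {
  public static void main(String[] args) {
    // The graph from the description: master <- regionserver <- hbaseclient.
    Map<String, List<String>> deps = new LinkedHashMap<>();
    deps.put("master", Collections.emptyList());
    deps.put("regionserver", Collections.singletonList("master"));
    deps.put("hbaseclient", Collections.singletonList("regionserver"));

    List<String> order = new ArrayList<>();
    Set<String> done = new HashSet<>();
    for (String c : deps.keySet()) {
      visit(c, deps, done, order);
    }
    System.out.println(order); // [master, regionserver, hbaseclient]
  }

  static void visit(String c, Map<String, List<String>> deps,
      Set<String> done, List<String> order) {
    if (!done.add(c)) {
      return; // already launched (cycles are ignored in this sketch)
    }
    for (String d : deps.get(c)) {
      visit(d, deps, done, order); // launch transitive dependencies first
    }
    order.add(c);
  }
}
{code}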

> Transitive dependencies for component are not resolved 
> ---
>
> Key: YARN-7941
> URL: https://issues.apache.org/jira/browse/YARN-7941
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
>
> It is observed that transitive dependencies are not resolved; as a result,
> one of the components is started earlier than its dependencies.
> Ex: In an HBase app,
> master is an independent component,
> regionserver depends on master,
> hbaseclient depends on regionserver,
> but I always see that HBaseClient is launched before regionserver.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7941) Transitive dependencies for component are not resolved

2018-02-16 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-7941:
---

 Summary: Transitive dependencies for component are not resolved 
 Key: YARN-7941
 URL: https://issues.apache.org/jira/browse/YARN-7941
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: It is observed that transitive dependencies are not
resolved; as a result, one of the components is started earlier than its
dependencies.
Ex: In an HBase app,
master is an independent component,
regionserver depends on master,
hbaseclient depends on regionserver,
but I always see that HBaseClient is launched before regionserver.
Reporter: Rohith Sharma K S






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-16 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367646#comment-16367646
 ] 

Gour Saha commented on YARN-7940:
-

The patch looks good. We tested it internally as well. +1 for commit.

> Service AM gets NoAuth with secure ZK
> -
>
> Key: YARN-7940
> URL: https://issues.apache.org/jira/browse/YARN-7940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7940.01.patch
>
>
> There is a bug in the RegistrySecurity utility class that is causing the ZK 
> sasl client to be misconfigured. This results in NoAuth after the Service AM 
> successfully creates the first node with sasl ACLs for the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367604#comment-16367604
 ] 

Josh Elser commented on YARN-7919:
--

bq. In this case, hbase depends on jcodings-1.0.18 (directly) and
jcodings-1.0.13 (transitively via joni). It is probably not a real problem
given both versions are in the same 1.0.x release line, but Hadoop always
takes the conservative approach. Is there any downside to making hbase-client
depend on exactly the same version, 1.0.13, of jcodings that joni pulls in? It
is more of a nice-to-have for dependency hygiene.

Hrm, this is the part that confuses me. HBase defining a direct dependency on
jcodings-1.0.18 should override the transitive 1.0.13 dependency and prevent
YARN from seeing it. Is YARN setting a direct dependency on the same version
of jcodings for some reason?

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367594#comment-16367594
 ] 

Haibo Chen commented on YARN-7919:
--

Agreed, [~elserj]. My understanding of why the enforcer plugin is enabled in
YARN is that it avoids surprises (such as the one reported in HADOOP-13866)
that come from breaking changes between two versions of the same dependency.

In this case, hbase depends on jcodings-1.0.18 (directly) and jcodings-1.0.13
(transitively via joni). It is probably not a real problem given both versions
are in the same 1.0.x release line, but Hadoop always takes the conservative
approach. Is there any downside to making hbase-client depend on exactly the
same version, 1.0.13, of jcodings that joni pulls in? It is more of a
nice-to-have for dependency hygiene.

Regardless, we can always work around this in Hadoop by explicitly overriding
the joni version, for example as sketched below.
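
A hedged sketch of that kind of override; this variant pins jcodings itself,
which is the artifact that actually diverges, and both the placement in a
dependencyManagement section and the choice of 1.0.18 (hbase-client's direct
version) are assumptions, not from any posted patch:
{code:xml}
<!-- Illustrative only: pin jcodings so both paths the enforcer reports
     converge on a single version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jruby.jcodings</groupId>
      <artifactId>jcodings</artifactId>
      <version>1.0.18</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}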

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367574#comment-16367574
 ] 

genericqa commented on YARN-7940:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry: The patch generated 1 new 
+ 108 unchanged - 0 fixed = 109 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7940 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910946/YARN-7940.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 53ffcae424d8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82f029f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19728/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19728/testReport/ |
| Max. process+thread count | 431 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hado

[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367571#comment-16367571
 ] 

Josh Elser commented on YARN-7919:
--

I'm struggling here. I don't understand what the problem is. HBase-2.0 depends 
on joni 2.1.11 and jcodings 1.0.18.

The build failure is due to the enforcer-plugin you configured in YARN.

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-16 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367565#comment-16367565
 ] 

Gergo Repas commented on YARN-5028:
---

[~rohithsharma] - Sure, I'm going to put together statistics on znode data
sizes with and without the patch.

In the earlier patch versions AMContainerSpec was also removed from the
submission context, but it was put back because the ACLs for the applications
come from the AMContainerSpec. Maybe a trimmed-down AMContainerSpec containing
only ACL data could help - I'll also include such a version in the comparison.
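
As a rough illustration of that trimmed-down variant (a minimal sketch,
assuming the trimming happens where the state-store payload is built; not the
actual patch):
{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

public class TrimAmSpecSketch {
  /**
   * Sketch: before persisting a completed app, keep only the application
   * ACLs from the AMContainerSpec, dropping the bulky launch payload
   * (local resources, environment, commands, service data, tokens).
   */
  static void trimForStore(ApplicationSubmissionContext ctx) {
    ContainerLaunchContext full = ctx.getAMContainerSpec();
    if (full == null) {
      return;
    }
    ContainerLaunchContext aclOnly = ContainerLaunchContext.newInstance(
        null, null, null, null, null, full.getApplicationACLs());
    ctx.setAMContainerSpec(aclOnly);
  }
}
{code}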

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367540#comment-16367540
 ] 

genericqa commented on YARN-7918:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
23s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7918 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910912/YARN-7918.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b85ba1bf0500 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82f029f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19725/testReport/ |
| Max. process+thread count | 855 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19725/console |
| Powered by | Apache Yetus 0.8

[jira] [Updated] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-16 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7940:
-
Attachment: YARN-7940.01.patch

> Service AM gets NoAuth with secure ZK
> -
>
> Key: YARN-7940
> URL: https://issues.apache.org/jira/browse/YARN-7940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Blocker
> Attachments: YARN-7940.01.patch
>
>
> There is a bug in the RegistrySecurity utility class that is causing the ZK 
> sasl client to be misconfigured. This results in NoAuth after the Service AM 
> successfully creates the first node with sasl ACLs for the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-02-16 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367505#comment-16367505
 ] 

Manikandan R commented on YARN-6523:


Rebasing patch.

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch
>
>
> Currently, as part of the heartbeat response, the RM sets every
> application's tokens even though not all applications may be active on the
> node. On top of that, NodeHeartbeatResponsePBImpl converts the tokens for
> each app into a SystemCredentialsForAppsProto, so for each node and each
> heartbeat too many SystemCredentialsForAppsProto objects were getting
> created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with
> 8GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-02-16 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6523:
---
Attachment: YARN-6523.003.patch

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch
>
>
> Currently, as part of the heartbeat response, the RM sets every
> application's tokens even though not all applications may be active on the
> node. On top of that, NodeHeartbeatResponsePBImpl converts the tokens for
> each app into a SystemCredentialsForAppsProto, so for each node and each
> heartbeat too many SystemCredentialsForAppsProto objects were getting
> created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with
> 8GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367502#comment-16367502
 ] 

Rohith Sharma K S commented on YARN-7919:
-

{quote}we could either report to and ask HBase community so that they enforce 
the same version of jcodings in hbase-client, or override jcodings version 
explicitly in timelineservice modules.
{quote}
Makes sense to me. We can discuss or create a JIRA in the HBase community as
well. I will also check with HBase members internally for feedback. cc:
[~elserj]
{quote}The override is only necessary in hbase 2.0, so the explicit dependency 
on jcodings is going to be in a maven profile.
{quote}
In the worst case, this would be our choice.
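
For reference, a minimal sketch of what the profile-scoped override could look 
like; the profile id and activation are assumptions, while the 1.0.18 version 
is the one hbase-client 2.0.0-beta-1 pulls in directly:
{code:xml}
<!-- Sketch only: pin jcodings in an hbase-2 profile via dependencyManagement. -->
<profile>
  <id>hbase-2</id>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.jruby.jcodings</groupId>
        <artifactId>jcodings</artifactId>
        <version>1.0.18</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</profile>
{code}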

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367486#comment-16367486
 ] 

Rohith Sharma K S commented on YARN-5028:
-

Apologies for pitching in late! Thanks [~grepas] for working on this JIRA. 

IIUC, the primary goal of this JIRA is to reduce the size of the 
finished-application data stored in the RMStateStore. YARN-65 already reduces 
the memory footprint for completed applications after recovery. Most of the 
data resides in ApplicationSubmissionContext#AMContainerSpec, but the current 
patch is NOT removing AMContainerSpec from the submission context. That makes 
me wonder how much memory the trimmed store entries actually save. Do you have 
any statistical comparison from before and after the patch?
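
For illustration, a minimal sketch of the kind of trim being asked about here; 
{{ApplicationSubmissionContext#setAMContainerSpec}} is the real API, but the 
helper itself is hypothetical and not the attached patch:
{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.server.resourcemanager.recovery.records.ApplicationStateData;

// Hypothetical helper: drop the heavyweight AM container spec (launch
// command, environment, local resources) before persisting a completed app.
static ApplicationStateData trimForCompletedApp(ApplicationStateData state) {
  ApplicationSubmissionContext ctx = state.getApplicationSubmissionContext();
  ctx.setAMContainerSpec(null);
  return state;
}
{code}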

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367485#comment-16367485
 ] 

Haibo Chen edited comment on YARN-7919 at 2/16/18 3:45 PM:
---

Good catch, [~rohithsharma]! The jcodings problem is more of a dependency 
divergence than a compilation issue, in my opinion. I grepped for its usage in 
the hbase-client (2.0) module; here is what I found.
{code:java}
grep "org.jcodings.*" -R hbase-client
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:30:import
 org.jcodings.Encoding;
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:31:import
 org.jcodings.EncodingDB;
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:32:import
 org.jcodings.specific.UTF8Encoding;
Binary file 
hbase-client/target/classes/org/apache/hadoop/hbase/filter/RegexStringComparator$JoniRegexEngine.class
 matches {code}
Looks like hbase-client relies on JRuby's joni regex engine to do pattern 
matching. I am not sure if it is safe to just exclude jcodings to work around 
the dependency divergence problem.

Alternatively, we could either report this to the HBase community and ask them 
to enforce a single version of jcodings in hbase-client, or override the 
jcodings version explicitly in the timelineservice modules. The override is 
only necessary for hbase 2.0, so the explicit dependency on jcodings would go 
in a maven profile. I think we should do both: do the override as a safe 
workaround to unblock YARN-7346, but also file an HBase issue so that it can 
be properly addressed. Thoughts?

Will address your hbase-schema > hbase-common comment when we decide what to 
do with the jcodings issue.
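
As a side note, one way to smoke-test whether jcodings is really needed at 
runtime is a scan that forces the JONI engine; a sketch, with a made-up 
row-key pattern:
{code:java}
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType;
import org.apache.hadoop.hbase.filter.RowFilter;

// Forcing EngineType.JONI exercises the joni (and, transitively, jcodings)
// code path; with jcodings excluded this should fail at runtime if the
// dependency is truly required.
RegexStringComparator comparator =
    new RegexStringComparator("^cluster!user!.*", EngineType.JONI);
Scan scan = new Scan().setFilter(new RowFilter(CompareOperator.EQUAL, comparator));
{code}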


was (Author: haibochen):
Good catch, [~rohithsharma]! The jcodings problem is more of a dependency 
divergence than a compilation issue, in my opinion. I grepped for its usage in 
the hbase-client (2.0) module; here is what I found.
{code:java}
grep "org.jcodings.*" -R hbase-client
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:30:import
 org.jcodings.Encoding;
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:31:import
 org.jcodings.EncodingDB;
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:32:import
 org.jcodings.specific.UTF8Encoding;
Binary file 
hbase-client/target/classes/org/apache/hadoop/hbase/filter/RegexStringComparator$JoniRegexEngine.class
 matches {code}
Looks like hbase-client relies on JRuby's joni regex engine to do pattern 
matching. I am not sure if it is safe to just exclude jcodings to work around 
the dependency divergence problem.

Alternatively, we could either report this to the HBase community and ask them 
to enforce a single version of jcodings in hbase-client, or override the 
jcodings version explicitly in the timelineservice modules. The override is 
only necessary for hbase 2.0, so the explicit dependency on jcodings would go 
in a maven profile. I think we should do both: do the override as a safe 
workaround to unblock YARN-7346, but also file an HBase issue so that it can 
be properly addressed. Thoughts?

Will address your hbase-schema > hbase-common comment when we decide what to 
do with the jcodings issue.

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367485#comment-16367485
 ] 

Haibo Chen commented on YARN-7919:
--

Good catch, [~rohithsharma]! The jcodings problem is more of a dependency 
divergence than a compilation issue, in my opinion. I grepped for its usage in 
the hbase-client (2.0) module; here is what I found.
{code:java}
grep "org.jcodings.*" -R hbase-client
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:30:import
 org.jcodings.Encoding;
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:31:import
 org.jcodings.EncodingDB;
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java:32:import
 org.jcodings.specific.UTF8Encoding;
Binary file 
hbase-client/target/classes/org/apache/hadoop/hbase/filter/RegexStringComparator$JoniRegexEngine.class
 matches {code}
Looks like hbase-client relies on JRuby's joni regex engine to do pattern 
matching. I am not sure if it is safe to just exclude jcodings to work around 
the dependency divergence problem.

Alternatively, we could either report this to the HBase community and ask them 
to enforce a single version of jcodings in hbase-client, or override the 
jcodings version explicitly in the timelineservice modules. The override is 
only necessary for hbase 2.0, so the explicit dependency on jcodings would go 
in a maven profile. I think we should do both: do the override as a safe 
workaround to unblock YARN-7346, but also file an HBase issue so that it can 
be properly addressed. Thoughts?

Will address your hbase-schema > hbase-common comment when we decide what to 
do with the jcodings issue.

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7799) YARN Service dependency follow up work

2018-02-16 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367436#comment-16367436
 ] 

Gour Saha commented on YARN-7799:
-

[~leftnoteasy] what is the timeline for 3.1.0? It's not really a critical 
issue, so I think 3.2.0 should be fine.

> YARN Service dependency follow up work
> --
>
> Key: YARN-7799
> URL: https://issues.apache.org/jira/browse/YARN-7799
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, resourcemanager
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Critical
>
> As per [~jianhe] these are some followup items that make sense to do after 
> YARN-7766. Quoting Jian's comment below -
> Currently, if the user doesn't supply a location when running yarn app 
> -enableFastLaunch, the jars will be put under this location
> {code}
> hdfs:///yarn-services//service-dep.tar.gz
> {code}
> Since the API server is embedded in the RM, should the RM look for this 
> location too if "yarn.service.framework.path" is not specified?
> And if "yarn.service.framework.path" is not specified and the file still 
> doesn't exist at the above default location, I think the RM can try to 
> upload the jars to that default location instead. Currently the RM uploads 
> the jars to the location defined by the code below, which is per-app and 
> also inconsistent with the CLI location.
> {code}
>   protected Path addJarResource(String serviceName,
>   Map<String, LocalResource> localResources)
>   throws IOException, SliderException {
> Path libPath = fs.buildClusterDirPath(serviceName);
> {code}
> By doing this, the next time a submission request comes, RM doesn't need to 
> upload the jars again.
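
For illustration, a sketch of the fallback lookup being proposed; the config 
key comes from the quote above, while everything else (method name, 
default-path constant) is hypothetical:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical resolution order, not actual RM code. The default tarball
// path is a placeholder; the quoted location above elides a path segment.
static final String DEFAULT_DEP_TARBALL = "hdfs:///yarn-services/.../service-dep.tar.gz";

static Path resolveServiceDependencyTarball(Configuration conf, FileSystem fs)
    throws IOException {
  String configured = conf.get("yarn.service.framework.path");
  if (configured != null) {
    return new Path(configured);
  }
  Path defaultPath = new Path(DEFAULT_DEP_TARBALL);
  if (fs.exists(defaultPath)) {
    return defaultPath;
  }
  return null; // RM could upload the jars to the default location here
}
{code}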



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7940) Service AM gets NoAuth with secure ZK

2018-02-16 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-7940:


 Summary: Service AM gets NoAuth with secure ZK
 Key: YARN-7940
 URL: https://issues.apache.org/jira/browse/YARN-7940
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-native-services
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


There is a bug in the RegistrySecurity utility class that is causing the ZK 
sasl client to be misconfigured. This results in NoAuth after the Service AM 
successfully creates the first node with sasl ACLs for the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-02-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367416#comment-16367416
 ] 

Arun Suresh edited comment on YARN-7900 at 2/16/18 2:52 PM:


Thanks for working on this, [~botong]..
Just went thru the patch. The original {{batchedSchedulingRequests}} is of type 
{{Queue<Collection<SchedulingRequest>>}} to ensure that all 
{{SchedulingRequests}} added in the same {{addSchedulingRequests}} call must be 
sent in the same {{allocate}} call to the RM. The Placement Processor optimizes 
placement for a batch of scheduling requests at a time, and the AM is meant to 
take advantage of this by ensuring that all co-related requests are sent in 
the same allocate call - this is the reason it has to be a collection of 
collections, not just a simple collection.



was (Author: asuresh):
Thanks for working on this, [~botong]..
Just went thru the patch. The original {{batchedSchedulingRequests}} is of type 
{{Queue<Collection<SchedulingRequest>>}} to ensure that all 
{{SchedulingRequests}} added in the same {{addSchedulingRequests}} call must be 
sent in the same {{allocate}} call to the RM. The Placement Processor optimizes 
placement for a batch of scheduling requests at a time, and the AM is meant to 
take advantage of this by ensuring that all co-related requests are sent in 
the same allocate call.


> [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
> 
>
> Key: YARN-7900
> URL: https://issues.apache.org/jira/browse/YARN-7900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7900.v1.patch, YARN-7900.v2.patch
>
>
> Inside stateful FederationInterceptor (YARN-7899), we need a component 
> similar to AMRMClient that remembers all pending (outstands) requests we've 
> sent to YarnRM, auto re-register and do full pending resend when YarnRM fails 
> over and throws ApplicationMasterNotRegisteredException back. This JIRA adds 
> this component as AMRMClientRelayer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-02-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367416#comment-16367416
 ] 

Arun Suresh commented on YARN-7900:
---

Thanks for working on this, [~botong]..
Just went thru the patch. The original {{batchedSchedulingRequests}} is of type 
{{Queue<Collection<SchedulingRequest>>}} to ensure that all 
{{SchedulingRequests}} added in the same {{addSchedulingRequests}} call must be 
sent in the same {{allocate}} call to the RM. The Placement Processor optimizes 
placement for a batch of scheduling requests at a time, and the AM is meant to 
take advantage of this by ensuring that all co-related requests are sent in 
the same allocate call.


> [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
> 
>
> Key: YARN-7900
> URL: https://issues.apache.org/jira/browse/YARN-7900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7900.v1.patch, YARN-7900.v2.patch
>
>
> Inside stateful FederationInterceptor (YARN-7899), we need a component 
> similar to AMRMClient that remembers all pending (outstands) requests we've 
> sent to YarnRM, auto re-register and do full pending resend when YarnRM fails 
> over and throws ApplicationMasterNotRegisteredException back. This JIRA adds 
> this component as AMRMClientRelayer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367364#comment-16367364
 ] 

Arun Suresh commented on YARN-7918:
---

Thanks for taking a look [~GergelyNovak]. Wondering how it passed in the first 
place though.

> TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints 
> failing in trunk
> --
>
> Key: YARN-7918
> URL: https://issues.apache.org/jira/browse/YARN-7918
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-7918.001.patch
>
>
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints(TestAMRMClientPlacementConstraints.java:161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-7918:
-

Assignee: Gergely Novák

> TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints 
> failing in trunk
> --
>
> Key: YARN-7918
> URL: https://issues.apache.org/jira/browse/YARN-7918
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Gergely Novák
>Priority: Major
> Attachments: YARN-7918.001.patch
>
>
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints(TestAMRMClientPlacementConstraints.java:161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-16 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16367317#comment-16367317
 ] 

Gergo Repas commented on YARN-5028:
---

I fixed the bug in 003.patch. The latest run was marked as failed too (there 
was a timeout or other error in the fork), but there were 2208 tests run, 0 
failures, 0 errors, 7 skipped. [~yufeigu] I think it's not a real unit test 
failure; could you please rerun this build?

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366938#comment-16366938
 ] 

genericqa commented on YARN-6523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6523 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898470/YARN-6523.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19724/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-02-16 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366935#comment-16366935
 ] 

Manikandan R edited comment on YARN-6523 at 2/16/18 12:48 PM:
--

Thank you [~rohithsharma] for helping me in setting up a secure cluster. Huh! 
:) Thank you [~bibinchundatt] for your inputs.

[~Naganarasimha] / [~jlowe]

I was able to test the patch in a live pseudo cluster and could see the token 
sequence number getting incremented as and when any new token is fetched from 
HDFS. Can you please review the patch?


was (Author: maniraj...@gmail.com):
Thank you [~rohithsharma] for helping me in setting up a secure cluster. Huh! 
:) Thank you [~bibinchundatt] for your inputs.

[~Naganarasimha] / [~jlowe]

I was able to test the patch in a live pseudo cluster and could see the token 
sequence number getting incremented as and when any new token is fetched from 
HDFS. Can you please review the patch?

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-02-16 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366935#comment-16366935
 ] 

Manikandan R commented on YARN-6523:


Thank you [~rohithsharma] for helping me in setting up a secure cluster. Huh! 
:) Thank you [~bibinchundatt] for your inputs.

[~Naganarasimha] / [~jlowe]

I was able to test the patch in a live pseudo cluster and could see the token 
sequence number getting incremented as and when any new token is fetched from 
HDFS. Can you please review the patch?
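
(For context, a hypothetical sketch of the mechanism under test, not the 
actual patch: the RM bumps a global sequence number when new tokens are 
fetched, and a node only needs the full credentials payload when its 
last-seen number is stale.)
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical names throughout; illustrative only.
private final AtomicLong tokenSequenceNo = new AtomicLong();

void onNewTokensFetched() {
  // e.g. invoked after the RM fetches or renews an HDFS delegation token
  tokenSequenceNo.incrementAndGet();
}

boolean nodeNeedsCredentials(long lastSeqNoSeenByNode) {
  return lastSeqNoSeenByNode < tokenSequenceNo.get();
}
{code}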

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366904#comment-16366904
 ] 

Rohith Sharma K S commented on YARN-7919:
-

I added the jcodings artifact to the exclusion list in 
hadoop-yarn-server-timelineservice-hbase-client/pom.xml for the hbase-client 
dependency and was able to compile this module against both HBase versions, 
i.e. HBase-1.2.6 and HBase-2.0-beta1. For HBase-2.0-beta1 the build still 
fails at the hadoop-yarn-server-timelineservice-hbase-server module, which is 
expected until we fix YARN-7346.
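
For reference, the exclusion looks roughly like this in the hbase-client 
dependency entry; the coordinates come from the maven-enforcer output in this 
thread:
{code:xml}
<!-- Sketch of the exclusion described above. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.jruby.jcodings</groupId>
      <artifactId>jcodings</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}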

jcodings-1.0.8.jar used to be a dependent library of hbase-client in the 
hbase-1.2.6 version, and it used to be shipped in the timelineservice/lib 
folder. But after this change, I do not see the jcodings jar in the 
timelineservice/lib folder when we generate the hadoop tar.gz.

My concern: is this jar really required for hbase-client, or can we go ahead 
and exclude it for the hbase-1.2.6 version? Is it going to cause any 
functionality impact?

nit: I see that the hadoop-yarn-server-timelineservice-hbase-common package 
name is given as {{Apache Hadoop YARN TimelineService HBase Schema}}. I think 
we can keep the name the same as the module name, i.e. {{Apache Hadoop YARN 
TimelineService HBase Common}}.

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366897#comment-16366897
 ] 

genericqa commented on YARN-5028:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 76 unchanged - 0 fixed = 78 total (was 76) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910901/YARN-5028.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c9da2dc169af 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a1e05e0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19723/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19723/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/

[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-16 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366871#comment-16366871
 ] 

Rohith Sharma K S commented on YARN-7919:
-

[~haibochen] I applied the 04 patch and a build in default mode succeeded. 
Later I tried to build by changing the hbase version to beta1 with the 
properties below.
{code}<hbase.version>2.0.0-beta-1</hbase.version>
<hbase-compatible-hadoop.version>3.0.0</hbase-compatible-hadoop.version>{code}
This gives a compilation ERROR in the 
hadoop-yarn-server-timelineservice-hbase-client module.
{code}
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ 
hadoop-yarn-server-timelineservice-hbase-client ---
[WARNING]
Dependency convergence error for org.jruby.jcodings:jcodings:1.0.18 paths to 
dependency are:
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
+-org.jruby.jcodings:jcodings:1.0.18
and
+-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.2.0-SNAPSHOT
  +-org.apache.hbase:hbase-client:2.0.0-beta-1
+-org.jruby.joni:joni:2.1.11
  +-org.jruby.jcodings:jcodings:1.0.13
{code}
Are there any additional libraries to be loaded for hbase-2 compilation, even 
for the client package?

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch, YARN-7919.01.patch, 
> YARN-7919.02.patch, YARN-7919.03.patch, YARN-7919.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-7918:

Attachment: YARN-7918.001.patch

> TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints 
> failing in trunk
> --
>
> Key: YARN-7918
> URL: https://issues.apache.org/jira/browse/YARN-7918
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Priority: Major
> Attachments: YARN-7918.001.patch
>
>
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints(TestAMRMClientPlacementConstraints.java:161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-7918:

Attachment: (was: YARN-7918.001.patch)

> TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints 
> failing in trunk
> --
>
> Key: YARN-7918
> URL: https://issues.apache.org/jira/browse/YARN-7918
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Priority: Major
>
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints(TestAMRMClientPlacementConstraints.java:161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-7918:

Attachment: YARN-7918.001.patch

> TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints 
> failing in trunk
> --
>
> Key: YARN-7918
> URL: https://issues.apache.org/jira/browse/YARN-7918
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Priority: Major
>
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints(TestAMRMClientPlacementConstraints.java:161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7918) TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints failing in trunk

2018-02-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366836#comment-16366836
 ] 

Gergely Novák commented on YARN-7918:
-

The problem is that when {{PlacementConstraintProcessor.addToRetryList}} 
creates the {{BatchedRequests}} object, it creates the {{requests}} collection 
as an immutable set, so the later {{addToBatch}} call fails with this trace:
{noformat}
java.lang.UnsupportedOperationException
at java.util.AbstractCollection.add(AbstractCollection.java:262)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.BatchedRequests.addToBatch(BatchedRequests.java:108)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.PlacementConstraintProcessor.addToRetryList(PlacementConstraintProcessor.java:322)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.PlacementConstraintProcessor.handleSchedulingResponse(PlacementConstraintProcessor.java:296)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.PlacementConstraintProcessor.lambda$handleRejectedRequests$3(PlacementConstraintProcessor.java:246)
{noformat}

Attached a trivial patch to fix this. 
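
The essence of the fix, sketched (the surrounding 
{{PlacementConstraintProcessor}} code is omitted, and the variable names are 
illustrative):
{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

// Back the retry batch with a mutable set so BatchedRequests#addToBatch can
// keep appending, instead of handing it an immutable singleton.
Set<SchedulingRequest> requests =
    new HashSet<>(Collections.singleton(schedulingRequest));
{code}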

> TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints 
> failing in trunk
> --
>
> Key: YARN-7918
> URL: https://issues.apache.org/jira/browse/YARN-7918
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Priority: Major
>
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints.testAMRMClientWithPlacementConstraints(TestAMRMClientPlacementConstraints.java:161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5028) RMStateStore should trim down app state for completed applications

2018-02-16 Thread Gergo Repas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated YARN-5028:
--
Attachment: YARN-5028.004.patch

> RMStateStore should trim down app state for completed applications
> --
>
> Key: YARN-5028
> URL: https://issues.apache.org/jira/browse/YARN-5028
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-5028.000.patch, YARN-5028.001.patch, 
> YARN-5028.002.patch, YARN-5028.003.patch, YARN-5028.004.patch
>
>
> RMStateStore stores enough information to recover applications in case of a 
> restart. The store also retains this information for completed applications 
> to serve their status to REST, WebUI, Java and CLI clients. We don't need all 
> the information we store today to serve application status; for instance, we 
> don't need the {{ApplicationSubmissionContext}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7292) Retrospect Resource Profile Behavior for overriding capability

2018-02-16 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366709#comment-16366709
 ] 

Sunil G commented on YARN-7292:
---

Sorry [~leftnoteasy], I missed that we cut branch-3.1. Committed to branch-3.1 
as well.

> Retrospect Resource Profile Behavior for overriding capability
> --
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it in the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7292) Retrospect Resource Profile Behavior for overriding capability

2018-02-16 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7292:
--
Target Version/s: 3.1.0, 3.2.0  (was: 3.1.0)

> Retrospect Resource Profile Behavior for overriding capability
> --
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.006.patch, 
> YARN-7292.007.patch, YARN-7292.wip.001.patch
>
>
> Had discussions with [~templedf], [~vvasudev], [~sunilg] offline. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run under Amazon’s G2.8X can also run on YARN with the G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it in the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-02-16 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366707#comment-16366707
 ] 

Young Chen commented on YARN-7732:
--

Right now this is only for the synthetic generator, so the old traces 
shouldn't be affected at all. Let me know if you run into any issues though!

There should be no backwards-compatibility issues: I made sure to explicitly 
check whether an old syn.json is being used and, if so, to extract the 
parameters and convert them internally to the new format. This is done in 
SynthTraceJobProducer#validateJobDef and tested in 
TestSynthJobGeneration#testMapReduce.
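
A hypothetical sketch of the shape of that shim (the real check lives in 
SynthTraceJobProducer#validateJobDef; the legacy key is taken from the 
description below, and the converter helper is made up):
{code:java}
import java.io.File;
import java.io.IOException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Spot a legacy MapReduce-specific job definition by one of its old keys
// and translate it to the new generic task-list form.
static JsonNode loadJobDef(File synJson) throws IOException {
  JsonNode jobDef = new ObjectMapper().readTree(synJson);
  if (jobDef.has("mtasks")) {
    jobDef = convertLegacyMapReduceDef(jobDef);  // hypothetical helper
  }
  return jobDef;
}
{code}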

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch
>
>
> Extract the MapReduce-specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set-up in SLSRunner had the MRAMSimulator type hard-coded; 
> for example, startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set-up was also only suitable for mapreduce: 
>  
> {code:java}
> // From 
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc.).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which 
> simulates a long-running streaming service that maintains N containers for 
> the lifetime of the AM. See syn_stream.json.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org