[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7346:
-
Attachment: YARN-7346.prelim2.patch

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7346.prelim1.patch, YARN-7346.prelim2.patch, 
> YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.






[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-11-29 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272227#comment-16272227
 ] 

Konstantinos Karanasos commented on YARN-7522:
--

Thanks for working on this, [~leftnoteasy] -- I will check the latest patch, 
most probably tomorrow.

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, which is targeted at adding a constraint 
> manager to store intra-/inter-application placement constraints. This JIRA is 
> targeted at storing maps between container tags/applications and nodes, 
> which will be required by the affinity/anti-affinity and cardinality 
> implementations.
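
To make the stored mapping concrete, here is a minimal sketch of the
node-to-tag cardinality map such a manager would maintain; the class shape and
method names are illustrative assumptions, not the actual YARN-7522
implementation:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only: tracks, per node, how many allocated
// containers carry each application tag, which is the lookup an
// affinity/anti-affinity or cardinality constraint check needs.
public class TagCardinalityMap {
  private final Map<String, Map<String, Long>> tagsPerNode =
      new ConcurrentHashMap<>();

  public void addContainer(String nodeId, String tag) {
    tagsPerNode.computeIfAbsent(nodeId, n -> new ConcurrentHashMap<>())
        .merge(tag, 1L, Long::sum);
  }

  public void removeContainer(String nodeId, String tag) {
    Map<String, Long> tags = tagsPerNode.get(nodeId);
    if (tags != null) {
      // Drop the entry entirely once the count reaches zero.
      tags.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
    }
  }

  public long getCardinality(String nodeId, String tag) {
    return tagsPerNode.getOrDefault(nodeId, Collections.emptyMap())
        .getOrDefault(tag, 0L);
  }
}
{code}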






[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272200#comment-16272200
 ] 

Vrushali C commented on YARN-7346:
--

Thanks [~haibo.chen], catching up on the hbase atsv2 jiras. I believe the 
hmaster initialization failures are relevant.

Just thinking out loud: with hbase 2.0.0-beta1 we will have more changes in 
coprocessors and tags. See HBASE-19092.

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7346.prelim1.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.






[jira] [Commented] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2017-11-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272196#comment-16272196
 ] 

Vrushali C commented on YARN-7213:
--


Thanks for the patch on YARN-7346 for the hbase 2.x compilation, 
[~haibo.chen], and for the discussion, [~rohithsharma].

I am wondering if we need one more jira to create maven profiles that support 
compiling against different hbase versions at the same time.
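
For illustration, such profiles could look roughly like the sketch below in
the timelineservice-hbase pom; the profile ids and version values are
assumptions, not an actual patch:

{code:xml}
<!-- Hypothetical sketch: select the HBase line at build time,
     e.g. "mvn install -Phbase2". Ids and versions are illustrative. -->
<profiles>
  <profile>
    <id>hbase1</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <hbase.version>1.2.6</hbase.version>
    </properties>
  </profile>
  <profile>
    <id>hbase2</id>
    <properties>
      <hbase.version>2.0.0-alpha3</hbase.version>
    </properties>
  </profile>
</profiles>
{code}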

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations, and the HBase 
> community is getting ready for the Hadoop-beta release so that HBase can 
> ship versions compatible with Hadoop-beta. This JIRA is to keep track of 
> HBase-2.0 integration issues.






[jira] [Updated] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2017-11-29 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7537:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7055

> [Atsv2] load hbase configuration from filesystem rather than URL
> 
>
> Key: YARN-7537
> URL: https://issues.apache.org/jira/browse/YARN-7537
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7537.01.patch
>
>
> Currently HBaseTimelineStorageUtils#getTimelineServiceHBaseConf loads hbase 
> configurations via a URL if *yarn.timeline-service.hbase.configuration.file* 
> is configured, but it is restricted to URLs only. This needs to be changed 
> to also load from a file system. In deployment, the hbase configuration can 
> be kept on a filesystem so that it can be utilized by all the NodeManagers 
> and the ResourceManager.
> cc: [~vrushalic] [~varun_saxena]
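
A minimal sketch of the proposed behaviour, assuming the standard Hadoop
FileSystem/Configuration APIs (the helper class and method names are
illustrative):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: resolve the configured hbase-site.xml through a
// FileSystem (hdfs://, file://, ...) rather than through java.net.URL.
public class HBaseConfLoader {
  public static Configuration load(Configuration yarnConf)
      throws IOException {
    String confFile =
        yarnConf.get("yarn.timeline-service.hbase.configuration.file");
    Configuration hbaseConf = new Configuration(false);
    if (confFile != null) {
      Path path = new Path(confFile);
      FileSystem fs = path.getFileSystem(yarnConf); // any supported scheme
      hbaseConf.addResource(fs.open(path));         // stream, not URL
    }
    return hbaseConf;
  }
}
{code}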






[jira] [Created] (YARN-7585) NodeManager should go unhealthy when state store throws DBException

2017-11-29 Thread Wilfred Spiegelenburg (JIRA)
Wilfred Spiegelenburg created YARN-7585:
---

 Summary: NodeManager should go unhealthy when state store throws 
DBException 
 Key: YARN-7585
 URL: https://issues.apache.org/jira/browse/YARN-7585
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Wilfred Spiegelenburg
Assignee: Wilfred Spiegelenburg


If work-preserving recovery is enabled, the NM will not start up if the state 
store does not initialise. However, if the state store becomes unavailable 
after that for any reason, the NM does not go unhealthy. 
Since the state store is not available, new containers cannot be started any 
more, so the NM should become unhealthy:
{code}
AMLauncher: Error launching appattempt_1508806289867_268617_01. Got 
exception: org.apache.hadoop.yarn.exceptions.YarnException: 
java.io.IOException: org.iq80.leveldb.DBException: IO error: 
/dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
Read-only file system
at o.a.h.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
at 
o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:721)
...
Caused by: java.io.IOException: org.iq80.leveldb.DBException: IO error: 
/dsk/app/var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/028269.log: 
Read-only file system
at 
o.a.h.y.s.n.r.NMLeveldbStateStoreService.storeApplication(NMLeveldbStateStoreService.java:374)
at 
o.a.h.y.s.n.cm.ContainerManagerImpl.startContainerInternal(ContainerManagerImpl.java:848)
at 
o.a.h.y.s.n.cm.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:712)
{code}
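
A minimal sketch of the direction this suggests, assuming a health flag the
node health checker can poll; the class and method names are hypothetical,
not the actual NMLeveldbStateStoreService change:

{code:java}
import java.io.IOException;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBException;

// Hypothetical sketch: a write failure flips a health flag instead of
// only failing the one container start, so the NM can report unhealthy
// and stop accepting new containers.
public class HealthAwareStore {
  private final DB db;
  private volatile boolean healthy = true;

  public HealthAwareStore(DB db) {
    this.db = db;
  }

  public void put(byte[] key, byte[] value) throws IOException {
    try {
      db.put(key, value);
    } catch (DBException e) {
      healthy = false; // surfaced through the NM health status
      throw new IOException("State store write failed", e);
    }
  }

  // Polled by a health checker; false turns the node unhealthy.
  public boolean isHealthy() {
    return healthy;
  }
}
{code}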






[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-11-29 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272151#comment-16272151
 ] 

Wilfred Spiegelenburg commented on YARN-7562:
-

I think you are going in the wrong direction: this is not a code problem but a 
configuration issue. Please read my previous update.

You are still breaking existing configurations and use cases. The 
_primaryGroup_ and _secondaryGroupExistingQueue_ rules can be used as the 
nested rule in the _nestedUserQueue_ rule, as per the example in the 
[documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Automatically_placing_applications_in_queues].
They must be allowed to return a parent queue.
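
For reference, the nested-rule pattern from that documentation looks roughly
like this, where _primaryGroup_ returns a parent queue and the user queue is
created underneath it:

{code:xml}
<queuePlacementPolicy>
  <rule name="specified"/>
  <rule name="nestedUserQueue">
    <rule name="primaryGroup" create="false"/>
  </rule>
  <rule name="default"/>
</queuePlacementPolicy>
{code}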

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, YARN-7562.patch
>
>
> User algo submitted a mapreduce job, and the console log showed a "root.algo 
> is not a leaf queue" exception.
> root.algo is a parent queue, so matching it is meaningless here. I am not 
> sure why matching parent queues was added in the first place.
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> 
> 
> 
> 
> 
> 






[jira] [Commented] (YARN-7560) Resourcemanager hangs when resourceUsedWithWeightToResourceRatio return a overflow value

2017-11-29 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272149#comment-16272149
 ] 

Wilfred Spiegelenburg commented on YARN-7560:
-

[~c61...@163.com] I don't think a sweeping change like that is a good idea as 
part of this change. If you want to tackle that problem, I would suggest that 
you open a new jira that shows where the issues are located and work on it 
there.

> Resourcemanager hangs when  resourceUsedWithWeightToResourceRatio return a 
> overflow value 
> --
>
> Key: YARN-7560
> URL: https://issues.apache.org/jira/browse/YARN-7560
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 3.0.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
> Fix For: 3.0.0
>
> Attachments: YARN-7560.000.patch, YARN-7560.001.patch
>
>
> In our cluster, we changed the configuration and then ran refreshQueues, and 
> found that the resourcemanager hangs. The Resourcemanager also can't restart 
> successfully. The jstack information we collected always shows this:
> {code}
> "main" #1 prio=5 os_prio=0 tid=0x7f98e8017000 nid=0x2f5 runnable 
> [0x7f98eed9a000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:182)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSharesInternal(ComputeFairShares.java:140)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSteadyShares(ComputeFairShares.java:66)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeSteadyShares(FairSharePolicy.java:148)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeSteadyShares(FSParentQueue.java:102)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.getQueue(QueueManager.java:148)
> - locked <0x7f8c4a8177a0> (a java.util.HashMap)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.getLeafQueue(QueueManager.java:101)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.updateAllocationConfiguration(QueueManager.java:387)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$AllocationReloadListener.onReload(FairScheduler.java:1728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:422)
> - locked <0x7f8c4a7eb2e0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1597)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1621)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c4a76ac48> (a java.lang.Object)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:569)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c49254268> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:997)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:257)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c467495e0> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1220)
> {code}
> When we debugged the cluster, we found that 
> resourceUsedWithWeightToResourceRatio returns a negative value, so the loop 
> can't terminate. In our cluster, the sum of all minRes exceeds 
> Integer.MAX_VALUE, so resourceUsedWithWeightToResourceRatio returns a 
> negative value.
> Below is the loop. totalResource is a long, so it is always positive, but 
> resourceUsedWithWeightToResourceRatio returns an int. Our cluster is so big 
> that resourceUsedWithWeightToResourceRatio returns an overflowed, negative 
> value, so the loop never breaks.
> {code}
> while (resourceUsedWithWeightToResourceRatio(rMax, schedulables, type)
>     < totalResource) {
>   rMax *= 2.0;
> }
> {code}
[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272134#comment-16272134
 ] 

genericqa commented on YARN-7346:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 14s{color} 
| {color:red} root generated 197 new + 1237 unchanged - 0 fixed = 1434 total 
(was 1237) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 18s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 

[jira] [Commented] (YARN-7580) ContainersMonitorImpl logged message lacks detail when exceeding memory limits

2017-11-29 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272133#comment-16272133
 ] 

Wilfred Spiegelenburg commented on YARN-7580:
-

Failure does not seem related to the change:
{code}
2017-11-29 11:30:28,018 WARN  [NM ContainerManager dispatcher] 
container.ContainerImpl (ContainerImpl.java:handle(2083)) - Can't handle this 
event at current state: Current: [DONE], eventType: [CONTAINER_LAUNCHED], 
container: [container_0__01_00]
{code}

> ContainersMonitorImpl logged message lacks detail when exceeding memory limits
> --
>
> Key: YARN-7580
> URL: https://issues.apache.org/jira/browse/YARN-7580
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7580.001.patch, YARN-7580.002.patch
>
>
> Currently, in the RM logs, memory usage for a container that exceeds the 
> memory limit is reported like this:
> {code}
> 2016-06-14 09:15:36,694 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
> report from attempt_1464251583966_0932_r_000876_0: Container 
> [pid=134938,containerID=container_1464251583966_0932_01_002237] is running 
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory 
> used; 1.9 GB of 2.1 GB virtual memory used. Killing container.
> {code}
> Two enhancements as part of this jira:
> - make it clearer which limit we exceed
> - show exactly how much we exceeded the limit by
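
Purely as an illustration (not the exact wording of the patch), the enhanced
message could read:

{code}
Container [pid=134938,containerID=container_1464251583966_0932_01_002237] is
running 26214400B beyond the 'PHYSICAL' memory limit. Current usage: 1.0 GB
of 1 GB physical memory used; 1.9 GB of 2.1 GB virtual memory used.
Killing container.
{code}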






[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272114#comment-16272114
 ] 

genericqa commented on YARN-7522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
47s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 247 unchanged - 0 fixed = 255 total (was 247) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899929/YARN-7522.YARN-6592.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70c54c86333d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 2d5d3f1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18730/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7560) Resourcemanager hangs when resourceUsedWithWeightToResourceRatio return a overflow value

2017-11-29 Thread xiayang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272107#comment-16272107
 ] 

xiayang commented on YARN-7560:
---

Int overflow is very common in hadoop, because many variables that keep 
growing use type int. I hope there will be a big patch to solve the problem.

> Resourcemanager hangs when  resourceUsedWithWeightToResourceRatio return a 
> overflow value 
> --
>
> Key: YARN-7560
> URL: https://issues.apache.org/jira/browse/YARN-7560
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 3.0.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
> Fix For: 3.0.0
>
> Attachments: YARN-7560.000.patch, YARN-7560.001.patch
>
>
> In our cluster, we changed the configuration and then ran refreshQueues, and 
> found that the resourcemanager hangs. The Resourcemanager also can't restart 
> successfully. The jstack information we collected always shows this:
> {code}
> "main" #1 prio=5 os_prio=0 tid=0x7f98e8017000 nid=0x2f5 runnable 
> [0x7f98eed9a000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:182)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSharesInternal(ComputeFairShares.java:140)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSteadyShares(ComputeFairShares.java:66)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeSteadyShares(FairSharePolicy.java:148)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeSteadyShares(FSParentQueue.java:102)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.getQueue(QueueManager.java:148)
> - locked <0x7f8c4a8177a0> (a java.util.HashMap)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.getLeafQueue(QueueManager.java:101)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.updateAllocationConfiguration(QueueManager.java:387)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$AllocationReloadListener.onReload(FairScheduler.java:1728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:422)
> - locked <0x7f8c4a7eb2e0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1597)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1621)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c4a76ac48> (a java.lang.Object)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:569)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c49254268> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:997)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:257)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c467495e0> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1220)
> {code}
> When we debugged the cluster, we found that 
> resourceUsedWithWeightToResourceRatio returns a negative value, so the loop 
> can't terminate. In our cluster, the sum of all minRes exceeds 
> Integer.MAX_VALUE, so resourceUsedWithWeightToResourceRatio returns a 
> negative value.
> Below is the loop. totalResource is a long, so it is always positive, but 
> resourceUsedWithWeightToResourceRatio returns an int. Our cluster is so big 
> that resourceUsedWithWeightToResourceRatio returns an overflowed, negative 
> value, so the loop never breaks.
> {code}
> while (resourceUsedWithWeightToResourceRatio(rMax, schedulables, type)
>     < totalResource) {
>   rMax *= 2.0;
> }
> {code}
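
As a standalone illustration of the described wrap-around (not code from the
patch), summing into an int goes negative once the total passes
Integer.MAX_VALUE, so the "used < totalResource" test holds forever:

{code:java}
public class OverflowDemo {
  public static void main(String[] args) {
    long perQueueMinShare = 1_500_000_000L; // two such queues exceed int range
    int intSum = 0;
    long longSum = 0L;
    for (int q = 0; q < 2; q++) {
      intSum += perQueueMinShare;  // compound assignment truncates to int
      longSum += perQueueMinShare;
    }
    System.out.println("int sum:  " + intSum);  // -1294967296 (negative!)
    System.out.println("long sum: " + longSum); // 3000000000 as expected
  }
}
{code}

Widening the accumulator and the method's return type to long removes the
wrap-around, which is presumably the direction the attached patches take.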

[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272098#comment-16272098
 ] 

genericqa commented on YARN-7540:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 59 unchanged - 0 fixed = 61 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
17s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899927/YARN-7540.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 01d24b26a546 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled

2017-11-29 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272084#comment-16272084
 ] 

Ray Chiang commented on YARN-7381:
--

Also, it's my understanding that there will be an overhead of 2 files per 
container on HDFS unless everyone is running the tool from MAPREDUCE-6415.  
So, processing ~1M mappers per day will add ~2M files to HDFS per day.  
Leaving this on could be an issue for large or really busy clusters.

> Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
> ---
>
> Key: YARN-7381
> URL: https://issues.apache.org/jira/browse/YARN-7381
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-7381.1.patch
>
>
> Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", 
> so we can aggregate launch_container.sh and directory.info
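
For reference, the flag is enabled with a standard yarn-site.xml property:

{code:xml}
<property>
  <name>yarn.nodemanager.log-container-debug-info.enabled</name>
  <value>true</value>
</property>
{code}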






[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled

2017-11-29 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272056#comment-16272056
 ] 

Ray Chiang commented on YARN-7381:
--

[~leftnoteasy], I believe launch_container.sh contains all the environment 
variables.  If anyone has sensitive information there, then it will get exposed 
by turning on this debugging information, correct?  That's why we've had to 
have whitelist style filters for environment variables before.

I can't think of any security risk related to the directory listing offhand, 
not that I'm any kind of security expert.

If we go forward with this change, I'd strongly recommend putting in detailed 
information about that in the Release Notes field.

> Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
> ---
>
> Key: YARN-7381
> URL: https://issues.apache.org/jira/browse/YARN-7381
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-7381.1.patch
>
>
> Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", 
> so we can aggregate launch_container.sh and directory.info






[jira] [Updated] (YARN-7578) Extend TestDiskFailures.waitForDiskHealthCheck() sleeping time.

2017-11-29 Thread Guangming Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangming Zhang updated YARN-7578:
--
Attachment: YARN-7578.0.patch

> Extend TestDiskFailures.waitForDiskHealthCheck() sleeping time.
> ---
>
> Key: YARN-7578
> URL: https://issues.apache.org/jira/browse/YARN-7578
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 3.1.0
> Environment: ARMv8 AArch64, Ubuntu16.04
>Reporter: Guangming Zhang
>Priority: Minor
>  Labels: dtest, patch, test
> Fix For: 3.1.0
>
> Attachments: YARN-7578.0.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The Thread.sleep() function is called to wait for the NodeManager to 
> identify disk failures. But in some cases, for example on lower-end 
> hardware, the sleep time is too short and the NodeManager may not have 
> finished identifying the disk failures. This causes test errors:
> {code:java}
>   Running org.apache.hadoop.yarn.server.TestDiskFailures
>   Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 17.686 
> sec <<< FAILURE! - in org.apache.hadoop.yarn.server.TestDiskFailures
>   testLocalDirsFailures(org.apache.hadoop.yarn.server.TestDiskFailures)  
> Time elapsed: 10.412 sec  <<< FAILURE!
>   java.lang.AssertionError: NodeManager could not identify disk failure.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:186)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
>   testLogDirsFailures(org.apache.hadoop.yarn.server.TestDiskFailures)  
> Time elapsed: 5.99 sec  <<< FAILURE!
>   java.lang.AssertionError: NodeManager could not identify disk failure.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:186)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLogDirsFailures(TestDiskFailures.java:111)
> {code}
>  So extend the sleep time from 1000ms to 1500ms to avoid some unit test 
> errors.
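
An alternative to enlarging a fixed sleep is to poll until the condition
holds or a deadline passes; below is a self-contained sketch of such a helper
(Hadoop's test utilities also provide GenericTestUtils.waitFor for this
pattern):

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Sketch: retry the health check until it passes or times out, so slow
// machines just take more iterations instead of failing the assertion.
public final class WaitUtil {
  private WaitUtil() {
  }

  public static void waitFor(BooleanSupplier condition, long intervalMs,
      long timeoutMs) throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException(
            "Condition not met within " + timeoutMs + " ms");
      }
      Thread.sleep(intervalMs);
    }
  }
}
{code}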






[jira] [Commented] (YARN-7573) Gpu Information page could be empty for nodes without GPU

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272031#comment-16272031
 ] 

Hudson commented on YARN-7573:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13294 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13294/])
YARN-7573. Gpu Information page could be empty for nodes without GPU. (wangda: 
rev c9a54aab6b1ad91b14de934178018d8e7eecd001)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-containers.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-rm-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node/yarn-nm-gpu.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/node-menu-panel.hbs


> Gpu Information page could be empty for nodes without GPU
> -
>
> Key: YARN-7573
> URL: https://issues.apache.org/jira/browse/YARN-7573
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 3.1.0
>
> Attachments: YARN-7573.001.patch
>
>
> In the new YARN UI, the node page is not accessible if that node doesn't 
> have any GPU.
> Also, under the node page, when we click on "List of 
> Containers/Applications", the Gpu Information entry in the left nav 
> disappears.






[jira] [Updated] (YARN-7497) Add HDFSSchedulerConfigurationStore for RM HA

2017-11-29 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7497:

Attachment: YARN-7497.006.patch

Fixes the TestYarnConfigurationFields failure.

> Add HDFSSchedulerConfigurationStore for RM HA
> -
>
> Key: YARN-7497
> URL: https://issues.apache.org/jira/browse/YARN-7497
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Jiandan Yang 
> Attachments: YARN-7497.001.patch, YARN-7497.002.patch, 
> YARN-7497.003.patch, YARN-7497.004.patch, YARN-7497.005.patch, 
> YARN-7497.006.patch
>
>
> YARN-5947 added LeveldbConfigurationStore, which uses Leveldb as the 
> backing store, but it does not support Yarn RM HA.
> YARN-6840 supports RM HA, but too many scheduler configurations may exceed 
> the znode limit, for example with 10 thousand queues.
> HDFSSchedulerConfigurationStore stores the conf file in HDFS; when the RM 
> fails over, the new active RM can load the scheduler configuration from HDFS.
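
A minimal sketch of that store/load cycle using the standard FileSystem APIs;
the path and class name are illustrative, not the actual YARN-7497 code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: persist the scheduler configuration to HDFS so a
// newly-active RM can read it back after failover.
public class HdfsConfStoreSketch {
  private final Path confFile =
      new Path("hdfs:///yarn/scheduler-conf/scheduler.xml");

  public void store(Configuration schedConf, Configuration fsConf)
      throws IOException {
    FileSystem fs = confFile.getFileSystem(fsConf);
    try (FSDataOutputStream out = fs.create(confFile, true)) {
      schedConf.writeXml(out); // overwrite with the latest version
    }
  }

  public Configuration load(Configuration fsConf) throws IOException {
    FileSystem fs = confFile.getFileSystem(fsConf);
    Configuration conf = new Configuration(false);
    conf.addResource(fs.open(confFile)); // read on RM failover
    return conf;
  }
}
{code}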






[jira] [Commented] (YARN-7573) Gpu Information page could be empty for nodes without GPU

2017-11-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271995#comment-16271995
 ] 

Wangda Tan commented on YARN-7573:
--

Thanks [~sunilg], tested on GPU cluster. +1 to the patch, committing now.

> Gpu Information page could be empty for nodes without GPU
> -
>
> Key: YARN-7573
> URL: https://issues.apache.org/jira/browse/YARN-7573
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7573.001.patch
>
>
> In the new YARN UI, the node page is not accessible if that node doesn't 
> have any GPU.
> Also, under the node page, when we click on "List of 
> Containers/Applications", the Gpu Information entry in the left nav 
> disappears.






[jira] [Updated] (YARN-7522) Add application tags manager implementation

2017-11-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7522:
-
Attachment: YARN-7522.YARN-6592.003.patch

Attached ver.3 patch, which solves the UT failures and the javadoc/findbugs 
warnings.

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.003.patch, YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, which is targeted at adding a constraint 
> manager to store intra-/inter-application placement constraints. This JIRA is 
> targeted at storing maps between container tags/applications and nodes, 
> which will be required by the affinity/anti-affinity and cardinality 
> implementations.






[jira] [Updated] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7540:

Attachment: YARN-7540.002.patch

- Fixed styling issues.

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the 
> REST API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in individual users' HDFS 
> home directories.  For consistency, we want the yarn app cli to interact 
> with the API service to manage applications.  For performance reasons, it is 
> easier to list all applications from one user's home directory than to crawl 
> all users' home directories.  For security reasons, it is safer to access 
> only one user's home directory instead of all of them.  Given the reasons 
> above, the proposal is to change how {{yarn app -launch}}, {{yarn app 
> -list}} and {{yarn app -destroy}} work: instead of calling the HDFS API and 
> RM API to launch containers, the CLI will be converted to call the API 
> service REST API that resides in the RM, and the RM performs the persistence 
> and the operations needed to launch the actual application.
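
A rough sketch of the kind of call the converted CLI would make; the endpoint
path follows the YARN services REST API, and the host, port, and spec fields
here are placeholder assumptions:

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: POST the service spec JSON to the API service in
// the RM instead of writing to HDFS and driving RM APIs directly.
public class LaunchViaApiService {
  public static void main(String[] args) throws Exception {
    String spec = "{\"name\":\"sleeper-service\",\"version\":\"1.0\"}";
    URL url = new URL("http://rm-host:8088/app/v1/services");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(spec.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}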






[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-11-29 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271916#comment-16271916
 ] 

Zian Chen commented on YARN-7274:
-

I want to show the queue configuration for the UT I wrote for the initial 
patch so that we can see more clearly whether the test meets our expectations.
The queue hierarchy is as follows:
* root
** a
*** a1
*** a2
** b
*** b1
*** b2
*** b3

And the queue capacities are shown below:

||queue||label||capacity||maxCapacity||absCapacity||absMaxCapacity||
|a|none|50|100|50|100|
|b|none|50|100|50|100|
|a1|none|60|60|30|60|
|a2|none|40|85|20|85|
|b1|none|10|10|5|10|
|b2|none|80|40|40|40|
|b3|none|10|25|5|25|
|a|red|50|100|50|100|
|b|red|50|100|50|100|
|a1|red|60|30|30|30|
|a2|red|40|60|20|60|
|b1|red|60|30|30|30|
|b2|red|30|100|15|100|
|b3|red|10|100|5|100|
|a|blue|30|50|30|50|
|b|blue|70|100|70|100|
|a1|blue|100|100|30|50|
|a2|blue|/|/|/|/|
|b1|blue|50|100|35|100|
|b2|blue|25|100|17.5|100|
|b3|blue|25|100|17.5|100|


> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
> Attachments: YARN-7274.wip.1.patch
>
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity.<queue-path>.maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100%, but I 
> thought (perhaps incorrectly) that the intention of the -1 setting was to 
> disable elasticity. This is confirmed by the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity.<queue-path>.capacity for all queues, at 
> each level, must be equal to 100, but 
> yarn.scheduler.capacity.<queue-path>.maximum-capacity is actually a 
> percentage of the entire cluster, not just the parent queue. Yet it cannot 
> be set lower than the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at the leaf queue level.
> This improvement proposes that YARN have the ability to disable elasticity 
> at the leaf queue level even if a parent queue permits elasticity, by having 
> a yarn.scheduler.capacity.<queue-path>.maximum-capacity greater than its 
> yarn.scheduler.capacity.<queue-path>.capacity.
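
The closest existing workaround is to pin the leaf's ceiling to its own
absolute capacity, which only works when the numbers line up; a sketch
assuming a hypothetical leaf queue root.a whose absolute capacity is 30%:

{code:xml}
<!-- Sketch: with maximum-capacity equal to the queue's absolute
     capacity, the queue cannot elastically borrow beyond it. -->
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <value>30</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <value>30</value>
</property>
{code}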






[jira] [Commented] (YARN-7455) quote_and_append_arg can overflow buffer

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271904#comment-16271904
 ] 

genericqa commented on YARN-7455:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 24s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.webapp.TestNMWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899899/YARN-7455.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 41a1baff9c1b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 333ef30 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18726/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18726/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18726/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> quote_and_append_arg can overflow buffer
> 
>
> Key: YARN-7455
> URL: https://issues.apache.org/jira/browse/YARN-7455
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: YARN-7455.001.patch
>
>
> While reviewing YARN-7197 I noticed that add_mounts in docker_util.c has a 
> potential 

[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271901#comment-16271901
 ] 

genericqa commented on YARN-7540:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 59 unchanged - 0 fixed = 65 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
57s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7540 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899896/YARN-7540.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 7a437fe866df 

[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-29 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271879#comment-16271879
 ] 

Eric Yang commented on YARN-6669:
-

ServiceScheduler tries to register the application in ZooKeeper, but it seems 
to fail with a secure ZooKeeper:

AM Log output:
{code}
2017-11-30 00:13:04,343 [pool-5-thread-1] ERROR service.ServiceScheduler - 
Failed to register app ww4 in registry
{code}

ZooKeeper log:
{code}
2017-11-30 00:21:22,164 - INFO  [ProcessThread(sid:0 
cport:-1)::PrepRequestProcessor@643] - Got user-level KeeperException when 
processing sessionid:0x15fe03595b30090 type:create cxid:0x46 zxid:0xa21e5 
txntype:-1 reqpath:n/a Error 
Path:/registry/users/spark/services/yarn-service/ww4 Error:KeeperErrorCode = 
NoNode for /registry/users/spark/services/yarn-service/ww4
{code}
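For anyone debugging this: a create in ZooKeeper fails with NoNode when the 
parent znode does not exist, so the path 
/registry/users/spark/services/yarn-service likely has a missing ancestor. A 
minimal sketch of defensively creating the missing ancestors with the plain 
ZooKeeper client (the class name is illustrative, and a secure registry would 
need SASL-based ACLs rather than OPEN_ACL_UNSAFE):

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EnsurePath {
  /** Create every missing ancestor of path; tolerate concurrent creators. */
  static void ensureParents(ZooKeeper zk, String path) throws Exception {
    int idx = 1;
    while ((idx = path.indexOf('/', idx)) != -1) {
      String parent = path.substring(0, idx);
      try {
        // NOTE: OPEN_ACL_UNSAFE is for illustration only; a secure registry
        // would attach sasl ACLs to these nodes.
        zk.create(parent, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.PERSISTENT);
      } catch (KeeperException.NodeExistsException ignored) {
        // Another writer created it first; that is fine.
      }
      idx++;
    }
  }
}
{code}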

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, 
> YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, 
> YARN-6669.09.patch, YARN-6669.10.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> -  Make registry client to programmatically generate the jaas conf for secure 
> access ZK quorum
> - Create a KerberosPrincipal resource object in REST API for user to supply 
> keberos keytab and principal 
> - User has two ways to configure:
> -- If keytab starts with "hdfs://",  the keytab will be localized by YARN
> -- If keytab starts with "file://", it is assumed that the keytab are 
> available on the localhost.
> - AM will use the keytab to log in
> - ServiceClient is changed to ask hdfs delegation token when submitting the 
> service
> - AM code will use the tokens when launching containers 
> - Support kerberized communication between client and AM



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7509) AsyncScheduleThread and ResourceCommitterService are still running after RM is transitioned to standby

2017-11-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271858#comment-16271858
 ] 

Wangda Tan commented on YARN-7509:
--

Thanks [~andrew.wang], just backported to branch-3.0.0. 
[~subru], just pushed to branch-2.9.

> AsyncScheduleThread and ResourceCommitterService are still running after RM 
> is transitioned to standby
> --
>
> Key: YARN-7509
> URL: https://issues.apache.org/jira/browse/YARN-7509
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.0.0, 3.1.0, 2.9.1
>
> Attachments: YARN-7509.001.patch
>
>
> After the RM is transitioned to standby, AsyncScheduleThread and 
> ResourceCommitterService will receive an interrupt signal. When the thread is 
> sleeping, it will ignore the interrupt signal since InterruptedException is 
> caught inside and the interrupt status is cleared.
> For AsyncScheduleThread, InterruptedException was caught and ignored in 
> CapacityScheduler#schedule.
> For ResourceCommitterService, InterruptedException was caught inside and 
> ignored in ResourceCommitterService#run. 
> We should let the interrupt signal propagate and make these threads exit.
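For reference, the standard remedy for this pattern is to restore the thread's 
interrupt status when InterruptedException is caught, so the enclosing loop 
condition can observe it and exit. A minimal sketch of the pattern (class and 
method names are illustrative, not the actual scheduler code):

{code}
/** Illustrative worker loop that exits cleanly once interrupted. */
public class InterruptAwareWorker implements Runnable {
  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        doOneRoundOfWork();
        Thread.sleep(100L);
      } catch (InterruptedException e) {
        // Restore the flag instead of swallowing it, so the while-condition
        // above sees it and the thread terminates.
        Thread.currentThread().interrupt();
      }
    }
  }

  private void doOneRoundOfWork() {
    // placeholder for one round of scheduling work
  }
}
{code}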



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7509) AsyncScheduleThread and ResourceCommitterService are still running after RM is transitioned to standby

2017-11-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved YARN-7509.
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.1)
   3.0.0

> AsyncScheduleThread and ResourceCommitterService are still running after RM 
> is transitioned to standby
> --
>
> Key: YARN-7509
> URL: https://issues.apache.org/jira/browse/YARN-7509
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.0.0, 3.1.0, 2.9.1
>
> Attachments: YARN-7509.001.patch
>
>
> After the RM is transitioned to standby, AsyncScheduleThread and 
> ResourceCommitterService will receive an interrupt signal. When the thread is 
> sleeping, it will ignore the interrupt signal since InterruptedException is 
> caught inside and the interrupt status is cleared.
> For AsyncScheduleThread, InterruptedException was caught and ignored in 
> CapacityScheduler#schedule.
> For ResourceCommitterService, InterruptedException was caught inside and 
> ignored in ResourceCommitterService#run. 
> We should let the interrupt signal propagate and make these threads exit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7346:
-
Attachment: YARN-7346.prelim1.patch

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7346.prelim1.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271852#comment-16271852
 ] 

Haibo Chen commented on YARN-7346:
--

Updated the patch to address the issues identified in the Jenkins reports.

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7346.prelim1.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5594) Handle old RMDelegationToken format when recovering RM

2017-11-29 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-5594:

Attachment: YARN-5594.002.patch

The 002 patch:
- It turns out that the {{readOldFormatFields}} code is the same as the 
{{readFields}} code in {{AbstractDelegationTokenIdentifier}}, so we can simply 
call {{super.readFields}} instead of duplicating the code.
- Moved the token reading (old and new format handling) logic to a common place 
in {{RMStateStoreUtils}} so it can be used by {{LeveldbRMStateStore}} and 
{{ZKRMStateStore}} (in addition to {{FileSystemRMStateStore}})
- Improved existing unit test
- Added additional unit tests

I also manually verified that it fixes the problem in a cluster with the 
{{ZKRMStateStore}}.
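
To illustrate the shape of the shared reading logic, here is a toy sketch (not 
the actual RMStateStoreUtils code: the real formats are a protobuf payload vs. 
plain Writable fields, and a one-byte version tag stands in for that 
distinction here):

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;

/** Toy token payload with a new (tagged) and an old (untagged) layout. */
class TokenData {
  long renewDate;

  /** New format: a version tag precedes the payload. */
  static TokenData parseNewFormat(DataInput in) throws IOException {
    if (in.readByte() != 1) {
      throw new IOException("not the new format");
    }
    TokenData d = new TokenData();
    d.renewDate = in.readLong();
    return d;
  }

  /** Old format: the raw field with no version tag. */
  static TokenData parseOldFormat(DataInput in) throws IOException {
    TokenData d = new TokenData();
    d.renewDate = in.readLong();
    return d;
  }

  /** Shared entry point for all state stores: try the new format first,
   *  then fall back to the old one on a parse failure. */
  static TokenData read(byte[] bytes) throws IOException {
    try {
      return parseNewFormat(new DataInputStream(new ByteArrayInputStream(bytes)));
    } catch (IOException e) {
      return parseOldFormat(new DataInputStream(new ByteArrayInputStream(bytes)));
    }
  }
}
{code}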

> Handle old RMDelegationToken format when recovering RM
> --
>
> Key: YARN-5594
> URL: https://issues.apache.org/jira/browse/YARN-5594
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Tatyana But
>Assignee: Robert Kanter
>  Labels: oct16-medium
> Attachments: YARN-5594.001.patch, YARN-5594.002.patch
>
>
> We got this error after upgrading the cluster from v2.5.1 to 2.7.0.
> {noformat}
> 2016-08-25 17:20:33,293 ERROR
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
> load/recover state
> com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
> an invalid tag (zero).
> at 
> com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
> at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
> at 
> com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044
> {noformat}
> The reason for this problem is that these Hadoop versions use different 
> formats for the files 
> /var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*.
> This fix handles the old data format during RM recovery if an 
> InvalidProtocolBufferException occurs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5594) Handle old RMDelegationToken format when recovering RM

2017-11-29 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned YARN-5594:
---

 Assignee: Robert Kanter
Affects Version/s: (was: 2.7.0)
   2.6.0
 Target Version/s: 3.1.0, 2.10.0
  Summary: Handle old RMDelegationToken format when recovering RM  
(was: Handle old data format while recovering RM)

> Handle old RMDelegationToken format when recovering RM
> --
>
> Key: YARN-5594
> URL: https://issues.apache.org/jira/browse/YARN-5594
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Tatyana But
>Assignee: Robert Kanter
>  Labels: oct16-medium
> Attachments: YARN-5594.001.patch
>
>
> We got this error after upgrading the cluster from v2.5.1 to 2.7.0.
> {noformat}
> 2016-08-25 17:20:33,293 ERROR
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to
> load/recover state
> com.google.protobuf.InvalidProtocolBufferException: Protocol message contained
> an invalid tag (zero).
> at 
> com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
> at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075)
> at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267)
> at 
> com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210)
> at 
> com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044
> {noformat}
> The reason for this problem is that these Hadoop versions use different 
> formats for the files 
> /var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken*.
> This fix handles the old data format during RM recovery if an 
> InvalidProtocolBufferException occurs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7584) Support resource profiles in native services

2017-11-29 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7584:

Issue Type: Sub-task  (was: Improvement)
Parent: YARN-7054

> Support resource profiles in native services
> 
>
> Key: YARN-7584
> URL: https://issues.apache.org/jira/browse/YARN-7584
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>
> Currently resource profiles do not appear to be supported: {noformat}// 
> Currently resource profile is not supported yet, so we will raise
> // validation error if only resource profile is specified
> if (StringUtils.isNotEmpty(resource.getProfile())) {
>   throw new IllegalArgumentException(
>   RestApiErrorMessages.ERROR_RESOURCE_PROFILE_NOT_SUPPORTED_YET);
> }{noformat}
> Also, attempting to specify profiles in the service spec throws an exception 
> since the cpu default value is 1:
> {noformat}Exception in thread "main" java.lang.IllegalArgumentException: 
> Cannot specify cpus/memory along with profile for component ps
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateServiceResource(ServiceApiUtil.java:278)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateComponent(ServiceApiUtil.java:201)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateAndResolveService(ServiceApiUtil.java:174)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:214)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:205)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111){noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7584) Support resource profiles in native services

2017-11-29 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-7584:
---

 Summary: Support resource profiles in native services
 Key: YARN-7584
 URL: https://issues.apache.org/jira/browse/YARN-7584
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jonathan Hung


Currently resource profiles do not appear to be supported: {noformat}// 
Currently resource profile is not supported yet, so we will raise
// validation error if only resource profile is specified
if (StringUtils.isNotEmpty(resource.getProfile())) {
  throw new IllegalArgumentException(
  RestApiErrorMessages.ERROR_RESOURCE_PROFILE_NOT_SUPPORTED_YET);
}{noformat}

Also, attempting to specify profiles in the service spec throws an exception 
since the cpu default value is 1:
{noformat}Exception in thread "main" java.lang.IllegalArgumentException: Cannot 
specify cpus/memory along with profile for component ps
at 
org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateServiceResource(ServiceApiUtil.java:278)
at 
org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateComponent(ServiceApiUtil.java:201)
at 
org.apache.hadoop.yarn.service.utils.ServiceApiUtil.validateAndResolveService(ServiceApiUtil.java:174)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:214)
at 
org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:205)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111){noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7522) Add application tags manager implementation

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271824#comment-16271824
 ] 

genericqa commented on YARN-7522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 13 new + 155 unchanged - 0 fixed = 168 total (was 155) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementTagsManager$NodeToCountedTags.getCardinality(NodeId,
 Set, LongBinaryOperator) makes inefficient use of keySet iterator instead of 
entrySet iterator  At PlacementTagsManager.java:use of keySet iterator instead 
of entrySet iterator  At PlacementTagsManager.java:[line 120] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | 

[jira] [Commented] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271810#comment-16271810
 ] 

genericqa commented on YARN-7438:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 8 new + 380 unchanged - 1 fixed = 388 total (was 381) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7438 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899878/YARN-7438.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 583bd08d72d3 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 53509f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18721/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Created] (YARN-7583) Reduce overhead of container reacquisition

2017-11-29 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-7583:


 Summary: Reduce overhead of container reacquisition
 Key: YARN-7583
 URL: https://issues.apache.org/jira/browse/YARN-7583
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Jason Lowe


When reacquiring containers after a nodemanager restart, the Linux container 
executor invokes the container-executor binary to essentially kill -0 the 
process to check whether it is alive.  It would be a lot cheaper on Linux to 
stat the /proc/<pid> directory, which the nodemanager can do directly, rather 
than pay for the fork-and-exec through the container executor and risk signal 
permission issues.
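
A minimal sketch of the cheaper liveness probe described above (Linux-only, and 
subject to pid reuse just like kill -0; the class name is illustrative):

{code}
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcLiveness {
  /** A live pid on Linux has a /proc/<pid> directory. */
  static boolean isAlive(String pid) {
    return Files.isDirectory(Paths.get("/proc", pid));
  }

  public static void main(String[] args) {
    System.out.println(isAlive(args[0]) ? "alive" : "gone");
  }
}
{code}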




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7537) [Atsv2] load hbase configuration from filesystem rather than URL

2017-11-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271751#comment-16271751
 ] 

Vrushali C commented on YARN-7537:
--

Okay, sounds good. thanks!

> [Atsv2] load hbase configuration from filesystem rather than URL
> 
>
> Key: YARN-7537
> URL: https://issues.apache.org/jira/browse/YARN-7537
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7537.01.patch
>
>
> Currently HBaseTimelineStorageUtils#getTimelineServiceHBaseConf loads hbase 
> configurations using a URL if *yarn.timeline-service.hbase.configuration.file* 
> is configured, but it is restricted to URLs only. This needs to be changed to 
> load from a file system. In a deployment, the hbase configuration can be kept 
> on a filesystem so that it can be used by all the NodeManagers and the 
> ResourceManager.
> cc: [~vrushalic] [~varun_saxena]
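
A minimal sketch of loading an hbase-site.xml from a Hadoop filesystem instead 
of a URL (an illustrative helper, not the actual HBaseTimelineStorageUtils 
change; Configuration.addResource accepts an InputStream, and the stream 
returned by FileSystem.open is one):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HBaseConfLoader {
  /** Read an hbase configuration file kept on HDFS (or any Hadoop FS). */
  static Configuration load(String fsUri, String confPath) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(fsUri), conf);
    // Start from an empty Configuration so only the hbase file is loaded.
    Configuration hbaseConf = new Configuration(false);
    hbaseConf.addResource(fs.open(new Path(confPath)));
    return hbaseConf;
  }
}
{code}

Usage would be something like load("hdfs://nn:8020", "/etc/hbase/hbase-site.xml"), 
where the namenode address and path are placeholders.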



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7455) quote_and_append_arg can overflow buffer

2017-11-29 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-7455:
--
Attachment: YARN-7455.001.patch

I am attaching a patch that addresses the buffer overflow issues described in 
this Jira.  I've also updated the test cases for quote_and_append_arg to add 
cases that demonstrate the failure with the original code and pass with the new 
code.
Please review.

> quote_and_append_arg can overflow buffer
> 
>
> Key: YARN-7455
> URL: https://issues.apache.org/jira/browse/YARN-7455
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: YARN-7455.001.patch
>
>
> While reviewing YARN-7197 I noticed that add_mounts in docker_util.c has a 
> potential buffer overflow since tmp_buffer is only 1024 bytes which may not 
> be sufficient to hold the specified mount path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271731#comment-16271731
 ] 

Hudson commented on YARN-6851:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13293 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13293/])
YARN-6851. Capacity Scheduler: document configs for controlling # (weiy: rev 
333ef303ff0caf9adfd378652a8f966377901768)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md


> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: YARN-6851-branch-2.001.patch, YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers are 
> allowed to be allocated in each node heartbeat, and we also had the 
> offswitchCount config before. It would be better to document these 
> configurations in the CS section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-7540:
---

Assignee: Eric Yang

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the REST 
> API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in individual users' HDFS home 
> directories.  For consistency, we want the yarn app cli to interact with the 
> API service to manage applications.  For performance reasons, it is easier to 
> list all applications from one user's home directory instead of crawling all 
> users' home directories.  For security reasons, it is safer to access only one 
> user's home directory instead of all of them.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling the HDFS API and RM API to launch 
> containers, the CLI will be converted to call the API service REST API that 
> resides in the RM.  The RM performs the persistence and the operations to 
> launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7540:

Attachment: YARN-7540.001.patch

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the REST 
> API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in individual users' HDFS home 
> directories.  For consistency, we want the yarn app cli to interact with the 
> API service to manage applications.  For performance reasons, it is easier to 
> list all applications from one user's home directory instead of crawling all 
> users' home directories.  For security reasons, it is safer to access only one 
> user's home directory instead of all of them.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling the HDFS API and RM API to launch 
> containers, the CLI will be converted to call the API service REST API that 
> resides in the RM.  The RM performs the persistence and the operations to 
> launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7540:

Attachment: (was: YARN-7540.001.patch)

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
> Fix For: yarn-native-services
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching it through the REST API.  All applications launched through the REST 
> API are currently stored in the yarn user's HDFS home directory, while 
> applications managed through the CLI are stored in individual users' HDFS home 
> directories.  For consistency, we want the yarn app cli to interact with the 
> API service to manage applications.  For performance reasons, it is easier to 
> list all applications from one user's home directory instead of crawling all 
> users' home directories.  For security reasons, it is safer to access only one 
> user's home directory instead of all of them.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling the HDFS API and RM API to launch 
> containers, the CLI will be converted to call the API service REST API that 
> resides in the RM.  The RM performs the persistence and the operations to 
> launch the actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-11-29 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-6851:
--
Fix Version/s: 2.9.1
   2.10.0
   3.1.0

> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: YARN-6851-branch-2.001.patch, YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers are 
> allowed to be allocated in each node heartbeat, and we also had the 
> offswitchCount config before. It would be better to document these 
> configurations in the CS section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-11-29 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271703#comment-16271703
 ] 

Wei Yan commented on YARN-6851:
---

Thanks for the review, [~wangda]. Committed to branch-2, branch-2.9 and trunk.

> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: YARN-6851-branch-2.001.patch, YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers are 
> allowed to be allocated in each node heartbeat, and we also had the 
> offswitchCount config before. It would be better to document these 
> configurations in the CS section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7495) Improve robustness of the AggregatedLogDeletionService

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271693#comment-16271693
 ] 

genericqa commented on YARN-7495:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 1 new + 
37 unchanged - 2 fixed = 38 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  5s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.api.TestPBImplRecords |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7495 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899876/YARN-7495.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 97496b44d793 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 53509f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18722/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18722/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18722/testReport/ |
| Max. process+thread count | 

[jira] [Created] (YARN-7582) Yarn Services - restore descriptive exception types

2017-11-29 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created YARN-7582:
--

 Summary: Yarn Services - restore descriptive exception types
 Key: YARN-7582
 URL: https://issues.apache.org/jira/browse/YARN-7582
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sergey Shelukhin


Slider used to throw descriptive exceptions like UnknownApp, etc. from various 
commands (e.g. destroy). It looks like YARN Services throws generic exceptions 
from these (see the review in HIVE-18037). 
It would be good to restore the descriptive exceptions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-11-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271625#comment-16271625
 ] 

Jian He commented on YARN-7565:
---

- onContainersFromPreviousAttempts ->  onContainersReceivedFromPreviousAttempts
To avoid compilation errors in existing apps extending AMRMClientCallBack, it 
is safer to make it an empty method instead of an abstract method (see the 
sketch after this comment).
- ServiceConfiguration: there's a YarnServiceConf class for the configs
- ComponentInstanceEventType#RECOVER is not used and can be removed
- recoveringInstances - probably no need for a per-component timer, as they all 
seem to start at the same time. Also, it looks like recoveringInstances will 
remain forever if no container is recovered for an instance later.
I think we can probably have a global reset timer in ServiceMonitor; say after 
3 min, just clear all the unrecovered instances, and from then on the 
previously received containers can be released.
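
A small sketch of why the empty-method default preserves compatibility (an 
illustrative base class, not the real AMRMClient callback API): subclasses 
written before the new callback existed still compile, while new code can 
override it.

{code}
/** Illustrative callback base class. */
abstract class CallbackHandler {
  abstract void onContainersAllocated();

  // Added later as a no-op rather than as abstract, so existing subclasses
  // that predate this method still compile unchanged.
  void onContainersReceivedFromPreviousAttempts() {
    // default: do nothing
  }
}
{code}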

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch
>
>
> With YARN-6168, recovered containers can be reported to the AM in response to 
> the AM heartbeat. 
> Currently, the Service Master immediately releases containers that are not 
> reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in 
> the heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7558) "yarn logs" command fails to get logs for running containers if UI authentication is enabled.

2017-11-29 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271586#comment-16271586
 ] 

Junping Du commented on YARN-7558:
--

Thanks for the reply, Xuan! Agreed that it is not easy to add a UT, and the fix 
looks straightforward. +1. Will commit it shortly if there are no further comments.

> "yarn logs" command fails to get logs for running containers if UI 
> authentication is enabled.
> -
>
> Key: YARN-7558
> URL: https://issues.apache.org/jira/browse/YARN-7558
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-7558.1.patch, YARN-7558.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271574#comment-16271574
 ] 

genericqa commented on YARN-7541:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
21s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:20ca677 |
| JIRA Issue | YARN-7541 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899860/YARN-7541.branch-3.0.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb603e23486d 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.0 / 4b1a215 |
| maven 

[jira] [Assigned] (YARN-7572) Make the service status output more readable

2017-11-29 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh reassigned YARN-7572:
---

Assignee: Chandni Singh

> Make the service status output more readable 
> -
>
> Key: YARN-7572
> URL: https://issues.apache.org/jira/browse/YARN-7572
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
>
> Currently the service status output is just a JSON spec; we can make it more 
> human-readable



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7438) Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm

2017-11-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7438:
-
Attachment: YARN-7438.003.patch

Attached ver.3 patch; fixed the ASF License warning. 

[~asuresh] / [~sunilg], mind taking another look?

> Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest 
> / placement algorithm
> ---
>
> Key: YARN-7438
> URL: https://issues.apache.org/jira/browse/YARN-7438
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7438.001.patch, YARN-7438.002.patch, 
> YARN-7438.003.patch
>
>
> In addition to YARN-6040, we need to make changes to SchedulingPlacementSet 
> to make it: 
> 1) Agnostic to ResourceRequest (so once we have YARN-6592 merged, we can add 
> a new SchedulingPlacementSet implementation in parallel with 
> LocalitySchedulingPlacementSet to use/manage the new requests API)
> 2) Agnostic to placement algorithm (now it is bound to delayed scheduling; we 
> should update the APIs to make sure new placement algorithms, such as complex 
> placement algorithms, can be implemented by using SchedulingPlacementSet).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7522) Add application tags manager implementation

2017-11-29 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7522:
-
Attachment: YARN-7522.YARN-6592.002.patch

Thanks [~asuresh] for your comments.

For #1, we can do this in the RMContainer transitions; please find details in the 
attached patch.
For #2, we can put the tags into the container token so they can be recovered 
when the NM re-registers. 

Attached ver.2 patch with the necessary test coverage. 

[~asuresh]/[~kkaranasos]/[~sunilg], could you take a look?

> Add application tags manager implementation
> ---
>
> Key: YARN-7522
> URL: https://issues.apache.org/jira/browse/YARN-7522
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7522.YARN-6592.002.patch, 
> YARN-7522.YARN-6592.wip-001.patch
>
>
> This is different from YARN-6596, YARN-6596 is targeted to add constraint 
> manager to store intra/inter application placement constraints. This JIRA is 
> targeted to support storing maps between container-tags/applications and 
> nodes. This will be required by affinity/anti-affinity implementation and 
> cardinality.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7495) Improve robustness of the AggregatedLogDeletionService

2017-11-29 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-7495:
--
Attachment: YARN-7495.003.patch

> Improve robustness of the AggregatedLogDeletionService
> --
>
> Key: YARN-7495
> URL: https://issues.apache.org/jira/browse/YARN-7495
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-7495.001.patch, YARN-7495.002.patch, 
> YARN-7495.003.patch
>
>
> The deletion tasks are scheduled as a TimerTask via Timer.scheduleAtFixedRate. 
> If an exception occurs in the log deletion task, the 
> Timer scheduler interprets this as a task cancellation and stops scheduling 
> future deletion tasks.
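
As a hedged illustration of this failure mode (names are hypothetical; this is 
not the actual AggregatedLogDeletionService code): with java.util.Timer, an 
uncaught exception thrown from a TimerTask terminates the timer thread and 
cancels all future executions, so the task body needs a catch-all to keep the 
schedule alive.
{code}
import java.util.Timer;
import java.util.TimerTask;

public class RobustDeletionTask extends TimerTask {
  @Override
  public void run() {
    try {
      deleteAggregatedLogs(); // hypothetical stand-in for the real deletion logic
    } catch (Throwable t) {
      // Without this catch, the first failure would cancel the Timer and no
      // further deletion tasks would ever run.
      System.err.println("Log deletion failed, will retry next cycle: " + t);
    }
  }

  private void deleteAggregatedLogs() {
    throw new RuntimeException("transient filesystem error");
  }

  public static void main(String[] args) {
    // Fires every 60 seconds and keeps firing despite the failures above.
    new Timer("log-deletion").scheduleAtFixedRate(new RobustDeletionTask(), 0, 60_000L);
  }
}
{code}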



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7540) Convert yarn app cli to call yarn api services

2017-11-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7540:

Attachment: YARN-7540.001.patch

- Rewired YARN client to API Service REST API.

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch
>
>
> For YARN docker applications, launching through the CLI works differently from 
> launching through the REST API.  All applications launched through the REST API 
> are currently stored in the yarn user's HDFS home directory.  Applications 
> managed through the CLI are stored in individual users' HDFS home directories.  
> For consistency, we want the yarn app cli to interact with the API service to 
> manage applications.  For performance reasons, it is easier to list all 
> applications from one user's home directory instead of crawling all 
> users' home directories.  For security reasons, it is safer to access only one 
> user's home directory instead of all users'.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling the HDFS API and RM API to launch 
> containers, the CLI will be converted to call the API service REST API residing 
> in the RM.  The RM performs the persistence and the operations to launch the 
> actual application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271518#comment-16271518
 ] 

genericqa commented on YARN-7346:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 11s{color} 
| {color:red} root generated 197 new + 1237 unchanged - 0 fixed = 1434 total 
(was 1237) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 47s{color} | {color:orange} root: The patch generated 2 new + 2 unchanged - 
0 fixed = 4 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 45s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} 

[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271499#comment-16271499
 ] 

genericqa commented on YARN-7274:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 17 new + 96 unchanged - 0 fixed = 113 total (was 96) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7274 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899784/YARN-7274.wip.1.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c7b599e50bd1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3016418 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18717/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-2889) Limit in the number of opportunistic container requests per AM

2017-11-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271444#comment-16271444
 ] 

Arun Suresh commented on YARN-2889:
---

bq. I don't think you can expect users to respect this limit, at least not for 
the first time. On the other hand, the number of container requests varies with 
the size of the job; how would you define this limit? Therefore, it doesn't seem 
practical to me.
Apologies - I am not sure I follow the argument.
So the limit currently is something we enforce at a per-AM-per-allocate call 
level. If an AM asks for more O containers, the requests are queued in the RM 
(in the application's {{OpportunisticContainerContext}}) and will be satisfied 
in subsequent allocate calls.

bq. If NM queue is full, can we avoid assigning any O containers to that node? 
That means when preparing top K least loaded nodes, we need to exclude nodes 
whose queue is already full.
Agreed - that is a good idea, and something we have been planning on doing 
actually. Feel free to raise a JIRA - I will help review. To be honest, instead 
of a total sort, a partial sort of nodes that includes only nodes whose 
queue length < x, where x is a small value that gives the O container a higher 
probability of running, might be useful (see the sketch at the end of this 
comment).

bq. Each queue size is limited, so I don't see why lots of O containers would 
flood the system.
Hmmm.. so it is still possible for queues to be filled with O containers from a 
single AM - thereby preventing other AMs from getting O containers. YARN-7258 
handles this somewhat, by spreading the O containers requested by an AM across 
multiple allocate calls - so that other AMs get a chance.

bq. You can't say an AM is malicious if it requests only opportunistic 
containers (too many). Unless this was the design; then you need to set up the 
correct user expectations with some documentation, and explain what the correct 
use case is.
Understood - which is why we are interested in auto-limiting the number of O 
containers allocated to an AM. Another option is to say that the AM does not 
explicitly ask for O containers, and the RM allocates O containers based 
on queue/cluster capacity, the number of nodes with 0 queue length, etc.
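
A hypothetical sketch of that partial-sort idea (NodeInfo and its fields are 
illustrative, not the YARN API): exclude nodes whose queue is already at the 
threshold x, then rank only the remainder and keep the top K.
{code}
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

class NodeInfo {
  final String nodeId;
  final int queuedOContainers; // current opportunistic-container queue length

  NodeInfo(String nodeId, int queuedOContainers) {
    this.nodeId = nodeId;
    this.queuedOContainers = queuedOContainers;
  }
}

class LeastLoadedNodeSelector {
  // Keep only nodes with queue length < maxQueueLength, then take the K least
  // loaded. This avoids a total sort and never targets nodes with full queues.
  static List<NodeInfo> candidates(List<NodeInfo> nodes, int maxQueueLength, int k) {
    return nodes.stream()
        .filter(n -> n.queuedOContainers < maxQueueLength)
        .sorted(Comparator.comparingInt(n -> n.queuedOContainers))
        .limit(k)
        .collect(Collectors.toList());
  }
}
{code}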

> Limit in the number of opportunistic container requests per AM
> --
>
> Key: YARN-2889
> URL: https://issues.apache.org/jira/browse/YARN-2889
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>
> We introduce a way to limit the number of queueable requests that each AM can 
> submit to the LocalRM.
> This way we can restrict the number of queueable containers handed out by the 
> system, as well as throttle down misbehaving AMs (asking for too many 
> queueable containers).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271427#comment-16271427
 ] 

Wangda Tan commented on YARN-6124:
--

Thanks [~Zian Chen], 

+1 to the latest patch. 

[~eepayne], do you want to take another look at the patch?

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.4.patch, YARN-6124.5.patch, YARN-6124.6.patch, 
> YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, YARN-6124.wip.3.patch
>
>
> Now enabling / disabling / updating the SchedulingEditPolicy config requires an 
> RM restart. This is inconvenient when an admin wants to make changes to 
> SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2017-11-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271425#comment-16271425
 ] 

Haibo Chen commented on YARN-7213:
--

[~rohithsharma] Thanks for your comments!
I have submitted a patch (without the filter change bits) in YARN-7346 and 
created YARN-7581 to discuss the HBase filter issue that comes up after the 
HBase upgrade.



> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase 2.0.x officially supports hadoop-alpha compilations, and they are also 
> getting ready for the Hadoop beta release so that HBase can release 
> versions compatible with Hadoop-beta. So, this JIRA is to keep track of 
> HBase-2.0 integration issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271424#comment-16271424
 ] 

genericqa commented on YARN-6851:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | YARN-6851 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899856/YARN-6851-branch-2.001.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e5ba1c82df62 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 9452cac |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Max. process+thread count | 76 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18718/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: YARN-6851-branch-2.001.patch, YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers are 
> allowed to be allocated in each node heartbeat, and we also had the 
> offswitchCount config before. It would be better to document these 
> configurations in the CS section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271422#comment-16271422
 ] 

genericqa commented on YARN-6669:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 24 new + 300 unchanged - 47 fixed = 324 total (was 347) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
0s{color} | {color:green} hadoop-yarn-services in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
17s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | 

[jira] [Commented] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271419#comment-16271419
 ] 

Hudson commented on YARN-7541:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13291 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13291/])
YARN-7541. Node updates don't update the maximum cluster capability for 
(templedf: rev 8498d287cd3beddcf8fe19625227e09982ec4be2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestClusterNodeTracker.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ClusterNodeTracker.java


> Node updates don't update the maximum cluster capability for resources other 
> than CPU and memory
> 
>
> Key: YARN-7541
> URL: https://issues.apache.org/jira/browse/YARN-7541
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.1.0
>
> Attachments: YARN-7541.001.patch, YARN-7541.002.patch, 
> YARN-7541.003.patch, YARN-7541.004.patch, YARN-7541.005.patch, 
> YARN-7541.006.patch, YARN-7541.branch-3.0.001.patch
>
>
> When I submit an MR job that asks for too much memory or CPU for the map or 
> reduce, the AM will fail because it recognizes that the request is too large. 
>  With any other resources, however, the resource requests will instead be 
> made and remain pending forever.  Looks like we forgot to update the code 
> that tracks the maximum container allocation in {{ClusterNodeTracker}}.
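
As a generic illustration of the fix (a sketch under assumed types, not the 
actual ClusterNodeTracker code): on each node update, the maximum allocation 
should be folded over every resource type the node reports, not just memory and 
vcores.
{code}
import java.util.HashMap;
import java.util.Map;

class MaxAllocationTracker {
  private final Map<String, Long> maxPerResource = new HashMap<>();

  // Fold *all* resource types reported by a node into the cluster-wide maximum,
  // so requests for resources beyond CPU/memory can be validated as well.
  void onNodeUpdate(Map<String, Long> nodeCapability) {
    for (Map.Entry<String, Long> e : nodeCapability.entrySet()) {
      maxPerResource.merge(e.getKey(), e.getValue(), Math::max);
    }
  }

  long maxFor(String resourceName) {
    return maxPerResource.getOrDefault(resourceName, 0L);
  }
}
{code}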



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271415#comment-16271415
 ] 

Haibo Chen commented on YARN-7581:
--

The final HBase filter is constructed in the entity readers as:
a FilterList based on the fields to retrieve (metricstoretrieve, 
configstoretrieve, etc.)
AND
a FilterList based on the atsv2 filters (info/conf/event/metric filters).

Attached a preliminary patch that extracts the column families that are present 
only in the atsv2 filters, and adds a FamilyFilter for each of them to the 
fields-to-retrieve-based FilterList (a sketch of this construction follows the 
filter below).
In the case of 
TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters(),
the patch will generate a new HBase filter
{code}
FilterList AND (2/2): [
  FilterList AND (1/1): [
FilterList OR (2/2): [
  SingleColumnValueFilter (c, config_param1, EQUAL, "value1"),
  SingleColumnValueFilter (c, config_param1, EQUAL, "value3")
]
  ],
  FilterList OR (2/2): [
FilterList AND (5/5): [
  FamilyFilter (EQUAL, i),
  QualifierFilter (NOT_EQUAL, e!),
  QualifierFilter (NOT_EQUAL, i!),
  QualifierFilter (NOT_EQUAL, s!),
  QualifierFilter (NOT_EQUAL, r!)
],
FilterList AND (1/1): [
 FamilyFilter(Equal, c) 
]
  ]
]
{code}
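
A hedged sketch of this construction (assuming the HBase 2.0 CompareOperator 
API; this is not the actual patch): OR a FamilyFilter for each extra column 
family into the fields-to-retrieve FilterList.
{code}
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.FamilyFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyFilterAugmentation {
  // OR the original fields-to-retrieve list with a FamilyFilter per column
  // family that appears only in the atsv2 filters, so the scanner does not
  // skip those families' cells before the SingleColumnValueFilters see them.
  static FilterList augment(FilterList fieldsToRetrieve, byte[]... extraFamilies) {
    FilterList withFamilies = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    withFamilies.addFilter(fieldsToRetrieve);
    for (byte[] family : extraFamilies) {
      withFamilies.addFilter(
          new FamilyFilter(CompareOperator.EQUAL, new BinaryComparator(family)));
    }
    return withFamilies;
  }

  public static void main(String[] args) {
    FilterList fields = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    // e.g. add the configs family "c", which is referenced only by conffilters
    System.out.println(augment(fields, Bytes.toBytes("c")));
  }
}
{code}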

> ATSv2 does not construct HBase filters correctly in HBase 2.0
> -
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7581.prelim.patch
>
>
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> start to fail after we upgrade HBase to 2.0-alpha4 (To reproduce locally, 
> apply YARN-7581.prelim.patch that is attached in YARN-7346 and run the atsv2 
> unit tests)
> *Error Message*
> [ERROR] Failures:
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory

2017-11-29 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-7541:
---
Attachment: YARN-7541.branch-3.0.001.patch

branch-3.0 patch that fixes trivial incompatibilities.

> Node updates don't update the maximum cluster capability for resources other 
> than CPU and memory
> 
>
> Key: YARN-7541
> URL: https://issues.apache.org/jira/browse/YARN-7541
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-7541.001.patch, YARN-7541.002.patch, 
> YARN-7541.003.patch, YARN-7541.004.patch, YARN-7541.005.patch, 
> YARN-7541.006.patch, YARN-7541.branch-3.0.001.patch
>
>
> When I submit an MR job that asks for too much memory or CPU for the map or 
> reduce, the AM will fail because it recognizes that the request is too large. 
>  With any other resources, however, the resource requests will instead be 
> made and remain pending forever.  Looks like we forgot to update the code 
> that tracks the maximum container allocation in {{ClusterNodeTracker}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7581:
-
Attachment: YARN-7581.prelim.patch

> ATSv2 does not construct HBase filters correctly in HBase 2.0
> -
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7581.prelim.patch
>
>
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> start to fail after we upgrade HBase to 2.0-alpha4 (To reproduce locally, 
> apply YARN-7581.prelim.patch that is attached in YARN-7346 and run the atsv2 
> unit tests)
> *Error Message*
> [ERROR] Failures:
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271360#comment-16271360
 ] 

Haibo Chen commented on YARN-7581:
--

We have the relevant data (entity 5, which is supposed to be returned in the 
result) in HBase as:
| Row Key | Column Family | Column Qualifier | Cell Value |
|---------|---------------|------------------|------------|
| r1      | c             | config_param1    | value1     |
| r1      | c             | config_param2    | value2     |
| r1      | c             | cfg_param1       | value3     |
| r1      | i             | info1            | cluster1   |
When given the above filter, the HBase regionserver happens to evaluate 
the FamilyFilter first while iterating over the cells.

Quoting analysis from [~appy]
> Looks like HBASE-13122 is the culprit.
> Since we do SCVF on cf "c" and FamilyFilter on "i", the earlier scanner was 
> still iterating over each cell of cf "c" and at some point found the ones we 
> were filtering for.
> But after the change, we skipped the whole cf "c" after seeing the first 
> cell; as a result, the SCVF didn't see the other cells and failed to match the row.

> So despite HBASE-13122 being in 1.2.6 also, the difference came because of 
> this filter:
{code}
  FilterList OR (1/1):[
FilterList AND (5/5):[
  FamilyFilter (EQUAL, i),
  QualifierFilter (NOT_EQUAL, e!),
  QualifierFilter (NOT_EQUAL, i!),
  QualifierFilter (NOT_EQUAL, s!),
  QualifierFilter (NOT_EQUAL, r!)]]]
{code}
> In 1.2.6, filter.filterKeyValue (filter=AND) will return SKIP_ROW, break out 
> of the loop and return "rc", which was initialized to SKIP. So we still keep 
> going with the current StoreScanner.
> But in 2.0.0, filter.internalFilterCell (filter=AND) will return SKIP_ROW, and 
> rc will be set to SKIP_ROW before returning. As a result we skip the 
> StoreScanner and, consequently, all cells in this column family.


> ATSv2 does not construct HBase filters correctly in HBase 2.0
> -
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> start to fail after we upgrade HBase to 2.0-alpha4 (To reproduce locally, 
> apply YARN-7581.prelim.patch that is attached in YARN-7346 and run the atsv2 
> unit tests)
> *Error Message*
> [ERROR] Failures:
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-11-29 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-6851:
--
Attachment: YARN-6851-branch-2.001.patch

Also triggering the branch-2 Jenkins run.

> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: YARN-6851-branch-2.001.patch, YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers are 
> allowed to be allocated in each node heartbeat, and we also had the 
> offswitchCount config before. It would be better to document these 
> configurations in the CS section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271320#comment-16271320
 ] 

Haibo Chen commented on YARN-7581:
--

Based on the discussion I had with our internal HBase folks, the issue has to 
do with how we construct HBase filters in ATSv2.
Specifically, in the case of 
TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters(), the 
query that is failing the test,
{code}
URI uri = URI.create("http://localhost:" + getServerPort() + "/ws/v2/" +
  "timeline/clusters/cluster1/apps/application_11_/" +
  "entities/type1?conffilters=config_param1%20eq%20value1%20OR%20" +
  "config_param1%20eq%20value3");
{code}
generates the following HBase filter
{code}
FilterList AND (2/2): [
  FilterList AND (1/1): [
FilterList OR (2/2): [
  SingleColumnValueFilter (c, config_param1, EQUAL, "value1"),
  SingleColumnValueFilter (c, config_param1, EQUAL, "value3")
]
  ],
  FilterList OR (1/1): [
FilterList AND (5/5): [
  FamilyFilter (EQUAL, i),
  QualifierFilter (NOT_EQUAL, e!),
  QualifierFilter (NOT_EQUAL, i!),
  QualifierFilter (NOT_EQUAL, s!),
  QualifierFilter (NOT_EQUAL, r!)
]
  ]
]
{code}
and when the SingleColumnValueFilters are created (in 
TimelineFilterUtils.createHBaseSingleColValueFilter()),
we call setFilterIfMissing(true) so that the whole row will be skipped if the 
SCVF does not see config_param1.
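
A minimal sketch of that SCVF construction (simplified; the deprecated CompareOp 
variant is used here only because it exists in both HBase 1.x and 2.0):
{code}
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScvfExample {
  static SingleColumnValueFilter configParamEquals(String qualifier, String value) {
    SingleColumnValueFilter scvf = new SingleColumnValueFilter(
        Bytes.toBytes("c"), Bytes.toBytes(qualifier),
        CompareOp.EQUAL, Bytes.toBytes(value));
    // Drop the whole row unless the scanner actually sees this cell; this is
    // why skipping the "c" family makes matching rows disappear.
    scvf.setFilterIfMissing(true);
    return scvf;
  }
}
{code}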



> ATSv2 does not construct HBase filters correctly in HBase 2.0
> -
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> start to fail after we upgrade HBase to 2.0-alpha4 (To reproduce locally, 
> apply YARN-7581.prelim.patch that is attached in YARN-7346 and run the atsv2 
> unit tests)
> *Error Message*
> [ERROR] Failures:
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-29 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271302#comment-16271302
 ] 

Zian Chen commented on YARN-6124:
-

For the two unit test failures: TestNodeLabelContainerAllocation was not 
introduced by this patch; trunk has the same failure as well. 
TestIncreaseAllocationExpirer cannot be reproduced in a local environment and 
should be an intermittent failure. Could you share your opinion, [~leftnoteasy]? 
Thanks!

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.4.patch, YARN-6124.5.patch, YARN-6124.6.patch, 
> YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, YARN-6124.wip.3.patch
>
>
> Now enabling / disabling / updating the SchedulingEditPolicy config requires an 
> RM restart. This is inconvenient when an admin wants to make changes to 
> SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7558) "yarn logs" command fails to get logs for running containers if UI authentication is enabled.

2017-11-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271283#comment-16271283
 ] 

Xuan Gong commented on YARN-7558:
-

Thanks, [~djp], for the review.

bq. is it possible to add a UT to cover this case?

It is very hard to add a UT for this. This is for using the CLI to call the 
NM/ATS web services. Previously, we were using mock functions to mimic the 
RESTful calls. To test this fix, we need to make *real* RESTful calls. Anyway, I 
have tested the fix manually in both secure and unsecure environments to make 
sure it fixes our problem.

> "yarn logs" command fails to get logs for running containers if UI 
> authentication is enabled.
> -
>
> Key: YARN-7558
> URL: https://issues.apache.org/jira/browse/YARN-7558
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-7558.1.patch, YARN-7558.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7581:
-
Description: 
TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() start 
to fail after we upgrade HBase to 2.0-alpha4 (To reproduce locally, apply 
YARN-7581.prelim.patch that is attached in YARN-7346 and run the atsv2 unit 
tests)

*Error Message*
[ERROR] Failures:
[ERROR]   
TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
expected:<2> but was:<0>
[ERROR]   
TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
expected:<1> but was:<0>

> ATSv2 does not construct HBase filters correctly in HBase 2.0
> -
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters() and 
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters() 
> started to fail after we upgraded HBase to 2.0-alpha4. (To reproduce locally, 
> apply YARN-7581.prelim.patch, which is attached to YARN-7346, and run the atsv2 
> unit tests.)
> *Error Message*
> [ERROR] Failures:
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
> expected:<2> but was:<0>
> [ERROR]   
> TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
> expected:<1> but was:<0>
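
For background, a sketch of the general shape of the filters involved. This is illustrative only, not the actual ATSv2 reader code; the failing assertions above suggest that filters built this way no longer match under HBase 2.0 what they matched under 1.x:

{code}
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class AtsFilterSketch {
  // Build an AND-list with a single column-value predicate, roughly the
  // shape of the config/metric filters under test (family/qualifier/value
  // names here are illustrative).
  static FilterList configValueFilter() {
    FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    list.addFilter(new SingleColumnValueFilter(
        Bytes.toBytes("c"),            // column family
        Bytes.toBytes("config_name"),  // qualifier
        CompareOperator.EQUAL,
        Bytes.toBytes("config_value")));
    return list;
  }
}
{code}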



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7487) Make sure volume includes GPU base libraries exists after created by plugin

2017-11-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271278#comment-16271278
 ] 

Wangda Tan commented on YARN-7487:
--

[~sunilg], #4 is not a problem: we now only use create/ls, so inspect (an invalid 
subcommand) is not involved any longer.
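
For context, a minimal sketch of the only two docker volume subcommands involved; the driver and volume names below are made-up examples, not taken from the patch:

{code}
# create the volume that carries the GPU base libraries, then verify it exists
docker volume create --driver nvidia-docker nvidia_driver_375.66
docker volume ls
{code}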

> Make sure volume includes GPU base libraries exists after created by plugin
> ---
>
> Key: YARN-7487
> URL: https://issues.apache.org/jira/browse/YARN-7487
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7487.002.patch, YARN-7487.003.patch, 
> YARN-7487.004.patch, YARN-7487.wip.001.patch
>
>
> YARN-7224 will create a docker volume that includes the GPU base libraries when launching a 
> docker container which needs GPU. 
> This JIRA will add the necessary checks to make sure the docker volume exists before 
> launching the container, to reduce debugging effort if the container fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7346:
-
Attachment: YARN-7581.prelim.patch

Attached a patch that upgrades HBase from 1.2.6 to 2.0-alpha4 now that it has 
been released.
The patch causes two unit test failures for me locally, namely 
"TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
expected:<2> but was:<0>"
and  
"TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
expected:<1> but was:<0>"
Let's discuss the two unit test failures in YARN-7581 as I believe this is due 
to a 'bug' in how ATSv2 constructs HBase filters.

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-11-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271276#comment-16271276
 ] 

Haibo Chen edited comment on YARN-7346 at 11/29/17 6:14 PM:


Attached a patch that upgrades HBase from 1.2.6 to 2.0-alpha4 now that it has 
been released.

The patch causes two unit test failures for me locally, namely 
"TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
expected:<2> but was:<0>"
and  
"TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
expected:<1> but was:<0>"

Let's discuss the two unit test failures in YARN-7581 as I believe this is due 
to a 'bug' in how ATSv2 constructs HBase filters.


was (Author: haibochen):
Attached a patch that upgrades HBase from 1.2.6 to 2.0-alpha4 now that it has 
been released.
The patch causes two unit test failures for me locally, namely 
"TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesConfigFilters:1266 
expected:<2> but was:<0>"
and  
"TestTimelineReaderWebServicesHBaseStorage.testGetEntitiesMetricFilters:1523 
expected:<1> but was:<0>"
Let's discuss the two unit test failures in YARN-7581 as I believe this is due 
to a 'bug' in how ATSv2 constructs HBase filters.

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart

2017-11-29 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271256#comment-16271256
 ] 

Miklos Szegedi commented on YARN-7577:
--

The unit test issues are unrelated.

> Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
> --
>
> Key: YARN-7577
> URL: https://issues.apache.org/jira/browse/YARN-7577
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7577.000.patch, YARN-7577.001.patch, 
> YARN-7577.002.patch
>
>
> This happens if Fair Scheduler is the default. The test should run with both 
> schedulers:
> {code}
> java.lang.AssertionError: 
> Expected :-102
> Actual   :-106
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6669) Support security for YARN service framework

2017-11-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Attachment: YARN-6669.10.patch

> Support security for YARN service framework
> ---
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.01.patch, YARN-6669.02.patch, 
> YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, 
> YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, 
> YARN-6669.09.patch, YARN-6669.10.patch, 
> YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>
> Changes include:
> - Make the registry client programmatically generate the JAAS conf for secure 
> access to the ZK quorum
> - Create a KerberosPrincipal resource object in the REST API for the user to supply 
> the Kerberos keytab and principal (sketched below)
> - The user has two ways to configure it:
> -- If the keytab starts with "hdfs://", the keytab will be localized by YARN
> -- If the keytab starts with "file://", it is assumed that the keytab is 
> available on the localhost
> - The AM will use the keytab to log in
> - ServiceClient is changed to ask for an HDFS delegation token when submitting the 
> service
> - The AM code will use the tokens when launching containers 
> - Support kerberized communication between the client and the AM
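
As a sketch of the two configuration styles above, the KerberosPrincipal resource could be supplied in the service spec roughly like this (field names and values are illustrative; see the patch for the exact JSON shape):

{code}
"kerberos_principal" : {
  "principal_name" : "myservice/_HOST@EXAMPLE.COM",
  "keytab" : "hdfs:///user/me/myservice.keytab"
}
{code}

With a keytab pre-installed on every host, the keytab field would instead be "file:///etc/security/keytabs/myservice.keytab".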



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7533) Documentation for absolute resource support in Capacity Scheduler

2017-11-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7533:
--
Summary: Documentation for absolute resource support in Capacity Scheduler  
(was: Documentation for absolute resource support in CS)

> Documentation for absolute resource support in Capacity Scheduler
> -
>
> Key: YARN-7533
> URL: https://issues.apache.org/jira/browse/YARN-7533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7533-YARN-5881.002.patch, YARN-7533.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7533) Documentation for absolute resource support in CS

2017-11-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271139#comment-16271139
 ] 

Sunil G commented on YARN-7533:
---

Thanks [~eepayne]. Committing shortly.

> Documentation for absolute resource support in CS
> -
>
> Key: YARN-7533
> URL: https://issues.apache.org/jira/browse/YARN-7533
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7533-YARN-5881.002.patch, YARN-7533.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271064#comment-16271064
 ] 

genericqa commented on YARN-7562:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 23 unchanged - 0 fixed = 25 total (was 23) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 42s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestQueuePlacementPolicy |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7562 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899825/YARN-7562.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e614cbd76ecb 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d331762 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | 

[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270818#comment-16270818
 ] 

Sunil G commented on YARN-7473:
---

Thanks [~suma.shivaprasad] and [~leftnoteasy]

Few comments:
# Once a queue's capacity is set to 0, the same queue will be assigned some 
capacity back based on the available capacity. If no capacity is available, the 
queue will still be at 0. If many such 0-capacity queues are starving, the queue 
which first had an app submitted will be selected. We might also need to consider 
priority here.
# AbstractManagedParentQueue#validateQueueEntitlementChange operates directly 
on capacity. When absolute resource support gets merged, will this code be a 
problem?
# In {{initializeLimitsFromTemplate}}, should the code below, 
{{setMaxApplications(leafQueueTemplate.getMaxApps());}}, 
apply to the parent queue as well? What if someone provides more apps in the 
template, which could violate the parent's max-apps?
# In {{validateConfigurations}}, is 0 a valid capacity? One could configure 0 
as capacity and a +ve integer for max-capacity.
# How is the orderingPolicy configured for an AutoCreatedLeafQueue?
# In the code below, better to avoid *_* in queue config names (see the sketch 
after this list):
{code}
public static final String QUEUE_MANAGEMENT_MONITORING_INTERVAL =
    QUEUE_MANAGEMENT_CONFIG_PREFIX + "monitoring_interval";
{code}
# In {{CapacitySchedulerContext}}, better to use MonotonicClock.
# In LeafQueue, 
{code}
public void setMaxAMResourcePerQueuePercent(
    float maxAMResourcePerQueuePercent) {
  this.maxAMResourcePerQueuePercent = maxAMResourcePerQueuePercent;
}
{code}
how are we handling node labels?
# PendingApplicationComparator could reuse the existing fifo/fair app comparators?
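
A minimal sketch of the dash-style key naming suggested in point 6 above (constant and prefix names taken from the quoted snippet; this is not the patch's actual code):

{code}
public static final String QUEUE_MANAGEMENT_MONITORING_INTERVAL =
    QUEUE_MANAGEMENT_CONFIG_PREFIX + "monitoring-interval";
{code}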

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.10.patch, 
> YARN-7473.2.patch, YARN-7473.3.patch, YARN-7473.4.patch, YARN-7473.5.patch, 
> YARN-7473.6.patch, YARN-7473.7.patch, YARN-7473.8.patch, YARN-7473.9.patch
>
>
> This JIRA mainly addresses the following:
> 1. Support adding pluggable policies on the parent queue for dynamically managing 
> capacity/state for leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and either grants guaranteed or zero capacity to queues based on 
> the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this periodically 
> and signal the scheduler to take the necessary actions for capacity/queue management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue

2017-11-29 Thread chuanjie.duan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated YARN-7562:

Attachment: YARN-7562.003.patch

Added a unit test for the parent queue. The patch logic changed: only the 
user, primaryGroup, and secondaryGroupExistingQueue policies now ignore parent queues.
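
For context, a typical placement policy using the three rules the patch now restricts to leaf queues might look like this (an illustrative sketch, not the reporter's actual config):

{code}
<queuePlacementPolicy>
  <rule name="specified"/>
  <rule name="user"/>
  <rule name="primaryGroup"/>
  <rule name="secondaryGroupExistingQueue"/>
  <rule name="default"/>
</queuePlacementPolicy>
{code}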

> queuePlacementPolicy should not match parent queue
> --
>
> Key: YARN-7562
> URL: https://issues.apache.org/jira/browse/YARN-7562
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.7.1
>Reporter: chuanjie.duan
> Attachments: YARN-7562.002.patch, YARN-7562.003.patch, YARN-7562.patch
>
>
> User algo submits a MapReduce job, and the console log says "root.algo is not a leaf 
> queue exception".
> root.algo is a parent queue, so matching it is meaningless here. Not sure why 
> parent-queue matching was added before.
> {code}
> <!-- XML element names in this allocation file were stripped by the mail
>  archiver; the indentation and values below are preserved as posted. -->
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   3000 mb, 1 vcores
>   24000 mb, 8 vcores
>   4
>   1
>   fifo
> 
> 
>   300
>   4 mb, 10 vcores
>   20 mb, 60 vcores
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
>   
> 300
> 4 mb, 10 vcores
> 10 mb, 30 vcores
> 20
> fifo
> 4
>   
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-7581:


 Summary: ATSv2 does not construct HBase filters correctly in HBase 
2.0
 Key: YARN-7581
 URL: https://issues.apache.org/jira/browse/YARN-7581
 Project: Hadoop YARN
  Issue Type: Bug
  Components: ATSv2
Affects Versions: 3.0.0-beta1
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7581) ATSv2 does not construct HBase filters correctly in HBase 2.0

2017-11-29 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7581:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-7213

> ATSv2 does not construct HBase filters correctly in HBase 2.0
> -
>
> Key: YARN-7581
> URL: https://issues.apache.org/jira/browse/YARN-7581
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7510) Merge work for YARN-5881

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270726#comment-16270726
 ] 

genericqa commented on YARN-7510:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 20m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 56s{color} | {color:orange} root: The patch generated 66 new + 1836 
unchanged - 40 fixed = 1902 total (was 1876) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 33s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
57s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}226m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestGroupsCaching |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem 

[jira] [Commented] (YARN-7497) Add HDFSSchedulerConfigurationStore for RM HA

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270687#comment-16270687
 ] 

genericqa commented on YARN-7497:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 37s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 18 new + 289 unchanged - 0 fixed = 307 total (was 289) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  1s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7497 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899791/YARN-7497.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 908fad371ee9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 

[jira] [Commented] (YARN-7580) ContainersMonitorImpl logged message lacks detail when exceeding memory limits

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270642#comment-16270642
 ] 

genericqa commented on YARN-7580:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  5s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7580 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899792/YARN-7580.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ed9bac5dd096 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d331762 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18714/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18714/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-5542) Scheduling of opportunistic containers

2017-11-29 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270595#comment-16270595
 ] 

Weiwei Yang commented on YARN-5542:
---

bq. Feel free to post more questions or any other findings here, so that we can 
help along the way.

Appreciate that [~kkaranasos]!

> Scheduling of opportunistic containers
> --
>
> Key: YARN-5542
> URL: https://issues.apache.org/jira/browse/YARN-5542
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
>
> This JIRA groups all efforts related to the scheduling of opportunistic 
> containers. 
> It includes the scheduling of opportunistic container through the central RM 
> (YARN-5220), through distributed scheduling (YARN-2877), as well as the 
> scheduling of containers based on actual node utilization (YARN-1011) and the 
> container promotion/demotion (YARN-5085).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2889) Limit in the number of opportunistic container requests per AM

2017-11-29 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270591#comment-16270591
 ] 

Weiwei Yang commented on YARN-2889:
---

Thanks [~asuresh], [~kkaranasos] for the quick response.
I don't think you can expect users to respect this limit, at least not at 
first. On the other hand, the number of container requests varies with the 
size of the job, so how would you define this limit? Therefore, it doesn't seem 
practical to me. 

Some other thoughts that might be related:
# If an NM's queue is full, can we avoid assigning any O containers to that node? 
That means when preparing the top K least-loaded nodes, we need to exclude nodes 
whose queue is already full.
# Each queue's size is limited, so I don't see why lots of O containers would 
flood the system.
# You can't say an AM is malicious if it requests only opportunistic containers 
(too many). Unless this was the design intent, in which case you need to set 
correct user expectations in the documentation and explain what the correct use 
case is.

These comments are based on my current understanding from reading the JIRA 
comments and the design doc; please correct me if anything is wrong. Thanks!

> Limit in the number of opportunistic container requests per AM
> --
>
> Key: YARN-2889
> URL: https://issues.apache.org/jira/browse/YARN-2889
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
>
> We introduce a way to limit the number of queueable requests that each AM can 
> submit to the LocalRM.
> This way we can restrict the number of queueable containers handed out by the 
> system, as well as throttle down misbehaving AMs (asking for too many 
> queueable containers).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7580) ContainersMonitorImpl logged message lacks detail when exceeding memory limits

2017-11-29 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-7580:

Attachment: YARN-7580.002.patch

Typos in the log entry fixed
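
For illustration, the enhanced entry could read along these lines; the wording and the exceeded amount here are made up to show the two additions (which limit was exceeded, and by how much), not quoted from the patch:

{code}
Container [pid=134938,containerID=container_1464251583966_0932_01_002237] is
running 26843546B beyond the 'PHYSICAL' memory limit. Current usage: 1.0 GB
of 1 GB physical memory used; 1.9 GB of 2.1 GB virtual memory used.
Killing container.
{code}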

> ContainersMonitorImpl logged message lacks detail when exceeding memory limits
> --
>
> Key: YARN-7580
> URL: https://issues.apache.org/jira/browse/YARN-7580
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-7580.001.patch, YARN-7580.002.patch
>
>
> Currently, in the RM logs, memory usage for a container that exceeds 
> the memory limit is reported like this:
> {code}
> 2016-06-14 09:15:36,694 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
> report from attempt_1464251583966_0932_r_000876_0: Container 
> [pid=134938,containerID=container_1464251583966_0932_01_002237] is running 
> beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory 
> used; 1.9 GB of 2.1 GB virtual memory used. Killing container.
> {code}
> Two enhancements as part of this jira:
> - make it clearer which limit we exceed
> - show exactly how much we exceeded the limit by



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7092) [YARN-3368] Log viewer in application page in yarn-ui-v2

2017-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270552#comment-16270552
 ] 

ASF GitHub Bot commented on YARN-7092:
--

Github user skmvasu commented on the issue:

https://github.com/apache/hadoop/pull/306
  
@sunilgovind Rebased Akhil's patch to the new layout. 


> [YARN-3368] Log viewer in application page in yarn-ui-v2
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7092) [YARN-3368] Log viewer in application page in yarn-ui-v2

2017-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270551#comment-16270551
 ] 

ASF GitHub Bot commented on YARN-7092:
--

GitHub user skmvasu opened a pull request:

https://github.com/apache/hadoop/pull/306

YARN-7092.  Log viewer in application page in yarn-ui-v2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/skmvasu/hadoop logs_merge

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/306.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #306


commit f34d46a3e95bfa8cd7d7ea32c53deef5f819ad4b
Author: Vasu 
Date:   2017-11-29T10:05:47Z

Rebase logs into the new application page

commit aa104bdd20db961c283d02d2efb30755a2c426b7
Author: Vasu 
Date:   2017-11-29T10:20:42Z

Fix issue with passing appId




> [YARN-3368] Log viewer in application page in yarn-ui-v2
> 
>
> Key: YARN-7092
> URL: https://issues.apache.org/jira/browse/YARN-7092
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-7092.001.patch
>
>
> Feature to view application logs in new yarn-ui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7487) Make sure volume includes GPU base libraries exists after created by plugin

2017-11-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270531#comment-16270531
 ] 

Sunil G edited comment on YARN-7487 at 11/29/17 10:07 AM:
--

Thanks [~leftnoteasy], looks fine.
One doubt here: from my above comment, is #4 fixed, or is it not a problem?


was (Author: sunilg):
Thanks [~leftnoteasy], Looks fine.
One doubt here. From my above comment, #4 is fixed or it not a problem ?

> Make sure volume includes GPU base libraries exists after created by plugin
> ---
>
> Key: YARN-7487
> URL: https://issues.apache.org/jira/browse/YARN-7487
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7487.002.patch, YARN-7487.003.patch, 
> YARN-7487.004.patch, YARN-7487.wip.001.patch
>
>
> YARN-7224 will create a docker volume that includes the GPU base libraries when launching a 
> docker container which needs GPU. 
> This JIRA will add the necessary checks to make sure the docker volume exists before 
> launching the container, to reduce debugging effort if the container fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7487) Make sure volume includes GPU base libraries exists after created by plugin

2017-11-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270531#comment-16270531
 ] 

Sunil G edited comment on YARN-7487 at 11/29/17 10:07 AM:
--

Thanks [~leftnoteasy], Looks fine.
One doubt here. From my above comment, #4 is fixed or it not a problem ?


was (Author: sunilg):
Thanks [~leftnoteasy], Looks fine.
Only point, from my above comment, #4 is fixed or it not a problem ?

> Make sure volume includes GPU base libraries exists after created by plugin
> ---
>
> Key: YARN-7487
> URL: https://issues.apache.org/jira/browse/YARN-7487
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7487.002.patch, YARN-7487.003.patch, 
> YARN-7487.004.patch, YARN-7487.wip.001.patch
>
>
> YARN-7224 will create a docker volume that includes the GPU base libraries when launching a 
> docker container which needs GPU. 
> This JIRA will add the necessary checks to make sure the docker volume exists before 
> launching the container, to reduce debugging effort if the container fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7487) Make sure volume includes GPU base libraries exists after created by plugin

2017-11-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270531#comment-16270531
 ] 

Sunil G commented on YARN-7487:
---

Thanks [~leftnoteasy], Looks fine.
Only point, from my above comment, #4 is fixed or it not a problem ?

> Make sure volume includes GPU base libraries exists after created by plugin
> ---
>
> Key: YARN-7487
> URL: https://issues.apache.org/jira/browse/YARN-7487
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7487.002.patch, YARN-7487.003.patch, 
> YARN-7487.004.patch, YARN-7487.wip.001.patch
>
>
> YARN-7224 will create a docker volume that includes the GPU base libraries when launching a 
> docker container which needs GPU. 
> This JIRA will add the necessary checks to make sure the docker volume exists before 
> launching the container, to reduce debugging effort if the container fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7497) Add HDFSSchedulerConfigurationStore for RM HA

2017-11-29 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270525#comment-16270525
 ] 

Jiandan Yang  commented on YARN-7497:
-

[~gphillips] I have moved those two static constants into YarnConfiguration.

> Add HDFSSchedulerConfigurationStore for RM HA
> -
>
> Key: YARN-7497
> URL: https://issues.apache.org/jira/browse/YARN-7497
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Jiandan Yang 
> Attachments: YARN-7497.001.patch, YARN-7497.002.patch, 
> YARN-7497.003.patch, YARN-7497.004.patch, YARN-7497.005.patch
>
>
> YARN-5947 added LeveldbConfigurationStore, using LevelDB as the backing store, but 
> it does not support YARN RM HA. 
> YARN-6840 supports RM HA, but too many scheduler configurations may exceed the 
> znode limit, for example with 10 thousand queues.
> HDFSSchedulerConfigurationStore stores the conf file in HDFS; when the RM fails over, 
> the new active RM can load the scheduler configuration from HDFS.
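
As a hypothetical sketch, wiring the new store in could look something like the snippet below; the property name matches the existing scheduler-configuration store key, but the value for this store is an assumption until the patch settles it:

{code}
<property>
  <name>yarn.scheduler.configuration.store.class</name>
  <value>fs</value>
</property>
{code}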



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7497) Add HDFSSchedulerConfigurationStore for RM HA

2017-11-29 Thread Jiandan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-7497:

Attachment: YARN-7497.005.patch

> Add HDFSSchedulerConfigurationStore for RM HA
> -
>
> Key: YARN-7497
> URL: https://issues.apache.org/jira/browse/YARN-7497
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Jiandan Yang 
> Attachments: YARN-7497.001.patch, YARN-7497.002.patch, 
> YARN-7497.003.patch, YARN-7497.004.patch, YARN-7497.005.patch
>
>
> YARN-5947 added LeveldbConfigurationStore, using LevelDB as the backing store, but 
> it does not support YARN RM HA. 
> YARN-6840 supports RM HA, but too many scheduler configurations may exceed the 
> znode limit, for example with 10 thousand queues.
> HDFSSchedulerConfigurationStore stores the conf file in HDFS; when the RM fails over, 
> the new active RM can load the scheduler configuration from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7497) Add HDFSSchedulerConfigurationStore for RM HA

2017-11-29 Thread Jiandan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270497#comment-16270497
 ] 

Jiandan Yang  commented on YARN-7497:
-

[~jhung] Please help review this patch and give me some comments about it. 
Thank you.

> Add HDFSSchedulerConfigurationStore for RM HA
> -
>
> Key: YARN-7497
> URL: https://issues.apache.org/jira/browse/YARN-7497
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: Jiandan Yang 
> Attachments: YARN-7497.001.patch, YARN-7497.002.patch, 
> YARN-7497.003.patch, YARN-7497.004.patch
>
>
> YARN-5947 added LeveldbConfigurationStore, using LevelDB as the backing store, but 
> it does not support YARN RM HA. 
> YARN-6840 supports RM HA, but too many scheduler configurations may exceed the 
> znode limit, for example with 10 thousand queues.
> HDFSSchedulerConfigurationStore stores the conf file in HDFS; when the RM fails over, 
> the new active RM can load the scheduler configuration from HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7510) Merge work for YARN-5881

2017-11-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7510:
--
Attachment: YARN-7510.004.patch

Branch is rebased; running against the latest trunk.

> Merge work for YARN-5881
> 
>
> Key: YARN-7510
> URL: https://issues.apache.org/jira/browse/YARN-7510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7510.001.patch, YARN-7510.002.patch, 
> YARN-7510.003.patch, YARN-7510.004.patch
>
>
> Merge YARN-5881 work



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6669) Support security for YARN service framework

2017-11-29 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270464#comment-16270464
 ] 

genericqa commented on YARN-6669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 14s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  1m  7s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 24 new + 300 unchanged - 47 fixed = 324 total (was 347) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  6s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 23s{color} | {color:red} hadoop-yarn-services in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  3s{color} | {color:green} hadoop-yarn-services-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 29s{color} | {color:green}

[jira] [Updated] (YARN-7575) NPE in scheduler UI when max-capacity is not configured

2017-11-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7575:
--
Summary: NPE in scheduler UI when max-capacity is not configured  (was: 
When using absolute capacity configuration with no max capacity, scheduler UI 
NPEs and can't grow queue)

> NPE in scheduler UI when max-capacity is not configured
> ---
>
> Key: YARN-7575
> URL: https://issues.apache.org/jira/browse/YARN-7575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Eric Payne
>Assignee: Sunil G
> Attachments: YARN-7575-YARN-5881.001.patch
>
>
> I encountered the following while reviewing and testing branch YARN-5881.
> The design document from YARN-5881 says that for max-capacity:
> {quote}
> 3)  For each queue, we require:
> a) if max-resource not set, it automatically set to parent.max-resource
> {quote}
> When I try leaving blank {{yarn.scheduler.capacity.<queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It looks like it's in {{CapacitySchedulerPage$LeafQueueInfoBlock}}:
> {noformat}
> 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error 
> handling URI: /cluster/scheduler
> java.lang.reflect.InvocationTargetException
> ...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129)
> {noformat}
> Also: a job in a leaf queue with no max capacity set will grow to the max capacity of the cluster, but if I then add resources to the node, the job won't grow any further even though it still has pending resources.
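
For illustration, here is a minimal sketch of the fallback rule quoted from the design doc above, using hypothetical stand-in types rather than the actual CapacityScheduler classes or the eventual YARN-7575 patch: resolve the effective max resource by walking up to the parent when {{maximum-capacity}} is left blank, so the UI never renders an unset value.

{code:java}
// Illustrative stand-ins only -- not the real CapacityScheduler types.
final class QueueConf {
  final QueueConf parent;            // null for the root queue
  final Long configuredMaxMemoryMb;  // null == maximum-capacity left blank

  QueueConf(QueueConf parent, Long configuredMaxMemoryMb) {
    this.parent = parent;
    this.configuredMaxMemoryMb = configuredMaxMemoryMb;
  }

  // Design-doc rule: "if max-resource not set, it automatically set to
  // parent.max-resource"; an unconfigured root falls back to the cluster total.
  long effectiveMaxMemoryMb(long clusterMemoryMb) {
    if (configuredMaxMemoryMb != null) {
      return configuredMaxMemoryMb;
    }
    return parent != null
        ? parent.effectiveMaxMemoryMb(clusterMemoryMb)
        : clusterMemoryMb;
  }
}
{code}

Because the fallback is re-evaluated against the current cluster resource on every call, a queue with no configured max would also pick up newly added node resources, which is the second symptom described above.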



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


