[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-05-30 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496118#comment-16496118
 ] 

Manikandan R commented on YARN-4606:


[~eepayne] Thanks for reaching out proactively.

Sorry for the delay; I was completely offline for a little more than a week for 
personal reasons. I resumed work on this yesterday and am facing some issues 
extracting the AM limit using 
SchedulerApplicationAttempt#getAppAttemptResourceUsage. I am in touch with 
[~sunilg] on this particular issue and will upload a patch as early as possible.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.1.poc.patch, YARN-4606.POC.2.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> that user an active user. This could lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1) / app2 (belongs to user2) are active; app3 (belongs 
> to user3) / app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource could be lower than expected.






[jira] [Commented] (YARN-8367) 2 components, one with placement constraint and one without causes NPE in SingleConstraintAppPlacementAllocator

2018-05-30 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496116#comment-16496116
 ] 

Weiwei Yang commented on YARN-8367:
---

Hi [~gsaha], I don't think the UT failure is related; I tested it and it passes 
in my local environment too.

> 2 components, one with placement constraint and one without causes NPE in 
> SingleConstraintAppPlacementAllocator
> ---
>
> Key: YARN-8367
> URL: https://issues.apache.org/jira/browse/YARN-8367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8367.001.patch
>
>
> While testing the fix for YARN-8350, [~billie.rinaldi] encountered this NPE 
> in AM log. Filling this on her behalf -
> {noformat}
> 2018-05-25 21:11:54,006 [AMRM Heartbeater thread] ERROR 
> impl.AMRMClientAsyncImpl - Exception on heartbeat
> java.lang.NullPointerException: java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.validateAndSetSchedulingRequest(SingleConstraintAppPlacementAllocator.java:245)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.internalUpdatePendingAsk(SingleConstraintAppPlacementAllocator.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.updatePendingAsk(SingleConstraintAppPlacementAllocator.java:207)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.addSchedulingRequests(AppSchedulingInfo.java:269)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateSchedulingRequests(AppSchedulingInfo.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateSchedulingRequests(SchedulerApplicationAttempt.java:469)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:1154)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:278)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.SchedulerPlacementProcessor.allocate(SchedulerPlacementProcessor.java:53)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:433)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 
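For illustration only (this is not the attached YARN-8367.001.patch; the 
exception type below is a placeholder), the kind of guard the issue title 
points at in SingleConstraintAppPlacementAllocator#validateAndSetSchedulingRequest 
would look roughly like this:

{noformat}
// Hypothetical sketch: reject a SchedulingRequest that arrives without a
// placement constraint instead of dereferencing the missing constraint, so
// mixing constrained and unconstrained components does not NPE the allocator.
if (schedulingRequest.getPlacementConstraint() == null) {
  throw new IllegalArgumentException(
      "SchedulingRequest without a placement constraint is not supported by "
          + "SingleConstraintAppPlacementAllocator");
}
{noformat}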

[jira] [Commented] (YARN-8372) ApplicationAttemptNotFoundException should be handled correctly by Distributed Shell App Master

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496113#comment-16496113
 ] 

genericqa commented on YARN-8372:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 1 new + 46 unchanged - 0 fixed = 47 total (was 46) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  4s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8372 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925846/YARN-8372.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d558be9ad9e5 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 02c4b89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20904/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
| unit | 

[jira] [Commented] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496104#comment-16496104
 ] 

genericqa commented on YARN-8382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.8.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
19s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2.8.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c2d96dd |
| JIRA Issue | YARN-8382 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925848/YARN-8382-branch-2.8.3.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 23b6229be0b9 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8.3 / b3fe564 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_171 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (YARN-8373) RM Received RMFatalEvent of type CRITICAL_THREAD_CRASH

2018-05-30 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8373:
-
Component/s: fairscheduler

> RM  Received RMFatalEvent of type CRITICAL_THREAD_CRASH
> ---
>
> Key: YARN-8373
> URL: https://issues.apache.org/jira/browse/YARN-8373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Girish Bhat
>Priority: Major
>  Labels: newbie
>
>  
>  
> {noformat}
> sudo -u yarn /usr/local/hadoop/latest/bin/yarn version
> Hadoop 2.9.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
> 756ebc8394e473ac25feac05fa493f6d612e6c50
> Compiled by arsuresh on 2017-11-13T23:15Z
> Compiled with protoc 2.5.0
> From source with checksum 0a76a9a32a5257331741f8d5932f183
> This command was run using 
> /usr/local/hadoop/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar{noformat}
> This is for version 2.9.0 
>  
> {noformat}
> 2018-05-25 05:53:12,742 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received 
> RMFatalEvent of type CRITICAL_THREAD_CRASH, caused by a critical thread, 
> FairSchedulerContinuousScheduling, that exited unexpectedly: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,743 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Shutting down 
> the resource manager.
> 2018-05-25 05:53:12,749 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: a critical thread, FairSchedulerContinuousScheduling, that exited 
> unexpectedly: java.lang.IllegalArgumentException: Comparison method violates 
> its general contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,772 ERROR 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>  ExpiredTokenRemover received java.lang.InterruptedException: sleep 
> interrupted{noformat}






[jira] [Commented] (YARN-8373) RM Received RMFatalEvent of type CRITICAL_THREAD_CRASH

2018-05-30 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496085#comment-16496085
 ] 

Miklos Szegedi commented on YARN-8373:
--

This seems to be a race condition. Many parameters may affect it, but tuning 
them would be an unreliable workaround that does not solve the root cause.
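This matches the FairScheduler trace above: ClusterNodeTracker.sortedNodeList 
sorts nodes whose available resources keep changing under continuous 
scheduling, so the comparator can contradict itself mid-sort. A minimal, 
self-contained sketch (plain Java, not YARN code; the Node class and mutator 
thread are stand-ins) of how that trips TimSort's contract check:

{noformat}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class UnstableComparatorDemo {
  // Stand-in for a scheduler node whose available resource changes while
  // the list is being sorted.
  static class Node {
    volatile int available = ThreadLocalRandom.current().nextInt(1000);
  }

  public static void main(String[] args) {
    List<Node> nodes = new ArrayList<>();
    for (int i = 0; i < 10_000; i++) {
      nodes.add(new Node());
    }

    // Mutator thread: keeps changing the field the comparator reads, so the
    // ordering is not stable for the duration of a single sort.
    Thread mutator = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        nodes.get(ThreadLocalRandom.current().nextInt(nodes.size())).available =
            ThreadLocalRandom.current().nextInt(1000);
      }
    });
    mutator.setDaemon(true);
    mutator.start();

    try {
      // TimSort assumes a consistent total order; if 'available' changes
      // between comparisons, a sort may throw
      // java.lang.IllegalArgumentException: Comparison method violates its
      // general contract!  (it is a race, so it will not fail on every run)
      for (int round = 0; round < 1000; round++) {
        nodes.sort(Comparator.comparingInt((Node n) -> n.available));
      }
    } finally {
      mutator.interrupt();
    }
  }
}
{noformat}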

> RM  Received RMFatalEvent of type CRITICAL_THREAD_CRASH
> ---
>
> Key: YARN-8373
> URL: https://issues.apache.org/jira/browse/YARN-8373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Girish Bhat
>Priority: Major
>  Labels: newbie
>
>  
>  
> {noformat}
> sudo -u yarn /usr/local/hadoop/latest/bin/yarn version
> Hadoop 2.9.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
> 756ebc8394e473ac25feac05fa493f6d612e6c50
> Compiled by arsuresh on 2017-11-13T23:15Z
> Compiled with protoc 2.5.0
> From source with checksum 0a76a9a32a5257331741f8d5932f183
> This command was run using 
> /usr/local/hadoop/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar{noformat}
> This is for version 2.9.0 
>  
> {noformat}
> 2018-05-25 05:53:12,742 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received 
> RMFatalEvent of type CRITICAL_THREAD_CRASH, caused by a critical thread, 
> FairSchedulerContinuousScheduling, that exited unexpectedly: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,743 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Shutting down 
> the resource manager.
> 2018-05-25 05:53:12,749 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: a critical thread, FairSchedulerContinuousScheduling, that exited 
> unexpectedly: java.lang.IllegalArgumentException: Comparison method violates 
> its general contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,772 ERROR 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>  ExpiredTokenRemover received java.lang.InterruptedException: sleep 
> interrupted{noformat}






[jira] [Commented] (YARN-8367) 2 components, one with placement constraint and one without causes NPE in SingleConstraintAppPlacementAllocator

2018-05-30 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496069#comment-16496069
 ] 

Gour Saha commented on YARN-8367:
-

I am not sure if the UT failure is related, but it succeeds in my local 
environment.

> 2 components, one with placement constraint and one without causes NPE in 
> SingleConstraintAppPlacementAllocator
> ---
>
> Key: YARN-8367
> URL: https://issues.apache.org/jira/browse/YARN-8367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8367.001.patch
>
>
> While testing the fix for YARN-8350, [~billie.rinaldi] encountered this NPE 
> in AM log. Filling this on her behalf -
> {noformat}
> 2018-05-25 21:11:54,006 [AMRM Heartbeater thread] ERROR 
> impl.AMRMClientAsyncImpl - Exception on heartbeat
> java.lang.NullPointerException: java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.validateAndSetSchedulingRequest(SingleConstraintAppPlacementAllocator.java:245)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.internalUpdatePendingAsk(SingleConstraintAppPlacementAllocator.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.updatePendingAsk(SingleConstraintAppPlacementAllocator.java:207)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.addSchedulingRequests(AppSchedulingInfo.java:269)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateSchedulingRequests(AppSchedulingInfo.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateSchedulingRequests(SchedulerApplicationAttempt.java:469)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:1154)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:278)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.SchedulerPlacementProcessor.allocate(SchedulerPlacementProcessor.java:53)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:433)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread Hu Ziqian (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496049#comment-16496049
 ] 

Hu Ziqian commented on YARN-8382:
-

Hi [~leftnoteasy], could you help review this patch?

> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: We write a container with a shutdownHook which has a piece 
> of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8382-branch-2.8.3.001.patch, YARN-8382.001.patch
>
>
> As Jiandan said in YARN-6525, the NM may time out while deleting a 
> container's cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
> cgroup file leak happens.
>  
> One container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           └─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still be doing work (shutdownHook etc.) and may not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running and cgroup/tasks may still 
> not be empty, which causes the cgroup file leak.
>  
> We add a condition that 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
> bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
> problem.
>  
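For illustration only (this is not the attached patch; the default values used 
here are assumptions), a minimal sketch of the kind of check the description 
proposes, reading both properties by their literal keys from the NodeManager 
configuration:

{noformat}
import org.apache.hadoop.conf.Configuration;

public final class CgroupDeleteTimeoutCheck {
  // Hypothetical helper: fail fast when the cgroups delete timeout is not
  // larger than the pre-SIGKILL sleep delay, the ordering the description
  // identifies as necessary to avoid leaking cgroup directories.
  static void validate(Configuration conf) {
    long sigkillDelayMs = conf.getLong(
        "yarn.nodemanager.sleep-delay-before-sigkill.ms", 250);   // assumed default
    long deleteTimeoutMs = conf.getLong(
        "yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms",
        1000);                                                    // assumed default
    if (deleteTimeoutMs <= sigkillDelayMs) {
      throw new IllegalStateException("cgroups delete-timeout-ms ("
          + deleteTimeoutMs + ") must be bigger than sleep-delay-before-sigkill.ms ("
          + sigkillDelayMs + "), otherwise cgroup files may leak");
    }
  }
}
{noformat}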






[jira] [Updated] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread Hu Ziqian (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hu Ziqian updated YARN-8382:

Description: 
As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 

  was:
As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 


> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: We write a container with a shutdownHook which has a piece 
> of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8382-branch-2.8.3.001.patch, YARN-8382.001.patch
>
>
> As Jiandan said in YARN-6525, the NM may time out while deleting a 
> container's cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
> cgroup file leak happens.
>  
> One container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           └─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still be doing work (shutdownHook etc.) and may not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup. So when 

[jira] [Updated] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread Hu Ziqian (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hu Ziqian updated YARN-8382:

Attachment: YARN-8382-branch-2.8.3.001.patch
YARN-8382.001.patch

> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: We write a container with a shutdownHook which has a piece 
> of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8382-branch-2.8.3.001.patch, YARN-8382.001.patch
>
>
> As Jiandan said in YARN-6525, the NM may time out while deleting a 
> container's cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
> cgroup file leak happens.
>  
> One container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           └─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still be doing work (shutdownHook etc.) and may not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running and cgroup/tasks may still 
> not be empty, which causes the cgroup file leak.
>  
> We add a condition that 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
> bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
> problem.
>  






[jira] [Commented] (YARN-8372) ApplicationAttemptNotFoundException should be handled correctly by Distributed Shell App Master

2018-05-30 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496041#comment-16496041
 ] 

Suma Shivaprasad commented on YARN-8372:


Thanks [~vinodkv] and [~rohithsharma]. Attached a patch that cleans up in the 
shutdown hook only when keep_containers_across_application_attempts is enabled.

> ApplicationAttemptNotFoundException should be handled correctly by 
> Distributed Shell App Master
> ---
>
> Key: YARN-8372
> URL: https://issues.apache.org/jira/browse/YARN-8372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell
>Reporter: Charan Hebri
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8372.1.patch
>
>
> {noformat}
> try {
>   response = client.allocate(progress);
> } catch (ApplicationAttemptNotFoundException e) {
> handler.onShutdownRequest();
> LOG.info("Shutdown requested. Stopping callback.");
> return;{noformat}
> is a code snippet from AMRMClientAsyncImpl. The corresponding 
> onShutdownRequest call for the Distributed Shell App master,
> {noformat}
> @Override
> public void onShutdownRequest() {
>   done = true;
> }{noformat}
> Due to the above change, the current behavior is that whenever an application 
> attempt fails due to an NM restart (the NM where the DS AM is running), an 
> ApplicationAttemptNotFoundException is thrown, and all containers for that 
> attempt, including the ones running on other NMs, are killed by the AM and 
> marked as COMPLETE. The subsequent attempt then spawns new containers just 
> like a fresh attempt. This behavior differs from a MapReduce application, 
> where the containers are not killed.
> cc [~rohithsharma]
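For illustration only (this is not YARN-8372.1.patch; the flag name below is a 
hypothetical stand-in for the DS keep_containers_across_application_attempts 
option), one way onShutdownRequest could avoid tearing down containers that 
the next attempt can reclaim:

{noformat}
// Hypothetical sketch, mirroring the snippet quoted above: treat the shutdown
// request as terminal only when containers are NOT kept across attempts;
// otherwise leave the running containers for the next application attempt.
@Override
public void onShutdownRequest() {
  if (keepContainersAcrossApplicationAttempts) {   // assumed flag in the DS AM
    LOG.info("Attempt lost; keeping running containers for the next attempt.");
  } else {
    done = true;
  }
}
{noformat}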






[jira] [Updated] (YARN-8372) ApplicationAttemptNotFoundException should be handled correctly by Distributed Shell App Master

2018-05-30 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8372:
---
Attachment: YARN-8372.1.patch

> ApplicationAttemptNotFoundException should be handled correctly by 
> Distributed Shell App Master
> ---
>
> Key: YARN-8372
> URL: https://issues.apache.org/jira/browse/YARN-8372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell
>Reporter: Charan Hebri
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8372.1.patch
>
>
> {noformat}
> try {
>   response = client.allocate(progress);
> } catch (ApplicationAttemptNotFoundException e) {
> handler.onShutdownRequest();
> LOG.info("Shutdown requested. Stopping callback.");
> return;{noformat}
> is a code snippet from AMRMClientAsyncImpl. The corresponding 
> onShutdownRequest call for the Distributed Shell App master,
> {noformat}
> @Override
> public void onShutdownRequest() {
>   done = true;
> }{noformat}
> Due to the above change, the current behavior is that whenever an application 
> attempt fails due to an NM restart (the NM where the DS AM is running), an 
> ApplicationAttemptNotFoundException is thrown, and all containers for that 
> attempt, including the ones running on other NMs, are killed by the AM and 
> marked as COMPLETE. The subsequent attempt then spawns new containers just 
> like a fresh attempt. This behavior differs from a MapReduce application, 
> where the containers are not killed.
> cc [~rohithsharma]






[jira] [Updated] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread Hu Ziqian (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hu Ziqian updated YARN-8382:

Description: 
As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 

  was:
As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 


> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: We write a container with a shutdownHook which has a piece 
> of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
>
> As Jiandan said in YARN-6525, the NM may time out while deleting a 
> container's cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
> cgroup file leak happens.
>  
> One container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           └─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still be doing work (shutdownHook etc.) and may not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running 

[jira] [Updated] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread Hu Ziqian (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hu Ziqian updated YARN-8382:

Description: 
As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 

  was:
As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 


> cgroup file leak in NM
> --
>
> Key: YARN-8382
> URL: https://issues.apache.org/jira/browse/YARN-8382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
> Environment: We write a container with a shutdownHook which has a piece 
> of code like "while(true) sleep(100)". When 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
> happens; when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
> successfully.
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
>
> As Jiandan said in YARN-6525, the NM may time out while deleting a 
> container's cgroup files, with logs like the following:
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
> delete for 1000ms
>  
> One situation we found is that when we set 
> *yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
> cgroup file leak happens.
>  
> One container process tree looks like the following graph:
> bash(16097)───java(16099)─┬─{java}(16100)
>                           ├─{java}(16101)
>                           └─{java}(16102)
>  
> When the NM kills a container, it sends kill -15 -pid to kill the container 
> process group. The bash process exits when it receives SIGTERM, but the java 
> process may still be doing work (shutdownHook etc.) and may not exit until it 
> receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
> begins trying to delete the cgroup. So when 
> *yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
> reached, the java processes may still be running and 

[jira] [Created] (YARN-8382) cgroup file leak in NM

2018-05-30 Thread Hu Ziqian (JIRA)
Hu Ziqian created YARN-8382:
---

 Summary: cgroup file leak in NM
 Key: YARN-8382
 URL: https://issues.apache.org/jira/browse/YARN-8382
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
 Environment: We write a container with a shutdownHook which has a piece of 
code like "while(true) sleep(100)". When 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* < 
*yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file leak 
happens; when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* > 
*yarn.nodemanager.sleep-delay-before-sigkill.ms*, the cgroup file is deleted 
successfully.
Reporter: Hu Ziqian
Assignee: Hu Ziqian


As Jiandan said in YARN-6525, the NM may time out while deleting a container's 
cgroup files, with logs like the following:

org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
Unable to delete cgroup at: /cgroup/cpu/hadoop-yarn/container_xxx, tried to 
delete for 1000ms

One situation we found is that when we set 
*yarn.nodemanager.sleep-delay-before-sigkill.ms* bigger than 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms*, the 
cgroup file leak happens.

One container process tree looks like the following graph:

bash(16097)───java(16099)─┬─{java}(16100)
                          ├─{java}(16101)
                          └─{java}(16102)

When the NM kills a container, it sends kill -15 -pid to kill the container 
process group. The bash process exits when it receives SIGTERM, but the java 
process may still be doing work (shutdownHook etc.) and may not exit until it 
receives SIGKILL. When the bash process exits, CgroupsLCEResourcesHandler 
begins trying to delete the cgroup. So when 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* is 
reached, the java processes may still be running and cgroup/tasks may still 
not be empty, which causes the cgroup file leak.

We add a condition that 
*yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms* must be 
bigger than *yarn.nodemanager.sleep-delay-before-sigkill.ms* to solve this 
problem.

 






[jira] [Updated] (YARN-8381) Job got stuck while node was unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-8381:

Description: 
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to localize the 
problem, but I found no error message.

After reading log messages for a long time, it occurred to me to check the 
node health. The YARN web UI showed that the NodeManager was unhealthy, due to 
"local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and that solved the problem.

{color:#d04437}*But I still strongly recommend adding error log messages for 
an unhealthy NodeManager.*{color}

  was:
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to localize the 
problem, but I found no error message.

After reading log messages for a long time, it occurred to me to check the 
node health. The YARN web UI showed that the NodeManager was unhealthy, due to 
the "local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and that solved the problem.

{color:#d04437}*But I still strongly recommend adding error log messages for 
an unhealthy NodeManager.*{color}


> Job got stuck while node was unhealthy, but without log messages to indicate 
> such case
> --
>
> Key: YARN-8381
> URL: https://issues.apache.org/jira/browse/YARN-8381
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
>
> I started a fresh pseudo-distributed system on a node, then ran a job, but it 
> got stuck. My first reaction was to check the log messages to locate the 
> problem, but I found no error message.
> After reading log messages for a long time, it finally occurred to me to check 
> the node health. The YARN web UI showed that the nodemanager was unhealthy, 
> due to "local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir". I reconfigured 
> "{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
>  to 98% and solved the problem.
> {color:#d04437}*But I still strongly recommend adding error log messages for an 
> unhealthy nodemanager.*{color}
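
For reference, the workaround above amounts to raising the disk health-checker 
threshold. A minimal sketch using org.apache.hadoop.conf.Configuration, assuming 
90.0 as the default value; in practice the property is normally set in 
yarn-site.xml rather than in code:

{code}
import org.apache.hadoop.conf.Configuration;

// Illustration only: raise the disk utilization cut-off so nearly full local
// dirs are still treated as healthy (the reporter's workaround).
public final class DiskThresholdSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setFloat(
        "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage",
        98.0f);
    System.out.println("disk utilization threshold = " + conf.getFloat(
        "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage",
        90.0f));
  }
}
{code}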



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8381) Job got stuck while node was unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-8381:

Description: 
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message.

After reading log messages for a long time, it finally occurred to me to check 
the node health. The YARN web UI showed that the nodemanager was unhealthy, due 
to the "l\{{ocal-dirs are bad: /tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem.

{color:#d04437}*But I still strongly recommend adding error log messages for an 
unhealthy nodemanager.*{color}

  was:
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message.

After reading log messages for a long time, it finally occurred to me to check 
the node health. The YARN web UI showed that the nodemanager was unhealthy, due 
to the "l\{{ocal-dirs are bad: /tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem. But I still strongly recommend adding error 
log messages for an unhealthy nodemanager.


> Job got stuck while node was unhealthy, but without log messages to indicate 
> such case
> --
>
> Key: YARN-8381
> URL: https://issues.apache.org/jira/browse/YARN-8381
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
>
> I started a fresh pseudo-distributed system on a node, then ran a job, but it 
> got stuck. My first reaction was to check the log messages to locate the 
> problem, but I found no error message.
> After reading log messages for a long time, it finally occurred to me to check 
> the node health. The YARN web UI showed that the nodemanager was unhealthy, 
> due to the "l\{{ocal-dirs are bad: /tmp/hadoop-hduser/nm-local-dir}}". I 
> reconfigured 
> "{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
>  to 98% and solved the problem.
> {color:#d04437}*But I still strongly recommend adding error log messages for an 
> unhealthy nodemanager.*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8381) Job got stuck while node was unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-8381:

Description: 
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message.

After reading log messages for a long time, it finally occurred to me to check 
the node health. The YARN web UI showed that the nodemanager was unhealthy, due 
to the "local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem.

{color:#d04437}*But I still strongly recommend adding error log messages for an 
unhealthy nodemanager.*{color}

  was:
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message.

After reading log messages for a long time, it finally occurred to me to check 
the node health. The YARN web UI showed that the nodemanager was unhealthy, due 
to the "l\{{ocal-dirs are bad: /tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem.

{color:#d04437}*But I still strongly recommend adding error log messages for an 
unhealthy nodemanager.*{color}


> Job got stuck while node was unhealthy, but without log messages to indicate 
> such case
> --
>
> Key: YARN-8381
> URL: https://issues.apache.org/jira/browse/YARN-8381
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
>
> I started a fresh pseudo-distributed system on a node, then ran a job, but it 
> got stuck. My first reaction was to check the log messages to locate the 
> problem, but I found no error message.
> After reading log messages for a long time, it finally occurred to me to check 
> the node health. The YARN web UI showed that the nodemanager was unhealthy, 
> due to the "local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir". I 
> reconfigured 
> "{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
>  to 98% and solved the problem.
> {color:#d04437}*But I still strongly recommend adding error log messages for an 
> unhealthy nodemanager.*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8381) Job got stuck while node was unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-8381:

Summary: Job got stuck while node was unhealthy, but without log messages 
to indicate such case  (was: Job got stuck while node is unhealthy, but without 
log messages to indicate such case)

> Job got stuck while node was unhealthy, but without log messages to indicate 
> such case
> --
>
> Key: YARN-8381
> URL: https://issues.apache.org/jira/browse/YARN-8381
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
>
> I started a fresh pseudo-distributed system on a node, then ran a job, but it 
> got stuck. My first reaction was to check the log messages to locate the 
> problem, but I found no error message. Then, after reading log messages for a 
> long time, it occurred to me to check the node health. The YARN web UI showed 
> that the nodemanager was unhealthy, due to the "l{{ocal-dirs are bad: 
> /tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
> "{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
>  to 98% and solved the problem. But I still strongly recommend adding error 
> log messages for an unhealthy nodemanager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8381) Job got stuck while node was unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-8381:

Description: 
I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message.

After reading log messages for a long time, it finally occurred to me to check 
the node health. The YARN web UI showed that the nodemanager was unhealthy, due 
to the "l\{{ocal-dirs are bad: /tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem. But I still strongly recommend adding error 
log messages for an unhealthy nodemanager.

  was:I started a fresh pseudo-distributed system on a node, then ran a job, 
but it got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message. Then, after reading log messages for a 
long time, it occurred to me to check the node health. The YARN web UI showed 
that the nodemanager was unhealthy, due to the "l{{ocal-dirs are bad: 
/tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem. But I still strongly recommend adding error 
log messages for an unhealthy nodemanager.


> Job got stuck while node was unhealthy, but without log messages to indicate 
> such case
> --
>
> Key: YARN-8381
> URL: https://issues.apache.org/jira/browse/YARN-8381
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
>
> I started a fresh pseudo-distributed system on a node, then ran a job, but it 
> got stuck. My first reaction was to check the log messages to locate the 
> problem, but I found no error message.
> After reading log messages for a long time, it finally occurred to me to check 
> the node health. The YARN web UI showed that the nodemanager was unhealthy, 
> due to the "l\{{ocal-dirs are bad: /tmp/hadoop-hduser/nm-local-dir}}". I 
> reconfigured 
> "{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
>  to 98% and solved the problem. But I still strongly recommend adding error 
> log messages for an unhealthy nodemanager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8381) Job get stuck while node is unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)
lujie created YARN-8381:
---

 Summary: Job get stuck while node is unhealthy, but without log 
messages to indicate such case
 Key: YARN-8381
 URL: https://issues.apache.org/jira/browse/YARN-8381
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: lujie


I started a fresh pseudo-distributed system on a node, then ran a job, but it 
got stuck. My first reaction was to check the log messages to locate the 
problem, but I found no error message. Then, after reading log messages for a 
long time, it occurred to me to check the node health. The YARN web UI showed 
that the nodemanager was unhealthy, due to the "l{{ocal-dirs are bad: 
/tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
"{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
 to 98% and solved the problem. But I still strongly recommend adding error 
log messages for an unhealthy nodemanager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8381) Job got stuck while node is unhealthy, but without log messages to indicate such case

2018-05-30 Thread lujie (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated YARN-8381:

Summary: Job got stuck while node is unhealthy, but without log messages to 
indicate such case  (was: Job get stuck while node is unhealthy, but without 
log messages to indicate such case)

> Job got stuck while node is unhealthy, but without log messages to indicate 
> such case
> -
>
> Key: YARN-8381
> URL: https://issues.apache.org/jira/browse/YARN-8381
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
>
> I started a fresh pseudo-distributed system on a node, then ran a job, but it 
> got stuck. My first reaction was to check the log messages to locate the 
> problem, but I found no error message. Then, after reading log messages for a 
> long time, it occurred to me to check the node health. The YARN web UI showed 
> that the nodemanager was unhealthy, due to the "l{{ocal-dirs are bad: 
> /tmp/hadoop-hduser/nm-local-dir}}". I reconfigured 
> "{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}"
>  to 98% and solved the problem. But I still strongly recommend adding error 
> log messages for an unhealthy nodemanager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8372) ApplicationAttemptNotFoundException should be handled correctly by Distributed Shell App Master

2018-05-30 Thread Vinod Kumar Vavilapalli (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496007#comment-16496007
 ] 

Vinod Kumar Vavilapalli commented on YARN-8372:
---

bq. DS app master should handle shutdown request properly whether to clean up 
or not based on the attempt number check.
This is not possible to do. The AM actually doesn't know what the last 
attempt-number is. See MAPREDUCE-5956 and YARN-2261 for background.

Maybe DS should (a) just never clean up containers if the CLI flag 
{{keep_containers_across_application_attempts}} is true, and (b) clean up in the 
shutdown hook like it does today if 
{{keep_containers_across_application_attempts}} is false.
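
A simplified, hypothetical model of that suggestion, with made-up names just to 
show the gating; this is not the real Distributed Shell ApplicationMaster code:

{code}
// Hypothetical sketch: gate container cleanup at shutdown on the
// keep_containers_across_application_attempts flag.
public class ShutdownHandlingSketch {
  private final boolean keepContainersAcrossApplicationAttempts;
  // In the real AM this flag is read by the heartbeat/allocation loop.
  private volatile boolean done = false;

  public ShutdownHandlingSketch(boolean keepContainers) {
    this.keepContainersAcrossApplicationAttempts = keepContainers;
  }

  // Called when the RM reports that the application attempt no longer exists.
  public void onShutdownRequest() {
    if (!keepContainersAcrossApplicationAttempts) {
      cleanupRunningContainers(); // (b) behave like today: clean up containers
    }
    done = true; // (a) and (b): stop the allocation loop either way
  }

  private void cleanupRunningContainers() {
    System.out.println("cleaning up running containers");
  }

  public static void main(String[] args) {
    new ShutdownHandlingSketch(true).onShutdownRequest();  // keeps containers
    new ShutdownHandlingSketch(false).onShutdownRequest(); // cleans up
  }
}
{code}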

> ApplicationAttemptNotFoundException should be handled correctly by 
> Distributed Shell App Master
> ---
>
> Key: YARN-8372
> URL: https://issues.apache.org/jira/browse/YARN-8372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell
>Reporter: Charan Hebri
>Assignee: Suma Shivaprasad
>Priority: Major
>
> {noformat}
> try {
>   response = client.allocate(progress);
> } catch (ApplicationAttemptNotFoundException e) {
> handler.onShutdownRequest();
> LOG.info("Shutdown requested. Stopping callback.");
> return;{noformat}
> is a code snippet from AMRMClientAsyncImpl. The corresponding 
> onShutdownRequest call for the Distributed Shell App master,
> {noformat}
> @Override
> public void onShutdownRequest() {
>   done = true;
> }{noformat}
> Due to the above change, the current behavior is that whenever an application 
> attempt fails due to a NM restart (NM where the DS AM is running), an 
> ApplicationAttemptNotFoundException is thrown and all containers for that 
> attempt including the ones that are running on other NMs are killed by the AM 
> and marked as COMPLETE. The subsequent attempt spawns new containers just 
> like a new attempt. This behavior is different to a Map Reduce application 
> where the containers are not killed.
> cc [~rohithsharma]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8380) Support shared mounts in docker runtime

2018-05-30 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-8380:


 Summary: Support shared mounts in docker runtime
 Key: YARN-8380
 URL: https://issues.apache.org/jira/browse/YARN-8380
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The docker run command supports the mount type shared, but currently we are 
only supporting ro and rw mount types in the docker runtime.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8367) 2 components, one with placement constraint and one without causes NPE in SingleConstraintAppPlacementAllocator

2018-05-30 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495965#comment-16495965
 ] 

Weiwei Yang commented on YARN-8367:
---

Thanks [~gsaha], I will get this committed by the end of today if there are no 
further comments. 

> 2 components, one with placement constraint and one without causes NPE in 
> SingleConstraintAppPlacementAllocator
> ---
>
> Key: YARN-8367
> URL: https://issues.apache.org/jira/browse/YARN-8367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8367.001.patch
>
>
> While testing the fix for YARN-8350, [~billie.rinaldi] encountered this NPE 
> in AM log. Filling this on her behalf -
> {noformat}
> 2018-05-25 21:11:54,006 [AMRM Heartbeater thread] ERROR 
> impl.AMRMClientAsyncImpl - Exception on heartbeat
> java.lang.NullPointerException: java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.validateAndSetSchedulingRequest(SingleConstraintAppPlacementAllocator.java:245)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.internalUpdatePendingAsk(SingleConstraintAppPlacementAllocator.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.updatePendingAsk(SingleConstraintAppPlacementAllocator.java:207)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.addSchedulingRequests(AppSchedulingInfo.java:269)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateSchedulingRequests(AppSchedulingInfo.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateSchedulingRequests(SchedulerApplicationAttempt.java:469)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:1154)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:278)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.SchedulerPlacementProcessor.allocate(SchedulerPlacementProcessor.java:53)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:433)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (YARN-8342) Using docker image from a non-privileged registry, the launch_command is not honored

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495959#comment-16495959
 ] 

genericqa commented on YARN-8342:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 16 unchanged - 0 fixed = 17 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 45s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 22s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
53s{color} | 

[jira] [Assigned] (YARN-8372) ApplicationAttemptNotFoundException should be handled correctly by Distributed Shell App Master

2018-05-30 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad reassigned YARN-8372:
--

Assignee: Suma Shivaprasad

> ApplicationAttemptNotFoundException should be handled correctly by 
> Distributed Shell App Master
> ---
>
> Key: YARN-8372
> URL: https://issues.apache.org/jira/browse/YARN-8372
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell
>Reporter: Charan Hebri
>Assignee: Suma Shivaprasad
>Priority: Major
>
> {noformat}
> try {
>   response = client.allocate(progress);
> } catch (ApplicationAttemptNotFoundException e) {
> handler.onShutdownRequest();
> LOG.info("Shutdown requested. Stopping callback.");
> return;{noformat}
> is a code snippet from AMRMClientAsyncImpl. The corresponding 
> onShutdownRequest call for the Distributed Shell App master,
> {noformat}
> @Override
> public void onShutdownRequest() {
>   done = true;
> }{noformat}
> Due to the above change, the current behavior is that whenever an application 
> attempt fails due to a NM restart (NM where the DS AM is running), an 
> ApplicationAttemptNotFoundException is thrown and all containers for that 
> attempt including the ones that are running on other NMs are killed by the AM 
> and marked as COMPLETE. The subsequent attempt spawns new containers just 
> like a new attempt. This behavior is different to a Map Reduce application 
> where the containers are not killed.
> cc [~rohithsharma]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8367) 2 components, one with placement constraint and one without causes NPE in SingleConstraintAppPlacementAllocator

2018-05-30 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495948#comment-16495948
 ] 

Gour Saha commented on YARN-8367:
-

[~cheersyang] thank you for the patch. 001 looks good. I even tested it in my 
cluster, where I was getting the NPE, and your patch fixes the problem. So +1 
for the 001 patch. I think [~billie.rinaldi] also successfully tested your patch 
while testing YARN-8350.

> 2 components, one with placement constraint and one without causes NPE in 
> SingleConstraintAppPlacementAllocator
> ---
>
> Key: YARN-8367
> URL: https://issues.apache.org/jira/browse/YARN-8367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8367.001.patch
>
>
> While testing the fix for YARN-8350, [~billie.rinaldi] encountered this NPE 
> in AM log. Filling this on her behalf -
> {noformat}
> 2018-05-25 21:11:54,006 [AMRM Heartbeater thread] ERROR 
> impl.AMRMClientAsyncImpl - Exception on heartbeat
> java.lang.NullPointerException: java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.validateAndSetSchedulingRequest(SingleConstraintAppPlacementAllocator.java:245)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.internalUpdatePendingAsk(SingleConstraintAppPlacementAllocator.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.updatePendingAsk(SingleConstraintAppPlacementAllocator.java:207)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.addSchedulingRequests(AppSchedulingInfo.java:269)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateSchedulingRequests(AppSchedulingInfo.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateSchedulingRequests(SchedulerApplicationAttempt.java:469)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:1154)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:278)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.processor.SchedulerPlacementProcessor.allocate(SchedulerPlacementProcessor.java:53)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:433)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> 

[jira] [Commented] (YARN-8350) NPE in service AM related to placement policy

2018-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495947#comment-16495947
 ] 

Billie Rinaldi commented on YARN-8350:
--

Committed to trunk and branch-3.1. Thanks for the patch, [~gsaha]!

> NPE in service AM related to placement policy
> -
>
> Key: YARN-8350
> URL: https://issues.apache.org/jira/browse/YARN-8350
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8350.01.patch, YARN-8350.02.patch
>
>
> It seems like this NPE is happening in a service with more than one component 
> when one component has a placement policy and the other does not. It causes 
> the AM to crash.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.service.component.Component.requestContainers(Component.java:644)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:310)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:293)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:919)
> at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:344)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:253)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:251)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:317)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8377) Javadoc build failed in hadoop-yarn-server-nodemanager

2018-05-30 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495945#comment-16495945
 ] 

Takanobu Asanuma commented on YARN-8377:


Thanks for reviewing and committing it, [~eepayne]!

> Javadoc build failed in hadoop-yarn-server-nodemanager
> --
>
> Key: YARN-8377
> URL: https://issues.apache.org/jira/browse/YARN-8377
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, docs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8377.1.patch
>
>
> This is the same cause as YARN-8369.
> {code}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java:88:
>  error: bad use of '>'
> [ERROR]* When failuresValidityInterval is > 0, it also removes time 
> entries from
> [ERROR]   ^
> {code}
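
For context, a common way to clear this kind of javadoc error is to escape the 
offending character, for example as below; this is an illustrative pattern only, 
not necessarily the committed fix, and the sentence is left truncated as in the 
error output:

{code}
public class SlidingWindowJavadocExample {
  /**
   * Illustrative pattern only, not necessarily the YARN-8377 change:
   * the '>' is escaped so the javadoc tool accepts the comment.
   * When failuresValidityInterval is {@literal >} 0, it also removes time entries from ...
   */
  void example() {
    // no-op; this class exists only to host the javadoc above
  }
}
{code}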



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495934#comment-16495934
 ] 

Eric Yang edited comment on YARN-8197 at 5/31/18 12:36 AM:
---

I am able to look at the AM UI for a MapReduce job on a secure cluster using 
patch 002 combined with patch 001 from YARN-8108.  

[~sunilg] 9 of 10 checkstyle issues can be fixed.  Can you generate a new patch?


was (Author: eyang):
+1. I am able to look at the AM UI for a MapReduce job on a secure cluster using 
patch 002 combined with patch 001 from YARN-8108.

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch
>
>
> {code}
> org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:211)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:145)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8333) Load balance YARN services using RegistryDNS multiple A records

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495935#comment-16495935
 ] 

genericqa commented on YARN-8333:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925828/YARN-8333.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a2808df67ac0 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-8197) Tracking URL in the app state does not get redirected to MR ApplicationMaster for Running applications

2018-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495934#comment-16495934
 ] 

Eric Yang commented on YARN-8197:
-

+1. I am able to look at the AM UI for a MapReduce job on a secure cluster using 
patch 002 combined with patch 001 from YARN-8108.

> Tracking URL in the app state does not get redirected to MR ApplicationMaster 
> for Running applications
> --
>
> Key: YARN-8197
> URL: https://issues.apache.org/jira/browse/YARN-8197
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8197.001.patch, YARN-8197.002.patch
>
>
> {code}
> org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:211)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:145)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1617)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8256) Pluggable provider for node membership management

2018-05-30 Thread Dagang Wei (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495933#comment-16495933
 ] 

Dagang Wei commented on YARN-8256:
--

Friendly ping. Thanks!

> Pluggable provider for node membership management
> -
>
> Key: YARN-8256
> URL: https://issues.apache.org/jira/browse/YARN-8256
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Affects Versions: 2.8.3, 3.0.2
>Reporter: Dagang Wei
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> h1. Background
> HDFS-7541 introduced a pluggable provider framework for node membership 
> management, which gives HDFS the flexibility to have different ways to manage 
> node membership for different needs.
> [org.apache.hadoop.hdfs.server.blockmanagement.HostConfigManager|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostConfigManager.java]
>  is the class which provides the abstraction. Currently, there are 2 
> implementations in the HDFS codebase:
> 1) 
> [org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java]
>  which uses 2 config files which are defined by the properties dfs.hosts and 
> dfs.hosts.exclude.
> 2) 
> [org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CombinedHostFileManager.java]
>  which uses a single JSON file defined by the property dfs.hosts.
> dfs.namenode.hosts.provider.classname is the property determining which 
> implementation is used
> h1. Problem
> YARN should be consistent with HDFS in terms of pluggable provider for node 
> membership management. The absence of it makes YARN impossible to have other 
> config sources, e.g., ZooKeeper, database, other config file formats, etc.
> h1. Proposed solution
> [org.apache.hadoop.yarn.server.resourcemanager.NodesListManager|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java]
>  is the class for managing YARN node membership today. It uses 
> [HostsFileReader|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java]
>  to read config files specified by the property 
> yarn.resourcemanager.nodes.include-path for nodes to include and 
> yarn.resourcemanager.nodes.exclude-path for nodes to exclude.
> The proposed solution is to
> 1) introduce a new interface {color:#008000}HostsConfigManager{color} which 
> provides the abstraction for node membership management. Update 
> {color:#008000}NodeListManager{color} to depend on 
> {color:#008000}HostsConfigManager{color} instead of 
> {color:#008000}HostsFileReader{color}. Then create a wrapper class for 
> {color:#008000}HostsFileReader{color} which implements the interface.
> 2) introduce a new config property 
> {color:#008000}yarn.resourcemanager.hosts-config.manager.class{color} for 
> specifying the implementation class. Set the default value to the wrapper 
> class of {color:#008000}HostsFileReader{color} for backward compatibility 
> between new code and old config.
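
As a rough illustration of the proposed abstraction, the interface might look 
something like the sketch below; the method set is an assumption loosely 
mirroring HDFS's HostConfigManager, not actual YARN code:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch of the proposed HostsConfigManager abstraction.
public interface HostsConfigManager {
  // (Re)load include/exclude membership data from the configured source.
  void refresh(Configuration conf) throws IOException;

  // Whether the given host is allowed to register with the ResourceManager.
  boolean isIncluded(String hostName);

  // Whether the given host has been excluded (decommissioned).
  boolean isExcluded(String hostName);
}
{code}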



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8333) Load balance YARN services using RegistryDNS multiple A records

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495924#comment-16495924
 ] 

genericqa commented on YARN-8333:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8333 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925826/YARN-8333.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a92e85fc5dbf 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-8349) Remove YARN registry entries when a service is killed by the RM

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495907#comment-16495907
 ] 

genericqa commented on YARN-8349:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 13s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 
10s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m 
23s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
7s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Updated] (YARN-8333) Load balance YARN services using RegistryDNS multiple A records

2018-05-30 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8333:

Attachment: (was: YARN-8333.003.patch)

> Load balance YARN services using RegistryDNS multiple A records
> ---
>
> Key: YARN-8333
> URL: https://issues.apache.org/jira/browse/YARN-8333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8333.001.patch, YARN-8333.002.patch, 
> YARN-8333.003.patch
>
>
> For scaling stateless containers, it would be great to support DNS round 
> robin for fault tolerance and load balancing.  The current DNS record format 
> for RegistryDNS is 
> [container-instance].[application-name].[username].[domain].  For example:
> {code}
> appcatalog-0.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog-1.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog-2.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog-3.appname.hbase.ycluster. IN A 123.123.123.123
> {code}
> It would be nice to add a multi-A record that contains all IP addresses of 
> the same component, in addition to the instance-based records.  For example:
> {code}
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.123
> {code}
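> As a rough illustration of how a client could consume such a multi-A record 
> once it exists (this is not part of the patch, and the hostname is just the 
> example name from above), a standard resolver returns all of the component's 
> addresses:
> {code}
> import java.net.InetAddress;
> import java.net.UnknownHostException;
>
> public class MultiARecordLookup {
>   public static void main(String[] args) throws UnknownHostException {
>     // getAllByName returns every A record published for the name.
>     InetAddress[] addrs =
>         InetAddress.getAllByName("appcatalog.appname.hbase.ycluster");
>     for (InetAddress addr : addrs) {
>       System.out.println(addr.getHostAddress());
>     }
>   }
> }
> {code}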



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8333) Load balance YARN services using RegistryDNS multiple A records

2018-05-30 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8333:

Attachment: YARN-8333.003.patch

> Load balance YARN services using RegistryDNS multiple A records
> ---
>
> Key: YARN-8333
> URL: https://issues.apache.org/jira/browse/YARN-8333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8333.001.patch, YARN-8333.002.patch, 
> YARN-8333.003.patch
>
>
> For scaling stateless containers, it would be great to support DNS round 
> robin for fault tolerance and load balancing.  The current DNS record format 
> for RegistryDNS is 
> [container-instance].[application-name].[username].[domain].  For example:
> {code}
> appcatalog-0.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog-1.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog-2.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog-3.appname.hbase.ycluster. IN A 123.123.123.123
> {code}
> It would be nice to add a multi-A record that contains all IP addresses of 
> the same component, in addition to the instance-based records.  For example:
> {code}
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.123
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8333) Load balance YARN services using RegistryDNS multiple A records

2018-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495861#comment-16495861
 ] 

Eric Yang commented on YARN-8333:
-

[~billie.rinaldi] Thank you for the review.  Patch 3 added documentation to 
ServiceDiscovery.md.

> Load balance YARN services using RegistryDNS multiple A records
> ---
>
> Key: YARN-8333
> URL: https://issues.apache.org/jira/browse/YARN-8333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8333.001.patch, YARN-8333.002.patch, 
> YARN-8333.003.patch
>
>
> For scaling stateless containers, it would be great to support DNS round 
> robin for fault tolerance and load balancing.  The current DNS record format 
> for RegistryDNS is 
> [container-instance].[application-name].[username].[domain].  For example:
> {code}
> appcatalog-0.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog-1.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog-2.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog-3.appname.hbase.ycluster. IN A 123.123.123.123
> {code}
> It would be nice to add a multi-A record that contains all IP addresses of 
> the same component, in addition to the instance-based records.  For example:
> {code}
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.123
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8333) Load balance YARN services using RegistryDNS multiple A records

2018-05-30 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8333:

Attachment: YARN-8333.003.patch

> Load balance YARN services using RegistryDNS multiple A records
> ---
>
> Key: YARN-8333
> URL: https://issues.apache.org/jira/browse/YARN-8333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8333.001.patch, YARN-8333.002.patch, 
> YARN-8333.003.patch
>
>
> For scaling stateless containers, it would be great to support DNS round 
> robin for fault tolerance and load balancing.  The current DNS record format 
> for RegistryDNS is 
> [container-instance].[application-name].[username].[domain].  For example:
> {code}
> appcatalog-0.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog-1.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog-2.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog-3.appname.hbase.ycluster. IN A 123.123.123.123
> {code}
> It would be nice to add a multi-A record that contains all IP addresses of 
> the same component, in addition to the instance-based records.  For example:
> {code}
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.120
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.121
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.122
> appcatalog.appname.hbase.ycluster. IN A 123.123.123.123
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8342) Using docker image from a non-privileged registry, the launch_command is not honored

2018-05-30 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8342:

Target Version/s: 3.2.0, 3.1.1

> Using docker image from a non-privileged registry, the launch_command is not 
> honored
> 
>
> Key: YARN-8342
> URL: https://issues.apache.org/jira/browse/YARN-8342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Eric Yang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8342.001.patch, YARN-8342.002.patch
>
>
> During testing of the Docker feature, I found that if a container image comes 
> from a non-privileged docker registry, the specified launch command is 
> ignored. The container succeeds without any logs, which is very confusing to 
> end users, and this behavior is inconsistent with containers from privileged 
> docker registries.
> cc: [~eyang], [~shaneku...@gmail.com], [~ebadger], [~jlowe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8342) Using docker image from a non-privileged registry, the launch_command is not honored

2018-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495831#comment-16495831
 ] 

Eric Yang edited comment on YARN-8342 at 5/30/18 10:51 PM:
---

Patch 002 includes:
- Allow untrusted image to supply launch_command to Docker mode (ENTRY_POINT).
- Renamed docker.privileged-containers.registries to docker.trusted.registries 
to reflect the current implementation.
- Add a paragraph on how to make ENTRY_POINT mode as global setting in 
yarn-env.sh and yarn-site.xml.


was (Author: eyang):
Patch 002 includes:
- Allow untrusted image to supply launch_command to Docker mode (ENTRY_POINT).
- Add a paragraph on how to make ENTRY_POINT mode as global setting in 
yarn-env.sh and yarn-site.xml.

> Using docker image from a non-privileged registry, the launch_command is not 
> honored
> 
>
> Key: YARN-8342
> URL: https://issues.apache.org/jira/browse/YARN-8342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Eric Yang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8342.001.patch, YARN-8342.002.patch
>
>
> During testing of the Docker feature, I found that if a container image comes 
> from a non-privileged docker registry, the specified launch command is 
> ignored. The container succeeds without any logs, which is very confusing to 
> end users, and this behavior is inconsistent with containers from privileged 
> docker registries.
> cc: [~eyang], [~shaneku...@gmail.com], [~ebadger], [~jlowe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8342) Using docker image from a non-privileged registry, the launch_command is not honored

2018-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495831#comment-16495831
 ] 

Eric Yang commented on YARN-8342:
-

Patch 002 includes:
- Allow untrusted image to supply launch_command to Docker mode (ENTRY_POINT).
- Add a paragraph on how to make ENTRY_POINT mode as global setting in 
yarn-env.sh and yarn-site.xml.

> Using docker image from a non-privileged registry, the launch_command is not 
> honored
> 
>
> Key: YARN-8342
> URL: https://issues.apache.org/jira/browse/YARN-8342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Eric Yang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8342.001.patch, YARN-8342.002.patch
>
>
> During testing of the Docker feature, I found that if a container image comes 
> from a non-privileged docker registry, the specified launch command is 
> ignored. The container succeeds without any logs, which is very confusing to 
> end users, and this behavior is inconsistent with containers from privileged 
> docker registries.
> cc: [~eyang], [~shaneku...@gmail.com], [~ebadger], [~jlowe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8342) Using docker image from a non-privileged registry, the launch_command is not honored

2018-05-30 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8342:

Attachment: YARN-8342.002.patch

> Using docker image from a non-privileged registry, the launch_command is not 
> honored
> 
>
> Key: YARN-8342
> URL: https://issues.apache.org/jira/browse/YARN-8342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Eric Yang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8342.001.patch, YARN-8342.002.patch
>
>
> During testing of the Docker feature, I found that if a container image comes 
> from a non-privileged docker registry, the specified launch command is 
> ignored. The container succeeds without any logs, which is very confusing to 
> end users, and this behavior is inconsistent with containers from privileged 
> docker registries.
> cc: [~eyang], [~shaneku...@gmail.com], [~ebadger], [~jlowe]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8379) Add an option to allow Capacity Scheduler preemption to balance satisfied queues

2018-05-30 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495817#comment-16495817
 ] 

Wangda Tan commented on YARN-8379:
--

To achieve a better resource balance between queues, we propose to make the 
additional preemption between queues configurable, and to let admins set a 
different kill-before-wait timeout to control the pace of this queue-balance 
preemption.

cc: [~jlowe], [~eepayne], [~sunilg] for suggestions.

Thanks [~clayb]/[~Zian Chen] for offline suggestions and feedback.

> Add an option to allow Capacity Scheduler preemption to balance satisfied 
> queues
> 
>
> Key: YARN-8379
> URL: https://issues.apache.org/jira/browse/YARN-8379
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
>
> The existing capacity scheduler only supports preemption that brings an 
> underutilized queue up to its guaranteed resource. In addition to that, there 
> is a requirement to get a better balance between queues when all of them have 
> reached their guaranteed resource but hold different shares beyond it.
> An example: 3 queues with capacities queue_a = 30%, queue_b = 30%, queue_c 
> = 40%. At time T, queue_a is using 30% and queue_b is using 70%. Existing 
> scheduler preemption won't happen, but this is unfair to queue_a since it has 
> the same guaranteed resources as queue_b.
> Before YARN-5864, the capacity scheduler did additional preemption to balance 
> queues. We changed that logic since it could preempt too many containers 
> between queues when all queues were satisfied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8379) Add an option to allow Capacity Scheduler preemption to balance satisfied queues

2018-05-30 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-8379:


 Summary: Add an option to allow Capacity Scheduler preemption to 
balance satisfied queues
 Key: YARN-8379
 URL: https://issues.apache.org/jira/browse/YARN-8379
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan
Assignee: Wangda Tan


The existing capacity scheduler only supports preemption that brings an 
underutilized queue up to its guaranteed resource. In addition to that, there is 
a requirement to get a better balance between queues when all of them have 
reached their guaranteed resource but hold different shares beyond it.

An example: 3 queues with capacities queue_a = 30%, queue_b = 30%, queue_c = 
40%. At time T, queue_a is using 30% and queue_b is using 70%. Existing 
scheduler preemption won't happen, but this is unfair to queue_a since it has 
the same guaranteed resources as queue_b.

Before YARN-5864, the capacity scheduler did additional preemption to balance 
queues. We changed that logic since it could preempt too many containers between 
queues when all queues were satisfied.
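
A toy illustration only (not the scheduler's actual preemption algorithm): if 
both queue_a and queue_b have pending demand and equal guarantees, a balanced 
outcome would split the resources they currently hold in proportion to those 
guarantees, i.e. 50%/50% instead of 30%/70%.

{code}
// Toy calculation: queues with pending demand share the pool they occupy in
// proportion to their guarantees (illustrative only, not CapacityScheduler code).
public class QueueBalanceToy {
  public static void main(String[] args) {
    double guaranteeA = 0.30, guaranteeB = 0.30;  // equal guaranteed capacity
    double pool = 0.30 + 0.70;                    // resources held by queue_a + queue_b
    double totalGuarantee = guaranteeA + guaranteeB;
    double targetA = pool * guaranteeA / totalGuarantee;  // 0.50
    double targetB = pool * guaranteeB / totalGuarantee;  // 0.50
    System.out.printf("balanced targets: queue_a=%.2f queue_b=%.2f%n",
        targetA, targetB);
  }
}
{code}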



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495786#comment-16495786
 ] 

genericqa commented on YARN-7953:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
45s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 39s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 22 unchanged - 0 fixed = 23 total (was 22) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Possible null pointer dereference of f in 
org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.recursiveChildrenByName(FedQueue,
 String)  Dereferenced at FedQueue.java:f in 
org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.recursiveChildrenByName(FedQueue,
 String)  Dereferenced at FedQueue.java:[line 349] |
|  |  Nullcheck of FedQueue.children at line 149 of value previously 
dereferenced in 
org.apache.hadoop.yarn.server.resourcemanager.federation.globalqueues.FedQueue.propagate(Resource)
  At FedQueue.java:149 of value previously dereferenced in 

[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-05-30 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495656#comment-16495656
 ] 

Eric Payne commented on YARN-4606:
--

[~maniraj...@gmail.com], do you have a status on updating this patch? Do you 
need any help from the community?

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.1.poc.patch, YARN-4606.POC.2.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belong to same user in LeafQueue are pending 
> (caused by max-am-percent, etc.), ActiveUsersManager still considers the user 
> is an active user. This could lead to starvation of active applications, for 
> example:
> - App1(belongs to user1)/app2(belongs to user2) are active, app3(belongs to 
> user3)/app4(belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, there're only two users (user1/user2) are able to allocate new 
> resources. So computed user-limit-resource could be lower than expected.
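> A simplified illustration of why the active-user count matters. This is only a 
> toy model, not CapacityScheduler's real computeUserLimit (which also involves 
> minimum-user-limit-percent, user-limit-factor, etc.):
> {code}
> // Toy model: each counted "active" user gets an equal slice of the queue.
> public class UserLimitToy {
>   public static void main(String[] args) {
>     double queueResource = 100.0;
>     int usersAbleToRun = 2;  // user1, user2 have active apps
>     int usersCounted = 4;    // user3, user4 only have pending apps but are counted
>     System.out.println("limit counting only runnable users: "
>         + queueResource / usersAbleToRun);  // 50.0
>     System.out.println("limit as currently computed:        "
>         + queueResource / usersCounted);    // 25.0, lower than expected
>   }
> }
> {code}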



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8350) NPE in service AM related to placement policy

2018-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495648#comment-16495648
 ] 

Hudson commented on YARN-8350:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14319 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14319/])
YARN-8350. NPE in service AM related to placement policy. Contributed by 
(billie: rev 778a4a24be176382a5704f709c00bdfcfe6ddc8c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceApiUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/exceptions/RestApiErrorMessages.java


> NPE in service AM related to placement policy
> -
>
> Key: YARN-8350
> URL: https://issues.apache.org/jira/browse/YARN-8350
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8350.01.patch, YARN-8350.02.patch
>
>
> It seems like this NPE is happening in a service with more than one component 
> when one component has a placement policy and the other does not. It causes 
> the AM to crash.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.service.component.Component.requestContainers(Component.java:644)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:310)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:293)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:919)
> at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:344)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:253)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:251)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:317)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8350) NPE in service AM related to placement policy

2018-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495617#comment-16495617
 ] 

Billie Rinaldi commented on YARN-8350:
--

Ah, okay. Thanks, I missed that part.

> NPE in service AM related to placement policy
> -
>
> Key: YARN-8350
> URL: https://issues.apache.org/jira/browse/YARN-8350
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Critical
> Attachments: YARN-8350.01.patch, YARN-8350.02.patch
>
>
> It seems like this NPE is happening in a service with more than one component 
> when one component has a placement policy and the other does not. It causes 
> the AM to crash.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.service.component.Component.requestContainers(Component.java:644)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:310)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:293)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:919)
> at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:344)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:253)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:251)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:317)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8350) NPE in service AM related to placement policy

2018-05-30 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495610#comment-16495610
 ] 

Gour Saha commented on YARN-8350:
-

Thanks [~billie.rinaldi] for reviewing the patch.

The missing space between "%s" and "in" is deliberate. I wrote a comment above 
the code to explain -

{code}
 // Note: %sin is not a typo. Constraint name is optional so the error messages
 // below handle that scenario by adding a space if name is specified.
{code}
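
For illustration, here is a minimal, self-contained example of the pattern (the 
message text is made up for this sketch, not the exact string from the patch):
{code}
public class OptionalNameFormat {
  public static void main(String[] args) {
    // Empty when no constraint name is given; note the trailing space otherwise.
    for (String name : new String[] {"", "CA1 "}) {
      System.out.println(String.format(
          "Invalid placement constraint %sin placement policy of component comp-a",
          name));
    }
    // -> "Invalid placement constraint in placement policy of component comp-a"
    // -> "Invalid placement constraint CA1 in placement policy of component comp-a"
  }
}
{code}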

> NPE in service AM related to placement policy
> -
>
> Key: YARN-8350
> URL: https://issues.apache.org/jira/browse/YARN-8350
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Critical
> Attachments: YARN-8350.01.patch, YARN-8350.02.patch
>
>
> It seems like this NPE is happening in a service with more than one component 
> when one component has a placement policy and the other does not. It causes 
> the AM to crash.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.service.component.Component.requestContainers(Component.java:644)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:310)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:293)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:919)
> at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:344)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:253)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:251)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:317)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8350) NPE in service AM related to placement policy

2018-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495609#comment-16495609
 ] 

Billie Rinaldi commented on YARN-8350:
--

[~gsaha], I noticed one thing in the error strings. They are missing a space 
after the constraint name: "constraint %sin placement policy".

> NPE in service AM related to placement policy
> -
>
> Key: YARN-8350
> URL: https://issues.apache.org/jira/browse/YARN-8350
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Critical
> Attachments: YARN-8350.01.patch, YARN-8350.02.patch
>
>
> It seems like this NPE is happening in a service with more than one component 
> when one component has a placement policy and the other does not. It causes 
> the AM to crash.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.service.component.Component.requestContainers(Component.java:644)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:310)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:293)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:919)
> at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:344)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:253)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:251)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:317)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8368) yarn app start cli should print applicationId

2018-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495604#comment-16495604
 ] 

Hudson commented on YARN-8368:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14318 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14318/])
YARN-8368. yarn app start cli should print applicationId. Contributed by 
(billie: rev 96eefcc84aacc4cc82ad7e3e72c5bdad56f4a7b7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/ServiceClientTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java


> yarn app start cli should print applicationId
> -
>
> Key: YARN-8368
> URL: https://issues.apache.org/jira/browse/YARN-8368
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-8368.01.patch, YARN-8368.02.patch
>
>
> The yarn app -start CLI should print the application ID, similar to the yarn 
> app -launch command.
> {code:java}
> bash-4.2$ yarn app -start hbase-app-test
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> 18/05/24 15:15:53 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xxx:8050
> 18/05/24 15:15:54 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xxx:8050
> 18/05/24 15:15:55 INFO client.ApiServiceClient: Service hbase-app-test is 
> successfully started.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8343) YARN should have ability to run images only from a whitelist docker registries

2018-05-30 Thread Eric Badger (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger resolved YARN-8343.
---
Resolution: Duplicate

> YARN should have ability to run images only from a whitelist docker registries
> --
>
> Key: YARN-8343
> URL: https://issues.apache.org/jira/browse/YARN-8343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Critical
>  Labels: Docker
>
> This is a superset of docker.privileged-containers.registries: an admin can 
> specify a whitelist of registries, and all images from registries outside that 
> whitelist will be rejected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8349) Remove YARN registry entries when a service is killed by the RM

2018-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495586#comment-16495586
 ] 

Billie Rinaldi commented on YARN-8349:
--

Fixed checkstyle. I'm not seeing the unit test failure locally.

> Remove YARN registry entries when a service is killed by the RM
> ---
>
> Key: YARN-8349
> URL: https://issues.apache.org/jira/browse/YARN-8349
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8349.1.patch, YARN-8349.2.patch
>
>
> As the title states, when a service is killed by the RM (for exceeding its 
> lifetime for example), the YARN registry entries should be cleaned up.
> Without cleanup, DNS can contain multiple hostnames for a single IP address 
> in the case where IPs are reused. This impacts reverse lookups, which breaks 
> services, such as kerberos, that depend on those lookups.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8349) Remove YARN registry entries when a service is killed by the RM

2018-05-30 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8349:
-
Attachment: YARN-8349.2.patch

> Remove YARN registry entries when a service is killed by the RM
> ---
>
> Key: YARN-8349
> URL: https://issues.apache.org/jira/browse/YARN-8349
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8349.1.patch, YARN-8349.2.patch
>
>
> As the title states, when a service is killed by the RM (for exceeding its 
> lifetime for example), the YARN registry entries should be cleaned up.
> Without cleanup, DNS can contain multiple hostnames for a single IP address 
> in the case where IPs are reused. This impacts reverse lookups, which breaks 
> services, such as kerberos, that depend on those lookups.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8377) Javadoc build failed in hadoop-yarn-server-nodemanager

2018-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495472#comment-16495472
 ] 

Hudson commented on YARN-8377:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14316 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14316/])
YARN-8377: Javadoc build failed in hadoop-yarn-server-nodemanager. (ericp: rev 
e44c0849d7982c8f1ed43af25d2092090881d19f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java


> Javadoc build failed in hadoop-yarn-server-nodemanager
> --
>
> Key: YARN-8377
> URL: https://issues.apache.org/jira/browse/YARN-8377
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, docs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Attachments: YARN-8377.1.patch
>
>
> This has the same cause as YARN-8369.
> {code}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java:88:
>  error: bad use of '>'
> [ERROR]* When failuresValidityInterval is > 0, it also removes time 
> entries from
> [ERROR]   ^
> {code}
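> One common way to fix this kind of javadoc error is to escape the offending 
> character, for example (illustrative; not necessarily the exact change that 
> was committed):
> {code}
> /**
>  * When failuresValidityInterval is {@literal >} 0, it also removes time
>  * entries from ...
>  */
> {code}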



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8349) Remove YARN registry entries when a service is killed by the RM

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495469#comment-16495469
 ] 

genericqa commented on YARN-8349:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 11s{color} | {color:orange} root: The patch generated 4 new + 160 unchanged 
- 0 fixed = 164 total (was 160) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
28s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
11s{color} | {color:green} hadoop-yarn-services-core in the patch 

[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2018-05-30 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495427#comment-16495427
 ] 

Sunil Govindan commented on YARN-4781:
--

Thank you. I'll commit this to branch-2 shortly.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.0.3
>
> Attachments: YARN-4781.001.patch, YARN-4781.002.patch, 
> YARN-4781.003.patch, YARN-4781.004.patch, YARN-4781.005.branch-2.patch, 
> YARN-4781.005.patch
>
>
> We introduced the fairness ordering policy in YARN-3319, which lets large 
> applications make progress without starving small applications. However, if a 
> large application takes the queue's resources and its containers have long 
> lifespans, small applications can still wait a long time for resources and 
> SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we 
> need to preempt resources in queues that have the fairness policy enabled.
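For reference, a minimal configuration sketch of the setup this issue targets, assuming the stock CapacityScheduler property names for the fair ordering policy (YARN-3319) and the intra-queue preemption switch; this is illustrative only and not taken from the attached patches:

{code:xml}
<!-- capacity-scheduler.xml: order applications within a leaf queue by fairness -->
<property>
  <name>yarn.scheduler.capacity.root.default.ordering-policy</name>
  <value>fair</value>
</property>

<!-- yarn-site.xml: let the preemption monitor also preempt within a queue -->
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled</name>
  <value>true</value>
</property>
{code}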



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8377) Javadoc build failed in hadoop-yarn-server-nodemanager

2018-05-30 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495410#comment-16495410
 ] 

Eric Payne commented on YARN-8377:
--

Thanks a lot, [~tasanuma0829], for tracking this all down and providing the 
fixes.

+1. I will commit shortly.

> Javadoc build failed in hadoop-yarn-server-nodemanager
> --
>
> Key: YARN-8377
> URL: https://issues.apache.org/jira/browse/YARN-8377
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, docs
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Critical
> Attachments: YARN-8377.1.patch
>
>
> This is the same cause as YARN-8369.
> {code}
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/SlidingWindowRetryPolicy.java:88:
>  error: bad use of '>'
> [ERROR]* When failuresValidityInterval is > 0, it also removes time 
> entries from
> [ERROR]   ^
> {code}
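For context, the javadoc tool flags a bare {{>}} here as "bad use of '>'"; one way this kind of error is typically fixed (a sketch only, not necessarily what YARN-8377.1.patch does) is to escape the operator, for example with the standard {{@literal}} inline tag:

{code:java}
// Hypothetical rewrite of the offending comment in SlidingWindowRetryPolicy:
/**
 * When failuresValidityInterval is {@literal >} 0, it also removes time
 * entries from the retry window.
 */
{code}

Using {{&gt;}} instead of {{@literal >}} would work as well.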



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495352#comment-16495352
 ] 

genericqa commented on YARN-7953:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} root in YARN-7402 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-7402 
failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} YARN-7402 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-7402 
failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
32s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-7402 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-7402 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
13s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
16s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7953 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925775/YARN-7953-YARN-7402.v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux c3ac84ceebfe 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-7402 / c5bf22d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/20896/artifact/out/branch-mvninstall-root.txt
 |
| compile | 

[jira] [Commented] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-05-30 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495336#comment-16495336
 ] 

Abhishek Modi commented on YARN-7953:
-

[~botong] [~subru] [~curino] [~kkaranasos] Could you please review 
YARN-7953-YARN-7402.v1.patch?

> [GQ] Data structures for federation global queues calculations
> --
>
> Key: YARN-7953
> URL: https://issues.apache.org/jira/browse/YARN-7953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-7953-YARN-7402.v1.patch, YARN-7953.v1.patch
>
>
> This Jira tracks data structures and helper classes used by the core 
> algorithms of the YARN-7402 umbrella Jira (currently YARN-7403 and YARN-7834).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8350) NPE in service AM related to placement policy

2018-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495335#comment-16495335
 ] 

Billie Rinaldi commented on YARN-8350:
--

+1 for patch 02. I verified that I could run my sample app without NPEs after 
applying this patch and the patch for YARN-8367.

> NPE in service AM related to placement policy
> -
>
> Key: YARN-8350
> URL: https://issues.apache.org/jira/browse/YARN-8350
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Critical
> Attachments: YARN-8350.01.patch, YARN-8350.02.patch
>
>
> It seems like this NPE is happening in a service with more than one component 
> when one component has a placement policy and the other does not. It causes 
> the AM to crash.
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.service.component.Component.requestContainers(Component.java:644)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:310)
> at 
> org.apache.hadoop.yarn.service.component.Component$FlexComponentTransition.transition(Component.java:293)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:919)
> at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:344)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:253)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:251)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:317)
> {noformat}
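For illustration, the trigger described above corresponds to a service spec with two components where only one carries a placement policy. The field names below follow the YARN services REST API from memory and should be read as an approximate, hypothetical Yarnfile rather than the exact spec used in this report:

{code:json}
{
  "name": "hypothetical-service",
  "version": "1.0.0",
  "components": [
    {
      "name": "comp-with-placement",
      "number_of_containers": 2,
      "placement_policy": {
        "constraints": [
          { "type": "ANTI_AFFINITY", "scope": "NODE", "target_tags": ["comp-with-placement"] }
        ]
      }
    },
    {
      "name": "comp-without-placement",
      "number_of_containers": 1
    }
  ]
}
{code}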



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8368) yarn app start cli should print applicationId

2018-05-30 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495326#comment-16495326
 ] 

Billie Rinaldi commented on YARN-8368:
--

+1 for patch 02. I tried this out and it worked as desired.

> yarn app start cli should print applicationId
> -
>
> Key: YARN-8368
> URL: https://issues.apache.org/jira/browse/YARN-8368
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-8368.01.patch, YARN-8368.02.patch
>
>
> The yarn app -start CLI should print the application ID, similar to the yarn app -launch command.
> {code:java}
> bash-4.2$ yarn app -start hbase-app-test
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> 18/05/24 15:15:53 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xxx:8050
> 18/05/24 15:15:54 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xxx:8050
> 18/05/24 15:15:55 INFO client.ApiServiceClient: Service hbase-app-test is 
> successfully started.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7953) [GQ] Data structures for federation global queues calculations

2018-05-30 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-7953:

Attachment: YARN-7953-YARN-7402.v1.patch

> [GQ] Data structures for federation global queues calculations
> --
>
> Key: YARN-7953
> URL: https://issues.apache.org/jira/browse/YARN-7953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-7953-YARN-7402.v1.patch, YARN-7953.v1.patch
>
>
> This Jira tracks data structures and helper classes used by the core 
> algorithms of the YARN-7402 umbrella Jira (currently YARN-7403 and YARN-7834).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8349) Remove YARN registry entries when a service is killed by the RM

2018-05-30 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8349:
-
Attachment: YARN-8349.1.patch

> Remove YARN registry entries when a service is killed by the RM
> ---
>
> Key: YARN-8349
> URL: https://issues.apache.org/jira/browse/YARN-8349
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8349.1.patch
>
>
> As the title states, when a service is killed by the RM (for example, for 
> exceeding its lifetime), the YARN registry entries should be cleaned up.
> Without cleanup, DNS can contain multiple hostnames for a single IP address 
> in the case where IPs are reused. This impacts reverse lookups, which breaks 
> services, such as Kerberos, that depend on those lookups.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8378) Missing default implementation of loading application with FileSystemApplicationHistoryStore

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495075#comment-16495075
 ] 

genericqa commented on YARN-8378:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice:
 The patch generated 1 new + 26 unchanged - 0 fixed = 27 total (was 26) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
36s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8378 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925727/YARN-8378.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 41cead18d4d1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b24098b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20894/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20894/testReport/ |
| Max. process+thread count | 329 (vs. ulimit of 1) |
| modules | C: 

[jira] [Comment Edited] (YARN-8373) RM Received RMFatalEvent of type CRITICAL_THREAD_CRASH

2018-05-30 Thread Girish Bhat (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495045#comment-16495045
 ] 

Girish Bhat edited comment on YARN-8373 at 5/30/18 11:31 AM:
-

[~miklos.szeg...@cloudera.com]

I see this particularly when the following properties are set:
{noformat}
yarn.scheduler.fair.allow-undeclared-pools  true 
yarn.scheduler.fair.user-as-default-queue   true 

{noformat}
i.e. when queues are created on the fly. Does setting them to false help? 
Please let me know.


was (Author: girishb):
[~miklos.szeg...@cloudera.com] 

I can particularly see this when 
{noformat}
yarn.scheduler.fair.allow-undeclared-pools  True 
yarn.scheduler.fair.user-as-default-queue   True 

{noformat}
where queue creation is on fly.  Does it help setting to false ?  please let me 
know

> RM  Received RMFatalEvent of type CRITICAL_THREAD_CRASH
> ---
>
> Key: YARN-8373
> URL: https://issues.apache.org/jira/browse/YARN-8373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Girish Bhat
>Priority: Major
>  Labels: newbie
>
>  
>  
> {noformat}
> sudo -u yarn /usr/local/hadoop/latest/bin/yarn version Hadoop 2.9.0 
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
> 756ebc8394e473ac25feac05fa493f6d612e6c50 Compiled by arsuresh on 
> 2017-11-13T23:15Z Compiled with protoc 2.5.0 From source with checksum 
> 0a76a9a32a5257331741f8d5932f183 This command was run using 
> /usr/local/hadoop/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar{noformat}
> This is for version 2.9.0 
>  
> {noformat}
> 2018-05-25 05:53:12,742 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received 
> RMFatalEvent of type CRITICAL_THREAD_CRASH, caused by a critical thread, Fai
> rSchedulerContinuousScheduling, that exited unexpectedly: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,743 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Shutting down 
> the resource manager.
> 2018-05-25 05:53:12,749 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: a critical thread, FairSchedulerContinuousScheduling, that exited 
> unexpectedly: java.lang.IllegalArgumentException: Comparison method violates 
> its general contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,772 ERROR 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>  ExpiredTokenRemover received java.lang.InterruptedException: sleep 
> interrupted{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8373) RM Received RMFatalEvent of type CRITICAL_THREAD_CRASH

2018-05-30 Thread Girish Bhat (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495045#comment-16495045
 ] 

Girish Bhat commented on YARN-8373:
---

[~miklos.szeg...@cloudera.com] 

I can particularly see this when 
{noformat}
yarn.scheduler.fair.allow-undeclared-pools  True 
yarn.scheduler.fair.user-as-default-queue   True 

{noformat}
where queue creation is on fly.  Does it help setting to false ?  please let me 
know

> RM  Received RMFatalEvent of type CRITICAL_THREAD_CRASH
> ---
>
> Key: YARN-8373
> URL: https://issues.apache.org/jira/browse/YARN-8373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Girish Bhat
>Priority: Major
>  Labels: newbie
>
>  
>  
> {noformat}
> sudo -u yarn /usr/local/hadoop/latest/bin/yarn version Hadoop 2.9.0 
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
> 756ebc8394e473ac25feac05fa493f6d612e6c50 Compiled by arsuresh on 
> 2017-11-13T23:15Z Compiled with protoc 2.5.0 From source with checksum 
> 0a76a9a32a5257331741f8d5932f183 This command was run using 
> /usr/local/hadoop/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar{noformat}
> This is for version 2.9.0 
>  
> {noformat}
> 2018-05-25 05:53:12,742 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received 
> RMFatalEvent of type CRITICAL_THREAD_CRASH, caused by a critical thread, Fai
> rSchedulerContinuousScheduling, that exited unexpectedly: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,743 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Shutting down 
> the resource manager.
> 2018-05-25 05:53:12,749 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: a critical thread, FairSchedulerContinuousScheduling, that exited 
> unexpectedly: java.lang.IllegalArgumentException: Comparison method violates 
> its general contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,772 ERROR 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>  ExpiredTokenRemover received java.lang.InterruptedException: sleep 
> interrupted{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8378) Missing default implementation of loading application with FileSystemApplicationHistoryStore

2018-05-30 Thread Lantao Jin (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16495024#comment-16495024
 ] 

Lantao Jin commented on YARN-8378:
--

[~jianhe], could you review this when you get a chance?

> Missing default implementation of loading application with 
> FileSystemApplicationHistoryStore 
> -
>
> Key: YARN-8378
> URL: https://issues.apache.org/jira/browse/YARN-8378
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, yarn
>Reporter: Lantao Jin
>Assignee: Lantao Jin
>Priority: Minor
> Attachments: YARN-8378.1.patch
>
>
> [YARN-3700|https://issues.apache.org/jira/browse/YARN-3700] and 
> [YARN-3787|https://issues.apache.org/jira/browse/YARN-3787] add some limits 
> (on number and time) to loading applications from the YARN timeline service. 
> But this API is missing a default implementation when 
> FileSystemApplicationHistoryStore is used for the applicationhistoryservice 
> instead of the timeline service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8378) Missing default implementation of loading application with FileSystemApplicationHistoryStore

2018-05-30 Thread Lantao Jin (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated YARN-8378:
-
Affects Version/s: (was: 3.0.2)

> Missing default implementation of loading application with 
> FileSystemApplicationHistoryStore 
> -
>
> Key: YARN-8378
> URL: https://issues.apache.org/jira/browse/YARN-8378
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, yarn
>Reporter: Lantao Jin
>Assignee: Lantao Jin
>Priority: Minor
> Attachments: YARN-8378.1.patch
>
>
> [YARN-3700|https://issues.apache.org/jira/browse/YARN-3700] and 
> [YARN-3787|https://issues.apache.org/jira/browse/YARN-3787] add some limits 
> (on number and time) to loading applications from the YARN timeline service. 
> But this API is missing a default implementation when 
> FileSystemApplicationHistoryStore is used for the applicationhistoryservice 
> instead of the timeline service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8378) Missing default implementation of loading application with FileSystemApplicationHistoryStore

2018-05-30 Thread Lantao Jin (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated YARN-8378:
-
Attachment: YARN-8378.1.patch

> Missing default implementation of loading application with 
> FileSystemApplicationHistoryStore 
> -
>
> Key: YARN-8378
> URL: https://issues.apache.org/jira/browse/YARN-8378
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, yarn
>Affects Versions: 3.0.2
>Reporter: Lantao Jin
>Assignee: Lantao Jin
>Priority: Minor
> Attachments: YARN-8378.1.patch
>
>
> [YARN-3700|https://issues.apache.org/jira/browse/YARN-3700] and 
> [YARN-3787|https://issues.apache.org/jira/browse/YARN-3787] add some limits 
> (on number and time) to loading applications from the YARN timeline service. 
> But this API is missing a default implementation when 
> FileSystemApplicationHistoryStore is used for the applicationhistoryservice 
> instead of the timeline service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8378) Missing default implementation of loading application with FileSystemApplicationHistoryStore

2018-05-30 Thread Lantao Jin (JIRA)
Lantao Jin created YARN-8378:


 Summary: Missing default implementation of loading application 
with FileSystemApplicationHistoryStore 
 Key: YARN-8378
 URL: https://issues.apache.org/jira/browse/YARN-8378
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, yarn
Affects Versions: 3.0.2
Reporter: Lantao Jin
Assignee: Lantao Jin


[YARN-3700|https://issues.apache.org/jira/browse/YARN-3700] and 
[YARN-3787|https://issues.apache.org/jira/browse/YARN-3787] add some limits 
(on number and time) to loading applications from the YARN timeline service. 
But this API is missing a default implementation when 
FileSystemApplicationHistoryStore is used for the applicationhistoryservice 
instead of the timeline service.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2031) YARN Proxy model doesn't support REST APIs in AMs

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16494822#comment-16494822
 ] 

genericqa commented on YARN-2031:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-2031 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-2031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12857354/YARN-2031-005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20893/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN Proxy model doesn't support REST APIs in AMs
> -
>
> Key: YARN-2031
> URL: https://issues.apache.org/jira/browse/YARN-2031
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: YARN-2031-002.patch, YARN-2031-003.patch, 
> YARN-2031-004.patch, YARN-2031-005.patch, YARN-2031.patch.001
>
>
> AMs can't support REST APIs because
> # the AM filter redirects all requests to the proxy with a 302 response (not 
> 307)
> # the proxy doesn't forward PUT/POST/DELETE verbs
> Either the AM filter needs to return 307 and the proxy needs to forward the 
> verbs, or the AM filter should not filter the REST part of the web site.
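For illustration, the behavior change being asked for on the filter side amounts to issuing a 307 instead of a 302, so that REST clients replay PUT/POST/DELETE against the proxy with the same verb and body. A minimal servlet-API sketch follows; it is not the actual AmIpFilter code, and the class and method names are made up:

{code:java}
import javax.servlet.http.HttpServletResponse;

/** Hypothetical helper: redirect without downgrading the HTTP verb. */
public final class RestFriendlyRedirect {
  private RestFriendlyRedirect() {}

  public static void redirectToProxy(HttpServletResponse resp, String proxyUrl) {
    // 307 Temporary Redirect asks the client to replay the same method and body
    // at the new location; 302 is commonly replayed as a GET, which breaks REST.
    resp.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT);
    resp.setHeader("Location", proxyUrl);
  }
}
{code}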



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2018-05-30 Thread Yongjun Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated YARN-4781:

Fix Version/s: 3.0.3

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.0.3
>
> Attachments: YARN-4781.001.patch, YARN-4781.002.patch, 
> YARN-4781.003.patch, YARN-4781.004.patch, YARN-4781.005.branch-2.patch, 
> YARN-4781.005.patch
>
>
> We introduced the fairness ordering policy in YARN-3319, which lets large 
> applications make progress without starving small applications. However, if a 
> large application takes the queue's resources and its containers have long 
> lifespans, small applications can still wait a long time for resources and 
> SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we 
> need to preempt resources in queues that have the fairness policy enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8377) Javadoc build failed in hadoop-yarn-server-nodemanager

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16494727#comment-16494727
 ] 

genericqa commented on YARN-8377:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 41s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 23s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925673/YARN-8377.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cea1d9d432ca 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5f6769f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20891/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20891/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-8375) TestCGroupElasticMemoryController fails surefire build

2018-05-30 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16494726#comment-16494726
 ] 

genericqa commented on YARN-8375:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 39s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m  0s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupElasticMemoryController
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8375 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925674/YARN-8375.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d7f737df3b8b 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5f6769f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20892/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20892/testReport/ |
| Max. process+thread count | 408 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: