[jira] [Commented] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038231#comment-17038231
 ] 

Adam Antal commented on YARN-10139:
---

Hi [~prabhujoseph]!

Thanks for the patch, LGTM (non-binding).

> ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > 
> default 8GB
> 
>
> Key: YARN-10139
> URL: https://issues.apache.org/jira/browse/YARN-10139
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10139-001.patch
>
>
> ValidateAndGetSchedulerConfiguration fails when the cluster max allocation 
> (yarn.scheduler.maximum-allocation-mb) is set in yarn-site.xml to a value 
> (e.g. 16GB) greater than the default 8GB.
> The validation API uses two configurations - CapacitySchedulerConfiguration 
> and Configuration (yarn-site.xml). When CapacityScheduler is initialized with 
> CapacitySchedulerConfiguration, queue initialization looks up the queue 
> maximum allocation, which is not present, and falls back to the cluster max 
> allocation, which is also not present there (it exists only in 
> YarnConfiguration), so it defaults to 8GB. Validation then fails because the 
> queue max allocation of 8GB would be a decrease from the previous 16GB.
> {code}
> 2020-02-14 07:38:46,087 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: 
> CapacityScheduler configuration validation failed:java.io.IOException: Failed 
> to re-init queues : Trying to reinitialize root.default.c1.c3 the maximum 
> allocation size can not be decreased! Current setting: <memory:16384, 
> vCores:88>, trying to set it to: <memory:8192, vCores:88>
> {code}
> The CapacityScheduler initialization code reads a YARN config from 
> CapacitySchedulerConfiguration, which causes the issue.
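To make the failure concrete, here is a minimal sketch (simplified setup, not the actual patch) of why the 8GB default wins and of a merge-style fix: a CapacitySchedulerConfiguration built without the yarn-site values never sees yarn.scheduler.maximum-allocation-mb, so copying the yarn-site entries in before re-initializing the scheduler keeps the 16GB cluster maximum visible during queue initialization.

{code:java}
// Sketch only. yarn-site.xml carries maximum-allocation-mb = 16384, but a
// CapacitySchedulerConfiguration built from an empty base config does not
// contain it:
Configuration yarnSite = new YarnConfiguration();
CapacitySchedulerConfiguration csConf =
    new CapacitySchedulerConfiguration(new Configuration(false), false);

// During queue initialization this lookup falls back to the 8192 MB default:
long maxMb = csConf.getLong(
    YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB,
    YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB);

// Merging the yarn-site entries in first keeps the 16 GB maximum visible:
for (Map.Entry<String, String> entry : yarnSite) {
  if (csConf.get(entry.getKey()) == null) {
    csConf.set(entry.getKey(), entry.getValue());
  }
}
{code}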



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10141) Interceptor in FederationInterceptorREST doesnt update on RM switchover

2020-02-17 Thread D M Murali Krishna Reddy (Jira)
D M Murali Krishna Reddy created YARN-10141:
---

 Summary: Interceptor in FederationInterceptorREST doesnt update on 
RM switchover
 Key: YARN-10141
 URL: https://issues.apache.org/jira/browse/YARN-10141
 Project: Hadoop YARN
  Issue Type: Bug
  Components: federation, restapi
Reporter: D M Murali Krishna Reddy
Assignee: D M Murali Krishna Reddy


In a Federation setup, when an RM switchover happens in a subcluster, the 
interceptor for that subcluster in FederationInterceptorREST does not get 
updated. As a result, the Cluster Nodes REST API does not return the nodes 
from the subcluster in which the RM switchover occurred.
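
A rough sketch of the kind of refresh that appears to be missing (field and helper names below are illustrative, not the actual fix): before reusing a cached per-subcluster interceptor, re-resolve the subcluster's active RM web address from the FederationStateStore and rebuild the interceptor when the address has changed.

{code:java}
// Sketch only; "interceptors" and "createInterceptor" are assumed names.
private void refreshInterceptorIfRMChanged(SubClusterId subClusterId)
    throws YarnException {
  // Current active RM of the subcluster, per the FederationStateStore:
  SubClusterInfo latest = federationFacade.getSubCluster(subClusterId);
  DefaultRequestInterceptorREST cached = interceptors.get(subClusterId);
  if (cached != null && !cached.getWebAppAddress()
      .equals(latest.getRMWebServiceAddress())) {
    // The cached interceptor still targets the old RM; rebuild it so the
    // Cluster Nodes REST call reaches the new active RM.
    interceptors.put(subClusterId, createInterceptor(subClusterId, latest));
  }
}
{code}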



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10130) Do not allow output dir to be the same as input dir

2020-02-17 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-10130:
--
Attachment: YARN-10130.002.patch

> Do not allow output dir to be the same as input dir
> ---
>
> Key: YARN-10130
> URL: https://issues.apache.org/jira/browse/YARN-10130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Szilard Nemeth
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10130.001.patch, YARN-10130.002.patch
>
>
> If the input dir where fair-scheduler.xml / yarn-site.xml sits is the same as 
> the output dir (defined by the -o switch), the fs2cs tool overwrites the 
> source config files, i.e. yarn-site.xml.
> Reproducing this is easy; just run the fs2cs tool with this command:
> {code:java}
> /bin/yarn fs2cs --cluster-resource memory-mb=18044928,vcores=16 
> --no-terminal-rule-check -y yarn-site.xml -f fair-scheduler.xml -o .
> {code}
> The following (or similar) is emitted by the tool:
> {code:java}
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 20/02/10 12:51:42 INFO converter.FSConfigToCSConfigConverter: Output directory for yarn-site.xml and capacity-scheduler.xml is: .
> 20/02/10 12:51:42 INFO converter.FSConfigToCSConfigConverter: Conversion rules file is not defined, using default conversion config!
> 20/02/10 12:51:42 ERROR conf.Configuration: error parsing conf yarn-site.xml
> com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog at [row,col,system-id]: [1,0,"yarn-site.xml"]
> at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
> at com.ctc.wstx.sr.BasicStreamReader.handleEOF(BasicStreamReader.java:2220)
> at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2126)
> at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1181)
> at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3343)
> at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3137)
> at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3030)
> at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2996)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2871)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1389)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1361)
> at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1702)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigConverter.createConfiguration(FSConfigToCSConfigConverter.java:166)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigConverter.convert(FSConfigToCSConfigConverter.java:98)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigArgumentHandler.parseAndConvert(FSConfigToCSConfigArgumentHandler.java:137)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigConverterMain.main(FSConfigToCSConfigConverterMain.java:40)
> 20/02/10 12:51:42 ERROR converter.FSConfigToCSConfigConverterMain: Error while starting FS configuration conversion!
> {code}
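The guard itself can be small. A minimal sketch of the kind of check that would prevent this (method and parameter names are illustrative, not the actual patch):

{code:java}
// Sketch only: refuse to run when -o points at the directory that holds the
// input configuration files, since fs2cs writes yarn-site.xml there.
static void verifyOutputDir(File outputDir, File yarnSiteXml,
    File fairSchedulerXml) throws IOException {
  File out = outputDir.getCanonicalFile();
  if (out.equals(yarnSiteXml.getCanonicalFile().getParentFile())
      || out.equals(fairSchedulerXml.getCanonicalFile().getParentFile())) {
    throw new IllegalArgumentException("Output directory must differ from"
        + " the directory of the input configuration files, otherwise fs2cs"
        + " would overwrite them");
  }
}
{code}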



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10142) Distributed shell: add support for localization visibility

2020-02-17 Thread Peter Bacsko (Jira)
Peter Bacsko created YARN-10142:
---

 Summary: Distributed shell: add support for localization visibility
 Key: YARN-10142
 URL: https://issues.apache.org/jira/browse/YARN-10142
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Peter Bacsko
Assignee: Peter Bacsko


The local resource visibility is currently hard-coded in DistributedShell:

{noformat}
FileStatus scFileStatus = fs.getFileStatus(dst);
LocalResource scRsrc =
    LocalResource.newInstance(
        URL.fromURI(dst.toUri()),
        LocalResourceType.FILE, LocalResourceVisibility.APPLICATION,
        scFileStatus.getLen(), scFileStatus.getModificationTime());
localResources.put(fileDstPath, scRsrc);
{noformat}

However, it is sometimes useful to be able to change this to PRIVATE/PUBLIC 
for testing purposes.
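
One possible shape for this, as a sketch (the {{localize_visibility}} option name and its wiring are assumptions, not an actual patch): read the desired visibility from a new client option and pass it through instead of the hard-coded APPLICATION value.

{code:java}
// Sketch only; "localize_visibility" is a hypothetical option name.
String visibilityStr =
    cliParser.getOptionValue("localize_visibility", "APPLICATION");
LocalResourceVisibility visibility =
    LocalResourceVisibility.valueOf(visibilityStr.toUpperCase());

FileStatus scFileStatus = fs.getFileStatus(dst);
LocalResource scRsrc = LocalResource.newInstance(
    URL.fromURI(dst.toUri()),
    LocalResourceType.FILE, visibility,
    scFileStatus.getLen(), scFileStatus.getModificationTime());
localResources.put(fileDstPath, scRsrc);
{code}

The client could then be invoked with e.g. {{-localize_visibility PUBLIC}} in tests.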



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10142) Distributed shell: add support for localization visibility

2020-02-17 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10142:

Component/s: distributed-shell

> Distributed shell: add support for localization visibility
> --
>
> Key: YARN-10142
> URL: https://issues.apache.org/jira/browse/YARN-10142
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: distributed-shell
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
>
> The local resource visibility is currently hard-coded in DistributedShell:
> {noformat}
> FileStatus scFileStatus = fs.getFileStatus(dst);
> LocalResource scRsrc =
>     LocalResource.newInstance(
>         URL.fromURI(dst.toUri()),
>         LocalResourceType.FILE, LocalResourceVisibility.APPLICATION,
>         scFileStatus.getLen(), scFileStatus.getModificationTime());
> localResources.put(fileDstPath, scRsrc);
> {noformat}
> However, it is sometimes useful to be able to change this to PRIVATE/PUBLIC 
> for testing purposes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10143) YARN-10101 broke Yarn logs CLI

2020-02-17 Thread Adam Antal (Jira)
Adam Antal created YARN-10143:
-

 Summary: YARN-10101 broke Yarn logs CLI 
 Key: YARN-10143
 URL: https://issues.apache.org/jira/browse/YARN-10143
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.3.0, 3.2.2
Reporter: Adam Antal
Assignee: Adam Antal


YARN-10101 broke the Yarn logs CLI.

In {{LogsCLI#359}} a {{ContainerLogsRequest}} is created with the appAttemptId 
set to null, but the {{LogAggregationFileController}} instances are not 
prepared to handle this case. The new JHS API works as expected for a defined 
application attempt, but we should fix this.
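
A minimal illustration of the gap (accessor and helper names are assumed; this is not the actual fix): the file controllers should treat a missing attempt id as "all attempts" instead of assuming it is set.

{code:java}
// Sketch only; getAppAttemptId/readLogsForAllAttempts are assumed names.
String appAttemptId = request.getAppAttemptId();  // may be null from LogsCLI
if (appAttemptId == null || appAttemptId.isEmpty()) {
  // Fall back to scanning every attempt of the application rather than
  // dereferencing the missing id.
  return readLogsForAllAttempts(request);
}
{code}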



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10130) Do not allow output dir to be the same as input dir

2020-02-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038469#comment-17038469
 ] 

Hadoop QA commented on YARN-10130:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m 
37s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10130 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993682/YARN-10130.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff0d41b3696b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 439d935 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25527/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25527/testReport/ |
| Max. process+thread count | 837 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-r

[jira] [Assigned] (YARN-10132) For Federation,yarn applicationattempt fail command throws an exception

2020-02-17 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T reassigned YARN-10132:


Assignee: Bilwa S T

> For Federation,yarn applicationattempt fail command throws an exception
> ---
>
> Key: YARN-10132
> URL: https://issues.apache.org/jira/browse/YARN-10132
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sushanta Sen
>Assignee: Bilwa S T
>Priority: Major
>
> The yarn applicationattempt fail command fails with the exception 
> “org.apache.commons.lang.NotImplementedException: Code is not implemented”.
> {noformat}
>  ./yarn applicationattempt -fail appattempt_1581497870689_0001_01
> Failing attempt appattempt_1581497870689_0001_01 of application 
> application_1581497870689_0001
> 2020-02-12 20:48:48,530 INFO impl.YarnClientImpl: Failing application attempt 
> appattempt_1581497870689_0001_01
> Exception in thread "main" org.apache.commons.lang.NotImplementedException: 
> Code is not implemented
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.failApplicationAttempt(FederationClientInterceptor.java:980)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.failApplicationAttempt(RouterClientRMService.java:388)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.failApplicationAttempt(ApplicationClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:581)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2793)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.failApplicationAttempt(ApplicationClientProtocolPBClientImpl.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy8.failApplicationAttempt(Unknown Source)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.failApplicationAttempt(YarnClientImpl.java:447)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.failApplicationAttempt(ApplicationCLI.java:985)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:455)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:119)
> {noformat}
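
For context, the router's FederationClientInterceptor throws NotImplementedException here; a sketch of the natural implementation (the proxy helper name is assumed, not actual code) would route the request to the application's home subcluster:

{code:java}
// Sketch only; getClientRMProxyForSubCluster is an assumed helper name.
@Override
public FailApplicationAttemptResponse failApplicationAttempt(
    FailApplicationAttemptRequest request) throws YarnException, IOException {
  ApplicationId appId = request.getApplicationAttemptId().getApplicationId();
  SubClusterId homeSubCluster =
      federationFacade.getApplicationHomeSubCluster(appId);
  ApplicationClientProtocol homeRM =
      getClientRMProxyForSubCluster(homeSubCluster);
  return homeRM.failApplicationAttempt(request);
}
{code}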



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsub

[jira] [Updated] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10139:
-
Attachment: YARN-10139-002.patch

> ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > 
> default 8GB
> 
>
> Key: YARN-10139
> URL: https://issues.apache.org/jira/browse/YARN-10139
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10139-001.patch, YARN-10139-002.patch
>
>
> ValidateAndGetSchedulerConfiguration fails when the cluster max allocation 
> (yarn.scheduler.maximum-allocation-mb) is set in yarn-site.xml to a value 
> (e.g. 16GB) greater than the default 8GB.
> The validation API uses two configurations - CapacitySchedulerConfiguration 
> and Configuration (yarn-site.xml). When CapacityScheduler is initialized with 
> CapacitySchedulerConfiguration, queue initialization looks up the queue 
> maximum allocation, which is not present, and falls back to the cluster max 
> allocation, which is also not present there (it exists only in 
> YarnConfiguration), so it defaults to 8GB. Validation then fails because the 
> queue max allocation of 8GB would be a decrease from the previous 16GB.
> {code}
> 2020-02-14 07:38:46,087 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: 
> CapacityScheduler configuration validation failed:java.io.IOException: Failed 
> to re-init queues : Trying to reinitialize root.default.c1.c3 the maximum 
> allocation size can not be decreased! Current setting: <memory:16384, 
> vCores:88>, trying to set it to: <memory:8192, vCores:88>
> {code}
> The CapacityScheduler initialization code reads a YARN config from 
> CapacitySchedulerConfiguration, which causes the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10139:
-
Attachment: (was: YARN-10139-002.patch)

> ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > 
> default 8GB
> 
>
> Key: YARN-10139
> URL: https://issues.apache.org/jira/browse/YARN-10139
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10139-001.patch
>
>
> ValidateAndGetSchedulerConfiguration fails when the cluster max allocation 
> (yarn.scheduler.maximum-allocation-mb) is set in yarn-site.xml to a value 
> (e.g. 16GB) greater than the default 8GB.
> The validation API uses two configurations - CapacitySchedulerConfiguration 
> and Configuration (yarn-site.xml). When CapacityScheduler is initialized with 
> CapacitySchedulerConfiguration, queue initialization looks up the queue 
> maximum allocation, which is not present, and falls back to the cluster max 
> allocation, which is also not present there (it exists only in 
> YarnConfiguration), so it defaults to 8GB. Validation then fails because the 
> queue max allocation of 8GB would be a decrease from the previous 16GB.
> {code}
> 2020-02-14 07:38:46,087 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: 
> CapacityScheduler configuration validation failed:java.io.IOException: Failed 
> to re-init queues : Trying to reinitialize root.default.c1.c3 the maximum 
> allocation size can not be decreased! Current setting: <memory:16384, 
> vCores:88>, trying to set it to: <memory:8192, vCores:88>
> {code}
> The CapacityScheduler initialization code reads a YARN config from 
> CapacitySchedulerConfiguration, which causes the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10141) Interceptor in FederationInterceptorREST doesnt update on RM switchover

2020-02-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038548#comment-17038548
 ] 

Hadoop QA commented on YARN-10141:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 34s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.router.webapp.TestFederationInterceptorREST |
|   | hadoop.yarn.server.router.webapp.TestFederationInterceptorRESTRetry |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10141 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993709/YARN-10141.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 104d78dd0815 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 439d935 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25528/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_had

[jira] [Updated] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10139:
-
Attachment: YARN-10139-002.patch

> ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > 
> default 8GB
> 
>
> Key: YARN-10139
> URL: https://issues.apache.org/jira/browse/YARN-10139
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10139-001.patch, YARN-10139-002.patch
>
>
> ValidateAndGetSchedulerConfiguration fails when the cluster max allocation 
> (yarn.scheduler.maximum-allocation-mb) is set in yarn-site.xml to a value 
> (e.g. 16GB) greater than the default 8GB.
> The validation API uses two configurations - CapacitySchedulerConfiguration 
> and Configuration (yarn-site.xml). When CapacityScheduler is initialized with 
> CapacitySchedulerConfiguration, queue initialization looks up the queue 
> maximum allocation, which is not present, and falls back to the cluster max 
> allocation, which is also not present there (it exists only in 
> YarnConfiguration), so it defaults to 8GB. Validation then fails because the 
> queue max allocation of 8GB would be a decrease from the previous 16GB.
> {code}
> 2020-02-14 07:38:46,087 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: 
> CapacityScheduler configuration validation failed:java.io.IOException: Failed 
> to re-init queues : Trying to reinitialize root.default.c1.c3 the maximum 
> allocation size can not be decreased! Current setting: <memory:16384, 
> vCores:88>, trying to set it to: <memory:8192, vCores:88>
> {code}
> The CapacityScheduler initialization code reads a YARN config from 
> CapacitySchedulerConfiguration, which causes the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038564#comment-17038564
 ] 

Prabhu Joseph commented on YARN-10139:
--

Thanks [~adam.antal] for reviewing.

Have included a test case in [^YARN-10139-002.patch].

> ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > 
> default 8GB
> 
>
> Key: YARN-10139
> URL: https://issues.apache.org/jira/browse/YARN-10139
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10139-001.patch, YARN-10139-002.patch
>
>
> ValidateAndGetSchedulerConfiguration fails when the cluster max allocation 
> (yarn.scheduler.maximum-allocation-mb) is set in yarn-site.xml to a value 
> (e.g. 16GB) greater than the default 8GB.
> The validation API uses two configurations - CapacitySchedulerConfiguration 
> and Configuration (yarn-site.xml). When CapacityScheduler is initialized with 
> CapacitySchedulerConfiguration, queue initialization looks up the queue 
> maximum allocation, which is not present, and falls back to the cluster max 
> allocation, which is also not present there (it exists only in 
> YarnConfiguration), so it defaults to 8GB. Validation then fails because the 
> queue max allocation of 8GB would be a decrease from the previous 16GB.
> {code}
> 2020-02-14 07:38:46,087 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: 
> CapacityScheduler configuration validation failed:java.io.IOException: Failed 
> to re-init queues : Trying to reinitialize root.default.c1.c3 the maximum 
> allocation size can not be decreased! Current setting: <memory:16384, 
> vCores:88>, trying to set it to: <memory:8192, vCores:88>
> {code}
> The CapacityScheduler initialization code reads a YARN config from 
> CapacitySchedulerConfiguration, which causes the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038611#comment-17038611
 ] 

Hadoop QA commented on YARN-10139:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 93 unchanged - 0 fixed = 99 total (was 93) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMTimelineService 
|
|   | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSchedulingRequestUpdate
 |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993710/YARN-10139-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f1d0b35782c2 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 439d935 |
| maven | version: Apache Maven 3.3.9 |
| De

[jira] [Commented] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038645#comment-17038645
 ] 

Hadoop QA commented on YARN-10139:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 
30s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.6 Server=19.03.6 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993711/YARN-10139-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 08f7d88ac06b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 84f7638 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_242 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25530/testReport/ |
| Max. process+thread count | 881 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25530/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ValidateAndGetSchedulerConfigurat

[jira] [Commented] (YARN-9693) When AMRMProxyService is enabled RMCommunicator will register with failure

2020-02-17 Thread panlijie (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038735#comment-17038735
 ] 

panlijie commented on YARN-9693:


We configured the NM with the configuration below:
{code:java}
yarn.nodemanager.amrmproxy.enabled  true
yarn.nodemanager.amrmproxy.interceptor-class.pipeline  org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor
{code}

but the error log is as below:
{code:java}
[hdfs@rbf jars]$ spark-submit --class org.apache.spark.examples.SparkPi 
--master yarn --driver-memory 1g --executor-cores 2 --queue default 
spark-examples_2.11-2.3.1.3.0.1.0-187.jar 10
20/01/07 17:01:04 INFO SparkContext: Running Spark version 2.3.1.3.0.1.0-187
20/01/07 17:01:04 INFO SparkContext: Submitted application: Spark Pi
20/01/07 17:01:04 INFO SecurityManager: Changing view acls to: hdfs
20/01/07 17:01:04 INFO SecurityManager: Changing modify acls to: hdfs
20/01/07 17:01:04 INFO SecurityManager: Changing view acls groups to: 
20/01/07 17:01:04 INFO SecurityManager: Changing modify acls groups to: 
20/01/07 17:01:04 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(hdfs); groups 
with view permissions: Set(); users  with modify permissions: Set(hdfs); groups 
with modify permissions: Set()
20/01/07 17:01:04 INFO Utils: Successfully started service 'sparkDriver' on 
port 45941.
20/01/07 17:01:04 INFO SparkEnv: Registering MapOutputTracker
20/01/07 17:01:04 INFO SparkEnv: Registering BlockManagerMaster
20/01/07 17:01:04 INFO BlockManagerMasterEndpoint: Using 
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/01/07 17:01:04 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/01/07 17:01:04 INFO DiskBlockManager: Created local directory at 
/tmp/blockmgr-498de21a-a616-4826-b839-a9ca32a9272f
20/01/07 17:01:04 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/01/07 17:01:05 INFO SparkEnv: Registering OutputCommitCoordinator
20/01/07 17:01:05 INFO log: Logging initialized @1604ms
20/01/07 17:01:05 INFO Server: jetty-9.3.z-SNAPSHOT, build timestamp: 
2018-06-06T01:11:56+08:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
20/01/07 17:01:05 INFO Server: Started @1676ms
20/01/07 17:01:05 INFO AbstractConnector: Started 
ServerConnector@2e8ab815{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
20/01/07 17:01:05 INFO Utils: Successfully started service 'SparkUI' on port 
4040.
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@7c18432b{/jobs,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@14bb2297{/jobs/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@69adf72c{/jobs/job,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@57f791c6{/jobs/job/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@51650883{/stages,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6c4f9535{/stages/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@5bd1ceca{/stages/stage,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@596df867{/stages/stage/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@c1fca1e{/stages/pool,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@241a53ef{/stages/pool/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@344344fa{/storage,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2db2cd5{/storage/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@70e659aa{/storage/rdd,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@615f972{/storage/rdd/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@285f09de{/environment,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@73393584{/environment/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@31500940{/executors,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@1827a871{/executors/json,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@48e64352{/executors/threadDump,null,AVAILABLE,@Spark}
20/01/07 17:01:05 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@7249dadf{/executors/threadDump

[jira] [Commented] (YARN-10139) ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > default 8GB

2020-02-17 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038783#comment-17038783
 ] 

Sunil G commented on YARN-10139:


This change looks fine to me. If there are no objections, I will get this 
patch in tomorrow.

Thanks [~prabhujoseph]

> ValidateAndGetSchedulerConfiguration API fails when cluster max allocation > 
> default 8GB
> 
>
> Key: YARN-10139
> URL: https://issues.apache.org/jira/browse/YARN-10139
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-10139-001.patch, YARN-10139-002.patch
>
>
> ValidateAndGetSchedulerConfiguration fails when the cluster max allocation 
> (yarn.scheduler.maximum-allocation-mb) is set in yarn-site.xml to a value 
> (e.g. 16GB) greater than the default 8GB.
> The validation API uses two configurations - CapacitySchedulerConfiguration 
> and Configuration (yarn-site.xml). When CapacityScheduler is initialized with 
> CapacitySchedulerConfiguration, queue initialization looks up the queue 
> maximum allocation, which is not present, and falls back to the cluster max 
> allocation, which is also not present there (it exists only in 
> YarnConfiguration), so it defaults to 8GB. Validation then fails because the 
> queue max allocation of 8GB would be a decrease from the previous 16GB.
> {code}
> 2020-02-14 07:38:46,087 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: 
> CapacityScheduler configuration validation failed:java.io.IOException: Failed 
> to re-init queues : Trying to reinitialize root.default.c1.c3 the maximum 
> allocation size can not be decreased! Current setting: <memory:16384, 
> vCores:88>, trying to set it to: <memory:8192, vCores:88>
> {code}
> The CapacityScheduler initialization code reads a YARN config from 
> CapacitySchedulerConfiguration, which causes the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9831) NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService allocate flow

2020-02-17 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038815#comment-17038815
 ] 

Bilwa S T commented on YARN-9831:
-

[~bibinchundatt] could you please check the patch?

> NMTokenSecretManagerInRM#createNMToken blocks ApplicationMasterService 
> allocate flow
> 
>
> Key: YARN-9831
> URL: https://issues.apache.org/jira/browse/YARN-9831
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-9831.001.patch, YARN-9831.002.patch
>
>
> Currently an attempt's NMToken cannot be generated independently. 
> Each attempt's allocate flow blocks the others. We should improve this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-10144) Federation: Add missing FederationClientInterceptor APIs

2020-02-17 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula moved HDFS-15178 to YARN-10144:


Component/s: (was: federation)
 federation
Key: YARN-10144  (was: HDFS-15178)
Project: Hadoop YARN  (was: Hadoop HDFS)

> Federation: Add missing FederationClientInterceptor APIs
> 
>
> Key: YARN-10144
> URL: https://issues.apache.org/jira/browse/YARN-10144
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Reporter: D M Murali Krishna Reddy
>Priority: Major
>
> In FederationClientInterceptor, many APIs are not implemented:
>  * getClusterNodes
>  * getQueueInfo
>  * getQueueUserAcls
>  * moveApplicationAcrossQueues
>  * getNewReservation
>  * submitReservation
>  * listReservations
>  * updateReservation
>  * deleteReservation
>  * getNodeToLabels
>  * getLabelsToNodes
>  * getClusterNodeLabels
>  * getApplicationAttemptReport
>  * getApplicationAttempts
>  * getContainerReport
>  * getContainers
>  * getDelegationToken
>  * renewDelegationToken
>  * cancelDelegationToken
>  * failApplicationAttempt
>  * updateApplicationPriority
>  * signalToContainer
>  * updateApplicationTimeouts
>  * getResourceProfiles
>  * getResourceProfile
>  * getResourceTypeInfo
>  * getAttributesToNodes
>  * getClusterNodeAttributes
>  * getNodesToAttributes



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10122) In Federation,executing yarn container signal command throws an exception

2020-02-17 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated YARN-10122:

Parent: YARN-10144
Issue Type: Sub-task  (was: Bug)

> In Federation,executing yarn container signal command throws an exception
> -
>
> Key: YARN-10122
> URL: https://issues.apache.org/jira/browse/YARN-10122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, yarn
>Reporter: Sushanta Sen
>Assignee: Bilwa S T
>Priority: Major
>
> Executing the yarn container signal command fails, prompting the error 
> “org.apache.commons.lang.NotImplementedException: Code is not implemented”.
> {noformat}
> ./yarn container -signal container_e79_1581316978887_0001_01_10
> Signalling container container_e79_1581316978887_0001_01_10
> 2020-02-10 14:51:18,045 INFO impl.YarnClientImpl: Signalling container 
> container_e79_1581316978887_0001_01_10 with command OUTPUT_THREAD_DUMP
> Exception in thread "main" org.apache.commons.lang.NotImplementedException: 
> Code is not implemented
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.signalToContainer(FederationClientInterceptor.java:993)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.signalToContainer(RouterClientRMService.java:403)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.signalToContainer(ApplicationClientProtocolPBServiceImpl.java:629)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:629)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2793)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.signalToContainer(ApplicationClientProtocolPBClientImpl.java:620)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy8.signalToContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.signalToContainer(YarnClientImpl.java:949)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.signalToContainer(ApplicationCLI.java:717)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:478)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:119)
> {noformat}
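One plausible shape for a fix, sketched here rather than taken from any committed patch: a container belongs to exactly one application, so the interceptor can resolve the application's home subcluster from the federation state store and forward the call there. The field federationFacade and the helper getClientRMProxyForSubCluster are assumptions about the interceptor's internals.

{code}
// Sketch of a possible FederationClientInterceptor#signalToContainer.
// "federationFacade" (a FederationStateStoreFacade) and
// getClientRMProxyForSubCluster() are assumed interceptor internals.
@Override
public SignalContainerResponse signalToContainer(
    SignalContainerRequest request) throws YarnException, IOException {
  ApplicationId appId = request.getContainerId()
      .getApplicationAttemptId().getApplicationId();

  // The federation state store records each application's home subcluster.
  SubClusterId homeSubCluster =
      federationFacade.getApplicationHomeSubCluster(appId);

  // Forward the call to that subcluster's ResourceManager and relay the
  // response back to the client unchanged.
  ApplicationClientProtocol clientRMProxy =
      getClientRMProxyForSubCluster(homeSubCluster);
  return clientRMProxy.signalToContainer(request);
}
{code}

The same resolve-and-forward pattern would cover most of the application-scoped calls tracked in this family of issues.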



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (YARN-10132) For Federation, yarn applicationattempt fail command throws an exception

2020-02-17 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated YARN-10132:

Parent: YARN-10144
Issue Type: Sub-task  (was: Bug)

> For Federation, yarn applicationattempt fail command throws an exception
> ---
>
> Key: YARN-10132
> URL: https://issues.apache.org/jira/browse/YARN-10132
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sushanta Sen
>Assignee: Bilwa S T
>Priority: Major
>
> The yarn applicationattempt fail command fails with the exception 
> “org.apache.commons.lang.NotImplementedException: Code is not implemented”.
> {noformat}
>  ./yarn applicationattempt -fail appattempt_1581497870689_0001_01
> Failing attempt appattempt_1581497870689_0001_01 of application 
> application_1581497870689_0001
> 2020-02-12 20:48:48,530 INFO impl.YarnClientImpl: Failing application attempt 
> appattempt_1581497870689_0001_01
> Exception in thread "main" org.apache.commons.lang.NotImplementedException: 
> Code is not implemented
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.failApplicationAttempt(FederationClientInterceptor.java:980)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.failApplicationAttempt(RouterClientRMService.java:388)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.failApplicationAttempt(ApplicationClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:581)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2793)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.failApplicationAttempt(ApplicationClientProtocolPBClientImpl.java:223)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy8.failApplicationAttempt(Unknown Source)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.failApplicationAttempt(YarnClientImpl.java:447)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.failApplicationAttempt(ApplicationCLI.java:985)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:455)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:119)
> {noformat}
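The same resolve-and-forward shape would apply here, keyed off the attempt's application ID; again a sketch under the same assumed internals.

{code}
// Sketch of a possible FederationClientInterceptor#failApplicationAttempt,
// using the same assumed facade and per-subcluster proxy helper as above.
@Override
public FailApplicationAttemptResponse failApplicationAttempt(
    FailApplicationAttemptRequest request) throws YarnException, IOException {
  ApplicationId appId =
      request.getApplicationAttemptId().getApplicationId();
  SubClusterId homeSubCluster =
      federationFacade.getApplicationHomeSubCluster(appId);
  return getClientRMProxyForSubCluster(homeSubCluster)
      .failApplicationAttempt(request);
}
{code}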



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---

[jira] [Updated] (YARN-10121) In Federation, executing yarn queue status command throws an exception

2020-02-17 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated YARN-10121:

Parent: YARN-10144
Issue Type: Sub-task  (was: Bug)

> In Federation, executing yarn queue status command throws an exception
> -
>
> Key: YARN-10121
> URL: https://issues.apache.org/jira/browse/YARN-10121
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation, yarn
>Reporter: Sushanta Sen
>Assignee: Bilwa S T
>Priority: Major
>
> The yarn queue status command fails with the error 
> “org.apache.commons.lang.NotImplementedException: Code is not implemented”.
> {noformat}
>  ./yarn queue -status default
> Exception in thread "main" org.apache.commons.lang.NotImplementedException: 
> Code is not implemented
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getQueueInfo(FederationClientInterceptor.java:715)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getQueueInfo(RouterClientRMService.java:246)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getQueueInfo(ApplicationClientProtocolPBServiceImpl.java:328)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:591)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:863)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2793)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateRuntimeException(RPCUtil.java:85)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:122)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getQueueInfo(ApplicationClientProtocolPBClientImpl.java:341)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy8.getQueueInfo(Unknown Source)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getQueueInfo(YarnClientImpl.java:650)
> at 
> org.apache.hadoop.yarn.client.cli.QueueCLI.listQueue(QueueCLI.java:111)
> at org.apache.hadoop.yarn.client.cli.QueueCLI.run(QueueCLI.java:78)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.yarn.client.cli.QueueCLI.main(QueueCLI.java:50)
> {noformat}
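Unlike the application-scoped calls above, a queue may exist in several subclusters, so a fix needs a policy. This sketch takes the simplest option, asking each active subcluster and returning the first hit; a fuller fix would merge capacities across subclusters. federationFacade and the proxy helper are assumed, as before.

{code}
// Sketch only: query each active subcluster until one reports the queue.
// A complete implementation would merge QueueInfo across subclusters.
@Override
public GetQueueInfoResponse getQueueInfo(GetQueueInfoRequest request)
    throws YarnException, IOException {
  Map<SubClusterId, SubClusterInfo> subClusters =
      federationFacade.getSubClusters(true); // true: active subclusters only
  for (SubClusterId subClusterId : subClusters.keySet()) {
    try {
      GetQueueInfoResponse response =
          getClientRMProxyForSubCluster(subClusterId).getQueueInfo(request);
      if (response != null && response.getQueueInfo() != null) {
        return response;
      }
    } catch (YarnException e) {
      // This subcluster does not know the queue; try the next one.
    }
  }
  throw new YarnException(
      "Queue " + request.getQueueName() + " not found in any subcluster");
}
{code}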



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10111) In Federation cluster Distributed Shell Application submission fails as YarnClient#getQueueInfo is not implemented

2020-02-17 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated YARN-10111:

Parent: YARN-10144
Issue Type: Sub-task  (was: Bug)

> In Federation cluster Distributed Shell Application submission fails as 
> YarnClient#getQueueInfo is not implemented
> --
>
> Key: YARN-10111
> URL: https://issues.apache.org/jira/browse/YARN-10111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sushanta Sen
>Assignee: Bilwa S T
>Priority: Blocker
>
> In a Federation cluster, Distributed Shell application submission fails 
> because YarnClient#getQueueInfo is not implemented.
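For context, the distributed shell client asks for queue information before it submits, which is why the unimplemented Router method blocks the whole submission path. A minimal probe that reproduces the failure when pointed at a Router (the queue name and the printed field are illustrative):

{code}
import org.apache.hadoop.yarn.api.records.QueueInfo;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class QueueInfoProbe {
  public static void main(String[] args) throws Exception {
    YarnConfiguration conf = new YarnConfiguration();
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();
    try {
      // Against a federated cluster this call reaches the Router, which
      // currently fails with "Code is not implemented".
      QueueInfo queueInfo = yarnClient.getQueueInfo("default");
      System.out.println("Queue capacity: " + queueInfo.getCapacity());
    } finally {
      yarnClient.stop();
    }
  }
}
{code}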



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6539) Create SecureLogin inside Router

2020-02-17 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-6539:
--
Comment: was deleted

(was: This patch works, but I found that the secretManager is not set in 
RouterClientRMService’s server. Does this mean the KDC will bear more load, 
given that there is no DelegationToken for the Client-Router protocol?)
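Put in code terms, the concern in the deleted comment is that the Router's client RPC server is built without a SecretManager, so clients cannot be issued delegation tokens and every authentication falls through to the KDC. A sketch of what wiring one in could look like; RouterDelegationTokenSecretManager and its constructor arguments are hypothetical, though YarnRPC#getServer does accept a SecretManager parameter.

{code}
// Sketch: wiring a delegation-token secret manager into the Router's
// client-facing RPC server so renewable tokens can stand in for repeated
// KDC authentication. The secret manager class and its constructor are
// hypothetical assumptions for illustration.
RouterDelegationTokenSecretManager secretManager =
    new RouterDelegationTokenSecretManager(
        keyUpdateInterval, tokenMaxLifetime,
        tokenRenewInterval, removalScanInterval);
secretManager.startThreads();

this.server = rpc.getServer(
    ApplicationClientProtocol.class, this, listenerEndpoint,
    serverConf, secretManager, numWorkerThreads);
{code}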

> Create SecureLogin inside Router
> 
>
> Key: YARN-6539
> URL: https://issues.apache.org/jira/browse/YARN-6539
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Xie YiFan
>Priority: Minor
> Attachments: YARN-6359_1.patch, YARN-6359_2.patch, YARN-6539_3.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org