[jira] [Updated] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2

2019-01-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-9149:

Attachment: YARN-9149.003.patch

> yarn container -status misses logUrl when integrated with ATSv2
> ---
>
> Key: YARN-9149
> URL: https://issues.apache.org/jira/browse/YARN-9149
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9149.001.patch, YARN-9149.002.patch, 
> YARN-9149.003.patch
>
>
> Post YARN-8303, the yarn client can be integrated with ATSv2. But the log URL 
> and the start/end times are printed incorrectly:
> {code}
> Container Report :
>   Container-Id : container_1545035586969_0001_01_01
>   Start-Time : 0
>   Finish-Time : 0
>   State : COMPLETE
>   Execution-Type : GUARANTEED
>   LOG-URL : null
>   Host : localhost:25006
>   NodeHttpAddress : localhost:25008
>   Diagnostics :
> {code}
> # TimelineEntityV2Converter#convertToContainerReport sets logUrl to *null*. 
> It needs to be set to the proper log URL based on yarn.log.server.web-service.url.
> # TimelineEntityV2Converter#convertToContainerReport parses the start/end time 
> wrongly. The comparison should happen on the entity type, but the code below 
> compares the entity id (a corrected sketch follows the snippet):
> {code}
> if (events != null) {
>   for (TimelineEvent event : events) {
> if (event.getId().equals(
> ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) {
>   createdTime = event.getTimestamp();
> } else if (event.getId().equals(
> ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) {
>   finishedTime = event.getTimestamp();
> }
>   }
> }
> {code}
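> A minimal sketch of the corrected comparison (an assumption, not the committed 
> fix: it presumes the container entities were published with 
> ContainerMetricsConstants.CREATED_EVENT_TYPE / FINISHED_EVENT_TYPE as the 
> event IDs; the exact constants depend on which publisher wrote the entity):
> {code}
> if (events != null) {
>   for (TimelineEvent event : events) {
>     // Match the generic container lifecycle event types rather than the
>     // RM-specific *_IN_RM_* constants.
>     if (ContainerMetricsConstants.CREATED_EVENT_TYPE.equals(event.getId())) {
>       createdTime = event.getTimestamp();
>     } else if (ContainerMetricsConstants.FINISHED_EVENT_TYPE.equals(
>         event.getId())) {
>       finishedTime = event.getTimestamp();
>     }
>   }
> }
> {code}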



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9178) TestRMAdminCli#testHelp is failing in trunk

2019-01-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734741#comment-16734741
 ] 

Abhishek Modi commented on YARN-9178:
-


*Stacktrace*
java.lang.AssertionError: 
Help messages: 
 rmadmin is the command to execute YARN administrative commands.
The full syntax is: 

yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in seconds] 
-client|server]] [-refreshNodesResources] 
[-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] 
[-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] 
[-addToClusterNodeLabels 
<"label1(exclusive=true),label2(exclusive=false),label3">] 
[-removeFromClusterNodeLabels ] [-replaceLabelsOnNode 
<"node1[:port]=label1,label2 node2[:port]=label1"> [-failOnUnknownNodes]] 
[-directlyAccessNodeLabelStore] [-refreshClusterMaxPriority] 
[-updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) or 
-updateNodeResource [NodeID] [ResourceTypes] ([OvercommitTimeout])] [-help 
[cmd]]

   -refreshQueues: Reload the queues' acls, states and scheduler specific 
properties. 
ResourceManager will reload the mapred-queues configuration 
file.
   -refreshNodes [-g|graceful [timeout in seconds] -client|server]: Refresh the 
hosts information at the ResourceManager. Here [-g|graceful [timeout in 
seconds] -client|server] is optional, if we specify the timeout then 
ResourceManager will wait for timeout before marking the NodeManager as 
decommissioned. The -client|server indicates if the timeout tracking should be 
handled by the client or the ResourceManager. The client-side tracking is 
blocking, while the server-side tracking is not. Omitting the timeout, or a 
timeout of -1, indicates an infinite timeout. Known Issue: the server-side 
tracking will immediately decommission if an RM HA failover occurs.
   -refreshNodesResources: Refresh resources of NodeManagers at the 
ResourceManager.
   -refreshSuperUserGroupsConfiguration: Refresh superuser proxy groups mappings
   -refreshUserToGroupsMappings: Refresh user-to-groups mappings
   -refreshAdminAcls: Refresh acls for administration of ResourceManager
   -refreshServiceAcl: Reload the service-level authorization policy file. 
ResourceManager will reload the authorization policy file.
   -getGroups [username]: Get the groups which given user belongs to.
   -addToClusterNodeLabels 
<"label1(exclusive=true),label2(exclusive=false),label3">: add to cluster node 
labels. Default exclusivity is true
   -removeFromClusterNodeLabels  (label splitted by ","): 
remove from cluster node labels
   -replaceLabelsOnNode <"node1[:port]=label1,label2 
node2[:port]=label1,label2"> [-failOnUnknownNodes] : replace labels on nodes 
(please note that we do not support specifying multiple labels on a single host 
for now.)
[-failOnUnknownNodes] is optional, when we set this option, it 
will fail if specified nodes are unknown.
   -directlyAccessNodeLabelStore: This is DEPRECATED, will be removed in future 
releases. Directly access node label store, with this option, all node label 
related operations will not connect RM. Instead, they will access/modify stored 
node labels directly. By default, it is false (access via RM). AND PLEASE NOTE: 
if you configured yarn.node-labels.fs-store.root-dir to a local directory 
(instead of NFS or HDFS), this option will only work when the command run on 
the machine where RM is running.
   -refreshClusterMaxPriority: Refresh cluster max priority
   -updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) 
or
[NodeID] [resourcetypes] ([OvercommitTimeout]). : Update 
resource on specific node.
   -help [cmd]: Displays help for the given command or all commands if none is 
specified.

Generic options supported are:
-conf specify an application configuration file
-Ddefine a value for a given property
-fs  specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt   specify a ResourceManager
-files specify a comma-separated list of files to be 
copied to the map reduce cluster
-libjarsspecify a comma-separated list of jar files 
to be included in the classpath
-archives   specify a comma-separated list of archives to 
be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

rmadmin is the command to execute YARN administrative commands.
The full syntax is: 

yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in seconds] 
-client|server]] [-refreshNodesResources] 
[-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] 
[-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] 
[-addToClusterNodeLabels 
<"label1(exclusive=true),label2(exclusive=false),label3">] 
[-removeFro

[jira] [Created] (YARN-9178) TestRMAdminCli#testHelp is failing in trunk

2019-01-04 Thread Abhishek Modi (JIRA)
Abhishek Modi created YARN-9178:
---

 Summary: TestRMAdminCli#testHelp is failing in trunk
 Key: YARN-9178
 URL: https://issues.apache.org/jira/browse/YARN-9178
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Abhishek Modi
Assignee: Abhishek Modi






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9166) Fix logging for preemption of Opportunistic containers for Guaranteed containers.

2019-01-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734735#comment-16734735
 ] 

Abhishek Modi commented on YARN-9166:
-

Thanks [~elgoiri] for review and [~giovanni.fumarola] for committing it.

> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.
> -
>
> Key: YARN-9166
> URL: https://issues.apache.org/jira/browse/YARN-9166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9166.001.patch
>
>
> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8619) Automate docker network configuration through YARN API

2019-01-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734689#comment-16734689
 ] 

Eric Yang commented on YARN-8619:
-

[~oliverhuh...@gmail.com] {quote}
Is our plan to integrate Consul into YARN for this automation?
{quote}

This is a one-time setup, and only a few lines of configuration on the docker 
side.  It doesn't seem like YARN needs to be involved in Docker network setup, 
from my point of view.  I am inclined to close this as no plan to fix.

> Automate docker network configuration through YARN API
> --
>
> Key: YARN-8619
> URL: https://issues.apache.org/jira/browse/YARN-8619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Eric Yang
>Priority: Major
>  Labels: Docker
>
> Docker supports bridge, host, overlay, and macvlan networking.  It might be 
> useful to automate docker network setup through a set of YARN APIs to improve 
> management of docker networks.  Each type of network driver requires a 
> different set of parameters.  For the Hadoop use case, it seems more useful to 
> focus on macvlan networking for ease of use and configuration.  It would be a 
> great addition to support commands like:
> {code}
> yarn network create -d macvlan \
>   --subnet=172.16.86.0/24 \
>   --gateway=172.16.86.1 \
>   -o parent=eth0 \
>   my-macvlan-net
> {code}
> This changes docker configuration to hosts managed by YARN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734687#comment-16734687
 ] 

Hadoop QA commented on YARN-9174:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m  
9s{color} | {color:red} root in YARN-8200 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-nodemanager in YARN-8200 failed. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} YARN-8200 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-nodemanager in YARN-8200 failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} YARN-8200 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 11 new + 14 unchanged - 4 fixed = 25 total (was 18) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 13 new + 207 unchanged - 8 fixed = 220 total (was 215) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-9174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953801/YARN-9174-YARN-8200.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 84931e2dc285 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200 / a611077 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/22992/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/22992/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/22992/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/22992/artifact/out/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-s

[jira] [Commented] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734648#comment-16734648
 ] 

Hadoop QA commented on YARN-9174:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  9m 
22s{color} | {color:red} root in YARN-8200 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-nodemanager in YARN-8200 failed. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} YARN-8200 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn-server-nodemanager in YARN-8200 failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-8200 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 46s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 11 new + 14 unchanged - 4 fixed = 25 total (was 18) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 13 new + 207 unchanged - 8 fixed = 220 total (was 215) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
38s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-9174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953801/YARN-9174-YARN-8200.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e73bcd3a0e43 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200 / 14dc25e |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/22989/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/22989/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/22989/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/22989/artifact/out/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-s

[jira] [Commented] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734644#comment-16734644
 ] 

Hadoop QA commented on YARN-9177:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  9m 
24s{color} | {color:red} root in YARN-8200 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-8200 
failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} YARN-8200 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-resourcemanager in YARN-8200 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} YARN-8200 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-9177 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953804/YARN-9177-YARN-8200.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5f8bc93a69cc 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200 / f76f2cf |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/22991/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/22991/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/22991/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/22991/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/22991/artifact/out/patch-compile-ha

[jira] [Commented] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734635#comment-16734635
 ] 

Hadoop QA commented on YARN-9174:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  8m 
59s{color} | {color:red} root in YARN-8200 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-nodemanager in YARN-8200 failed. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} YARN-8200 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-nodemanager in YARN-8200 failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-8200 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 46s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 11 new + 14 unchanged - 4 fixed = 25 total (was 18) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 12 new + 209 unchanged - 7 fixed = 221 total (was 216) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
39s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-9174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953801/YARN-9174-YARN-8200.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d56d6265d489 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200 / 14dc25e |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/22990/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/22990/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/22990/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/22990/artifact/out/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-n

[jira] [Commented] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734633#comment-16734633
 ] 

Jonathan Hung commented on YARN-9177:
-

Attached the 001 patch with the required change.

> Use resource map for app metrics in TestCombinedSystemMetricsPublisher for 
> branch-2
> ---
>
> Key: YARN-9177
> URL: https://issues.apache.org/jira/browse/YARN-9177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9177-YARN-8200.001.patch
>
>
> YARN-6736 mocks RMAppMetrics in TestCombinedSystemMetricsPublisher - in 
> branch-2 and below it uses mem/vcore, and in branch-3.0 and above it uses a 
> resource map. Once resource types are ported to branch-2, this test should 
> also use a resource map in branch-2.
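> As an illustration (a sketch only, assuming the branch-3 RMAppMetrics exposes 
> a resource-seconds map keyed by resource name, and using Mockito as the test 
> already does), the mock would move from mem/vcore getters to something like:
> {code}
> // appMetrics is a Mockito mock of RMAppMetrics
> Map<String, Long> resourceSecondsMap = new HashMap<>();
> resourceSecondsMap.put(ResourceInformation.MEMORY_MB.getName(), 1024L);
> resourceSecondsMap.put(ResourceInformation.VCORES.getName(), 10L);
> when(appMetrics.getResourceSecondsMap()).thenReturn(resourceSecondsMap);
> {code}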



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9177:

Attachment: YARN-9177-YARN-8200.001.patch

> Use resource map for app metrics in TestCombinedSystemMetricsPublisher for 
> branch-2
> ---
>
> Key: YARN-9177
> URL: https://issues.apache.org/jira/browse/YARN-9177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9177-YARN-8200.001.patch
>
>
> YARN-6736 mocks RMAppMetrics in TestCombinedSystemMetricsPublisher - in 
> branch-2 and below it uses mem/vcore, and in branch-3.0 and above it uses a 
> resource map. Once resource types are ported to branch-2, this test should 
> also use a resource map in branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9177:

Description: YARN-6736 mocks RMAppMetrics in 

> Use resource map for app metrics in TestCombinedSystemMetricsPublisher for 
> branch-2
> ---
>
> Key: YARN-9177
> URL: https://issues.apache.org/jira/browse/YARN-9177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
>
> YARN-6736 mocks RMAppMetrics in 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9177:

Description: YARN-6736 mocks RMAppMetrics in 
TestCombinedSystemMetricsPublisher - in branch-2 and below it uses mem/vcore, 
and in branch-3.0 and above it uses a resource map. Once resource types are 
ported to branch-2, this test should also use a resource map in branch-2.  (was: 
YARN-6736 mocks RMAppMetrics in )

> Use resource map for app metrics in TestCombinedSystemMetricsPublisher for 
> branch-2
> ---
>
> Key: YARN-9177
> URL: https://issues.apache.org/jira/browse/YARN-9177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
>
> YARN-6736 mocks RMAppMetrics in TestCombinedSystemMetricsPublisher - in 
> branch-2 and below it uses mem/vcore, and in branch-3.0 and above it uses a 
> resource map. Once resource types are ported to branch-2, this test should 
> also use a resource map in branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reassigned YARN-9177:
---

Assignee: Jonathan Hung

> Use resource map for app metrics in TestCombinedSystemMetricsPublisher for 
> branch-2
> ---
>
> Key: YARN-9177
> URL: https://issues.apache.org/jira/browse/YARN-9177
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9177) Use resource map for app metrics in TestCombinedSystemMetricsPublisher for branch-2

2019-01-04 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-9177:
---

 Summary: Use resource map for app metrics in 
TestCombinedSystemMetricsPublisher for branch-2
 Key: YARN-9177
 URL: https://issues.apache.org/jira/browse/YARN-9177
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jonathan Hung






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734609#comment-16734609
 ] 

Jonathan Hung commented on YARN-9174:
-

Attaching YARN-9174-YARN-8200.001.patch for the branch-2 version of the same 
logic.

> branch-3.0/branch-2 refactoring of GpuDevice class
> --
>
> Key: YARN-9174
> URL: https://issues.apache.org/jira/browse/YARN-9174
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9174-YARN-8200.001.patch, 
> YARN-9174-YARN-8200.branch3.001.patch, YARN-9174-YARN-8200.branch3.002.patch
>
>
> YARN-7224 does two main things:
>  # refactors Gpu device numbers to a separate GpuDevice class,
>  # adds Docker support for Gpus
> This ticket is for doing *only* the GpuDevice class refactoring so we have 
> this logic in branch-3.0 and branch-2
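> For reference, the extracted class is essentially a small value type along 
> these lines (a sketch; field names follow the trunk GpuDevice, other members 
> are elided):
> {code}
> public class GpuDevice implements java.io.Serializable, Comparable<GpuDevice> {
>   private final int index;        // ordering index reported for the device
>   private final int minorNumber;  // device minor number, e.g. /dev/nvidia<N>
> 
>   public GpuDevice(int index, int minorNumber) {
>     this.index = index;
>     this.minorNumber = minorNumber;
>   }
> 
>   public int getIndex() { return index; }
>   public int getMinorNumber() { return minorNumber; }
> 
>   @Override
>   public int compareTo(GpuDevice other) {
>     int result = Integer.compare(index, other.index);
>     return result != 0 ? result : Integer.compare(minorNumber, other.minorNumber);
>   }
>   // equals/hashCode over both fields omitted for brevity
> }
> {code}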



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9174:

Attachment: YARN-9174-YARN-8200.001.patch

> branch-3.0/branch-2 refactoring of GpuDevice class
> --
>
> Key: YARN-9174
> URL: https://issues.apache.org/jira/browse/YARN-9174
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9174-YARN-8200.001.patch, 
> YARN-9174-YARN-8200.branch3.001.patch, YARN-9174-YARN-8200.branch3.002.patch
>
>
> YARN-7224 does two main things:
>  # refactors Gpu device numbers to a separate GpuDevice class,
>  # adds Docker support for Gpus
> This ticket is for doing *only* the GpuDevice class refactoring so we have 
> this logic in branch-3.0 and branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9003) Support multi-homed network for docker container

2019-01-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734581#comment-16734581
 ] 

Eric Yang commented on YARN-9003:
-

Based on a conversation with [~billie.rinaldi], docker container multi-homed 
networking only works with one combination:

bridge + other (where other != host)

The bridge network must be the first network specified; otherwise, it doesn't 
work either.  I opened a [docker 
ticket|https://github.com/docker/for-linux/issues/542] to make sure that Docker 
intends to support multi-homed networks properly.  For my next patch, if more 
than one network is given, container-executor will verify both networks using 
the combination rule above.
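A sketch of that validation (shown in Java for illustration only; the real 
check lives in the native container-executor, and this helper name is 
hypothetical):
{code}
// Hypothetical helper enforcing "bridge + other (other != host)".
static boolean isValidNetworkCombination(java.util.List<String> networks) {
  if (networks.size() < 2) {
    return true;   // single network: no combination rule to enforce
  }
  if (networks.size() > 2) {
    return false;  // only one two-network combination is known to work
  }
  // Bridge must come first, and the second network must not be host.
  return "bridge".equals(networks.get(0))
      && !"host".equals(networks.get(1));
}
{code}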


> Support multi-homed network for docker container
> 
>
> Key: YARN-9003
> URL: https://issues.apache.org/jira/browse/YARN-9003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: docker
> Attachments: YARN-9003.001.patch, YARN-9003.002.patch, 
> YARN-9003.003.patch
>
>
> A Docker network can be defined through the configuration property 
> docker.network, which sets up a docker container to connect to a specific 
> network in a YARN service.  Docker can run with multi-homed networking by 
> specifying --net=bridge --net=private-net.  This is useful to expose a small 
> number of front-end containers and ports, while the rest of the infrastructure 
> runs in a private network.  This task is to add support for specifying 
> multiple docker networks in YARN service and docker support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734593#comment-16734593
 ] 

Hadoop QA commented on YARN-9174:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200.branch3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
28s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-8200.branch3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 11 new + 131 unchanged - 7 fixed = 142 total (was 138) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:e402791 |
| JIRA Issue | YARN-9174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953794/YARN-9174-YARN-8200.branch3.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dd41f1bdb2f9 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-8200.branch3 / d513823 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22988/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22988/testReport/ |
| Max. process+thread count | 407 (

[jira] [Updated] (YARN-6695) Race condition in RM for publishing container events vs appFinished events causes NPE

2019-01-04 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-6695:

Priority: Critical  (was: Major)

> Race condition in RM for publishing container events vs appFinished events 
> causes NPE 
> --
>
> Key: YARN-6695
> URL: https://issues.apache.org/jira/browse/YARN-6695
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Critical
>
> When the RM publishes container events, i.e. by enabling 
> *yarn.rm.system-metrics-publisher.emit-container-events*, there is a race 
> condition between processing those events 
> and the appFinished event that removes the appId from the collector list, 
> which causes an NPE. Look at the trace below, where the appId is removed from 
> the collectors first and the corresponding events are processed afterwards. 
> {noformat}
> 2017-06-06 19:28:48,896 INFO  capacity.ParentQueue 
> (ParentQueue.java:removeApplication(472)) - Application removed - appId: 
> application_1496758895643_0005 user: root leaf-queue of parent: root 
> #applications: 0
> 2017-06-06 19:28:48,921 INFO  collector.TimelineCollectorManager 
> (TimelineCollectorManager.java:remove(190)) - The collector service for 
> application_1496758895643_0005 was removed
> 2017-06-06 19:28:48,922 ERROR metrics.TimelineServiceV2Publisher 
> (TimelineServiceV2Publisher.java:putEntity(451)) - Error when publishing 
> entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e01_1496758895643_0005_01_02']
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.putEntity(TimelineServiceV2Publisher.java:448)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.access$100(TimelineServiceV2Publisher.java:72)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:480)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:469)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:201)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:127)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
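> One defensive option (a minimal sketch, not necessarily the eventual fix) is 
> to have TimelineServiceV2Publisher#putEntity tolerate a collector that the 
> appFinished event already removed, roughly:
> {code}
> // Inside putEntity; exception handling of the surrounding method omitted.
> TimelineCollector timelineCollector = rmTimelineCollectorManager.get(appId);
> if (timelineCollector == null) {
>   // appFinished already removed the collector; drop this late event.
>   LOG.warn("Collector for " + appId + " was removed, skipping entity "
>       + entity.getId());
>   return;
> }
> TimelineEntities entities = new TimelineEntities();
> entities.addEntity(entity);
> timelineCollector.putEntities(entities, UserGroupInformation.getCurrentUser());
> {code}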



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6695) Race condition in RM for publishing container events vs appFinished events causes NPE

2019-01-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734587#comment-16734587
 ] 

Eric Yang commented on YARN-6695:
-

[~rohithsharma] Raising priority for this issue.  When destroying a YARN 
service, this issue occurs every time and leads to a resource manager crash.  

> Race condition in RM for publishing container events vs appFinished events 
> causes NPE 
> --
>
> Key: YARN-6695
> URL: https://issues.apache.org/jira/browse/YARN-6695
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
>
> When the RM publishes container events, i.e. by enabling 
> *yarn.rm.system-metrics-publisher.emit-container-events*, there is a race 
> condition between processing those events 
> and the appFinished event that removes the appId from the collector list, 
> which causes an NPE. Look at the trace below, where the appId is removed from 
> the collectors first and the corresponding events are processed afterwards. 
> {noformat}
> 2017-06-06 19:28:48,896 INFO  capacity.ParentQueue 
> (ParentQueue.java:removeApplication(472)) - Application removed - appId: 
> application_1496758895643_0005 user: root leaf-queue of parent: root 
> #applications: 0
> 2017-06-06 19:28:48,921 INFO  collector.TimelineCollectorManager 
> (TimelineCollectorManager.java:remove(190)) - The collector service for 
> application_1496758895643_0005 was removed
> 2017-06-06 19:28:48,922 ERROR metrics.TimelineServiceV2Publisher 
> (TimelineServiceV2Publisher.java:putEntity(451)) - Error when publishing 
> entity TimelineEntity[type='YARN_CONTAINER', 
> id='container_e01_1496758895643_0005_01_02']
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.putEntity(TimelineServiceV2Publisher.java:448)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.access$100(TimelineServiceV2Publisher.java:72)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:480)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher$TimelineV2EventHandler.handle(TimelineServiceV2Publisher.java:469)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:201)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:127)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9003) Support multi-homed network for docker container

2019-01-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734581#comment-16734581
 ] 

Eric Yang edited comment on YARN-9003 at 1/4/19 9:24 PM:
-

Based on a conversation with [~billie.rinaldi], docker container multi-homed 
networking only works with one combination:

bridge + other (where other != host)

The bridge network must be the first network specified; otherwise, it doesn't 
work.  I opened a [docker ticket|https://github.com/docker/for-linux/issues/542] 
to make sure that Docker intends to support multi-homed networks properly.  For 
my next patch, if more than one network is given, container-executor will 
verify both networks using the combination rule above.



was (Author: eyang):
Base on conversation with [~billie.rinaldi], docker container multi-homed 
network only works in one combination rule:

bridge + other (where other != host)

Bridge network must be the first network to be specified, otherwise, it doesn't 
work neither.  I opened [docker 
ticket|https://github.com/docker/for-linux/issues/542] to make sure that Docker 
intends to support multi-homed networks properly.  For my next patch, if more 
than one network is given, container-executor will verify both networks using 
the combination rule above.


> Support multi-homed network for docker container
> 
>
> Key: YARN-9003
> URL: https://issues.apache.org/jira/browse/YARN-9003
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: docker
> Attachments: YARN-9003.001.patch, YARN-9003.002.patch, 
> YARN-9003.003.patch
>
>
> A Docker network can be defined through the configuration property 
> docker.network, which sets up a docker container to connect to a specific 
> network in a YARN service.  Docker can run with multi-homed networking by 
> specifying --net=bridge --net=private-net.  This is useful to expose a small 
> number of front-end containers and ports, while the rest of the infrastructure 
> runs in a private network.  This task is to add support for specifying 
> multiple docker networks in YARN service and docker support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734536#comment-16734536
 ] 

Jonathan Hung commented on YARN-9174:
-

The 002 patch fixes the unit test (a missed call to 
{{updateContainerResourceMapping}} in {{NMMemoryStateStoreService}}).

> branch-3.0/branch-2 refactoring of GpuDevice class
> --
>
> Key: YARN-9174
> URL: https://issues.apache.org/jira/browse/YARN-9174
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9174-YARN-8200.branch3.001.patch, 
> YARN-9174-YARN-8200.branch3.002.patch
>
>
> YARN-7224 does two main things:
>  # refactors Gpu device numbers to a separate GpuDevice class,
>  # adds Docker support for Gpus
> This ticket is for doing *only* the GpuDevice class refactoring so we have 
> this logic in branch-3.0 and branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9174) branch-3.0/branch-2 refactoring of GpuDevice class

2019-01-04 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9174:

Attachment: YARN-9174-YARN-8200.branch3.002.patch

> branch-3.0/branch-2 refactoring of GpuDevice class
> --
>
> Key: YARN-9174
> URL: https://issues.apache.org/jira/browse/YARN-9174
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9174-YARN-8200.branch3.001.patch, 
> YARN-9174-YARN-8200.branch3.002.patch
>
>
> YARN-7224 does two main things:
>  # refactors Gpu device numbers to a separate GpuDevice class,
>  # adds Docker support for Gpus
> This ticket is for doing *only* the GpuDevice class refactoring so we have 
> this logic in branch-3.0 and branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7904) Privileged, trusted containers need all of their bind-mounted directories to be read-only

2019-01-04 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734534#comment-16734534
 ] 

Eric Badger commented on YARN-7904:
---

bq. Instead of making all bind-mounted directories read-only, we may want to 
consider blocking privileged containers from non-entrypoint mode to reduce the 
incompatible changes to a minimum. Thoughts?
This makes sense to me, but only because I don't see an easy solution for how 
to deal with user logs or user data written as root. So I'm +1 for this idea. 

> Privileged, trusted containers need all of their bind-mounted directories to 
> be read-only
> -
>
> Key: YARN-7904
> URL: https://issues.apache.org/jira/browse/YARN-7904
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Zhaohui Xin
>Priority: Major
>  Labels: Docker
>
> Since the containers will be running as a different user, the NM likely 
> won't be able to clean up after them because of permission issues. So, to 
> prevent this, we should make these directories read-only.
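
As a rough illustration (not the NM's actual code; only docker's src:dst:mode volume syntax is real), forcing bind mounts read-only for privileged containers could look like:

{code}
/** Sketch: build a docker volume argument, forcing read-only mode when
 *  the container is privileged so the NM can still clean up afterwards. */
static String volumeArg(String src, String dst, boolean privileged) {
  // docker understands -v <src>:<dst>:<mode> with mode "ro" or "rw".
  String mode = privileged ? "ro" : "rw";
  return "-v " + src + ":" + dst + ":" + mode;
}
{code}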



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9166) Fix logging for preemption of Opportunistic containers for Guaranteed containers.

2019-01-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734518#comment-16734518
 ] 

Hudson commented on YARN-9166:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15706 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15706/])
YARN-9166. Fix logging for preemption of Opportunistic containers for (gifuma: 
rev 6e35f7130fb3fb17665e818f838ed750425348c0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java


> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.
> -
>
> Key: YARN-9166
> URL: https://issues.apache.org/jira/browse/YARN-9166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9166.001.patch
>
>
> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734513#comment-16734513
 ] 

Jason Lowe commented on YARN-6523:
--

The most recently posted patch is identical to patch version 11, which is the 
last version I reviewed, and it no longer applies to trunk.  Maybe the wrong 
patch file was uploaded?

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM
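
For intuition, here is a minimal sketch (an assumption about one possible approach, not the attached patch) of versioning the system credentials with a sequence number so the RM only ships tokens when they have changed since the node's last heartbeat:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: version the system credentials so tokens are only included in
 *  a heartbeat response when they changed since the node last saw them. */
public class CredentialsCache {
  private final Map<String, byte[]> tokensByApp = new ConcurrentHashMap<>();
  private volatile long sequenceNumber = 0;

  public synchronized void updateTokens(String appId, byte[] tokens) {
    tokensByApp.put(appId, tokens);
    sequenceNumber++;                    // any change bumps the version
  }

  /** Returns the tokens only if the node is behind; null means no change. */
  public Map<String, byte[]> tokensIfNewer(long nodeLastSeenSeq) {
    return nodeLastSeenSeq < sequenceNumber ? tokensByApp : null;
  }

  public long currentSequence() {
    return sequenceNumber;
  }
}
{code}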



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9166) Fix logging for preemption of Opportunistic containers for Guaranteed containers.

2019-01-04 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734502#comment-16734502
 ] 

Giovanni Matteo Fumarola commented on YARN-9166:


Thanks [~abmodi] for the patch and [~elgoiri] for the review.
Committed to trunk.

> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.
> -
>
> Key: YARN-9166
> URL: https://issues.apache.org/jira/browse/YARN-9166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9166.001.patch
>
>
> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734473#comment-16734473
 ] 

Hadoop QA commented on YARN-9149:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 18s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
|  |  Write to static field 
org.apache.hadoop.yarn.client.api.impl.AHSv2ClientImpl.logServerUrl from 
instance method 
org.apache.hadoop.yarn.client.api.impl.AHSv2ClientImpl.serviceInit(Configuration)
  At AHSv2ClientImpl.java:from instance method 
org.apache.hadoop.yarn.client.api.impl.AHSv2ClientImpl.serviceInit(Configuration)
  At AHSv2ClientImpl.java:[line 60] |
| Failed junit tests | hadoop.yarn.client.cli.TestRMAdminCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9149 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953783/YARN-9149.002.patch |
| Optional Tests |  dupnam

[jira] [Commented] (YARN-8489) Need to support "dominant" component concept inside YARN service

2019-01-04 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734482#comment-16734482
 ] 

Suma Shivaprasad commented on YARN-8489:


Thanks [~yuan_zac]. A couple of comments:

1. Please modify the javadoc to clarify that if the dominant component 
terminates, the service is also terminated; a possible replacement wording is 
sketched below the list. The current javadoc reads:

/**
 * If the service state component is finished, the service will be terminated.
 * @param component
 */

2. Please add docs for this new property.
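
A possible wording for the clarified javadoc (a sketch only, not the final text; the parameter description is assumed):

{code}
/**
 * If the given component is the dominant component and it reaches a
 * terminal state, the whole service is terminated with that component's
 * final state, regardless of the state of the other components.
 * @param component the component whose terminal state was reported
 */
{code}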

> Need to support "dominant" component concept inside YARN service
> 
>
> Key: YARN-8489
> URL: https://issues.apache.org/jira/browse/YARN-8489
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8489.001.patch, YARN-8489.002.patch, 
> YARN-8489.003.patch
>
>
> The existing YARN service supports termination policies for the different 
> restart policies. For example, ALWAYS means the service will not be 
> terminated, and NEVER means the service will be terminated once all 
> components have terminated.
> The name "dominant" might not be the most appropriate; we can figure out 
> better names. But simply put, it means a dominant component whose final 
> state determines the job's final state regardless of the other components.
> Use cases: 
> 1) A Tensorflow job has master/worker/services/tensorboard. Once the master 
> reaches a final state, no matter whether it succeeded or failed, we should 
> terminate ps/tensorboard/workers and mark the job succeeded/failed. 
> 2) Not sure if it is a real-world use case: a service which has multiple 
> components, some of which are not restartable. For such services, if such a 
> component fails, we should mark the whole service as failed. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-4299) Distcp fails even if ignoreFailures option is set

2019-01-04 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved YARN-4299.
-
Resolution: Duplicate

> Distcp fails even if ignoreFailures option is set
> -
>
> Key: YARN-4299
> URL: https://issues.apache.org/jira/browse/YARN-4299
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> hadoop distcp fails even if the ignoreFailures option is set using the -i 
> option.
> When an IOException is thrown from RetriableFileCopyCommand, the 
> handleFailures method in CopyMapper does not honor ignoreFailures:
> if (ignoreFailures && exception.getCause() instanceof 
> RetriableFileCopyCommand.CopyReadException)
> An OR should be used above.
> And there is one more bug: when an IOException is wrapped in a 
> CopyReadException, exception.getCause() is still the IOException.
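
A minimal sketch of the corrected check, following the description (the helper name is invented; this is not a committed fix):

{code}
import java.io.IOException;
import org.apache.hadoop.tools.mapred.RetriableFileCopyCommand;

/** Sketch: honor -i when either the exception itself or its cause is a
 *  CopyReadException, so wrapped IOExceptions also match. */
static boolean shouldIgnore(boolean ignoreFailures, IOException exception) {
  return ignoreFailures
      && (exception instanceof RetriableFileCopyCommand.CopyReadException
          || exception.getCause()
              instanceof RetriableFileCopyCommand.CopyReadException);
}
{code}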



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6810) YARN localizer has to validate the mapreduce.tar.gz present in cache before using it

2019-01-04 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved YARN-6810.
-
Resolution: Duplicate

> YARN localizer has to validate the mapreduce.tar.gz present in cache before 
> using it
> 
>
> Key: YARN-6810
> URL: https://issues.apache.org/jira/browse/YARN-6810
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Priority: Major
>
> When a localized mapreduce.tar.gz is corrupt or zero bytes, all MapReduce 
> jobs on the cluster fail with "Error: Could not find or load main class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster" because they use the corrupt 
> mapreduce.tar.gz. The YARN localizer has to check that the existing 
> mapreduce.tar.gz is a valid file before using it.
>  
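
A minimal sketch of the suggested validation (an assumed approach, not an actual patch): treat the archive as usable only if it is a non-empty, readable gzip stream.

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

/** Sketch: re-localize unless the archive is a non-empty, readable gzip. */
static boolean looksValid(File tarGz) {
  if (!tarGz.isFile() || tarGz.length() == 0) {
    return false;                                  // zero-byte file
  }
  try (GZIPInputStream in = new GZIPInputStream(new FileInputStream(tarGz))) {
    return in.read() != -1;                        // gzip header parses
  } catch (IOException e) {
    return false;                                  // corrupt archive
  }
}
{code}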



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9166) Fix logging for preemption of Opportunistic containers for Guaranteed containers.

2019-01-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734389#comment-16734389
 ] 

Íñigo Goiri commented on YARN-9166:
---

The test seems to pass fine:
https://builds.apache.org/job/PreCommit-YARN-Build/22967/testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.scheduler/TestContainerSchedulerQueuing/

+1 on  [^YARN-9166.001.patch] 

> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.
> -
>
> Key: YARN-9166
> URL: https://issues.apache.org/jira/browse/YARN-9166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9166.001.patch
>
>
> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9116) Capacity Scheduler: add the default maximum-allocation-mb and maximum-allocation-vcores for the queues

2019-01-04 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734369#comment-16734369
 ] 

Aihua Xu commented on YARN-9116:


[~cheersyang] That was my initial idea (see YARN-9055): let the child queue 
override the parent setting. But it introduces an incompatibility, since it is 
always assumed that a child queue can't have larger settings than its parent. 
Some clients such as Spark check the top-level settings and fail immediately 
if the resource request can't be satisfied.

> Capacity Scheduler: add the default maximum-allocation-mb and 
> maximum-allocation-vcores for the queues
> --
>
> Key: YARN-9116
> URL: https://issues.apache.org/jira/browse/YARN-9116
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.7.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: YARN-9116.1.patch
>
>
> YARN-1582 adds support for a per-queue maximum-allocation-mb configuration, 
> targeting larger-container features on dedicated queues (a larger 
> maximum-allocation-mb/maximum-allocation-vcores for such queues). To achieve 
> a larger container configuration, we need to increase the global 
> maximum-allocation-mb/maximum-allocation-vcores (e.g. 120G/256) and then 
> override those configurations with the desired values on the queues, since a 
> queue configuration can't be larger than the cluster configuration. There 
> are many queues in the system, and if we forget to configure such values 
> when adding a new queue, that queue gets the default 120G/256, which 
> typically is not what we want.  
> We can come up with a queue-default configuration (set to a normal queue 
> configuration like 16G/8), so the leaf queues get such values by default.
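
A sketch of the proposed resolution order (the queue-default property name is hypothetical; the global and per-queue keys are the existing capacity scheduler ones):

{code}
import org.apache.hadoop.conf.Configuration;

/** Sketch of the proposed lookup order; the queue-default key is assumed. */
public final class QueueMaxAllocation {
  private static final String PREFIX = "yarn.scheduler.capacity.";

  static int maxAllocationMb(Configuration conf, String queuePath) {
    // Existing cluster-wide ceiling (e.g. 120G for big-container queues).
    int global = conf.getInt(PREFIX + "maximum-allocation-mb", 8192);
    // New queue-default proposed here (hypothetical property name).
    int queueDefault =
        conf.getInt(PREFIX + "queue-default.maximum-allocation-mb", global);
    // Existing per-queue override from YARN-1582 wins if present.
    return conf.getInt(
        PREFIX + queuePath + ".maximum-allocation-mb", queueDefault);
  }
}
{code}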



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9149) yarn container -status misses logUrl when integrated with ATSv2

2019-01-04 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-9149:

Attachment: YARN-9149.002.patch

> yarn container -status misses logUrl when integrated with ATSv2
> ---
>
> Key: YARN-9149
> URL: https://issues.apache.org/jira/browse/YARN-9149
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9149.001.patch, YARN-9149.002.patch
>
>
> Post YARN-8303, the yarn client can be integrated with ATSv2. But the log 
> url and the start/end times are printed wrongly!
> {code}
> Container Report :
>   Container-Id : container_1545035586969_0001_01_01
>   Start-Time : 0
>   Finish-Time : 0
>   State : COMPLETE
>   Execution-Type : GUARANTEED
>   LOG-URL : null
>   Host : localhost:25006
>   NodeHttpAddress : localhost:25008
>   Diagnostics :
> {code}
> # TimelineEntityV2Converter#convertToContainerReport sets logUrl to *null*. 
> It needs to be set to the proper log url based on 
> yarn.log.server.web-service.url.
> # TimelineEntityV2Converter#convertToContainerReport parses the start/end 
> time wrongly. The comparison should happen with the entityType, but the code 
> below is using the entityId:
> {code}
> if (events != null) {
>   for (TimelineEvent event : events) {
> if (event.getId().equals(
> ContainerMetricsConstants.CREATED_IN_RM_EVENT_TYPE)) {
>   createdTime = event.getTimestamp();
> } else if (event.getId().equals(
> ContainerMetricsConstants.FINISHED_IN_RM_EVENT_TYPE)) {
>   finishedTime = event.getTimestamp();
> }
>   }
> }
> {code}
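
For illustration, a sketch of deriving the log url from that property (the URL path layout below is hypothetical, not the committed format):

{code}
import org.apache.hadoop.conf.Configuration;

/** Sketch: derive the container log url from the log server web-service
 *  url. The path layout below is hypothetical, not the committed format. */
static String logUrl(Configuration conf, String nodeId, String containerId,
    String user) {
  String base = conf.get("yarn.log.server.web-service.url");
  if (base == null) {
    return null;                         // no log server configured
  }
  return base + "/containers/" + containerId + "/logs?nm.id=" + nodeId
      + "&user.name=" + user;
}
{code}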



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734318#comment-16734318
 ] 

Hadoop QA commented on YARN-6523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-6523 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953776/YARN-6523.012.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22986/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734312#comment-16734312
 ] 

Hadoop QA commented on YARN-6523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-6523 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953775/YARN-6523.012.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22985/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734311#comment-16734311
 ] 

Manikandan R commented on YARN-6523:


Sorry for the delay. 

Addressed comments in attached patch.

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Manikandan R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6523:
---
Attachment: (was: YARN-6523.012.patch)

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Manikandan R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6523:
---
Attachment: YARN-6523.012.patch

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2019-01-04 Thread Manikandan R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6523:
---
Attachment: YARN-6523.012.patch

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch, 
> YARN-6523.012.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing 2000 concurrent apps on a 500-node cluster with 
> 8GB of RAM configured for the RM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9166) Fix logging for preemption of Opportunistic containers for Guaranteed containers.

2019-01-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734249#comment-16734249
 ] 

Abhishek Modi edited comment on YARN-9166 at 1/4/19 3:45 PM:
-

[~elgoiri] Thanks for review. 
[TestContainerSchedulerQueueing#testPauseOpportunisticForGuaranteedContainer|https://github.com/apache/hadoop/blob/8c6978c3baef96a333ebd7e98e02098c99df7313/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java#L569]
 covers this part of the code.


was (Author: abmodi):
[~elgoiri] Thanks for review. 
TestContainerSchedulerQueueing#testPauseOpportunisticForGuaranteedContainer 
covers this part of the code.

> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.
> -
>
> Key: YARN-9166
> URL: https://issues.apache.org/jira/browse/YARN-9166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9166.001.patch
>
>
> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9166) Fix logging for preemption of Opportunistic containers for Guaranteed containers.

2019-01-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734249#comment-16734249
 ] 

Abhishek Modi commented on YARN-9166:
-

[~elgoiri] Thanks for review. 
TestContainerSchedulerQueueing#testPauseOpportunisticForGuaranteedContainer 
covers this part of the code.

> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.
> -
>
> Key: YARN-9166
> URL: https://issues.apache.org/jira/browse/YARN-9166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9166.001.patch
>
>
> Fix logging for preemption of Opportunistic containers for Guaranteed 
> containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9173) FairShare calculation broken for large values after YARN-8833

2019-01-04 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734240#comment-16734240
 ] 

Wilfred Spiegelenburg commented on YARN-9173:
-

The test failure is not related to the change.
I can fix the checkstyle issue, but I will wait for the review to be done 
before I add a new patch.

> FairShare calculation broken for large values after YARN-8833
> -
>
> Key: YARN-9173
> URL: https://issues.apache.org/jira/browse/YARN-9173
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.3.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-9173.001.patch
>
>
> After the fix for the infinite loop in YARN-8833, we now get wrong values 
> back from the fair share calculations under certain circumstances. The 
> current implementation works when the total resource is smaller than 
> Integer.MAX_VALUE. When the total resource goes above that value, the number 
> of iterations is not enough to converge to the correct value.
> The new test {{testResourceUsedWithWeightToResourceRatio()}} only checks 
> that the calculation does not hang, but does not check the outcome of the 
> calculation.
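
For intuition, a sketch of the convergence issue (an assumed shape of the computation, not the scheduler's actual code): a binary search over the weight-to-resource ratio halves an interval of width equal to the total resource each step, so a fixed iteration count sized for int-range totals leaves too large an error once the total exceeds Integer.MAX_VALUE. Searching until the interval converges avoids that:

{code}
/** Sketch: converge on the weight-to-resource ratio by interval width
 *  instead of a fixed iteration count sized for int-range totals. */
static double findRatio(long total, long[] weights) {
  double lo = 0.0;
  double hi = total;
  while (hi - lo > 1.0) {                // loop until the interval converges
    double mid = (lo + hi) / 2.0;
    long used = 0;
    for (long w : weights) {
      used += (long) (mid * w);          // share each weight would take
    }
    if (used > total) {
      hi = mid;
    } else {
      lo = mid;
    }
  }
  return lo;
}
{code}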



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6149) Allow port range to be specified while starting NM Timeline collector manager.

2019-01-04 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734212#comment-16734212
 ] 

Abhishek Modi commented on YARN-6149:
-

Thanks [~rohithsharma] for the review and for committing it to trunk.

> Allow port range to be specified while starting NM Timeline collector manager.
> --
>
> Key: YARN-6149
> URL: https://issues.apache.org/jira/browse/YARN-6149
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-6149.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9173) FairShare calculation broken for large values after YARN-8833

2019-01-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734190#comment-16734190
 ] 

Hadoop QA commented on YARN-9173:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 0 unchanged - 8 fixed = 1 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestAMRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9173 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953754/YARN-9173.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 24601c4b463e 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8c6978c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22984/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22984/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hado

[jira] [Comment Edited] (YARN-9173) FairShare calculation broken for large values after YARN-8833

2019-01-04 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734135#comment-16734135
 ] 

Weiwei Yang edited comment on YARN-9173 at 1/4/19 1:33 PM:
---

Sure, thanks [~wilfreds] for working on this, I will take a look tomorrow.

Cc [~yoelee] as well.


was (Author: cheersyang):
Sure, thanks [~wilfreds] for working on this, I will take a look tomorrow.

> FairShare calculation broken for large values after YARN-8833
> -
>
> Key: YARN-9173
> URL: https://issues.apache.org/jira/browse/YARN-9173
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.3.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-9173.001.patch
>
>
> After the fix for the infinite loop in YARN-8833, we now get wrong values 
> back from the fair share calculations under certain circumstances. The 
> current implementation works when the total resource is smaller than 
> Integer.MAX_VALUE. When the total resource goes above that value, the number 
> of iterations is not enough to converge to the correct value.
> The new test {{testResourceUsedWithWeightToResourceRatio()}} only checks 
> that the calculation does not hang, but does not check the outcome of the 
> calculation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9173) FairShare calculation broken for large values after YARN-8833

2019-01-04 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734135#comment-16734135
 ] 

Weiwei Yang commented on YARN-9173:
---

Sure, thanks [~wilfreds] for working on this, I will take a look tomorrow.

> FairShare calculation broken for large values after YARN-8833
> -
>
> Key: YARN-9173
> URL: https://issues.apache.org/jira/browse/YARN-9173
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.3.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-9173.001.patch
>
>
> After the fix for the infinite loop in YARN-8833, we now get wrong values 
> back from the fair share calculations under certain circumstances. The 
> current implementation works when the total resource is smaller than 
> Integer.MAX_VALUE. When the total resource goes above that value, the number 
> of iterations is not enough to converge to the correct value.
> The new test {{testResourceUsedWithWeightToResourceRatio()}} only checks 
> that the calculation does not hang, but does not check the outcome of the 
> calculation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733992#comment-16733992
 ] 

Dongdong Hong edited comment on YARN-9176 at 1/4/19 10:04 AM:
--

[~sunilg]

but there are no html files under hadoop-yarn-submarine/src/site/markdown, 
only md files.

!markdown.jpg!

Is there any difference between the private path and the GitHub path?

!privatepath.jpg!!apache.jpg!

 

Lastly, it works OK in my fork.

!image-2019-01-04-18-04-41-832.png!

In [https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/RunningDistributedCifar10TFJobs.md|https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/Examples.md], 
clicking [Running Distributed CIFAR 10 Tensorflow 
Job|https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/RunningDistributedCifar10TFJobs.md]
 gets the correct response.


was (Author: hongdd):
[~sunilg]

but there are no html files under hadoop-yarn-submarine/src/site/markdown, 
only md files.

!markdown.jpg!

Is there any difference between the private path and the GitHub path?

!privatepath.jpg!!apache.jpg!

 

Lastly, it works OK in my fork.

In [https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/RunningDistributedCifar10TFJobs.md|https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/Examples.md], 
clicking [Running Distributed CIFAR 10 Tensorflow 
Job|https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/RunningDistributedCifar10TFJobs.md]
 gets the correct response.

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, image-2019-01-04-18-04-41-832.png, markdown.jpg, privatepath.jpg, 
> repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> get 404 errors; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733999#comment-16733999
 ] 

Sunil Govindan commented on YARN-9176:
--

Hi [~hongdd]

Once you do mvn site:site, these .md files will be converted to html, 
something like this:

file:///private/tmp/hadoop-site/hadoop-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/DeveloperGuide.html

 

 

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, markdown.jpg, privatepath.jpg, repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> get 404 errors; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: (was: image-2019-01-04-17-49-28-798.png)

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, markdown.jpg, privatepath.jpg, repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> get 404 errors; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: image-2019-01-04-17-49-28-798.png

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, image-2019-01-04-17-49-28-798.png, markdown.jpg, privatepath.jpg, 
> repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> get 404 errors; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733992#comment-16733992
 ] 

Dongdong Hong commented on YARN-9176:
-

[~sunilg]

but there are no html files under hadoop-yarn-submarine/src/site/markdown, 
only md files.

!markdown.jpg!

Is there any difference between the private path and the GitHub path?

!privatepath.jpg!!apache.jpg!

 

Lastly, it works OK in my fork.

In [https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/RunningDistributedCifar10TFJobs.md|https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/Examples.md], 
clicking [Running Distributed CIFAR 10 Tensorflow 
Job|https://github.com/hddong/hadoop/blob/branch-3.2.0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/markdown/RunningDistributedCifar10TFJobs.md]
 gets the correct response.

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, markdown.jpg, privatepath.jpg, repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> get 404 errors; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: privatepath.jpg

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, markdown.jpg, privatepath.jpg, repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> get 404 errors; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: apache.jpg

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> apache.jpg, markdown.jpg, privatepath.jpg, repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: markdown.jpg

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> markdown.jpg, repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6149) Allow port range to be specified while starting NM Timeline collector manager.

2019-01-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733984#comment-16733984
 ] 

Hudson commented on YARN-6149:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15702 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15702/])
YARN-6149. Allow port range to be specified while starting NM Timeline 
(rohithsharmaks: rev 8c6978c3baef96a333ebd7e98e02098c99df7313)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestNMTimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


> Allow port range to be specified while starting NM Timeline collector manager.
> --
>
> Key: YARN-6149
> URL: https://issues.apache.org/jira/browse/YARN-6149
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-6149.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8567) Fetching yarn logs fails for long running application if it is not present in timeline store

2019-01-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733983#comment-16733983
 ] 

Hudson commented on YARN-8567:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15702 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15702/])
YARN-8567. Fetching yarn logs fails for long running application if it 
(rohithsharmaks: rev 573b1587918c4c0efdb7e9fff6f5be12bf31b619)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java


> Fetching yarn logs fails for long running application if it is not present in 
> timeline store
> 
>
> Key: YARN-8567
> URL: https://issues.apache.org/jira/browse/YARN-8567
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
>  Labels: log-aggregation
> Attachments: YARN-8567.001.patch, YARN-8567.002.patch
>
>
> Using the yarn logs command for a long-running application that has been running 
> longer than the configured timeline service TTL 
> ({{yarn.timeline-service.ttl-ms}}) fails with the following exception:
> {code:java}
> Exception in thread "main" 
> org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: The entity 
> for application application_152347939332_1 doesn't exist in the timeline 
> store
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.getApplication(ApplicationHistoryManagerOnTimelineStore.java:670)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.getContainers(ApplicationHistoryManagerOnTimelineStore.java:219)
> at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryClientService.getContainers(ApplicationHistoryClientService.java:211)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationHistoryProtocolPBServiceImpl.getContainers(ApplicationHistoryProtocolPBServiceImpl.java:172)
> at 
> org.apache.hadoop.yarn.proto.ApplicationHistoryProtocol$ApplicationHistoryProtocolService$2.callBlockingMethod(ApplicationHistoryProtocol.java:201)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2309)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationHistoryProtocolPBClientImpl.getContainers(ApplicationHistoryProtocolPBClientImpl.java:183)
> at 
> org.apache.hadoop.yarn.client.api.impl.AHSClientImpl.getContainers(AHSClientImpl.java:151)
> at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getContainers(YarnClientImpl.java:720)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getContainerReportsFromRunningApplication(LogsCLI.java:1089)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getContainersLogRequestForRunningApplication(LogsCLI.java:1064)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:976)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:300)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:107)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:327)
> {code}
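> As an illustrative sketch (not taken from the issue itself), the TTL in question 
> is set in yarn-site.xml; the value shown is the documented default of seven days:
> {code:xml}
> <property>
>   <!-- Time-to-live for entities in the timeline store, in milliseconds -->
>   <name>yarn.timeline-service.ttl-ms</name>
>   <value>604800000</value>
> </property>
> {code}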



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-9176:
-
Attachment: Screen Shot 2019-01-04 at 2.41.43 PM.png

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733964#comment-16733964
 ] 

Sunil Govindan commented on YARN-9176:
--

I think the links should not use the .md extension; they should point to the generated .html pages.
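
For example, a hypothetical link in QuickStart.md (assuming the site build renders each Markdown page into a sibling .html file):
{code}
Broken: [Examples](Examples.md)    <-- 404 on the generated site
Fixed:  [Examples](Examples.html)  <-- resolves to the rendered page
{code}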

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733963#comment-16733963
 ] 

Sunil Govindan commented on YARN-9176:
--

Somehow, a few links are still not working with your patch.

!Screen Shot 2019-01-04 at 2.41.43 PM.png!

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, Screen Shot 2019-01-04 at 
> 2.41.43 PM.png, YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, 
> repair1.jpg, repair2.jpg
>
>
> Links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404; repair these links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9116) Capacity Scheduler: add the default maximum-allocation-mb and maximum-allocation-vcores for the queues

2019-01-04 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733952#comment-16733952
 ] 

Weiwei Yang commented on YARN-9116:
---

Hi [~aihuaxu]
{quote}Actually that would introduce many queue-level configurations if we don't 
introduce a new property, even with such inheritance. Even after we implement the 
inheritance mechanism, we have to set the global value to 120G/256 vCores (the 
maximum value allowed in the cluster), then override all the top queues to 
16G/16 vCores and set the larger-container top queue to 120G/256 vCores.
{quote}
I was wondering if we could set up something like the following (using memory as an example):
{code:java}
yarn.scheduler.capacity.root.maximum-allocation-mb=16G
yarn.scheduler.capacity.root.large.maximum-allocation-mb=120G
{code}
So if the queue structure looks like this:
{code:java}
-- root (16G)
   |-- a (16G)
   |-- b (16G)
   |-- c (16G)
   |-- large (120G)
{code}
This implies that a queue's max-allocation is inherited from its parent and can be 
overridden at the same time.
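
Concretely, the resulting capacity-scheduler.xml could look like this (a sketch only; the inheritance behaviour is exactly what this proposal would add, and the values are illustrative):
{code:xml}
<!-- Parent-level default, inherited by root.a, root.b and root.c -->
<property>
  <name>yarn.scheduler.capacity.root.maximum-allocation-mb</name>
  <value>16384</value>
</property>

<!-- Explicit override for the queue that needs large containers -->
<property>
  <name>yarn.scheduler.capacity.root.large.maximum-allocation-mb</name>
  <value>122880</value>
</property>
{code}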

Would this work?

Thanks

> Capacity Scheduler: add the default maximum-allocation-mb and 
> maximum-allocation-vcores for the queues
> --
>
> Key: YARN-9116
> URL: https://issues.apache.org/jira/browse/YARN-9116
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.7.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: YARN-9116.1.patch
>
>
> YARN-1582 adds support for a per-queue maximum-allocation-mb configuration, 
> targeting larger-container features on dedicated queues (a larger 
> maximum-allocation-mb/maximum-allocation-vcores for such queues). 
> To achieve a larger container configuration, we need to increase the global 
> maximum-allocation-mb/maximum-allocation-vcores (e.g. 120G/256) and then 
> override those configurations with the desired values on the queues, since a 
> queue configuration can't be larger than the cluster configuration. There are 
> many queues in the system, and if we forget to configure these values when 
> adding a new queue, that queue gets the default 120G/256, which typically is 
> not what we want.  
> We can come up with a queue-default configuration (set to a normal queue 
> configuration like 16G/8) so that leaf queues get these values by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733944#comment-16733944
 ] 

Dongdong Hong commented on YARN-9176:
-

[~tangzhankun] Sure, below is a display of the corrected links in 
QuickStart.md; the links in Examples.md now return the correct response. 
!repair1.jpg!!repair2.jpg!

 

[~sunilg] thanks for the reminder. I'll do better next time.

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, 
> YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, repair1.jpg, 
> repair2.jpg
>
>
> links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404, repair this links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: repair1.jpg

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, 
> YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, repair1.jpg, 
> repair2.jpg
>
>
> links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404, repair this links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: repair2.jpg

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, 
> YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch, repair1.jpg, 
> repair2.jpg
>
>
> links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404, repair this links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733937#comment-16733937
 ] 

Sunil Govindan commented on YARN-9176:
--

Hi [~hongdd]

Please name patches like YARN-9176.0001.patch from next time; you can increment 
0001 to the next integer for each additional patch.

I will test and confirm whether the attached patch is fine. Thanks.

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, 
> YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch
>
>
> links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404, repair this links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733924#comment-16733924
 ] 

Zhankun Tang commented on YARN-9176:


[~hongdd], thanks for raising this. Could you post a screenshot of the corrected 
links after your patch?

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, 
> YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch
>
>
> links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404, repair this links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9176) [Submarine] Repair 404 error of links in documentation

2019-01-04 Thread Dongdong Hong (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongdong Hong updated YARN-9176:

Attachment: YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch

> [Submarine] Repair  404 error of  links in documentation 
> -
>
> Key: YARN-9176
> URL: https://issues.apache.org/jira/browse/YARN-9176
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.0
>Reporter: Dongdong Hong
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: 404 error.jpg, 404.jpg, 
> YARN-9176.-Submarine-Repair-404-error-of-links-in-do.patch
>
>
> links in src/site/markdown/Examples.md and src/site/markdown/QuickStart.md 
> will get 404, repair this links. !404.jpg!!404 error.jpg!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org