[jira] [Commented] (YARN-8080) YARN native service should support component restart policy

2018-03-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417861#comment-16417861
 ] 

Wangda Tan commented on YARN-8080:
--

Attached ver.005 patch, which adds tests to cover the single-component and
multi-component cases.

> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch
>
>
> The existing native service assumes the service is long running and never 
> finishes; containers are restarted even if the exit code == 0.
> To support broader use cases, we need to allow users to specify a restart 
> policy per component. We propose the following policies:
> 1) Always: containers are always restarted by the framework regardless of 
> container exit status. This is the existing/default behavior.
> 2) Never: do not restart a container after it finishes. This supports 
> job-like workloads (for example a Tensorflow training job): if a task exits 
> with code == 0, we should not restart it. This can be used by services that 
> are not restartable/recoverable.
> 3) On-failure: similar to the above, but restart a task only if its exit 
> code != 0.
> Behavior after a component *instance* finalizes (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) Single component, single instance: complete the service.
> 2) Single component, multiple instances: other running instances of the same 
> component are not affected by the finalized instance. The service is 
> terminated once all instances have finalized.
> 3) Multiple components: the service is terminated once all components have 
> finalized.
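To make the proposed semantics concrete, here is a minimal illustrative sketch of the restart decision for a finished container instance. This is not the attached patch; the enum and method names are placeholders.

{code}
// Illustrative sketch only -- not the YARN-8080 patch; names are hypothetical.
enum RestartPolicy { ALWAYS, NEVER, ON_FAILURE }

final class RestartDecision {
  /** Decide whether a finished container instance should be relaunched. */
  static boolean shouldRestart(RestartPolicy policy, int exitCode) {
    switch (policy) {
      case ALWAYS:     return true;           // existing/default behavior
      case NEVER:      return false;          // job-like workloads, never relaunch
      case ON_FAILURE: return exitCode != 0;  // relaunch only after a failure
      default:         throw new IllegalArgumentException("unknown policy " + policy);
    }
  }
}
{code}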






[jira] [Updated] (YARN-8080) YARN native service should support component restart policy

2018-03-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8080:
-
Attachment: (was: YARN-8080.004.patch)

> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch
>
>
> The existing native service assumes the service is long running and never 
> finishes; containers are restarted even if the exit code == 0.
> To support broader use cases, we need to allow users to specify a restart 
> policy per component. We propose the following policies:
> 1) Always: containers are always restarted by the framework regardless of 
> container exit status. This is the existing/default behavior.
> 2) Never: do not restart a container after it finishes. This supports 
> job-like workloads (for example a Tensorflow training job): if a task exits 
> with code == 0, we should not restart it. This can be used by services that 
> are not restartable/recoverable.
> 3) On-failure: similar to the above, but restart a task only if its exit 
> code != 0.
> Behavior after a component *instance* finalizes (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) Single component, single instance: complete the service.
> 2) Single component, multiple instances: other running instances of the same 
> component are not affected by the finalized instance. The service is 
> terminated once all instances have finalized.
> 3) Multiple components: the service is terminated once all components have 
> finalized.






[jira] [Updated] (YARN-8080) YARN native service should support component restart policy

2018-03-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8080:
-
Attachment: YARN-8080.005.patch

> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch
>
>
> The existing native service assumes the service is long running and never 
> finishes; containers are restarted even if the exit code == 0.
> To support broader use cases, we need to allow users to specify a restart 
> policy per component. We propose the following policies:
> 1) Always: containers are always restarted by the framework regardless of 
> container exit status. This is the existing/default behavior.
> 2) Never: do not restart a container after it finishes. This supports 
> job-like workloads (for example a Tensorflow training job): if a task exits 
> with code == 0, we should not restart it. This can be used by services that 
> are not restartable/recoverable.
> 3) On-failure: similar to the above, but restart a task only if its exit 
> code != 0.
> Behavior after a component *instance* finalizes (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) Single component, single instance: complete the service.
> 2) Single component, multiple instances: other running instances of the same 
> component are not affected by the finalized instance. The service is 
> terminated once all instances have finalized.
> 3) Multiple components: the service is terminated once all components have 
> finalized.






[jira] [Updated] (YARN-8080) YARN native service should support component restart policy

2018-03-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8080:
-
Attachment: YARN-8080.004.patch

> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.004.patch
>
>
> The existing native service assumes the service is long running and never 
> finishes; containers are restarted even if the exit code == 0.
> To support broader use cases, we need to allow users to specify a restart 
> policy per component. We propose the following policies:
> 1) Always: containers are always restarted by the framework regardless of 
> container exit status. This is the existing/default behavior.
> 2) Never: do not restart a container after it finishes. This supports 
> job-like workloads (for example a Tensorflow training job): if a task exits 
> with code == 0, we should not restart it. This can be used by services that 
> are not restartable/recoverable.
> 3) On-failure: similar to the above, but restart a task only if its exit 
> code != 0.
> Behavior after a component *instance* finalizes (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) Single component, single instance: complete the service.
> 2) Single component, multiple instances: other running instances of the same 
> component are not affected by the finalized instance. The service is 
> terminated once all instances have finalized.
> 3) Multiple components: the service is terminated once all components have 
> finalized.






[jira] [Resolved] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-7859.
--
Resolution: Won't Do

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Assignee: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Attachments: YARN-7859-v1.patch, YARN-7859-v2.patch, log, 
> screenshot-1.png, screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large, leaving the apps in one or more 
> queues pending. I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly.
> On the basis of the above, I propose this issue...
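For illustration only, a minimal sketch of the proposed deadline check; the class and field names are hypothetical and not taken from the attached patches.

{code}
// Illustrative sketch of the proposal, not the YARN-7859 patch.
import java.util.HashMap;
import java.util.Map;

final class QueueDeadlineTracker {
  private final long deadlineMs;                    // configured scheduling deadline
  private final Map<String, Long> lastScheduledMs = new HashMap<>();

  QueueDeadlineTracker(long deadlineMs) { this.deadlineMs = deadlineMs; }

  void markScheduled(String queue) {
    lastScheduledMs.put(queue, System.currentTimeMillis());
  }

  /** True if the queue has not been scheduled within the deadline and should be forced. */
  boolean isStarved(String queue) {
    long last = lastScheduledMs.getOrDefault(queue, System.currentTimeMillis());
    return System.currentTimeMillis() - last > deadlineMs;
  }
}
{code}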






[jira] [Updated] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7859:
-
 Hadoop Flags:   (was: Reviewed)
Fix Version/s: (was: 3.0.0)

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Assignee: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Attachments: YARN-7859-v1.patch, YARN-7859-v2.patch, log, 
> screenshot-1.png, screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large, leaving the apps in one or more 
> queues pending. I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly.
> On the basis of the above, I propose this issue...






[jira] [Reopened] (YARN-7859) New feature: add queue scheduling deadLine in fairScheduler.

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reopened YARN-7859:
--

> New feature: add queue scheduling deadLine in fairScheduler.
> 
>
> Key: YARN-7859
> URL: https://issues.apache.org/jira/browse/YARN-7859
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: wangwj
>Assignee: wangwj
>Priority: Major
>  Labels: fairscheduler, features, patch
> Attachments: YARN-7859-v1.patch, YARN-7859-v2.patch, log, 
> screenshot-1.png, screenshot-3.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
>  As is well known, in FairScheduler queue scheduling starvation often occurs 
> when the number of cluster jobs is large, leaving the apps in one or more 
> queues pending. I have thought of a way to solve this problem: add a queue 
> scheduling deadline to FairScheduler. When a queue has not been scheduled by 
> FairScheduler within a specified time, we schedule it forcibly.
> On the basis of the above, I propose this issue...






[jira] [Commented] (YARN-8079) YARN native service should respect source file of ConfigFile inside Service/Component spec

2018-03-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417852#comment-16417852
 ] 

Wangda Tan commented on YARN-8079:
--

[~eyang], thanks for the review!

> YARN native service should respect source file of ConfigFile inside 
> Service/Component spec
> --
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} from the 
> componentDir and the fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case that services have files already stored in HDFS 
> which need to be localized when components are launched. (For example, to 
> serve a Tensorflow model, we need to localize the model (typically not huge, 
> less than a GB) to local disk; otherwise the launched docker container has to 
> access HDFS.)
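For illustration, a hedged sketch of the intended lookup: respect srcFile when it is set and only fall back to the current behavior otherwise. It reuses the variables from the snippet above and assumes the {{ConfigFile}} spec object exposes the source file via a getter; it is not the exact patch.

{code}
// Sketch of the intended behavior, not the exact YARN-8079 change.
// Assumes configFile.getSrcFile() returns the user-provided source path (may be null/empty).
Path remoteFile;
String srcFile = configFile.getSrcFile();
if (srcFile != null && !srcFile.isEmpty()) {
  // Respect the user-provided source, e.g. a model already stored in HDFS.
  remoteFile = new Path(srcFile);
} else {
  // Current behavior: component instance dir + dest file name.
  remoteFile = new Path(compInstanceDir, fileName);
}
{code}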






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-03-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417827#comment-16417827
 ] 

Eric Yang commented on YARN-7221:
-

[~billie.rinaldi] Thank you for catching the defects.  Patch 12 contains fixes 
based on your recommendations.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch
>
>
> When a docker container runs with privileges, the majority use case is to 
> have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this combination of parameters, the user will not be able to 
> become root: all docker exec commands are dropped to the uid:gid user instead 
> of being granted privileges.  The user can still gain root privileges if the 
> container file system contains files that grant extra power, but this type of 
> image is considered dangerous.  A non-privileged user can launch a container 
> with special bits to acquire the same level of root power.  Hence, we lose 
> control over which images should be run with --privileged and who has sudo 
> rights to use privileged container images.  As a result, we should check for 
> sudo access and then decide to parameterize either --privileged=true or 
> --user=uid:gid.  This will avoid leading developers down the wrong path.
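A simplified, illustrative sketch of the proposed decision; this is not the actual launch code, which spans the Java layer and container-executor, and the class and method names are placeholders.

{code}
// Illustrative sketch of the proposed flow, not the YARN-7221 patch.
final class PrivilegedDockerCheck {
  /** Pick the docker run security arguments based on the request and the sudo check. */
  static String securityArgs(boolean privilegedRequested, boolean userHasSudo,
      String uid, String gid) {
    if (privilegedRequested) {
      if (!userHasSudo) {
        // Security check: only users with "sudo" access may run privileged containers.
        throw new SecurityException("user is not allowed to run privileged containers");
      }
      // Privileged: omit --user so the program can start as root and drop privileges itself.
      return "--privileged=true";
    }
    // Non-privileged: run as the submitting user.
    return "--user=" + uid + ":" + gid;
  }
}
{code}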






[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-03-28 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Attachment: YARN-7221.012.patch

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch
>
>
> When a docker container runs with privileges, the majority use case is to 
> have some program start as root and then drop privileges to another user, 
> e.g. httpd starting privileged to bind to port 80, then dropping privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid 
> flags.  With this combination of parameters, the user will not be able to 
> become root: all docker exec commands are dropped to the uid:gid user instead 
> of being granted privileges.  The user can still gain root privileges if the 
> container file system contains files that grant extra power, but this type of 
> image is considered dangerous.  A non-privileged user can launch a container 
> with special bits to acquire the same level of root power.  Hence, we lose 
> control over which images should be run with --privileged and who has sudo 
> rights to use privileged container images.  As a result, we should check for 
> sudo access and then decide to parameterize either --privileged=true or 
> --user=uid:gid.  This will avoid leading developers down the wrong path.






[jira] [Commented] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-03-28 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417819#comment-16417819
 ] 

Xuan Gong commented on YARN-1151:
-

[~rkanter]   Could you review the latest patch, please?

> Ability to configure auxiliary services from HDFS-based JAR files
> -
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Attachments: YARN-1151.1.patch, YARN-1151.2.patch, 
> YARN-1151.branch-2.poc.2.patch, YARN-1151.branch-2.poc.3.patch, 
> YARN-1151.branch-2.poc.patch, [YARN-1151] [Design] Configure auxiliary 
> services from HDFS-based JAR files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.






[jira] [Updated] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-03-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1151:

Attachment: YARN-1151.2.patch

> Ability to configure auxiliary services from HDFS-based JAR files
> -
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Attachments: YARN-1151.1.patch, YARN-1151.2.patch, 
> YARN-1151.branch-2.poc.2.patch, YARN-1151.branch-2.poc.3.patch, 
> YARN-1151.branch-2.poc.patch, [YARN-1151] [Design] Configure auxiliary 
> services from HDFS-based JAR files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.






[jira] [Commented] (YARN-8079) YARN native service should respect source file of ConfigFile inside Service/Component spec

2018-03-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417816#comment-16417816
 ] 

Eric Yang commented on YARN-8079:
-

[~leftnoteasy] Accessing a remote HDFS requires a username + password for the 
remote cluster, and the cluster must have a way to contact the remote cluster's 
KDC server to verify the user.  I don't think Hadoop supports 
hdfs://user:pass@cluster:port/path.  I think remoteFile threw me off into 
thinking it accesses an HDFS other than the current cluster.  Sorry for the 
confusion.  For S3, s3://ID:SECRET@BUCKET/ may work.  +1 for patch 3.

> YARN native service should respect source file of ConfigFile inside 
> Service/Component spec
> --
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} from the 
> componentDir and the fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case that services have files already stored in HDFS 
> which need to be localized when components are launched. (For example, to 
> serve a Tensorflow model, we need to localize the model (typically not huge, 
> less than a GB) to local disk; otherwise the launched docker container has to 
> access HDFS.)






[jira] [Commented] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-03-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417773#comment-16417773
 ] 

Rohith Sharma K S commented on YARN-7946:
-

Similar changes are required in the Building.txt file as well, i.e. the 2nd 
sentence of the 1st paragraph. 

> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7946.00.patch, YARN-7946.01.patch
>
>
> Post YARN-7919, the document needs to be updated for the coprocessor jar 
> name and other related details, if any.






[jira] [Commented] (YARN-8010) Add config in FederationRMFailoverProxy to not bypass facade cache when failing over

2018-03-28 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417768#comment-16417768
 ] 

Botong Huang commented on YARN-8010:


Thanks [~subru] and [~giovanni.fumarola]!

> Add config in FederationRMFailoverProxy to not bypass facade cache when 
> failing over
> 
>
> Key: YARN-8010
> URL: https://issues.apache.org/jira/browse/YARN-8010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.10.0, 2.9.1, 3.1.1
>
> Attachments: YARN-8010.v1.patch, YARN-8010.v1.patch, 
> YARN-8010.v2.patch, YARN-8010.v3.patch
>
>
> Today when the YarnRM is failing over, the FederationRMFailoverProxy running 
> in AMRMProxy will perform failover, try to get the latest subcluster info 
> from FederationStateStore, and then retry connecting to the latest YarnRM 
> master. When calling getSubCluster() on FederationStateStoreFacade, it 
> bypasses the cache with a flush flag. When the YarnRM is failing over, every 
> AM heartbeat thread creates a different thread inside FederationInterceptor, 
> each of which keeps performing failover several times. This leads to a big 
> spike of getSubCluster calls to FederationStateStore. 
> Depending on the cluster setup (e.g. putting a VIP before all YarnRMs), a 
> YarnRM master-slave change might not result in an RM address change. In other 
> cases, a small delay in getting the latest subcluster information may be 
> acceptable. This patch thus creates a config option, so that it is possible 
> to ask the FederationRMFailoverProxy not to flush the cache when calling 
> getSubCluster(). 
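A minimal sketch of the flush-bypass idea; this is illustrative only, the actual config key and facade API in the patch may differ, and the names below are placeholders.

{code}
// Illustrative cache wrapper, not the YARN-8010 implementation.
import java.util.function.Supplier;

final class SubClusterInfoCache {
  private volatile String cachedRmAddress;   // last known YarnRM address
  private final boolean flushOnFailover;     // what the proposed config option controls

  SubClusterInfoCache(boolean flushOnFailover) { this.flushOnFailover = flushOnFailover; }

  String getRmAddress(Supplier<String> stateStoreLookup) {
    if (cachedRmAddress == null || flushOnFailover) {
      // Either the first lookup, or the caller asked to bypass the cache on failover.
      cachedRmAddress = stateStoreLookup.get();   // hits FederationStateStore
    }
    return cachedRmAddress;                       // otherwise tolerate slightly stale data
  }
}
{code}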






[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-03-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417766#comment-16417766
 ] 

Rohith Sharma K S commented on YARN-6936:
-

bq. Let's add the scope of the entities to each of the four methods
OK, is this modified sentence fine? {{Send the information of a number of 
conceptual entities in the scope of a YARN application to the timeline service 
v.2 collector.}} Do all 4 APIs need to be modified in the same way? For the 
newer API, it should also be outside the scope of an application, right?

bq.  Is it intended to extend updateAggregateStatus() so that sub application 
metrics are rolled up?
I vaguely remember we discussed this in the weekly call and decided to 
aggregate for both APIs, because the newer APIs write into both tables, i.e. 
the entity and subapp tables. So the aggregated metrics are also available in 
the application scope.

bq. The TimelineCollectorContext is bound to an application attempt. Adding a 
subApplicationWrite flag to TimelineCollectorContext may not be the most 
intuitive approach. How about we leave subApplicationWrite as a separate flag 
instead?
I would be inclined to send the required information in a record rather than 
in a parameter; this avoids compatibility issues in the future. Maybe let's 
define a new record that contains the context, the ugi and the subappwrite 
flag.  cc: [~vrushalic]



> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if 
> the doAs user and the submitting user are different. This holds good for 
> Tez-like use cases. But frameworks whose AM runs as the submitting user, like 
> MR, also need to store entities in the sub application table so that they can 
> read entities without an application id. 
> This will be a point of concern at later stages when ATSv2 is deployed into 
> production. This JIRA is to revisit the decision of storing entities into the 
> sub application table, making it driven by client-side configuration rather 
> than by user. 
>  






[jira] [Resolved] (YARN-3988) DockerContainerExecutor should allow user specify "docker run" parameters

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-3988.
--
Resolution: Won't Fix

Closing this as DockerContainerExecutor has been deprecated in branch-2 and 
removed in trunk

> DockerContainerExecutor should allow user specify "docker run" parameters
> -
>
> Key: YARN-3988
> URL: https://issues.apache.org/jira/browse/YARN-3988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Chen He
>Assignee: Chen He
>Priority: Major
>
> In the current DockerContainerExecutor, the "docker run" command has fixed 
> parameters:
> String commandStr = commands.append(dockerExecutor)
>   .append(" ")
>   .append("run")
>   .append(" ")
>   .append("--rm --net=host")
>   .append(" ")
>   .append(" --name " + containerIdStr)
>   .append(localDirMount)
>   .append(logDirMount)
>   .append(containerWorkDirMount)
>   .append(" ")
>   .append(containerImageName)
>   .toString();
> For example, it is not flexible when users want to start a docker container 
> with extra volume(s) attached or other "docker run" parameters. 
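For illustration, a sketch of the kind of flexibility being asked for: appending user-supplied "docker run" arguments taken from a configuration property. This is hypothetical and was never implemented, since DockerContainerExecutor was later removed; the variable values below are examples only.

{code}
// Hypothetical sketch; DockerContainerExecutor was removed, so this was never implemented.
String dockerExecutor = "/usr/bin/docker";
String containerIdStr = "container_1234567890123_0001_01_000002";
String localDirMount = " -v /yarn/local:/yarn/local";
String logDirMount = " -v /yarn/logs:/yarn/logs";
String containerWorkDirMount = " -v /yarn/local/work:/yarn/local/work";
String containerImageName = "library/ubuntu:16.04";
// The requested flexibility: extra args read from a user-facing config property.
String extraDockerRunArgs = "-v /data/models:/models:ro --memory=2g";

String commandStr = new StringBuilder(dockerExecutor)
    .append(" run --rm --net=host")
    .append(" --name ").append(containerIdStr)
    .append(localDirMount)
    .append(logDirMount)
    .append(containerWorkDirMount)
    .append(" ").append(extraDockerRunArgs)
    .append(" ").append(containerImageName)
    .toString();
{code}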






[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-03-28 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417752#comment-16417752
 ] 

Bibin A Chundatt commented on YARN-7905:


Uploaded the patch again to trigger Jenkins. I had missed committing this patch.

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch, 
> YARN-7905-003.patch, YARN-7905-004.patch, YARN-7905-005.patch, 
> YARN-7905-006.patch, YARN-7905-007.patch, YARN-7905-008.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory when the umask is 027 during node manager start up.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure. 
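For illustration, a small sketch of the fix idea: create the public filecache parent directories with an explicit permission instead of inheriting the process umask. The path and the 0755 choice are assumptions for the example, not the exact patch.

{code}
// Illustrative sketch of the fix idea, not the exact YARN-7905 patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PublicCacheDirExample {
  public static void main(String[] args) throws Exception {
    FileSystem lfs = FileSystem.getLocal(new Configuration());
    Path parent = new Path("/tmp/nm-local-dir/filecache/0");    // example path
    lfs.mkdirs(parent);                         // under umask 027 this may end up as 750
    lfs.setPermission(parent, new FsPermission((short) 0755));  // fix the parent explicitly
  }
}
{code}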






[jira] [Updated] (YARN-7905) Parent directory permission incorrect during public localization

2018-03-28 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7905:
---
Attachment: YARN-7905-008.patch

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch, 
> YARN-7905-003.patch, YARN-7905-004.patch, YARN-7905-005.patch, 
> YARN-7905-006.patch, YARN-7905-007.patch, YARN-7905-008.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory when the umask is 027 during node manager start up.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure. 






[jira] [Resolved] (YARN-2478) Nested containers should be supported

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-2478.
--
Resolution: Won't Fix

Closing this as DockerContainerExecutor has been deprecated in branch-2 and 
removed in trunk

> Nested containers should be supported
> -
>
> Key: YARN-2478
> URL: https://issues.apache.org/jira/browse/YARN-2478
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abin Shahab
>Priority: Major
>
> Currently DockerContainerExecutor only supports one level of containers. 
> However, YARN's responsibility is to handle resource isolation, and nested 
> containers would allow YARN to delegate handling software isolation to the 
> jobs.






[jira] [Comment Edited] (YARN-8079) YARN native service should respect source file of ConfigFile inside Service/Component spec

2018-03-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417747#comment-16417747
 ] 

Wangda Tan edited comment on YARN-8079 at 3/28/18 4:58 PM:
---

Thanks [~gsaha], 

Are there any additional suggestions on the patch, or are we good to go?

cc: [~billie.rinaldi]/[~eyang]


was (Author: leftnoteasy):
Thanks [~gsaha], 

Are there any additional suggestions on the patch, or are we good to go?

> YARN native service should respect source file of ConfigFile inside 
> Service/Component spec
> --
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} from the 
> componentDir and the fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case that services have files already stored in HDFS 
> which need to be localized when components are launched. (For example, to 
> serve a Tensorflow model, we need to localize the model (typically not huge, 
> less than a GB) to local disk; otherwise the launched docker container has to 
> access HDFS.)






[jira] [Commented] (YARN-8079) YARN native service should respect source file of ConfigFile inside Service/Component spec

2018-03-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417747#comment-16417747
 ] 

Wangda Tan commented on YARN-8079:
--

Thanks [~gsaha], 

Are there any additional suggestions on the patch, or are we good to go?

> YARN native service should respect source file of ConfigFile inside 
> Service/Component spec
> --
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} from the 
> componentDir and the fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case that services have files already stored in HDFS 
> which need to be localized when components are launched. (For example, to 
> serve a Tensorflow model, we need to localize the model (typically not huge, 
> less than a GB) to local disk; otherwise the launched docker container has to 
> access HDFS.)






[jira] [Resolved] (YARN-2482) DockerContainerExecutor configuration

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-2482.
--
Resolution: Won't Fix

Closing this as DockerContainerExecutor has been deprecated in branch-2 and 
removed in trunk

> DockerContainerExecutor configuration
> -
>
> Key: YARN-2482
> URL: https://issues.apache.org/jira/browse/YARN-2482
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abin Shahab
>Priority: Major
>  Labels: security
>
> Currently DockerContainerExecutor can be configured from yarn-site.xml, and 
> users can add arbitrary arguments to the container launch command. This should 
> be fixed so that the cluster and other jobs are protected from malicious 
> string injections.






[jira] [Resolved] (YARN-2479) DockerContainerExecutor must support handling of distributed cache

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-2479.
--
Resolution: Won't Fix

Closing this as DockerContainerExecutor has been deprecated in branch-2 and 
removed in trunk

> DockerContainerExecutor must support handling of distributed cache
> --
>
> Key: YARN-2479
> URL: https://issues.apache.org/jira/browse/YARN-2479
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abin Shahab
>Priority: Major
>  Labels: security
>
> Interaction between Docker containers and distributed cache has not yet been 
> worked out. There should be a way to securely access distributed cache 
> without compromising the isolation Docker provides.






[jira] [Resolved] (YARN-2477) DockerContainerExecutor must support secure mode

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-2477.
--
Resolution: Won't Fix

Closing this as DockerContainerExecutor has been deprecated in branch-2 and 
removed in trunk.

> DockerContainerExecutor must support secure mode
> 
>
> Key: YARN-2477
> URL: https://issues.apache.org/jira/browse/YARN-2477
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Abin Shahab
>Priority: Major
>  Labels: security
>
> DockerContainerExecutor (patch in YARN-1964) does not support Kerberized 
> hadoop clusters yet, as a Kerberized hadoop cluster has a strict dependency on 
> the LinuxContainerExecutor. 
> For Docker containers to be used in a production environment, they must 
> support secure hadoop. Issues regarding Java's AES encryption library in a 
> containerized environment also need to be worked out.






[jira] [Commented] (YARN-8077) The vmemLimit parameter in ContainersMonitorImpl#isProcessTreeOverLimit is confusing

2018-03-28 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417735#comment-16417735
 ] 

Miklos Szegedi commented on YARN-8077:
--

The Jenkins failure seems to be unrelated (protoc). Let me look into this.

> The vmemLimit parameter in ContainersMonitorImpl#isProcessTreeOverLimit is 
> confusing
> 
>
> Key: YARN-8077
> URL: https://issues.apache.org/jira/browse/YARN-8077
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Sen Zhao
>Assignee: Sen Zhao
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: YARN-8077.001.patch
>
>
> The parameter should be named memLimit; it covers the meaning of both 
> vmemLimit and pmemLimit.






[jira] [Resolved] (YARN-2480) DockerContainerExecutor must support user namespaces

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-2480.
--
Resolution: Won't Fix

Closing this as DockerContainerExecutor has been deprecated in branch-2 and 
removed in trunk

> DockerContainerExecutor must support user namespaces
> 
>
> Key: YARN-2480
> URL: https://issues.apache.org/jira/browse/YARN-2480
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Abin Shahab
>Priority: Major
>  Labels: security
>
> When DockerContainerExecutor launches a container, the root inside that 
> container has root privileges on the host. 
> This is insecure in a multi-tenant environment. The uid of the container's 
> root user must be mapped to a non-privileged user on the host.






[jira] [Commented] (YARN-8077) The vmemLimit parameter in ContainersMonitorImpl#isProcessTreeOverLimit is confusing

2018-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417726#comment-16417726
 ] 

Hudson commented on YARN-8077:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13892 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13892/])
YARN-8077. The vmemLimit parameter in (szegedim: rev 
cdee0a4f840868d8b8acac15e62da2ab337618c7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java


> The vmemLimit parameter in ContainersMonitorImpl#isProcessTreeOverLimit is 
> confusing
> 
>
> Key: YARN-8077
> URL: https://issues.apache.org/jira/browse/YARN-8077
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Sen Zhao
>Assignee: Sen Zhao
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: YARN-8077.001.patch
>
>
> The parameter should be named memLimit; it covers the meaning of both 
> vmemLimit and pmemLimit.






[jira] [Issue Comment Deleted] (YARN-8077) The vmemLimit parameter in ContainersMonitorImpl#isProcessTreeOverLimit is confusing

2018-03-28 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-8077:
-
Comment: was deleted

(was: Committed to trunk.)

> The vmemLimit parameter in ContainersMonitorImpl#isProcessTreeOverLimit is 
> confusing
> 
>
> Key: YARN-8077
> URL: https://issues.apache.org/jira/browse/YARN-8077
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Sen Zhao
>Assignee: Sen Zhao
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: YARN-8077.001.patch
>
>
> The parameter should be named memLimit; it covers the meaning of both 
> vmemLimit and pmemLimit.






[jira] [Commented] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417691#comment-16417691
 ] 

Naganarasimha G R commented on YARN-7988:
-

Newer approach LGTM,

+1

> Refactor FSNodeLabelStore code for attributes store support
> ---
>
> Key: YARN-7988
> URL: https://issues.apache.org/jira/browse/YARN-7988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7988-YARN-3409.002.patch, 
> YARN-7988-YARN-3409.003.patch, YARN-7988-YARN-3409.004.patch, 
> YARN-7988-YARN-3409.005.patch, YARN-7988-YARN-3409.006.patch, 
> YARN-7988-YARN-3409.007.patch, YARN-7988.001.patch
>
>
> # Abstract out file FileSystemStore operation
> # Define EditLog Operartions  and Mirror operation
> # Support compatibility with old nodelabel store






[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-03-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417625#comment-16417625
 ] 

Eric Yang commented on YARN-7935:
-

[~shaneku...@gmail.com] {quote}
Eric Yang this isn't true for overlay networks. You can't assume Registry DNS 
will be in use and it won't be used by some of these network types without 
additional modifications to Hadoop (--dns for docker run).
{quote}

The information is almost straight out of [Docker embedded 
DNS|https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/]. 
I can be over-concerned when developers get into a feature without having 
learned the basics.  I am not blocking the user-defined network feature from 
being implemented; as a matter of fact, I welcome support for user-defined 
networks.  However, calling out Spark as requiring a user-defined network may 
not be the right message, because I know it does not depend on one.  I have 
previously implemented Spark 2.1 on docker without encountering the 
"limitation" of having to rely on the embedded DNS.  This is why the issue 
caught my eye: I wanted to understand the technical details and the motivation 
behind the attempts.  Now that I understand the motivation, the code review 
will go much quicker.

Back to the code review: the host can contain multiple network cards, so using 
InetAddress.getLocalHost().getHostName() may not always produce the desired 
result.  It would be best to look up the Hadoop configuration to determine 
whether a specific hostname is configured (yarn.nodemanager.address) and do an 
address lookup if it is not 0.0.0.0; if the property doesn't exist, then fall 
back to InetAddress.getLocalHost().getHostName().
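For illustration, a minimal sketch of the lookup order suggested above; it is not the attached patch, and the class and method names are placeholders.

{code}
// Illustrative sketch of the suggested lookup order, not the YARN-7935 patch.
import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;

public class NmHostnameLookupExample {
  static String resolveHostname(Configuration conf) throws Exception {
    // Prefer an explicitly configured NodeManager address...
    String nmAddress = conf.getTrimmed("yarn.nodemanager.address", "");
    if (!nmAddress.isEmpty()) {
      String host = nmAddress.split(":")[0];
      if (!host.isEmpty() && !"0.0.0.0".equals(host)) {
        return InetAddress.getByName(host).getCanonicalHostName();
      }
    }
    // ...otherwise fall back to the local host name.
    return InetAddress.getLocalHost().getHostName();
  }
}
{code}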

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch, YARN-7935.3.patch
>
>
> Some applications have a need to bind to the container's hostname (like 
> Spark), which is different from the NodeManager's hostname (NM_HOST, which is 
> available as an env var during container launch) when launched through the 
> Docker runtime. The container's hostname can be exposed to applications via an 
> env var CONTAINER_HOSTNAME. Another potential candidate is the container's IP, 
> but this can be addressed in a separate jira.






[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-03-28 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417590#comment-16417590
 ] 

Sunil G commented on YARN-7494:
---

Thanks [~cheersyang] and [~leftnoteasy]

bq. Maintaining a separate cache will require updating it, which does not seem 
necessary to me.

This will help us in the future as well. We depend on labelManager a lot, and 
it is always used to get the nodes per partition. Such a cache will help for 
metrics and for other APIs, which we can do in a clean-up jira.

bq.You'll get a lot of UT failures in FIFO path. A way to fix this is to do 
this in {{CapacityScheduler#addApplicationAttempt}}, such as

Makes sense. I'll update it in the next patch.

bq.Suggest to load it from configuration file, such as

This would force such a config to exist only at the CS level. I don't see any 
problem, but it adds a complex per-policy config. I thought of making this even 
simpler: we now configure the policy by setting 
{{yarn.scheduler.capacity.multi-node-sorting.policy}} to a value like 
"resource-based" or "orgcustomPolicy.class". Such a fully qualified name 
would make your proposal more complex. Hence we could opt for 
[name=resource-based, timeout=1200] or something similar in nature. Thoughts?

bq.CS#getCandidateNodeSet, the get_node_nodes_from_give_partition operation is 
very expensive

Yes. I thought I used that in the CS api as well; in fact I used it only in 
MultiNodeManager. I'll change that in the next patch. Thank you.

 

I will wait for the config comment from [~cheersyang] before uploading the next patch.

 

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.002.patch, 
> YARN-7494.003.patch, YARN-7494.004.patch, YARN-7494.005.patch, 
> YARN-7494.v0.patch, YARN-7494.v1.patch, multi-node-designProposal.png
>
>
> Instead of a single node, for effectiveness we can consider a multi-node 
> lookup based on partition to start with.






[jira] [Commented] (YARN-8013) Support APP-TAG namespace for allocation tags

2018-03-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417579#comment-16417579
 ] 

Wangda Tan commented on YARN-8013:
--

Thanks [~cheersyang], sounds good. I will wait 2 more days to see whether 
[~kkaranasos]/[~asuresh] have any other suggestions and an answer about:
bq. 3) Not related to the JIRA, but I'm not sure why LocalAllocationTagsManager 
extends AllocationTagsManager but still has a ref to AllocationTagsManager.

> Support APP-TAG namespace for allocation tags
> -
>
> Key: YARN-8013
> URL: https://issues.apache.org/jira/browse/YARN-8013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8013.001.patch, YARN-8013.002.patch, 
> YARN-8013.003.patch, YARN-8013.004.patch
>
>
> YARN-1461 adds the *Application Tag* concept to Yarn applications; a user is 
> able to annotate an application with multiple tags to classify apps. We can 
> leverage this to represent a namespace for a certain group of apps. So instead 
> of calling it *app-label*, we propose to call it *app-tag*.
> A typical use case is:
> There are a lot of TF jobs running on Yarn, and some of them consume 
> resources heavily. We want to limit the number of PS on each node for such BIG 
> players but ignore the SMALL ones. To achieve this, we can do the following 
> steps:
>  # Add the application tag "big-tf" to these big TF jobs
>  # For each PS request, add a "ps" source tag and map it to the constraint 
> "{color:#d04437}notin, node, tensorflow/ps{color}" or 
> "{color:#d04437}cardinality, node, tensorflow/ps{color}{color:#d04437}, 0, 
> 2{color}" for finer-grained control.






[jira] [Updated] (YARN-146) Add unit tests for computing fair share in the fair scheduler

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-146:

Issue Type: Test  (was: New Feature)

> Add unit tests for computing fair share in the fair scheduler
> -
>
> Key: YARN-146
> URL: https://issues.apache.org/jira/browse/YARN-146
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 2.0.2-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Major
> Fix For: 2.0.3-alpha
>
> Attachments: YARN-146-1.patch, YARN-146.patch
>
>
> MR1 had TestComputeFairShares.  This should go into the YARN fair scheduler.






[jira] [Commented] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417533#comment-16417533
 ] 

genericqa commented on YARN-7946:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 25m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
61m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 28m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7946 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916609/YARN-7946.01.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux ff672173a20a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 411993f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 343 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20121/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7946.00.patch, YARN-7946.01.patch
>
>
> Post YARN-7919, the documentation needs to be updated for the coprocessor jar name 
> and other related details, if any.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-03-28 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417528#comment-16417528
 ] 

Billie Rinaldi commented on YARN-7221:
--

It looks like set_privileged is missing free(user). Also, will ngroups always 
be set when getgrouplist returns -1? I was wondering if we should check the 
return value before entering the loop for (int j = 0; j < ngroups; j++).

When applying the patch to trunk, I got a conflict on TestContainerLogsPage, but I'm 
not sure what is going on, since that file hasn't been modified since the last 
precommit build.

I don't see any other issues at the moment. I'm going to try running and 
testing the patch locally and will get back to you with the results.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch
>
>
> When a Docker container runs with privileges, the majority of use cases involve 
> some program starting as root and then dropping privileges to another user, e.g. 
> httpd starting privileged to bind to port 80, then dropping privileges to the 
> www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access before running a privileged container.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with both the --privileged=true and --user=uid:gid flags.  With 
> this parameter combination, the user will not be able to become the root user.  
> All docker exec commands will drop to the uid:gid user instead of 
> being granted privileges.  A user can gain root privileges if the container file 
> system contains files that grant extra power, but this type of image is 
> considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control of 
> which images should be run with --privileged, and who has sudo rights to use 
> privileged container images.  As a result, we should check for sudo access and 
> then decide whether to parameterize --privileged=true OR --user=uid:gid.  This will 
> avoid leading developers down the wrong path.
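
For context, a hedged sketch of the Java-side gate that already exists in the Docker 
runtime; the sudo check proposed in this JIRA lives in the native container-executor and 
is not shown, and the property/class names below reflect my understanding rather than 
the patch itself:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PrivilegedAclCheck {
  // Returns true only if the submitting user is allowed by the privileged-containers ACL.
  static boolean canRunPrivileged(Configuration conf, String submittingUser) {
    AccessControlList acl = new AccessControlList(
        conf.get(YarnConfiguration.NM_DOCKER_PRIVILEGED_CONTAINERS_ACL,
                 YarnConfiguration.DEFAULT_NM_DOCKER_PRIVILEGED_CONTAINERS_ACL));
    return acl.isUserAllowed(UserGroupInformation.createRemoteUser(submittingUser));
  }
}
{code}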



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8071) Provide Spark-like API for setting Environment Variables to enable vars with commas

2018-03-28 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417505#comment-16417505
 ] 

Jim Brennan commented on YARN-8071:
---

@jlowe, I've filed [MAPREDUCE-7069] for addressing the mapreduce properties.  I 
will use this one to address the yarn properties.


> Provide Spark-like API for setting Environment Variables to enable vars with 
> commas
> ---
>
> Key: YARN-8071
> URL: https://issues.apache.org/jira/browse/YARN-8071
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
>
> YARN-6830 describes a problem where environment variables that contain commas 
> cannot be specified via {{-Dmapreduce.map.env}}.
> For example:
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> will set {{MOUNTS}} to {{/tmp/foo}}
> In that Jira, [~aw] suggested that we change the API to provide a way to 
> specify environment variables individually, the same way that Spark does.
> {quote}Rather than fight with a regex why not redefine the API instead?
>  
> -Dmapreduce.map.env.MODE=bar
>  -Dmapreduce.map.env.IMAGE_NAME=foo
>  -Dmapreduce.map.env.MOUNTS=/tmp/foo,/tmp/bar
> ...
> e.g, mapreduce.map.env.[foo]=bar gets turned into foo=bar
> This greatly simplifies the input validation needed and makes it clear what 
> is actually being defined.
> {quote}
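
A minimal sketch of what collecting such per-variable properties could look like, 
assuming the prefix-style keys quoted above; this is a hypothetical helper, not the 
actual MapReduce/YARN change:
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class EnvPrefixParser {
  // Collects every property of the form <prefix>.NAME=value into a NAME -> value map.
  // Values may freely contain commas, since no splitting on ',' is involved.
  static Map<String, String> parseEnv(Configuration conf, String prefix) {
    return new HashMap<>(conf.getPropsWithPrefix(prefix + "."));
  }
}
{code}
With this approach, {{-Dmapreduce.map.env.MOUNTS=/tmp/foo,/tmp/bar}} would yield 
{{MOUNTS=/tmp/foo,/tmp/bar}} with no extra escaping.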



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417503#comment-16417503
 ] 

Sunil G commented on YARN-7988:
---

Pending Jenkins.

+1 on the latest patch. 

> Refactor FSNodeLabelStore code for attributes store support
> ---
>
> Key: YARN-7988
> URL: https://issues.apache.org/jira/browse/YARN-7988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7988-YARN-3409.002.patch, 
> YARN-7988-YARN-3409.003.patch, YARN-7988-YARN-3409.004.patch, 
> YARN-7988-YARN-3409.005.patch, YARN-7988-YARN-3409.006.patch, 
> YARN-7988-YARN-3409.007.patch, YARN-7988.001.patch
>
>
> # Abstract out the FileSystemStore file operations
> # Define EditLog operations and mirror operations
> # Support compatibility with the old node label store



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8083) RM/UI2: all configurations are paged together

2018-03-28 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created YARN-8083:
--

 Summary: RM/UI2: all configurations are paged together
 Key: YARN-8083
 URL: https://issues.apache.org/jira/browse/YARN-8083
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Zoltan Haindrich
 Attachments: conf_browse.png

There are 3 configs displayed on the same page; however, all of the viewer 
components respond to all page controllers...

http://172.22.78.179:8088/ui2/#/yarn-tools/yarn-conf





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8083) RM/UI2: all configurations are paged together

2018-03-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-8083:
---
Attachment: conf_browse.png

> RM/UI2: all configurations are paged together
> -
>
> Key: YARN-8083
> URL: https://issues.apache.org/jira/browse/YARN-8083
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: conf_browse.png
>
>
> There are 3 configs displayed on the same page; however, all of the viewer 
> components respond to all page controllers...
> http://172.22.78.179:8088/ui2/#/yarn-tools/yarn-conf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8067) RM/UI2: queues views ; unintended scrollbars

2018-03-28 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-8067:
---
Component/s: yarn-ui-v2

> RM/UI2: queues views ; unintended scrollbars
> 
>
> Key: YARN-8067
> URL: https://issues.apache.org/jira/browse/YARN-8067
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: screenshot.png
>
>
> I see horizontal/vertical scrollbars; they don't seem to be useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-03-28 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417430#comment-16417430
 ] 

Haibo Chen commented on YARN-6936:
--

Thanks [~rohithsharma] for the new patch. I have a few comments/questions:

1) The javadoc of the two new methods in TimelineV2Client is the same as that 
of the existing putEntities() and putEntitiesAsync() methods. Let's add 
the scope of the entities to each of the four methods. That is, for 
putEntities() and putEntitiesAsync(), say 'conceptual entities in the scope of 
a YARN application' rather than 'conceptual entities'. Similarly for 
putSubAppEntities() and putSubAppEntitiesAsync() (see the sketch after this 
list).

2) In TimelineCollector, we'd call updateAggregateStatus() for each entity, 
regardless of whether it is a subApp entity or not. IIRC, updateAggregateStatus() 
is for application-level metrics aggregation. Is it intended to extend 
updateAggregateStatus() so that sub-application metrics are rolled up as well?

3) The TimelineCollectorContext is bound to an application attempt. Adding a 
subApplicationWrite flag to TimelineCollectorContext may not be the most 
intuitive approach. How about we keep subApplicationWrite as a separate flag 
instead?
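
A hedged illustration of comment 1): the method names come from the patch under review, 
but the javadoc wording is only my suggestion, and the signatures below are a simplified 
sketch rather than the real class:
{code:java}
import java.io.IOException;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.exceptions.YarnException;

public abstract class TimelineV2ClientSketch {
  /** Publishes conceptual entities in the scope of a YARN application. */
  public abstract void putEntities(TimelineEntity... entities)
      throws IOException, YarnException;

  /**
   * Publishes conceptual entities in the scope of a sub-application, so that
   * they can later be read back without an application id.
   */
  public abstract void putSubAppEntities(TimelineEntity... entities)
      throws IOException, YarnException;
}
{code}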

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch
>
>
> Currently YARN-6734 stores entities into the sub-application table only if the doAs 
> user and the submitting user are different. This holds good for Tez-like use 
> cases. But frameworks whose AM runs as the submitting user, like MR, also need to 
> store entities in the sub-application table so that they can be read without an 
> application id. 
> This would be a point of concern at later stages, when ATSv2 is deployed into 
> production. This JIRA is to retrospect the decision of storing entities into the 
> sub-application table, driving it by client-side configuration rather than by 
> user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417365#comment-16417365
 ] 

genericqa commented on YARN-7988:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  7m 
12s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916611/YARN-7988-YARN-3409.007.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20122/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor FSNodeLabelStore code for attributes store support
> ---
>
> Key: YARN-7988
> URL: https://issues.apache.org/jira/browse/YARN-7988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7988-YARN-3409.002.patch, 
> YARN-7988-YARN-3409.003.patch, YARN-7988-YARN-3409.004.patch, 
> YARN-7988-YARN-3409.005.patch, YARN-7988-YARN-3409.006.patch, 
> YARN-7988-YARN-3409.007.patch, YARN-7988.001.patch
>
>
> # Abstract out the FileSystemStore file operations
> # Define EditLog operations and mirror operations
> # Support compatibility with the old node label store



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417350#comment-16417350
 ] 

genericqa commented on YARN-7988:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
10s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
55s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
48s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 86 
unchanged - 1 fixed = 86 total (was 87) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 20 new + 63 unchanged - 22 fixed = 83 total (was 85) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 2 new + 4183 unchanged - 0 fixed = 4185 total (was 4183) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
43s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916597/YARN-7988-YARN-3409.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c0adf6c7e136 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/p

[jira] [Updated] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7988:
---
Attachment: YARN-7988-YARN-3409.007.patch

> Refactor FSNodeLabelStore code for attributes store support
> ---
>
> Key: YARN-7988
> URL: https://issues.apache.org/jira/browse/YARN-7988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7988-YARN-3409.002.patch, 
> YARN-7988-YARN-3409.003.patch, YARN-7988-YARN-3409.004.patch, 
> YARN-7988-YARN-3409.005.patch, YARN-7988-YARN-3409.006.patch, 
> YARN-7988-YARN-3409.007.patch, YARN-7988.001.patch
>
>
> # Abstract out the FileSystemStore file operations
> # Define EditLog operations and mirror operations
> # Support compatibility with the old node label store



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-03-28 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7946:
-
Attachment: YARN-7946.01.patch

> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7946.00.patch, YARN-7946.01.patch
>
>
> Post YARN-7919, the documentation needs to be updated for the coprocessor jar name 
> and other related details, if any.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-03-28 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417319#comment-16417319
 ] 

Haibo Chen commented on YARN-7946:
--

Let me make that change in a new patch.

> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7946.00.patch
>
>
> Post YARN-7919, the documentation needs to be updated for the coprocessor jar name 
> and other related details, if any.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8048) Support auto-spawning of admin configured services during bootstrap of rm/apiserver

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417304#comment-16417304
 ] 

genericqa commented on YARN-8048:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 272 unchanged - 0 fixed = 275 total (was 272) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
44s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
24s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Ima

[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417247#comment-16417247
 ] 

genericqa commented on YARN-6257:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 62 unchanged - 5 fixed = 64 total (was 67) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
13s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-6257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916577/YARN-6257.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checksty

[jira] [Commented] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-03-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417208#comment-16417208
 ] 

Rohith Sharma K S commented on YARN-7946:
-

Overall looks good. Does the change below make sense? The two bullet points that 
follow already explain each version.
{code:java}
The version of Apache HBase that is supported with Timeline Service v.2 is 
1.2.6 (default) and 2.0.0-beta1.
{code}
to
{code:java}
The supported versions of Apache HBase are 1.2.6 (default) and 2.0.0-beta1.
{code}

> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7946.00.patch
>
>
> Post YARN-7919, the documentation needs to be updated for the coprocessor jar name 
> and other related details, if any.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417197#comment-16417197
 ] 

Bibin A Chundatt commented on YARN-7988:


[~sunilg]
Attaching a patch after handling the review comments.

Basic testing done from 2.8.3 to the current branch:
*2.8.3*
{noformat}
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-addToClusterNodeLabels bibin
18/03/28 15:22:13 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-replaceLabelsOnNode xxx,bibin
18/03/28 15:22:32 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-replaceLabelsOnNode xxy,bibin
18/03/28 15:22:40 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-replaceLabelsOnNode xxz,bibin
18/03/28 15:22:49 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-replaceLabelsOnNode xxy,
18/03/28 15:23:08 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-addToClusterNodeLabels xxy
18/03/28 15:23:39 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
root@bibinpc:/opt/apacheprojects/hadoop/apache/hadoop-2.8.3/bin# ./yarn rmadmin 
-removeFromClusterNodeLabels xxy
18/03/28 15:23:51 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8033
{noformat}

Recovered in the branch:
{noformat}

root@bibinpc:/opt/apacheprojects/hadoop/YARN3409/hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/bin#
 ./yarn cluster -lnl
2018-03-28 16:45:53,065 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
Node Labels: 






bibin
true


xxz:0
xxx:0




{noformat}

> Refactor FSNodeLabelStore code for attributes store support
> ---
>
> Key: YARN-7988
> URL: https://issues.apache.org/jira/browse/YARN-7988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7988-YARN-3409.002.patch, 
> YARN-7988-YARN-3409.003.patch, YARN-7988-YARN-3409.004.patch, 
> YARN-7988-YARN-3409.005.patch, YARN-7988-YARN-3409.006.patch, 
> YARN-7988.001.patch
>
>
> # Abstract out the FileSystemStore file operations
> # Define EditLog operations and mirror operations
> # Support compatibility with the old node label store



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7988) Refactor FSNodeLabelStore code for attributes store support

2018-03-28 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-7988:
---
Attachment: YARN-7988-YARN-3409.006.patch

> Refactor FSNodeLabelStore code for attributes store support
> ---
>
> Key: YARN-7988
> URL: https://issues.apache.org/jira/browse/YARN-7988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7988-YARN-3409.002.patch, 
> YARN-7988-YARN-3409.003.patch, YARN-7988-YARN-3409.004.patch, 
> YARN-7988-YARN-3409.005.patch, YARN-7988-YARN-3409.006.patch, 
> YARN-7988.001.patch
>
>
> # Abstract out the FileSystemStore file operations
> # Define EditLog operations and mirror operations
> # Support compatibility with the old node label store



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-03-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417178#comment-16417178
 ] 

Rohith Sharma K S commented on YARN-6936:
-

[~haibochen] [~vrushalic] Could you review the patch? I will upload a new patch 
fixing the test failures along with the review comments.

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch
>
>
> Currently YARN-6734 stores entities into the sub-application table only if the doAs 
> user and the submitting user are different. This holds good for Tez-like use 
> cases. But frameworks whose AM runs as the submitting user, like MR, also need to 
> store entities in the sub-application table so that they can be read without an 
> application id. 
> This would be a point of concern at later stages, when ATSv2 is deployed into 
> production. This JIRA is to retrospect the decision of storing entities into the 
> sub-application table, driving it by client-side configuration rather than by 
> user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-03-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417175#comment-16417175
 ] 

Shane Kumpf edited comment on YARN-7935 at 3/28/18 11:10 AM:
-

{quote}Docker embedded DNS will use /etc/resolv.conf from host, and filter out 
local IP addresses (127.0.0.1 etc), if no entires are available, it will route 
to 8.8.8.8
{quote}
[~eyang] this isn't true for overlay networks. You can't assume Registry DNS 
will be in use and it won't be used by some of these network types without 
additional modifications to Hadoop ({{--dns}} for {{docker run}}).

{quote}I am concerned that some end user code will end up invoking InetAddress 
Java class{quote}
This will use the IP of the container and whatever resolver the container is 
configured to use. Adding this environment variable doesn't change that.

I'm not seeing the issue with adding an additional environment variable that is 
set to the same value as {{\-\-hostname}} if this solves a problem for a class 
of application. No one is proposing modifying Hadoop IPC code to support NAT 
here or to use the {{--link}} feature, just adding an additional environment 
variable in non-entrypoint mode. Can you elaborate on the exact issue you see 
this new environment variable causing?


was (Author: shaneku...@gmail.com):
{quote}Docker embedded DNS will use /etc/resolv.conf from host, and filter out 
local IP addresses (127.0.0.1 etc), if no entires are available, it will route 
to 8.8.8.8
{quote}
[~eyang] this isn't true for overlay networks. You can't assume Registry DNS 
will be in use and it won't be used by some of these network types without 
additional modifications to Hadoop ({{--dns}} for {{docker run}}).

{quote}I am concerned that some end user code will end up invoking InetAddress 
Java class{quote}
This will use the IP of the container and whatever resolver the container is 
configured to use. Adding this environment variable doesn't change that.

I'm not seeing the issue with adding an additional environment variable that is 
set to the same value as {{--hostname}} if this solves a problem for a class of 
application. No one is proposing modifying Hadoop IPC code to support NAT here 
or to use the {{--link}} feature, just adding an additional environment 
variable in non-entrypoint mode. Can you elaborate on the exact issue you see 
this new environment variable causing?

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch, YARN-7935.3.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, which is 
> different from the NodeManager's hostname (NM_HOST, which is available as an env 
> var during container launch) when launched through the Docker runtime. The 
> container's hostname can be exposed to applications via an env var 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP, but 
> this can be addressed in a separate jira.
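
A minimal sketch of how an application could consume the proposed variable, assuming 
CONTAINER_HOSTNAME is exported as described above; the fallback to NM_HOST is only 
illustrative:
{code:java}
import java.net.InetSocketAddress;

public class ContainerBindAddress {
  // Prefer the proposed CONTAINER_HOSTNAME; fall back to the existing NM_HOST variable.
  static InetSocketAddress bindAddress(int port) {
    String host = System.getenv("CONTAINER_HOSTNAME");
    if (host == null || host.isEmpty()) {
      host = System.getenv("NM_HOST");
    }
    return new InetSocketAddress(host, port);
  }
}
{code}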



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-03-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417175#comment-16417175
 ] 

Shane Kumpf commented on YARN-7935:
---

{quote}Docker embedded DNS will use /etc/resolv.conf from host, and filter out 
local IP addresses (127.0.0.1 etc), if no entires are available, it will route 
to 8.8.8.8
{quote}
[~eyang] this isn't true for overlay networks. You can't assume Registry DNS 
will be in use and it won't be used by some of these network types without 
additional modifications to Hadoop ({{--dns}} for {{docker run}}).

{quote}I am concerned that some end user code will end up invoking InetAddress 
Java class{quote}
This will use the IP of the container and whatever resolver the container is 
configured to use. Adding this environment variable doesn't change that.

I'm not seeing the issue with adding an additional environment variable that is 
set to the same value as --hostname if this solves a problem for a class of 
application. No one is proposing modifying Hadoop IPC code to support NAT here 
or to use the {{--link}} feature, just adding an additional environment 
variable in non-entrypoint mode. Can you elaborate on the exact issue you see 
this new environment variable causing?

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch, YARN-7935.3.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, which is 
> different from the NodeManager's hostname (NM_HOST, which is available as an env 
> var during container launch) when launched through the Docker runtime. The 
> container's hostname can be exposed to applications via an env var 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP, but 
> this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7935) Expose container's hostname to applications running within the docker container

2018-03-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417175#comment-16417175
 ] 

Shane Kumpf edited comment on YARN-7935 at 3/28/18 11:10 AM:
-

{quote}Docker embedded DNS will use /etc/resolv.conf from host, and filter out 
local IP addresses (127.0.0.1 etc), if no entires are available, it will route 
to 8.8.8.8
{quote}
[~eyang] this isn't true for overlay networks. You can't assume Registry DNS 
will be in use and it won't be used by some of these network types without 
additional modifications to Hadoop ({{--dns}} for {{docker run}}).

{quote}I am concerned that some end user code will end up invoking InetAddress 
Java class{quote}
This will use the IP of the container and whatever resolver the container is 
configured to use. Adding this environment variable doesn't change that.

I'm not seeing the issue with adding an additional environment variable that is 
set to the same value as {{--hostname}} if this solves a problem for a class of 
application. No one is proposing modifying Hadoop IPC code to support NAT here 
or to use the {{--link}} feature, just adding an additional environment 
variable in non-entrypoint mode. Can you elaborate on the exact issue you see 
this new environment variable causing?


was (Author: shaneku...@gmail.com):
{quote}Docker embedded DNS will use /etc/resolv.conf from host, and filter out 
local IP addresses (127.0.0.1 etc), if no entires are available, it will route 
to 8.8.8.8
{quote}
[~eyang] this isn't true for overlay networks. You can't assume Registry DNS 
will be in use and it won't be used by some of these network types without 
additional modifications to Hadoop ({{--dns}} for {{docker run}}).

{quote}I am concerned that some end user code will end up invoking InetAddress 
Java class{quote}
This will use the IP of the container and whatever resolver the container is 
configured to use. Adding this environment variable doesn't change that.

I'm not seeing the issue with adding an additional environment variable that is 
set to the same value as --hostname if this solves a problem for a class of 
application. No one is proposing modifying Hadoop IPC code to support NAT here 
or to use the {{--link}} feature, just adding an additional environment 
variable in non-entrypoint mode. Can you elaborate on the exact issue you see 
this new environment variable causing?

> Expose container's hostname to applications running within the docker 
> container
> ---
>
> Key: YARN-7935
> URL: https://issues.apache.org/jira/browse/YARN-7935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7935.1.patch, YARN-7935.2.patch, YARN-7935.3.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, which is 
> different from the NodeManager's hostname (NM_HOST, which is available as an env 
> var during container launch) when launched through the Docker runtime. The 
> container's hostname can be exposed to applications via an env var 
> CONTAINER_HOSTNAME. Another potential candidate is the container's IP, but 
> this can be addressed in a separate jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8048) Support auto-spawning of admin configured services during bootstrap of rm/apiserver

2018-03-28 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8048:

Attachment: YARN-8048.005.patch

> Support auto-spawning of admin configured services during bootstrap of 
> rm/apiserver
> ---
>
> Key: YARN-8048
> URL: https://issues.apache.org/jira/browse/YARN-8048
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8048.001.patch, YARN-8048.002.patch, 
> YARN-8048.003.patch, YARN-8048.004.patch, YARN-8048.005.patch
>
>
> The goal is to support auto-spawning of admin-configured services during the 
> bootstrap of the resourcemanager/apiserver. 
> *Requirement:* Some services might need to be consumed by YARN 
> itself, e.g. HBase for ATSv2. Instead of depending on a user-installed HBase (the 
> user may not want to install HBase at all), running the HBase app 
> on YARN will help ATSv2.
> Before the YARN cluster is started, the admin configures these service specs and 
> places them in a common location in HDFS. At the time of RM/apiserver bootstrap, 
> these services will be submitted.
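
A rough sketch of the bootstrap scan, under the stated assumption that specs sit in a 
common HDFS directory; the directory name is made up, and the actual submission call 
into the service framework is intentionally elided:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class BootstrapServiceScan {
  // Lists admin-provided service spec files from a (hypothetical) common HDFS location.
  static void scanAndSubmit(Configuration conf) throws Exception {
    Path specDir = new Path("/yarn-services/bootstrap");  // hypothetical location
    FileSystem fs = specDir.getFileSystem(conf);
    if (!fs.exists(specDir)) {
      return;
    }
    for (FileStatus spec : fs.listStatus(specDir)) {
      // Parse the JSON service spec and submit it through the service client /
      // api-server; the real patch wires this into RM/apiserver startup.
      System.out.println("Would submit service spec: " + spec.getPath());
    }
  }

  public static void main(String[] args) throws Exception {
    scanAndSubmit(new YarnConfiguration());
  }
}
{code}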



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417126#comment-16417126
 ] 

Hudson commented on YARN-7734:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13891 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13891/])
YARN-7734. Fix UT failure (wwei: rev 411993f6e5723c8cba8100bff0269418e46f6367)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java


> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess
> -
>
> Key: YARN-7734
> URL: https://issues.apache.org/jira/browse/YARN-7734
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Miklos Szegedi
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.0.2, 3.2.0
>
> Attachments: YARN-7734.001.patch
>
>
> YARN-5418 adds a call to LogAggregationFileControllerFactory, but the mocked context 
> in the unit test is not filled in with the configuration.
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
> [ERROR] 
> testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
>   Time elapsed: 0.208 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
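
A hedged sketch of the fix direction: give the mocked NM context a real Configuration 
before the page block constructs the factory. The getConf() accessor on the context is 
an assumption on my part, not taken from the committed patch:
{code:java}
// Sketch only, inside TestContainerLogsPage; Mockito statics assumed to be imported:
//   import static org.mockito.Mockito.mock;
//   import static org.mockito.Mockito.when;
Configuration conf = new YarnConfiguration();
Context context = mock(Context.class);
when(context.getConf()).thenReturn(conf);  // assumed accessor; avoids the NPE above
{code}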



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-03-28 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417105#comment-16417105
 ] 

Tao Yang commented on YARN-7734:


Thanks [~cheersyang] for the review and commit.

> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess
> -
>
> Key: YARN-7734
> URL: https://issues.apache.org/jira/browse/YARN-7734
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Miklos Szegedi
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.0.2, 3.2.0
>
> Attachments: YARN-7734.001.patch
>
>
> YARN-5418 adds a call to LogAggregationFileControllerFactory, but the mocked 
> context in the unit test is not filled in with the configuration.
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
> [ERROR] 
> testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
>   Time elapsed: 0.208 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2018-03-28 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6257:
---
Attachment: YARN-6257.004.patch

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch, YARN-6257.002.patch, 
> YARN-6257.003.patch, YARN-6257.004.patch
>
>
> In the response string of the CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo has duplicate 'entry' keys 
> within a single JSON object:
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I suggest converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from Map to List.
> After converting it to a List, the operationsInfo string will be:
> {code}
> "operationInfos":[
>   
> {"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
> ]
> {code}
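
As a rough sketch of what the proposed Map-to-List change could produce (the 
bean and field names below are illustrative only, not taken from the attached 
patches), a flat JAXB bean per operation serializes as its own JSON object, so 
the repeated "entry" keys disappear:

{code}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// One flat bean per health operation; each renders as a separate JSON object.
@XmlAccessorType(XmlAccessType.FIELD)
class OperationInfo {
  String operation;    // e.g. "last-allocation"
  String nodeId;
  String containerId;
  String queue;
}

// Hypothetical holder mirroring a List-typed field in
// CapacitySchedulerHealthInfo: the list renders as a JSON array instead of an
// object with duplicated "entry" keys.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class HealthOperationsSketch {
  List<OperationInfo> operationInfos = new ArrayList<>();
}
{code}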



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2018-03-28 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417097#comment-16417097
 ] 

Tao Yang commented on YARN-6257:


Thanks [~sunilg] and [~cheersyang] for your suggestions.

I agree that it's better to say "not well formatted". I will upload another 
patch to update this description and fix the QA problems.

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch, YARN-6257.002.patch, 
> YARN-6257.003.patch
>
>
> In the response string of the CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo has duplicate 'entry' keys 
> within a single JSON object:
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I suggest converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from Map to List.
> After converting it to a List, the operationsInfo string will be:
> {code}
> "operationInfos":[
>   
> {"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
> ]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-03-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7734:
--
Affects Version/s: 3.0.1
   3.1.0

> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess
> -
>
> Key: YARN-7734
> URL: https://issues.apache.org/jira/browse/YARN-7734
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Miklos Szegedi
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-7734.001.patch
>
>
> YARN-5418 adds a call to LogAggregationFileControllerFactory, but the mocked 
> context in the unit test is not filled in with the configuration.
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
> [ERROR] 
> testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
>   Time elapsed: 0.208 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-03-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417091#comment-16417091
 ] 

Weiwei Yang commented on YARN-7734:
---

+1, will commit this shortly

> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess
> -
>
> Key: YARN-7734
> URL: https://issues.apache.org/jira/browse/YARN-7734
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-7734.001.patch
>
>
> YARN-5418 adds a call to LogAggregationFileControllerFactory, but the mocked 
> context in the unit test is not filled in with the configuration.
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
> [ERROR] 
> testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
>   Time elapsed: 0.208 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417027#comment-16417027
 ] 

genericqa commented on YARN-7734:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
9s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916544/YARN-7734.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1f786d6b4fa6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a71656c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20117/testReport/ |
| Max. process+thread count | 292 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20117/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2018-03-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417016#comment-16417016
 ] 

Weiwei Yang commented on YARN-6257:
---

Hi [~Tao Yang]

Can you please fix the checkstyle and findbugs issues?

About the message, based on [~sunilg]'s comment, how about
{quote}the health metrics of the capacity scheduler. These metrics have existed 
since 2.8.0, but the output was not well formatted, so users could not make use 
of this field cleanly; this is optimized from 3.2.0 onwards.
{quote}
Basically I don't want to say it was illegal JSON, as it follows the JSON spec. 
Does that make sense?

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch, YARN-6257.002.patch, 
> YARN-6257.003.patch
>
>
> In the response string of the CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo has duplicate 'entry' keys 
> within a single JSON object:
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I suggest converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from Map to List.
> After converting it to a List, the operationsInfo string will be:
> {code}
> "operationInfos":[
>   
> {"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
> ]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2018-03-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416971#comment-16416971
 ] 

genericqa commented on YARN-6257:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 62 unchanged - 5 fixed = 64 total (was 67) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unread field:CapacitySchedulerHealthInfo.java:[line 45] |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Assigned] (YARN-7734) YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess

2018-03-28 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang reassigned YARN-7734:
--

Assignee: Tao Yang  (was: Xuan Gong)

> YARN-5418 breaks TestContainerLogsPage.testContainerLogPageAccess
> -
>
> Key: YARN-7734
> URL: https://issues.apache.org/jira/browse/YARN-7734
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-7734.001.patch
>
>
> YARN-5418 adds a call to LogAggregationFileControllerFactory, but the mocked 
> context in the unit test is not filled in with the configuration.
> {code}
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.492 
> s <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
> [ERROR] 
> testContainerLogPageAccess(org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage)
>   Time elapsed: 0.208 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.(LogAggregationFileControllerFactory.java:68)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock.(ContainerLogsPage.java:100)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage.testContainerLogPageAccess(TestContainerLogsPage.java:268)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


