[jira] [Updated] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7765:

Affects Version/s: 2.9.0
   3.0.0

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When an 
> application is submitted, the app collectors started as an aux-service throw 
> the exception below. This exception is *NOT* observed from the RM 
> TimelineCollector.
> The cluster is deployed with Hadoop 3.0 and HBase 1.2.6 in secure mode. All 
> YARN and HBase services start and work perfectly fine. After 24 hours, i.e. 
> when the token lifetime expires, the HBaseClient in the NM and the HDFSClient 
> in the HMaster and HRegionServer start getting this error. After some time, 
> the HBase daemons shut down. In the NM the JVM doesn't shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 
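
To make the failure mode concrete: the publishers keep working only while the 
login user holds a valid Kerberos TGT, so a daemon that publishes for longer 
than the ticket lifetime has to log in from a keytab and re-login before it 
writes. Below is a minimal, hedged sketch of that pattern using Hadoop's 
UserGroupInformation API; the principal, keytab path, and publishEntities() 
call are placeholders, not code from the attached patch.

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Log in once from a keytab (placeholder principal and path).
    UserGroupInformation.loginUserFromKeytab(
        "nm/host.example.com@EXAMPLE.COM", "/etc/security/keytabs/nm.keytab");
    UserGroupInformation loginUgi = UserGroupInformation.getLoginUser();

    // Before each publish, renew the TGT if it is close to expiring, so the
    // client does not hit "No valid credentials provided" after ~24 hours.
    loginUgi.checkTGTAndReloginFromKeytab();

    // Run the client call under the login UGI, not whatever the current user is.
    loginUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      publishEntities(); // placeholder for the HBase/timeline write
      return null;
    });
  }

  private static void publishEntities() {
    // placeholder: real code would write to HBase / the timeline service here
  }
}
{code}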






[jira] [Updated] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7765:

Target Version/s: 3.1.0, 2.10.0, 3.0.1

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When an 
> application is submitted, the app collectors started as an aux-service throw 
> the exception below. This exception is *NOT* observed from the RM 
> TimelineCollector.
> The cluster is deployed with Hadoop 3.0 and HBase 1.2.6 in secure mode. All 
> YARN and HBase services start and work perfectly fine. After 24 hours, i.e. 
> when the token lifetime expires, the HBaseClient in the NM and the HDFSClient 
> in the HMaster and HRegionServer start getting this error. After some time, 
> the HBase daemons shut down. In the NM the JVM doesn't shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 






[jira] [Updated] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7835:

Summary: [Atsv2] Race condition in NM while publishing events if second 
attempt launched on same node  (was: [Atsv2] Race condition in NM while 
publishing events if second attempt launched in same node)

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> A race condition is observed: if the master container is killed for some 
> reason and a second attempt is launched on the same node, then 
> NMTimelinePublisher does not add a new timelineClient (one already exists for 
> the application). But once the completed container of the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient.
> This causes all subsequent event publishing from the remaining clients to 
> fail with the exception "Application is not found".






[jira] [Updated] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched in same node

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7835:

Environment: (was: A race condition is observed: if the master container is 
killed for some reason and a second attempt is launched on the same node, then 
NMTimelinePublisher does not add a new timelineClient (one already exists for 
the application). But once the completed container of the 1st attempt arrives, 
NMTimelinePublisher removes the timelineClient.
This causes all subsequent event publishing from the remaining clients to fail 
with the exception "Application is not found".)

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched in same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>







[jira] [Updated] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched in same node

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7835:

Description: 
A race condition is observed: if the master container is killed for some reason 
and a second attempt is launched on the same node, then NMTimelinePublisher 
does not add a new timelineClient (one already exists for the application). But 
once the completed container of the 1st attempt arrives, NMTimelinePublisher 
removes the timelineClient.
This causes all subsequent event publishing from the remaining clients to fail 
with the exception "Application is not found".

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched in same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> A race condition is observed: if the master container is killed for some 
> reason and a second attempt is launched on the same node, then 
> NMTimelinePublisher does not add a new timelineClient (one already exists for 
> the application). But once the completed container of the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient.
> This causes all subsequent event publishing from the remaining clients to 
> fail with the exception "Application is not found".






[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched in same node

2018-01-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341991#comment-16341991
 ] 

Rohith Sharma K S commented on YARN-7835:
-

The log trace below shows that the 2nd attempt's master container came to the 
same node manager and no timelineClient was added since one already exists. But 
when the 1st attempt's completed container is received, NMTimelinePublisher 
removes the timelineClient.
{code}
  2018-01-27 04:55:35,193 INFO  application.ApplicationImpl 
(ApplicationImpl.java:transition(446)) - Adding 
container_e22_1516990344374_0007_02_01 to application 
application_1516990344374_0007
2018-01-27 04:55:35,195 INFO  container.ContainerImpl 
(ContainerImpl.java:handle(2108)) - Container 
container_e22_1516990344374_0007_02_01 transitioned from NEW to LOCALIZING
2018-01-27 04:55:35,195 INFO  containermanager.AuxServices 
(AuxServices.java:handle(220)) - Got event CONTAINER_INIT for appId 
application_1516990344374_0007
2018-01-27 04:55:35,196 INFO  collector.TimelineCollectorManager 
(TimelineCollectorManager.java:putIfAbsent(149)) - the collector for 
application_1516990344374_0007 already exists!
...
...
2018-01-27 04:55:36,109 INFO  nodemanager.NodeStatusUpdaterImpl 
(NodeStatusUpdaterImpl.java:removeOrTrackCompletedContainersFromContext(682)) - 
Removed completed containers from NM context: 
[container_e22_1516990344374_0007_01_01]
2018-01-27 04:55:36,112 INFO  collector.TimelineCollectorManager 
(TimelineCollectorManager.java:remove(192)) - The collector service for 
application_1516990344374_0007 was removed
2018-01-27 04:55:36,430 ERROR collector.TimelineCollectorWebService 
(TimelineCollectorWebService.java:putEntities(165)) - Application: 
application_1516990344374_0007 is not found
2018-01-27 04:55:36,430 ERROR collector.TimelineCollectorWebService 
(TimelineCollectorWebService.java:putEntities(179)) - Error putting entities
org.apache.hadoop.yarn.webapp.NotFoundException
at org.apache.hadoop.yarn.server.timelineservice.co
{code}
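
For illustration only, the sketch below reproduces the race in a few lines: the 
collector registry is keyed by application id, so the 2nd attempt's 
CONTAINER_INIT finds an existing entry and adds nothing, while the completed AM 
container of the 1st attempt later removes the entry both attempts share. The 
map and method names are stand-ins, not the actual TimelineCollectorManager 
code.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CollectorRaceSketch {
  // Stand-in for the per-NM collector registry, keyed by application id only.
  private final ConcurrentMap<String, Object> collectors = new ConcurrentHashMap<>();

  // Called when an AM container is initialized on this node (CONTAINER_INIT).
  void onAmContainerInit(String appId) {
    if (collectors.putIfAbsent(appId, new Object()) != null) {
      System.out.println("the collector for " + appId + " already exists!");
    }
  }

  // Called when a completed AM container is reported for the application.
  void onAmContainerComplete(String appId) {
    // Keyed only by appId, so this also removes the collector that the
    // still-running 2nd attempt depends on.
    collectors.remove(appId);
    System.out.println("The collector service for " + appId + " was removed");
  }

  public static void main(String[] args) {
    CollectorRaceSketch nm = new CollectorRaceSketch();
    String app = "application_1516990344374_0007";
    nm.onAmContainerInit(app);     // 1st attempt AM starts
    nm.onAmContainerInit(app);     // 2nd attempt AM starts: "already exists"
    nm.onAmContainerComplete(app); // 1st attempt AM completes, collector gone
    // Subsequent putEntities calls from the 2nd attempt now fail with
    // "Application ... is not found".
  }
}
{code}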

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched in same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: A race condition is observed: if the master container is killed 
> for some reason and a second attempt is launched on the same node, then 
> NMTimelinePublisher does not add a new timelineClient (one already exists for 
> the application). But once the completed container of the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient.
> This causes all subsequent event publishing from the remaining clients to 
> fail with the exception "Application is not found".
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>







[jira] [Created] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched in same node

2018-01-26 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-7835:
---

 Summary: [Atsv2] Race condition in NM while publishing events if 
second attempt launched in same node
 Key: YARN-7835
 URL: https://issues.apache.org/jira/browse/YARN-7835
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: A race condition is observed: if the master container is killed 
for some reason and a second attempt is launched on the same node, then 
NMTimelinePublisher does not add a new timelineClient (one already exists for 
the application). But once the completed container of the 1st attempt arrives, 
NMTimelinePublisher removes the timelineClient.
This causes all subsequent event publishing from the remaining clients to fail 
with the exception "Application is not found".

Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S









[jira] [Updated] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched in same node

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7835:

Priority: Critical  (was: Major)

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched in same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: A race condition is observed: if the master container is killed 
> for some reason and a second attempt is launched on the same node, then 
> NMTimelinePublisher does not add a new timelineClient (one already exists for 
> the application). But once the completed container of the 1st attempt 
> arrives, NMTimelinePublisher removes the timelineClient.
> This causes all subsequent event publishing from the remaining clients to 
> fail with the exception "Application is not found".
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>







[jira] [Comment Edited] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341565#comment-16341565
 ] 

Rohith Sharma K S edited comment on YARN-7765 at 1/27/18 6:03 AM:
--

I see that there are a couple of problems in the NodeManager.
 # The HBase connection is created in serviceInit of the TimelineWriter. But at 
this point in time, the NM has not yet done kinit.
 # NMTimelinePublisher also assigns nmLoginUgi in serviceInit. Since the NM has 
not yet logged in from its keytab, nmLoginUgi is set to the current user. As a 
result, the NM throws the above exception while publishing entities.
 # In the case of an NM restart, ContainerManagerImpl#serviceInit recovers the 
applications. While recovering an application, i.e. creating the Application 
instance, the TimelineV2Client is created under the NM login UGI. Since the NM 
has not done kinit yet, the TimelineClient fails to publish events for this 
application.

To fix all 3 issues, we need to do the secure login before initializing 
services in the NodeManager. Otherwise, we need to fix the above 3 issues one 
by one in different places.


was (Author: rohithsharma):
I see that there are a couple of problems in the NodeManager.
 # The HBase connection is created in serviceInit of the TimelineWriter. But at 
this point in time, the NM has not yet done kinit.
 # NMTimelinePublisher also assigns nmLoginUgi in serviceInit. Since the NM has 
not yet logged in from its keytab, nmLoginUgi is set to the current user. As a 
result, the NM throws the above exception while publishing entities.
 # With the same logged-in user, it also affects the NM recovery flow. All the 
recovered applications would also fail, since applications are recovered in the 
serviceInit phase.

To fix all 3 issues, we need to do the secure login before initializing 
services in the NodeManager. Otherwise, we need to fix the above 3 issues one 
by one in different places.
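
A minimal sketch of the proposed ordering, assuming a doSecureLogin() helper 
that wraps SecurityUtil.login() with the NM keytab/principal keys. This is 
illustrative only and is not the NodeManager code from the attached patch.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.service.CompositeService;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SecureLoginFirstService extends CompositeService {

  public SecureLoginFirstService() {
    super(SecureLoginFirstService.class.getName());
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // Log in from the keytab BEFORE child services (timeline writer,
    // NMTimelinePublisher, container recovery) are initialized, so they
    // capture the kerberized login UGI instead of the plain current user.
    doSecureLogin(conf);

    // addService(...) calls for the child services would go here.

    super.serviceInit(conf);
  }

  // Hypothetical helper: logs in using the standard NM keytab/principal keys.
  private void doSecureLogin(Configuration conf) throws IOException {
    SecurityUtil.login(conf, YarnConfiguration.NM_KEYTAB,
        YarnConfiguration.NM_PRINCIPAL);
  }
}
{code}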

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When an 
> application is submitted, the app collectors started as an aux-service throw 
> the exception below. This exception is *NOT* observed from the RM 
> TimelineCollector.
> The cluster is deployed with Hadoop 3.0 and HBase 1.2.6 in secure mode. All 
> YARN and HBase services start and work perfectly fine. After 24 hours, i.e. 
> when the token lifetime expires, the HBaseClient in the NM and the HDFSClient 
> in the HMaster and HRegionServer start getting this error. After some time, 
> the HBase daemons shut down. In the NM the JVM doesn't shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 






[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341982#comment-16341982
 ] 

Jian He commented on YARN-7765:
---

patch lgtm

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When an 
> application is submitted, the app collectors started as an aux-service throw 
> the exception below. This exception is *NOT* observed from the RM 
> TimelineCollector.
> The cluster is deployed with Hadoop 3.0 and HBase 1.2.6 in secure mode. All 
> YARN and HBase services start and work perfectly fine. After 24 hours, i.e. 
> when the token lifetime expires, the HBaseClient in the NM and the HDFSClient 
> in the HMaster and HRegionServer start getting this error. After some time, 
> the HBase daemons shut down. In the NM the JVM doesn't shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 






[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341972#comment-16341972
 ] 

Miklos Szegedi commented on YARN-2185:
--

Thank you for the reviews [~jlowe], [~grepas] and [~rkanter] and for the 
commit, [~jlowe]!

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
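
As a rough illustration of the streaming approach (not the attached patch), the 
sketch below pipes the archive bytes straight into an external tar process 
instead of writing them to a local file first; the input stream source and the 
tar flags are assumptions.

{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StreamingUnpackSketch {
  /**
   * Streams a .tar.gz archive into "tar -xz" through a pipe, so the archive
   * never has to be stored on local disk before being unpacked.
   */
  static void unpackStreaming(InputStream archiveStream, String destDir)
      throws Exception {
    Files.createDirectories(Paths.get(destDir));
    Process tar = new ProcessBuilder("tar", "-xz", "-C", destDir)
        .redirectErrorStream(true)
        .start();
    try (OutputStream tarStdin = tar.getOutputStream()) {
      byte[] buf = new byte[64 * 1024];
      int n;
      while ((n = archiveStream.read(buf)) != -1) {
        tarStdin.write(buf, 0, n); // bytes go straight to tar, no temp file
      }
    }
    if (tar.waitFor() != 0) {
      throw new RuntimeException("tar exited with " + tar.exitValue());
    }
  }
}
{code}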






[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341971#comment-16341971
 ] 

Sunil G commented on YARN-7780:
---

Thanks [~kkaranasos]

A few comments:
 # In the distributed shell section, the usage of {{-_num_containers_}} may 
confuse users when a PlacementSpec is also provided. So could you please add a 
note on how the two interact, to avoid confusion.
 # Also, in the {{example of PlacementSpec}} section, please add a cardinality 
example as well.
 # I think *TargetTag* is one of the critical pieces of a PlacementSpec, so I 
would like to give it a bit more emphasis. Something like {{place 5 containers 
with tag "hbase" with affinity to a rack on which containers with tag "zk" are 
running (i.e., an "hbase" container should be placed on a rack where a "zk" 
container is running, as the TargetTag for the "hbase" constraint is specified 
as "zk")}} (see the sketch after this list).
 # Do we need to explicitly mention that we are using *SchedulingRequest* 
instead of *ResourceRequest*?
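
To make item 3 concrete, here is a rough sketch of the "hbase"/"zk" example 
using the Java PlacementConstraints helpers that ship with this feature. The 
method names are written from memory and the cardinality signature in 
particular should be treated as an assumption, not as documentation.

{code:java}
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

public class PlacementSpecSketch {
  public static void main(String[] args) {
    // Affinity: place "hbase" containers on a rack where "zk" containers run
    // (the TargetTag of the "hbase" constraint is "zk").
    PlacementConstraint hbaseNearZk = PlacementConstraints.build(
        PlacementConstraints.targetIn(RACK, allocationTag("zk")));

    // Anti-affinity: keep "hbase" containers off nodes already running "hbase".
    PlacementConstraint hbaseAntiAffinity = PlacementConstraints.build(
        PlacementConstraints.targetNotIn(NODE, allocationTag("hbase")));

    // Cardinality (assumed signature): at most 3 "hbase" containers per node.
    PlacementConstraint atMostThreePerNode = PlacementConstraints.build(
        PlacementConstraints.cardinality(NODE, 0, 3, "hbase"));

    System.out.println(hbaseNearZk);
    System.out.println(hbaseAntiAffinity);
    System.out.println(atMostThreePerNode);
  }
}
{code}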

 

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.






[jira] [Updated] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7765:

Priority: Blocker  (was: Critical)

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When an 
> application is submitted, the app collectors started as an aux-service throw 
> the exception below. This exception is *NOT* observed from the RM 
> TimelineCollector.
> The cluster is deployed with Hadoop 3.0 and HBase 1.2.6 in secure mode. All 
> YARN and HBase services start and work perfectly fine. After 24 hours, i.e. 
> when the token lifetime expires, the HBaseClient in the NM and the HDFSClient 
> in the HMaster and HRegionServer start getting this error. After some time, 
> the HBase daemons shut down. In the NM the JVM doesn't shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 






[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341968#comment-16341968
 ] 

Rohith Sharma K S commented on YARN-7765:
-

I verified this patch in a secure cluster across multiple NM restarts, and the 
TimelineClients and HBaseClients pick up the right UGI after this patch. 
[~jianhe] [~jlowe] [~vinodkv] [~sunilg] [~vrushalic] Please let us know if you 
have any concern about moving the secure login into NodeManager#serviceInit 
from NM#serviceStart!



> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When an 
> application is submitted, the app collectors started as an aux-service throw 
> the exception below. This exception is *NOT* observed from the RM 
> TimelineCollector.
> The cluster is deployed with Hadoop 3.0 and HBase 1.2.6 in secure mode. All 
> YARN and HBase services start and work perfectly fine. After 24 hours, i.e. 
> when the token lifetime expires, the HBaseClient in the NM and the HDFSClient 
> in the HMaster and HRegionServer start getting this error. After some time, 
> the HBase daemons shut down. In the NM the JVM doesn't shut down, but none of 
> the events get published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc: [~vrushalic] [~varun_saxena] 






[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341966#comment-16341966
 ] 

Miklos Szegedi commented on YARN-7815:
--

[~jlowe],
{quote}The appcache mount needs to be read-write since that's where the 
container work directory is along with the application scratch area where 
shuffle outputs are deposited.
{quote}
Would it make sense to detach the appcache and mount a separate appcache dir 
for each container? AFAIK it is not meant for sharing between containers, since 
they might get scheduled to other nodes anyway. Currently it is legitimate for 
a container to get different security tokens than the application in the 
container launch context. If the container can look into the application 
cache, it can see the results of other containers of the same application on 
the same node.

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.






[jira] [Commented] (YARN-7655) avoid AM preemption caused by RRs for specific nodes or racks

2018-01-26 Thread Steven Rand (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341964#comment-16341964
 ] 

Steven Rand commented on YARN-7655:
---

I'm not sure whether many AMs wind up on a limited number of NMs. It's quite 
possible -- my guess based on application patterns is that these clusters are 
running more AMs per node than most other clusters are. 

Thanks for the two links. It does look like both of those things would let us 
spread out the AMs better, which should lead to fewer total AM preemptions, 
though not necessarily prevent local requests from causing them.

Do you think the patch is worth pursuing? I'll buy that the clusters I have in 
mind likely were seeing so many AM preemptions due to a combination of custom 
config and access patterns involving many YARN applications, and therefore many 
AMs. On the other hand, the patch is a small change, and should be beneficial 
if you value not having to retry your app due to AM preemption more than you 
value the associated loss of locality, which I suspect most people do.

> avoid AM preemption caused by RRs for specific nodes or racks
> -
>
> Key: YARN-7655
> URL: https://issues.apache.org/jira/browse/YARN-7655
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Major
> Attachments: YARN-7655-001.patch
>
>
> We frequently see AM preemptions when 
> {{starvedApp.getStarvedResourceRequests()}} in 
> {{FSPreemptionThread#identifyContainersToPreempt}} includes one or more RRs 
> that request containers on a specific node. Since this causes us to only 
> consider one node to preempt containers on, the really good work that was 
> done in YARN-5830 doesn't save us from AM preemption. Even though there might 
> be multiple nodes on which we could preempt enough non-AM containers to 
> satisfy the app's starvation, we often wind up preempting one or more AM 
> containers on the single node that we're considering.
> A proposed solution is that if we're going to preempt one or more AM 
> containers for an RR that specifies a node or rack, then we should instead 
> expand the search space to consider all nodes. That way we take advantage of 
> YARN-5830, and only preempt AMs if there's no alternative. I've attached a 
> patch with an initial implementation of this. We've been running it on a few 
> clusters, and have seen AM preemptions drop from double-digit occurrences on 
> many days to zero.
> Of course, the tradeoff is some loss of locality, since the starved app is 
> less likely to be allocated resources at the most specific locality level 
> that it asked for. My opinion is that this tradeoff is worth it, but 
> interested to hear what others think as well.
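
The following is a hypothetical sketch of the decision described above, not the 
attached patch: if preempting for a node- or rack-local request would sacrifice 
an AM container, widen the candidate set to every node before giving up. The 
helper names (candidateNodes, wouldPreemptAm, preemptOn) are invented for 
illustration.

{code:java}
import java.util.Collections;
import java.util.List;

public class AmFriendlyPreemptionSketch {

  /** A starved resource request; locality may name a specific node or rack. */
  interface ResourceRequest {
    boolean hasNodeOrRackLocality();
  }

  interface Node { }

  void identifyContainersToPreempt(ResourceRequest rr) {
    List<Node> candidates = candidateNodes(rr);

    // If the locality-restricted candidate set would force us to preempt an
    // AM container, retry against all nodes so the AM-avoidance logic from
    // YARN-5830 gets a chance to find non-AM containers elsewhere.
    if (rr.hasNodeOrRackLocality() && wouldPreemptAm(candidates, rr)) {
      candidates = allNodes();
    }
    preemptOn(candidates, rr);
  }

  // Invented helpers, sketched as no-ops for illustration.
  private List<Node> candidateNodes(ResourceRequest rr) { return allNodes(); }
  private List<Node> allNodes() { return Collections.emptyList(); }
  private boolean wouldPreemptAm(List<Node> nodes, ResourceRequest rr) { return false; }
  private void preemptOn(List<Node> nodes, ResourceRequest rr) { }
}
{code}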






[jira] [Commented] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341960#comment-16341960
 ] 

Arun Suresh commented on YARN-7822:
---

bq. ..thought it might be better not support nested case at first. We can 
further investigate nested case moving on.
Makes sense - we can tackle that later.

bq. Do you want me to replace all FiCaSchedulerNode in 
TestPlacementConstraintsUtil in next patch?
That would be ideal - thanks.


> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7822-YARN-6592.001.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} to 
> handle OR and AND composite constraints.






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341957#comment-16341957
 ] 

Hudson commented on YARN-7064:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13571 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13571/])
YARN-7064. Use cgroup to get container resource utilization. (Miklos 
(haibochen: rev 649ef7ac334e63a7c676f8e7406f59d9466eb6f2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CombinedResourceCalculator.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCompareResourceCalculators.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestResourceHandlerModule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsMemoryResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ResourceCalculatorProcessTree.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsMemoryResourceHandlerImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/CpuTimeTracker.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java


> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch, YARN-7064.011.patch, 
> YARN-7064.012.patch, YARN-7064.013.patch, YARN-7064.014.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.






[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341952#comment-16341952
 ] 

Miklos Szegedi commented on YARN-6668:
--

This has been checked in as YARN-7064.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch, YARN-6668.009.patch
>
>
> The Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.
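
A minimal sketch of the idea, assuming a cgroup v1 layout under /sys/fs/cgroup 
and a hypothetical per-container cgroup path; the real CGroupsResourceCalculator 
added by this JIRA is more involved and the paths are configuration-dependent.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupStatsSketch {
  /** Cumulative CPU time in nanoseconds for the container's cgroup. */
  static long cpuUsageNanos(String containerCgroup) throws IOException {
    String path = "/sys/fs/cgroup/cpuacct/" + containerCgroup + "/cpuacct.usage";
    return Long.parseLong(new String(Files.readAllBytes(Paths.get(path))).trim());
  }

  /** Current memory usage in bytes for the container's cgroup. */
  static long memoryUsageBytes(String containerCgroup) throws IOException {
    String path = "/sys/fs/cgroup/memory/" + containerCgroup + "/memory.usage_in_bytes";
    return Long.parseLong(new String(Files.readAllBytes(Paths.get(path))).trim());
  }

  public static void main(String[] args) throws IOException {
    // Hypothetical per-container cgroup path created by the NM.
    String cgroup = "hadoop-yarn/container_1516990344374_0007_01_000002";
    System.out.println("cpu ns: " + cpuUsageNanos(cgroup));
    System.out.println("mem bytes: " + memoryUsageBytes(cgroup));
  }
}
{code}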






[jira] [Commented] (YARN-7796) Container-executor fails with segfault on certain OS configurations

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341948#comment-16341948
 ] 

Miklos Szegedi commented on YARN-7796:
--

Now the question is, how does a 128K allocation overflow a stack that is 
normally 8 MB? If it is the one that brought up the issue, there should be 
another big allocation. Do you have a ulimit -s value from a system that 
reproduces this?

> Container-executor fails with segfault on certain OS configurations
> ---
>
> Key: YARN-7796
> URL: https://issues.apache.org/jira/browse/YARN-7796
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7796.000.patch, YARN-7796.001.patch, 
> YARN-7796.002.patch
>
>
> There is a relatively big (128K) buffer allocated on the stack in 
> container-executor.c for the purpose of copying files. As indicated by the 
> below gdb stack trace, this allocation can fail with SIGSEGV. This happens 
> only on certain OS configurations - I can reproduce this issue on RHEL 6.9:
> {code:java}
> [Thread debugging using libthread_db enabled]
> main : command provided 0
> main : run as user is ***
> main : requested yarn user is ***
> Program received signal SIGSEGV, Segmentation fault.
> 0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> 966 char buffer[buffer_size];
> (gdb) bt
> #0  0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> #1  0x00409a81 in initialize_app (user=, 
> app_id=0x7ffd669fd2b7 "application_1516711246952_0001", 
> nmPrivate_credentials_file=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> local_dirs=0x9331c8, log_roots=, args=0x7ffd669fb168)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1122
> #2  0x00403f90 in main (argc=, argv= optimized out>) at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c:558
> {code}






[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7064:
-
Component/s: nodemanager

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch, YARN-7064.011.patch, 
> YARN-7064.012.patch, YARN-7064.013.patch, YARN-7064.014.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.






[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341938#comment-16341938
 ] 

Haibo Chen commented on YARN-7064:
--

The findbug issue and unit test failure are unrelated. Checking this in shortly.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch, YARN-7064.011.patch, 
> YARN-7064.012.patch, YARN-7064.013.patch, YARN-7064.014.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.






[jira] [Commented] (YARN-7796) Container-executor fails with segfault on certain OS configurations

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341932#comment-16341932
 ] 

Miklos Szegedi commented on YARN-7796:
--

[~Jim_Brennan], [~grepas], the stack size limit is specified by {{ulimit -s}}. 
It is different on Red Hat 6 and 7. I also checked below with -fstack-check, 
and it has no impact on the limit.
{code:java}
*** REDHAT 6 ***
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
[root@mybox-rh69 ~]# curl 
https://gist.githubusercontent.com/szegedim/c583ccead8316b1035bc9148bcf588b9/raw/c0455196b47c76194e37a100964f3b3bf51d4a53/checkstack.cpp
 >./checkstack.cpp && gcc ./checkstack.cpp -lstdc++ -fstack-check && ./a.out
12051K succeededSegmentation fault (core dumped)
[root@mybox-rh69 ~]# curl 
https://gist.githubusercontent.com/szegedim/c583ccead8316b1035bc9148bcf588b9/raw/c0455196b47c76194e37a100964f3b3bf51d4a53/checkstack.cpp
 >./checkstack.cpp && gcc ./checkstack.cpp -lstdc++ && ./a.out
12051K succeededSegmentation fault (core dumped)
[root@mybox-rh69 ~]# ulimit -s
10240

*** REDHAT 7 ***
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
[root@mybox-rh74 ~]# curl 
https://gist.githubusercontent.com/szegedim/c583ccead8316b1035bc9148bcf588b9/raw/c0455196b47c76194e37a100964f3b3bf51d4a53/checkstack.cpp
 >./checkstack.cpp && gcc ./checkstack.cpp -lstdc++ -fstack-check && ./a.out
8016K Segmentation fault
[root@mybox-rh74 ~]# curl 
https://gist.githubusercontent.com/szegedim/c583ccead8316b1035bc9148bcf588b9/raw/c0455196b47c76194e37a100964f3b3bf51d4a53/checkstack.cpp
 >./checkstack.cpp && gcc ./checkstack.cpp -lstdc++ && ./a.out
8016K Segmentation fault
[root@mybox-rh74 ~]# ulimit -s
8192

*** REDHAT 6 BUILT CODE ON REDHAT 7 ***
[root@mybox-rh74 ~]# scp root@mybox-rh69:/root/a.out ./b.out
a.out   
100% 6989 4.4MB/s   00:00
[root@mybox-rh74 ~]# ./b.out 
8016K Segmentation fault
{code}

> Container-executor fails with segfault on certain OS configurations
> ---
>
> Key: YARN-7796
> URL: https://issues.apache.org/jira/browse/YARN-7796
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7796.000.patch, YARN-7796.001.patch, 
> YARN-7796.002.patch
>
>
> There is a relatively big (128K) buffer allocated on the stack in 
> container-executor.c for the purpose of copying files. As indicated by the 
> below gdb stack trace, this allocation can fail with SIGSEGV. This happens 
> only on certain OS configurations - I can reproduce this issue on RHEL 6.9:
> {code:java}
> [Thread debugging using libthread_db enabled]
> main : command provided 0
> main : run as user is ***
> main : requested yarn user is ***
> Program received signal SIGSEGV, Segmentation fault.
> 0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> 966 char buffer[buffer_size];
> (gdb) bt
> #0  0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> #1  0x00409a81 in initialize_app (user=, 
> app_id=0x7ffd669fd2b7 "application_1516711246952_0001", 
> nmPrivate_credentials_file=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> local_dirs=0x9331c8, log_roots=, args=0x7ffd669fb168)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1122
> #2  0x00403f90 in main (argc=, argv= optimized out>) at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c:558
> {code}




[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341921#comment-16341921
 ] 

genericqa commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} root: The patch generated 0 new + 266 unchanged - 4 
fixed = 266 total (was 270) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 37s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 36s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7064 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-26 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341907#comment-16341907
 ] 

Weiwei Yang commented on YARN-7822:
---

Hi [~kkaranasos]
{quote}
I had mentioned this case in a comment somewhere. 
{quote}
Yes, you mentioned this in [this 
comment|https://issues.apache.org/jira/browse/YARN-7774?focusedCommentId=16331286=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16331286]
 and I saw it, but it was still confusing to me. From the design doc, a constraint like

{noformat}

{max-cardinality: 3, scope: host}

{noformat}

this will be interpreted as "don't allocate more than 3 containers per node". 
However, what you were saying is that I need to specify

{noformat}

{max-cardinality: 2, scope: host}

{noformat}

to achieve not placing more than 3 containers on a node. Is that correct?

 [~asuresh] regarding your comments:

bq. nested AND / OR is not supported ... it should be possible right ?

I meant to take this step by step. I saw that some existing test cases had the 
same assumption, e.g. 
{{TestPlacementConstraintTransformations#testCompositeConstraint}}, so I 
thought it might be better not to support the nested case at first. We can 
further investigate the nested case as we move on.

bq. Can you replace with SchedulerNode

Sure, will try in next patch. Do you want me to replace all 
{{FiCaSchedulerNode}} in {{TestPlacementConstraintsUtil}} in next patch?

bq. validate an end-2-end scenario ?

Will do.

Thanks

> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7822-YARN-6592.001.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} to 
> handle OR and AND composite constraints



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341902#comment-16341902
 ] 

Konstantinos Karanasos commented on YARN-7780:
--

Thanks for the comments, [~asuresh].
{quote}line 34: Think we should remove this - allocation == since container at 
the moment.
{quote}
That's true, but we now use the notion of allocation all over the code 
(including allocation tags). I can mention that currently allocation == 
container, though.
{quote}And lets move this line to the bottom - since it is specific to the 
processor.
{quote}
Doesn't the allocator in the Capacity Scheduler only do hard constraints too?

I will apply the fixes Monday morning (don't have a reliable internet 
connection at the moment) so that we can commit this.

 

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7780:
-
Attachment: (was: YARN-7780-YARN-6592.003.patch)

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-7780:
-
Attachment: YARN-7780-YARN-6592.003.patch

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch, YARN-7780-YARN-6592.003.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341883#comment-16341883
 ] 

Konstantinos Karanasos commented on YARN-7780:
--

Thanks for checking the doc, [~jianhe].

Indeed, in this first version I purposely added only the minimum parameters 
required for someone to experiment with constraint placement.

As we discussed with [~asuresh] and [~leftnoteasy], we are planning to do some 
refactoring/cleanup of the parameters after the merge, so if it is okay, I 
would prefer to add the rest of the parameters once this cleanup is done.

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7831) YARN Service CLI should use hadoop.http.authentication.type to determine authentication method

2018-01-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341862#comment-16341862
 ] 

genericqa commented on YARN-7831:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api:
 The patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7831 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907954/YARN-7831.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 522eab96e912 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6eef3d7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19497/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19497/testReport/ |
| Max. process+thread count | 326 (vs. ulimit of 5000) |
| 

[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341856#comment-16341856
 ] 

genericqa commented on YARN-7064:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  8m 
13s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 28s{color} 
| {color:red} root generated 183 new + 1057 unchanged - 0 fixed = 1240 total 
(was 1057) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} root: The patch generated 0 new + 266 unchanged - 4 
fixed = 266 total (was 270) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 31s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7064 |
| JIRA 

[jira] [Commented] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341845#comment-16341845
 ] 

Haibo Chen commented on YARN-7064:
--

+1 pending Jenkins.

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch, YARN-7064.011.patch, 
> YARN-7064.012.patch, YARN-7064.013.patch, YARN-7064.014.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341836#comment-16341836
 ] 

Eric Yang edited comment on YARN-7815 at 1/27/18 12:16 AM:
---

What is the common usage for 3?  Maybe keep 2 read/write and make 3 read-only?


was (Author: eyang):
What is the common usage for 3?

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341836#comment-16341836
 ] 

Eric Yang commented on YARN-7815:
-

What is the common usage for 3?

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341835#comment-16341835
 ] 

Eric Yang commented on YARN-7815:
-

I agree with [~miklos.szeg...@cloudera.com]'s viewpoint of keeping 2 and 3 
read-only and removing 4.  This gives a way to localize the Hadoop config and 
prevent users from modifying a read-only config.  I also agree with [~jlowe]'s 
use case where intermediate output is stored in the container directory to 
evenly distribute IO across separate disks instead of the Docker container tmp 
space.  I think we have consensus on 1 being read-only and 4 being removed.  It 
would be nice to make 2 and 3 controllable via config based on usage type.

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341822#comment-16341822
 ] 

Shane Kumpf commented on YARN-7815:
---

{quote} I am just wondering whether it would be more secure mounting 2. and 
appcache/filecache read only but not mounting 4. 
{quote}
IIRC, if usercache/_user_ is not mounted r/w, I believe writes to 
usercache/_user_/appcache will be denied because docker will create the parent 
directories as root:root. I'll do some more testing here based on the 
suggestions so far.

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7815:
--
Comment: was deleted

(was: {quote} I am just wondering whether it would be more secure mounting 2. 
and appcache/filecache read only but not mounting 4. 
{quote}
IIRC, if usercache/_user_ is not mounted r/w, I believe writes to 
usercache/_user_/appcache will be denied because docker will create the parent 
directories as root:root. I'll do some more testing here based on the 
suggestions so far.)

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341821#comment-16341821
 ] 

Shane Kumpf commented on YARN-7815:
---

{quote} I am just wondering whether it would be more secure mounting 2. and 
appcache/filecache read only but not mounting 4. 
{quote}
IIRC, if usercache/_user_ is not mounted r/w, I believe writes to 
usercache/_user_/appcache will be denied because docker will create the parent 
directories as root:root. I'll do some more testing here based on the 
suggestions so far.

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7626) Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341810#comment-16341810
 ] 

Miklos Szegedi commented on YARN-7626:
--

[~Zian Chen], thank you for the patch.
{code:java}
76  static int is_regex(const char *str) {
77  const char *regex_str = "^[\\^].*[\\$]$";
78  return execute_regex_match(regex_str, str);
79  }{code}
You could just do a simple {{size_t len=strlen(str); return !(len>2 && 
str[0]=='^' && str[len-1]=='$');}} It is probably more efficient than compiling 
the regex, etc.
{code:java}
// Iterate each permitted values.{code}
The word 'through' is missing in the 'iterate through each value' comment here.
{code:java}
137 if (is_regex(permitted_values[j]) == 0) {
138   ret = validate_volume_name_with_argument(values[i], 
permitted_values[j]);
139 }
{code}
I would put {{ret = strncmp(values[i], permitted_values[j], tmp_ptr - 
values[i]);}} into the else of this block.
{code:java}
// if it's a valid REGEX return; for user mount, we need to strictly check
{code}
Isn't there a contradiction with the code? \{{853 if 
(validate_volume_name(mount) == 0) {}}
{code:java}
925 // if (permitted_mounts[i] is a REGEX): use REGEX to compare; return
926 if (is_regex(permitted_mounts[i]) == 0 &&
927 validate_volume_name_with_argument(normalized_path, 
permitted_mounts[i]) == 0) {
928   ret = 1;
929   break;
930 }
{code}
Similarly, isn't there a contradiction between the code and the comment? If 
the comment is right, this check should come before {{if (strcmp(normalized_path, 
permitted_mounts[i]) == 0) {}} and break in the regex case, regardless of 
whether it matches.
{code:java}
979 ret = normalize_mounts(permitted_ro_mounts, -1);
{code}
If {{isUserMount}} is a boolean, I would use 0 or 1. -1 might be misleading to 
some folks.

> Allow regular expression matching in container-executor.cfg for devices and 
> named docker volumes mount
> --
>
> Key: YARN-7626
> URL: https://issues.apache.org/jira/browse/YARN-7626
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7626.001.patch, YARN-7626.002.patch, 
> YARN-7626.003.patch, YARN-7626.004.patch, YARN-7626.005.patch
>
>
> Currently, when we configure some of the GPU device related fields (like ) in 
> container-executor.cfg, these fields are generated based on different driver 
> versions or GPU device names. We want to enable regular expression matching 
> so that users don't need to manually set up these fields when configuring 
> container-executor.cfg.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7833) Extend SLS to support simulation of a Federated Environment

2018-01-26 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-7833:
--

Assignee: Jose Miguel Arreola

> Extend SLS to support simulation of a Federated Environment
> ---
>
> Key: YARN-7833
> URL: https://issues.apache.org/jira/browse/YARN-7833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Jose Miguel Arreola
>Priority: Major
>
> To develop algorithms for federation, it would be of great help to have a 
> version of SLS that supports multi RMs and GPG.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7615) [RESERVATION] Federation StateStore: support storage/retrieval of reservations

2018-01-26 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-7615:
--

Assignee: Giovanni Matteo Fumarola

> [RESERVATION] Federation StateStore: support storage/retrieval of reservations
> --
>
> Key: YARN-7615
> URL: https://issues.apache.org/jira/browse/YARN-7615
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341805#comment-16341805
 ] 

Jason Lowe commented on YARN-7815:
--

I suspect we can't make the usercache readonly because we are mounting two 
other filesystems _underneath_ that now read-only filesystem.  We should retry 
with usercache/_user_/filecache being read-only and 
usercache/_user_/appcache/_application_ being read-write.  The appcache mount 
needs to be read-write since that's where the container work directory is along 
with the application scratch area where shuffle outputs are deposited.


> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7831) YARN Service CLI should use hadoop.http.authentication.type to determine authentication method

2018-01-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7831:

Attachment: YARN-7831.001.patch

> YARN Service CLI should use hadoop.http.authentication.type to determine 
> authentication method
> --
>
> Key: YARN-7831
> URL: https://issues.apache.org/jira/browse/YARN-7831
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7831.001.patch
>
>
> YARN Service CLI uses REST API in resource manager to control YARN cluster.  
> The authentication type is currently determined by using isSecurityEnabled, 
> but the code should determine security type based on http authentication type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7405) [GQ] Bias container allocations based on global view

2018-01-26 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-7405:
--

Assignee: Subru Krishnan

> [GQ] Bias container allocations based on global view
> 
>
> Key: YARN-7405
> URL: https://issues.apache.org/jira/browse/YARN-7405
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Subru Krishnan
>Priority: Major
>
> Each RM in a federation should bias its local allocations of containers based 
> on the global over/under utilization of queues. As part of this the local RM 
> should account for the work that other RMs will be doing in between the 
> updates we receive via the heartbeats of YARN-7404 (the mechanics used for 
> synchronization).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7834) [GQ] Rebalance queue configuration for load-balancing and locality affinities

2018-01-26 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-7834:
--

 Summary: [GQ] Rebalance queue configuration for load-balancing and 
locality affinities
 Key: YARN-7834
 URL: https://issues.apache.org/jira/browse/YARN-7834
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino


This Jira tracks algorithmic work, which will run in the GPG and will rebalance 
the mapping of queues to sub-clusters. The current design supports both balancing 
the "load" across sub-clusters (proportionally to their size) and, as a second 
objective, maximizing the affinity between queues and the sub-clusters where 
they historically have the most demand.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7834) [GQ] Rebalance queue configuration for load-balancing and locality affinities

2018-01-26 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-7834:
--

Assignee: Carlo Curino

> [GQ] Rebalance queue configuration for load-balancing and locality affinities
> -
>
> Key: YARN-7834
> URL: https://issues.apache.org/jira/browse/YARN-7834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
>
> This Jira tracks algorithmic work, which will run in the GPG and will 
> rebalance the mapping of queues to sub-clusters. The current design supports 
> both balancing the "load" across sub-clusters (proportionally to their size) 
> and as a second objective to maximize the affinity between queues and the 
> sub-clusters where they historically have most demand.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7403) [GQ] Compute global and local "IdealAllocation"

2018-01-26 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7403:
---
Summary: [GQ] Compute global and local "IdealAllocation"  (was: [GQ] 
Compute global and local preemption)

> [GQ] Compute global and local "IdealAllocation"
> ---
>
> Key: YARN-7403
> URL: https://issues.apache.org/jira/browse/YARN-7403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Attachments: YARN-7403.draft.patch, YARN-7403.draft2.patch, 
> YARN-7403.draft3.patch, global-queues-preemption.PNG
>
>
> This JIRA tracks algorithmic effort to combine the local queue views of 
> capacity guarantee/use/demand and compute the global amount of preemption, 
> and based on that, "where" (in which sub-cluster) preemption will be enacted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7832) Logs page does not work for Running applications

2018-01-26 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7832:
-
Attachment: Screen Shot 2018-01-26 at 3.28.40 PM.png

> Logs page does not work for Running applications
> 
>
> Key: YARN-7832
> URL: https://issues.apache.org/jira/browse/YARN-7832
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Priority: Critical
> Attachments: Screen Shot 2018-01-26 at 3.28.40 PM.png
>
>
> Scenario
>  * Run yarn service application
>  * When application is Running, go to log page
>  * Select AttemptId and Container Id
> Logs are not showed on UI. It complains "No log data available!"
>  
> Here 
> [http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
>  API fails with 500 Internal Server Error.
> {"exception":"WebApplicationException","message":"java.io.IOException: 
> ","javaClassName":"javax.ws.rs.WebApplicationException"}
> {code:java}
> GET 
> http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
>  500 (Internal Server Error)
> (anonymous) @ VM779:1
> send @ vendor.js:572
> ajax @ vendor.js:548
> (anonymous) @ vendor.js:5119
> initializePromise @ vendor.js:2941
> Promise @ vendor.js:3005
> ajax @ vendor.js:5117
> ajax @ yarn-ui.js:1
> superWrapper @ vendor.js:1591
> query @ vendor.js:5112
> ember$data$lib$system$store$finders$$_query @ vendor.js:5177
> query @ vendor.js:5334
> fetchLogFilesForContainerId @ yarn-ui.js:132
> showLogFilesForContainerId @ yarn-ui.js:126
> run @ vendor.js:648
> join @ vendor.js:648
> run.join @ vendor.js:1510
> closureAction @ vendor.js:1865
> trigger @ vendor.js:302
> (anonymous) @ vendor.js:339
> each @ vendor.js:61
> each @ vendor.js:51
> trigger @ vendor.js:339
> d.select @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> e.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> d.invoke @ vendor.js:5598
> d.trigger @ vendor.js:5598
> (anonymous) @ vendor.js:5598
> dispatch @ vendor.js:306
> elemData.handle @ vendor.js:281{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7833) Extend SLS to support simulation of a Federated Environment

2018-01-26 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-7833:
--

 Summary: Extend SLS to support simulation of a Federated 
Environment
 Key: YARN-7833
 URL: https://issues.apache.org/jira/browse/YARN-7833
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino


To develop algorithms for federation, it would be of great help to have a 
version of SLS that supports multi RMs and GPG.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7832) Logs page does not work for Running applications

2018-01-26 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7832:
-
Description: 
Scenario
 * Run yarn service application
 * When application is Running, go to log page
 * Select AttemptId and Container Id

Logs are not shown on the UI. It complains "No log data available!"

 

Here 
[http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
 API fails with 500 Internal Server Error.

{"exception":"WebApplicationException","message":"java.io.IOException: 
","javaClassName":"javax.ws.rs.WebApplicationException"}
{code:java}
GET 
http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
 500 (Internal Server Error)
(anonymous) @ VM779:1
send @ vendor.js:572
ajax @ vendor.js:548
(anonymous) @ vendor.js:5119
initializePromise @ vendor.js:2941
Promise @ vendor.js:3005
ajax @ vendor.js:5117
ajax @ yarn-ui.js:1
superWrapper @ vendor.js:1591
query @ vendor.js:5112
ember$data$lib$system$store$finders$$_query @ vendor.js:5177
query @ vendor.js:5334
fetchLogFilesForContainerId @ yarn-ui.js:132
showLogFilesForContainerId @ yarn-ui.js:126
run @ vendor.js:648
join @ vendor.js:648
run.join @ vendor.js:1510
closureAction @ vendor.js:1865
trigger @ vendor.js:302
(anonymous) @ vendor.js:339
each @ vendor.js:61
each @ vendor.js:51
trigger @ vendor.js:339
d.select @ vendor.js:5598
(anonymous) @ vendor.js:5598
d.invoke @ vendor.js:5598
d.trigger @ vendor.js:5598
e.trigger @ vendor.js:5598
(anonymous) @ vendor.js:5598
d.invoke @ vendor.js:5598
d.trigger @ vendor.js:5598
(anonymous) @ vendor.js:5598
dispatch @ vendor.js:306
elemData.handle @ vendor.js:281{code}

  was:
Scenario
 * Run yarn service application
 * When application is Running, go to log page
 * Select AttemptId and Container Id

Logs are not showed on UI. It complains "No log data available!"

 

Here 
[http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358|http://ctr-e137-1514896590304-35963-01-04.hwx.site:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
 API fails with 500 Internal Server Error.

{"exception":"WebApplicationException","message":"java.io.IOException: 
","javaClassName":"javax.ws.rs.WebApplicationException"}
{code:java}
GET 
http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
 500 (Internal Server Error)
(anonymous) @ VM779:1
send @ vendor.js:572
ajax @ vendor.js:548
(anonymous) @ vendor.js:5119
initializePromise @ vendor.js:2941
Promise @ vendor.js:3005
ajax @ vendor.js:5117
ajax @ yarn-ui.js:1
superWrapper @ vendor.js:1591
query @ vendor.js:5112
ember$data$lib$system$store$finders$$_query @ vendor.js:5177
query @ vendor.js:5334
fetchLogFilesForContainerId @ yarn-ui.js:132
showLogFilesForContainerId @ yarn-ui.js:126
run @ vendor.js:648
join @ vendor.js:648
run.join @ vendor.js:1510
closureAction @ vendor.js:1865
trigger @ vendor.js:302
(anonymous) @ vendor.js:339
each @ vendor.js:61
each @ vendor.js:51
trigger @ vendor.js:339
d.select @ vendor.js:5598
(anonymous) @ vendor.js:5598
d.invoke @ vendor.js:5598
d.trigger @ vendor.js:5598
e.trigger @ vendor.js:5598
(anonymous) @ vendor.js:5598
d.invoke @ vendor.js:5598
d.trigger @ vendor.js:5598
(anonymous) @ vendor.js:5598
dispatch @ vendor.js:306
elemData.handle @ vendor.js:281{code}


> Logs page does not work for Running applications
> 
>
> Key: YARN-7832
> URL: https://issues.apache.org/jira/browse/YARN-7832
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Priority: Critical
>
> Scenario
>  * Run yarn service application
>  * When application is Running, go to log page
>  * Select AttemptId and Container Id
> Logs are not showed on UI. It complains "No log data available!"
>  
> Here 
> [http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
>  API fails with 500 Internal Server Error.
> {"exception":"WebApplicationException","message":"java.io.IOException: 
> ","javaClassName":"javax.ws.rs.WebApplicationException"}
> {code:java}
> GET 
> http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
>  500 (Internal Server Error)
> (anonymous) @ VM779:1
> send @ vendor.js:572
> ajax @ vendor.js:548
> (anonymous) @ vendor.js:5119
> initializePromise @ vendor.js:2941
> Promise @ vendor.js:3005
> ajax @ vendor.js:5117
> ajax @ yarn-ui.js:1
> superWrapper @ vendor.js:1591
> query @ vendor.js:5112
> ember$data$lib$system$store$finders$$_query @ vendor.js:5177
> query @ 

[jira] [Created] (YARN-7832) Logs page does not work for Running applications

2018-01-26 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7832:


 Summary: Logs page does not work for Running applications
 Key: YARN-7832
 URL: https://issues.apache.org/jira/browse/YARN-7832
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.0.0
Reporter: Yesha Vora


Scenario
 * Run yarn service application
 * When application is Running, go to log page
 * Select AttemptId and Container Id

Logs are not shown on the UI. It complains "No log data available!"

 

Here 
[http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358|http://ctr-e137-1514896590304-35963-01-04.hwx.site:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358]
 API fails with 500 Internal Server Error.

{"exception":"WebApplicationException","message":"java.io.IOException: 
","javaClassName":"javax.ws.rs.WebApplicationException"}
{code:java}
GET 
http://xxx:8188/ws/v1/applicationhistory/containers/container_e07_1516919074719_0004_01_01/logs?_=1517009230358
 500 (Internal Server Error)
(anonymous) @ VM779:1
send @ vendor.js:572
ajax @ vendor.js:548
(anonymous) @ vendor.js:5119
initializePromise @ vendor.js:2941
Promise @ vendor.js:3005
ajax @ vendor.js:5117
ajax @ yarn-ui.js:1
superWrapper @ vendor.js:1591
query @ vendor.js:5112
ember$data$lib$system$store$finders$$_query @ vendor.js:5177
query @ vendor.js:5334
fetchLogFilesForContainerId @ yarn-ui.js:132
showLogFilesForContainerId @ yarn-ui.js:126
run @ vendor.js:648
join @ vendor.js:648
run.join @ vendor.js:1510
closureAction @ vendor.js:1865
trigger @ vendor.js:302
(anonymous) @ vendor.js:339
each @ vendor.js:61
each @ vendor.js:51
trigger @ vendor.js:339
d.select @ vendor.js:5598
(anonymous) @ vendor.js:5598
d.invoke @ vendor.js:5598
d.trigger @ vendor.js:5598
e.trigger @ vendor.js:5598
(anonymous) @ vendor.js:5598
d.invoke @ vendor.js:5598
d.trigger @ vendor.js:5598
(anonymous) @ vendor.js:5598
dispatch @ vendor.js:306
elemData.handle @ vendor.js:281{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341743#comment-16341743
 ] 

Eric Badger commented on YARN-7815:
---

[~miklos.szeg...@cloudera.com], yes, I absolutely agree. If we can remove the 
usercache bind-mount, then we should. I'm just not sure how easy/possible that 
is, going off of [~shaneku...@gmail.com]'s comment above about not being able 
to make it read-only.

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341728#comment-16341728
 ] 

Miklos Szegedi commented on YARN-7815:
--

[~ebadger], thank you for raising this. I am just wondering whether it would be 
more secure mounting 2. and appcache/filecache read only but not mounting 4. 
This would improve security by not letting apps view and modify each other's 
directories. One reason to containerize is to isolate apps from each other, 
isn't it?

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3895) Support ACLs in ATSv2

2018-01-26 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-3895:


Assignee: Vrushali C  (was: Varun Saxena)

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Vrushali C
>Priority: Major
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7064) Use cgroup to get container resource utilization

2018-01-26 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7064:
-
Attachment: YARN-7064.014.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-7064
> URL: https://issues.apache.org/jira/browse/YARN-7064
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-7064.000.patch, YARN-7064.001.patch, 
> YARN-7064.002.patch, YARN-7064.003.patch, YARN-7064.004.patch, 
> YARN-7064.005.patch, YARN-7064.007.patch, YARN-7064.008.patch, 
> YARN-7064.009.patch, YARN-7064.010.patch, YARN-7064.011.patch, 
> YARN-7064.012.patch, YARN-7064.013.patch, YARN-7064.014.patch
>
>
> This is an addendum to YARN-6668. What happens is that that jira always wants 
> to rebase patches against YARN-1011 instead of trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7831) YARN Service CLI should use hadoop.http.authentication.type to determine authentication method

2018-01-26 Thread Eric Yang (JIRA)
Eric Yang created YARN-7831:
---

 Summary: YARN Service CLI should use 
hadoop.http.authentication.type to determine authentication method
 Key: YARN-7831
 URL: https://issues.apache.org/jira/browse/YARN-7831
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Yang


YARN Service CLI uses REST API in resource manager to control YARN cluster.  
The authentication type is currently determined by using isSecurityEnabled, but 
the code should determine security type based on http authentication type.
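
A rough sketch of the intended check (the property name comes from the summary 
above; the "kerberos"/"simple" values follow the usual hadoop.http.authentication 
settings, so treat the exact strings as an assumption):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class HttpAuthTypeSketch {
  // Decide whether the service CLI should negotiate Kerberos/SPNEGO based on
  // the HTTP authentication type rather than UserGroupInformation.isSecurityEnabled().
  static boolean useKerberosAuth(Configuration conf) {
    String httpAuthType = conf.get("hadoop.http.authentication.type", "simple");
    return "kerberos".equalsIgnoreCase(httpAuthType);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    System.out.println("RPC security enabled: "
        + UserGroupInformation.isSecurityEnabled());
    System.out.println("Use Kerberos for REST calls: " + useKerberosAuth(conf));
  }
}
{code}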



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7831) YARN Service CLI should use hadoop.http.authentication.type to determine authentication method

2018-01-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-7831:
---

Assignee: Eric Yang

> YARN Service CLI should use hadoop.http.authentication.type to determine 
> authentication method
> --
>
> Key: YARN-7831
> URL: https://issues.apache.org/jira/browse/YARN-7831
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>
> YARN Service CLI uses REST API in resource manager to control YARN cluster.  
> The authentication type is currently determined by using isSecurityEnabled, 
> but the code should determine security type based on http authentication type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7781:

Description: 
Update YARN-Services-Examples.md to make the following additions/changes:

1. Add an additional URL and PUT Request JSON to support flex:

Update to flex up/down the number of containers (instances) of a component of a 
service
PUT URL – http://localhost:8088/app/v1/services/hello-world
PUT Request JSON
{code}
{
  "components" : [ {
"name" : "hello",
"number_of_containers" : 3
  } ]
}
{code}

2. Modify all occurrences of /ws/ to /app/

  was:
Update YARN-Services-Examples.md to make the following additions/changes:

1. Add an additional URL and PUT Request JSON to support flex:

Update to flex up/down the no of containers (instances) of a component of a 
service
PUT URL – http://localhost:9191/app/v1/services/hello-world
PUT Request JSON
{code}
{
  "components" : [ {
"name" : "hello",
"number_of_containers" : 3
  } ]
}
{code}

2. Modify all occurrences of /ws/ to /app/


> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341700#comment-16341700
 ] 

Gour Saha commented on YARN-7781:
-

Thanks [~jianhe] for the patch. Quick comments -

1. In YarnServiceAPI.md:
Seems like "|false|string||" moved to a new line instead of being on the same 
line for "principal_name".

2. Fix the 2 whitespace errors.

3. In YARN-Services-Examples.md in the below section, is the "name" attribute 
with value "hello" required? The URI path already says that it is for the 
component name hello. So we should not need anything other than 
"number_of_containers" in the body.
{noformat}
### Update to flex up/down the number of containers (instances) of a component 
of a service
PUT URL - http://localhost:8088/app/v1/services/hello-world/components/hello
# PUT Request JSON
```json
{
"name": "hello",
"number_of_containers": 3
}
{noformat}
4. Don't we support a PUT URL – 
[http://localhost:8088/app/v1/services/hello-world] where we can pass a single 
JSON and flex multiple components in a single API call? This is what I 
mentioned in point 1 in this jira description.
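
For context, here is a bare-bones client sketch of the component-level flex call 
from the example above (a hypothetical illustration only, not part of the patch; 
whether "name" must be repeated in the body is exactly the open question in 
point 3):

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexComponentSketch {
  public static void main(String[] args) throws Exception {
    // Component-level flex endpoint, as shown in YARN-Services-Examples.md above.
    URL url = new URL(
        "http://localhost:8088/app/v1/services/hello-world/components/hello");
    // Body with only "number_of_containers"; whether "name" is also required
    // is the open question in point 3.
    String body = "{\"number_of_containers\": 3}";

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}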

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-26 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341671#comment-16341671
 ] 

Eric Badger commented on YARN-7815:
---

Hey [~shaneku...@gmail.com], I'm wondering if we can remove even more mounts 
than this. I think we have redundant mounts: basically, we mount "/foo" and 
then also mount "/foo/bar". The second mount is unnecessary since it is already 
underneath "/foo".

For a container, here's a sample set of mounts that we make:
{noformat}
1. /tmp/hadoop-ebadger/nm-local-dir/filecache

2. 
/tmp/hadoop-ebadger/nm-local-dir/usercache/ebadger/appcache/application_1516983466478_0003/container_1516983466478_0003_01_02

3. 
/tmp/hadoop-ebadger/nm-local-dir/usercache/ebadger/appcache/application_1516983466478_0003/

4. /tmp/hadoop-ebadger/nm-local-dir/usercache/ebadger/{noformat}
So we have filecache and appcache. Clearly, the filecache should be read-only. 
We can then get rid of mounts 2 and 3, since they are subdirectories of mount 4.
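To make the idea concrete, here is an illustrative Java sketch (not actual 
NodeManager code) of dropping mount paths that are already covered by a parent 
mount; the sample paths are abbreviated from the list above.
{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MountDedup {
  /** Keep only mounts that are not nested under another mount in the list. */
  static List<Path> dropRedundantMounts(List<String> mounts) {
    List<Path> sorted = new ArrayList<>();
    for (String m : mounts) {
      sorted.add(Paths.get(m).normalize());
    }
    // Ancestors (shorter paths) first, so parents are kept before their children.
    sorted.sort(Comparator.comparingInt(Path::getNameCount));
    List<Path> kept = new ArrayList<>();
    for (Path p : sorted) {
      boolean covered = kept.stream().anyMatch(p::startsWith);
      if (!covered) {
        kept.add(p);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    List<String> sample = Arrays.asList(
        "/tmp/hadoop-ebadger/nm-local-dir/filecache",                                   // 1
        "/tmp/hadoop-ebadger/nm-local-dir/usercache/ebadger/appcache/appX/containerY",  // 2
        "/tmp/hadoop-ebadger/nm-local-dir/usercache/ebadger/appcache/appX/",            // 3
        "/tmp/hadoop-ebadger/nm-local-dir/usercache/ebadger/");                         // 4
    // Prints only 1 and 4: mounts 2 and 3 are dropped because they sit under 4.
    dropRedundantMounts(sample).forEach(System.out::println);
  }
}
{code}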

 

cc [~jlowe]

 

> Mount the filecache as read-only in Docker containers
> -
>
> Key: YARN-7815
> URL: https://issues.apache.org/jira/browse/YARN-7815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, when using the Docker runtime, the filecache directories are 
> mounted read-write into the Docker containers. Read write access is not 
> necessary. We should make this more restrictive by changing that mount to 
> read-only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341653#comment-16341653
 ] 

genericqa commented on YARN-7765:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 30s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907926/YARN-7765.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 03dd32211aaf 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a37e7f0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/19494/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19494/testReport/ |
| Max. process+thread count | 395 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341651#comment-16341651
 ] 

Arun Suresh commented on YARN-7780:
---

[~jianhe], thanks for taking a look.
As you've noticed, we have a pluggable algorithm and a choice of iterator (I'm 
not sure we need to expose the iterator choice to the end user, since it might 
not make sense there).
I think we should expose the config for the pluggable algorithm class, though.
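If the algorithm class does get exposed, usage would presumably look something 
like the sketch below. The property name and the algorithm class are 
placeholders I made up for illustration; the real key should be taken from 
YarnConfiguration and the documentation once it lands.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class PlacementAlgorithmConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key, guessed from the "yarn.resourcemanager.placement-constraints."
    // prefix discussed on this JIRA; verify the real name before relying on it.
    conf.set("yarn.resourcemanager.placement-constraints.algorithm.class",
        "com.example.MyPlacementAlgorithm");  // placeholder implementation class
    System.out.println(
        conf.get("yarn.resourcemanager.placement-constraints.algorithm.class"));
  }
}
{code}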

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7796) Container-executor fails with segfault on certain OS configurations

2018-01-26 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341628#comment-16341628
 ] 

Jim Brennan commented on YARN-7796:
---

[~grepas] that is interesting.  I wonder if it is the version of gcc that is 
the issue?  This is what I was using on RHEL 6, which causes the problem when 
running on RHEL 7:

gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)

I'll be interested to hear if removing the -fstack-check flag works in your 
case.

 

> Container-executor fails with segfault on certain OS configurations
> ---
>
> Key: YARN-7796
> URL: https://issues.apache.org/jira/browse/YARN-7796
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7796.000.patch, YARN-7796.001.patch, 
> YARN-7796.002.patch
>
>
> There is a relatively big (128K) buffer allocated on the stack in 
> container-executor.c for the purpose of copying files. As indicated by the 
> below gdb stack trace, this allocation can fail with SIGSEGV. This happens 
> only on certain OS configurations - I can reproduce this issue on RHEL 6.9:
> {code:java}
> [Thread debugging using libthread_db enabled]
> main : command provided 0
> main : run as user is ***
> main : requested yarn user is ***
> Program received signal SIGSEGV, Segmentation fault.
> 0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> 966 char buffer[buffer_size];
> (gdb) bt
> #0  0x004069bc in copy_file (input=7, in_filename=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> out_filename=0x932930 
> "/yarn/nm/usercache/systest/appcache/application_1516711246952_0001/container_1516711246952_0001_02_01.tokens",
>  perm=384)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:966
> #1  0x00409a81 in initialize_app (user=, 
> app_id=0x7ffd669fd2b7 "application_1516711246952_0001", 
> nmPrivate_credentials_file=0x7ffd669fd2d6 
> "/yarn/nm/nmPrivate/container_1516711246952_0001_02_01.tokens", 
> local_dirs=0x9331c8, log_roots=, args=0x7ffd669fb168)
> at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:1122
> #2  0x00403f90 in main (argc=, argv= optimized out>) at 
> /root/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c:558
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341565#comment-16341565
 ] 

Rohith Sharma K S edited comment on YARN-7765 at 1/26/18 8:54 PM:
--

I see a couple of problems in NodeManager.
 # The HBase connection is created in serviceInit of TimelineWriter, but at that 
point the NM has not yet done its kinit.
 # NMTimelinePublisher also assigns nmLoginUgi in serviceInit. Since the NM has 
not yet logged in from its keytab, nmLoginUgi is set to the current user. As a 
result, the NM throws the above exception while publishing entities.
 # With the same logged-in user, the NM recovery flow is also affected. All 
recovered applications would fail as well, since applications are recovered in 
the serviceInit phase.

To fix all three issues, we need to do the secure login before initializing 
services in NodeManager. Otherwise, we would have to fix the three issues one by 
one in different places.


was (Author: rohithsharma):
I see that there are couple of problem in NodeManager.
# HBase connection is created in seviceInit of TimelineWriter. But at this 
point of time, NM had not yet done kinit. 
# NMTimelinePublisher as well in serviceInit, nmLoginUgi has been copied to 
local variable and used while creating a TimelineClient. So, TimelineClient is 
created with current user but not with logged in user. As a result, NM throw 
above exception while publishing as well. 
# With same logged in user, it also affect NM recovery flow. All the recovered 
application would also fail since applications are recovered in serviceInit 
phase. 

To fix all the 3 issue, we need to do secure login before initializing services 
in NodeManager. Otherwise, we need to fix above 3 issues one by one in 
different places.

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7765.01.patch
>
>
> Secure cluster is deployed and all YARN services are started successfully. 
> When application is submitted, app collectors which is started as aux-service 
> throwing below exception. But this exception is *NOT* observed from RM 
> TimelineCollector. 
> Cluster is deployed with Hadoop-3.0 and Hbase-1.2.6 secure cluster. All the 
> YARN and HBase service are started and working perfectly fine. After 24 hours 
> i.e when token lifetime is expired, HBaseClient in NM and HDFSClient in 
> HMaster and HRegionServer started getting this error. After sometime, HBase 
> daemons got shutdown. In NM, JVM didn't shutdown but none of the events got 
> published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7765:

Attachment: YARN-7765.01.patch

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7765.01.patch
>
>
> Secure cluster is deployed and all YARN services are started successfully. 
> When application is submitted, app collectors which is started as aux-service 
> throwing below exception. But this exception is *NOT* observed from RM 
> TimelineCollector. 
> Cluster is deployed with Hadoop-3.0 and Hbase-1.2.6 secure cluster. All the 
> YARN and HBase service are started and working perfectly fine. After 24 hours 
> i.e when token lifetime is expired, HBaseClient in NM and HDFSClient in 
> HMaster and HRegionServer started getting this error. After sometime, HBase 
> daemons got shutdown. In NM, JVM didn't shutdown but none of the events got 
> published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341565#comment-16341565
 ] 

Rohith Sharma K S commented on YARN-7765:
-

I see a couple of problems in NodeManager.
# The HBase connection is created in serviceInit of TimelineWriter, but at that 
point the NM had not yet done its kinit.
# In NMTimelinePublisher's serviceInit as well, nmLoginUgi is copied to a local 
variable and used while creating the TimelineClient. So the TimelineClient is 
created with the current user rather than the logged-in user, and as a result 
the NM throws the above exception while publishing too.
# With the same logged-in user, the NM recovery flow is also affected. All 
recovered applications would fail as well, since applications are recovered in 
the serviceInit phase.

To fix all three issues, we need to do the secure login before initializing 
services in NodeManager. Otherwise, we would have to fix the three issues one by 
one in different places.
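
A minimal sketch of the proposed ordering is below; it only illustrates doing 
the keytab login before any child service is initialized and is not the actual 
patch. The hostname is a placeholder.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SecureLoginFirstSketch {
  static void doSecureLogin(Configuration conf, String hostname) throws IOException {
    if (UserGroupInformation.isSecurityEnabled()) {
      // Log in from the NM keytab so the login UGI is correct before
      // TimelineWriter, NMTimelinePublisher and recovery are initialized.
      SecurityUtil.login(conf, YarnConfiguration.NM_KEYTAB,
          YarnConfiguration.NM_PRINCIPAL, hostname);
    }
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new YarnConfiguration();
    doSecureLogin(conf, "nm-host.example.com");  // placeholder hostname
    // ... only after this point initialize the timeline collector/publisher,
    // recover applications, etc.
  }
}
{code}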

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> Secure cluster is deployed and all YARN services are started successfully. 
> When application is submitted, app collectors which is started as aux-service 
> throwing below exception. But this exception is *NOT* observed from RM 
> TimelineCollector. 
> Cluster is deployed with Hadoop-3.0 and Hbase-1.2.6 secure cluster. All the 
> YARN and HBase service are started and working perfectly fine. After 24 hours 
> i.e when token lifetime is expired, HBaseClient in NM and HDFSClient in 
> HMaster and HRegionServer started getting this error. After sometime, HBase 
> daemons got shutdown. In NM, JVM didn't shutdown but none of the events got 
> published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-7765:
---

Assignee: Rohith Sharma K S

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>
> Secure cluster is deployed and all YARN services are started successfully. 
> When application is submitted, app collectors which is started as aux-service 
> throwing below exception. But this exception is *NOT* observed from RM 
> TimelineCollector. 
> Cluster is deployed with Hadoop-3.0 and Hbase-1.2.6 secure cluster. All the 
> YARN and HBase service are started and working perfectly fine. After 24 hours 
> i.e when token lifetime is expired, HBaseClient in NM and HDFSClient in 
> HMaster and HRegionServer started getting this error. After sometime, HBase 
> daemons got shutdown. In NM, JVM didn't shutdown but none of the events got 
> published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7824) Yarn Component Instance page should include link to container logs

2018-01-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341520#comment-16341520
 ] 

Vrushali C commented on YARN-7824:
--

Thanks Yesha! I guess the same might exist in branch-2 or 2.9 as well. Sunil 
might have a better idea.

Also, great blogpost today! 

> Yarn Component Instance page should include link to container logs
> --
>
> Key: YARN-7824
> URL: https://issues.apache.org/jira/browse/YARN-7824
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Priority: Major
>
> Steps:
> 1) Launch Httpd example
> 2) Visit component Instance page for httpd-proxy-0
> This page has information regarding httpd-proxy-0 component.
> This page should also include a link to container logs for this component
> h2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7824) Yarn Component Instance page should include link to container logs

2018-01-26 Thread Yesha Vora (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341518#comment-16341518
 ] 

Yesha Vora commented on YARN-7824:
--

[~vrushalic], I'm noticing this issue in 3.0. Affected version updated.

> Yarn Component Instance page should include link to container logs
> --
>
> Key: YARN-7824
> URL: https://issues.apache.org/jira/browse/YARN-7824
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Priority: Major
>
> Steps:
> 1) Launch Httpd example
> 2) Visit component Instance page for httpd-proxy-0
> This page has information regarding httpd-proxy-0 component.
> This page should also include a link to container logs for this component
> h2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7824) Yarn Component Instance page should include link to container logs

2018-01-26 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7824:
-
Affects Version/s: 3.0.0

> Yarn Component Instance page should include link to container logs
> --
>
> Key: YARN-7824
> URL: https://issues.apache.org/jira/browse/YARN-7824
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Priority: Major
>
> Steps:
> 1) Launch Httpd example
> 2) Visit component Instance page for httpd-proxy-0
> This page has information regarding httpd-proxy-0 component.
> This page should also include a link to container logs for this component
> h2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7830) If attempt has selected grid view, attempt info page should be opened with grid view

2018-01-26 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-7830:
-
Affects Version/s: 3.0.0

>  If attempt has selected grid view, attempt info page should be opened with 
> grid view
> -
>
> Key: YARN-7830
> URL: https://issues.apache.org/jira/browse/YARN-7830
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Priority: Major
>
> Steps:
> 1) Start Application and visit attempt page
> 2) click on Grid view
>  3) Click on attempt 1
>  
> Current behavior:
> Clicking attempt 1 redirects to the attempt info page, which opens in graph 
> view.
>  
> Expected behavior:
> In this scenario, it should redirect to grid view.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7825) Maintain constant horizontal application info bar for all pages

2018-01-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341516#comment-16341516
 ] 

Vrushali C commented on YARN-7825:
--

Hi Yesha,

Please attach a screenshot if possible, so that it's clearer what the error is.

> Maintain constant horizontal application info bar for all pages
> ---
>
> Key: YARN-7825
> URL: https://issues.apache.org/jira/browse/YARN-7825
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Priority: Major
>
> Steps:
> 1) enable Ats v2
> 2) Start Yarn service application ( Httpd )
> 3) Fix horizontal info bar for below pages.
>  * component page
>  * Component Instance info page 
>  * Application attempt Info 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7824) Yarn Component Instance page should include link to container logs

2018-01-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341515#comment-16341515
 ] 

Vrushali C commented on YARN-7824:
--

Hi Yesha,

If you can also indicate which Hadoop version you are seeing this on, that will 
help. It is either 3.0 or 2.9, I believe, since the new YARN UI exists only on 
those. Please put it in the Affects Versions field. 

> Yarn Component Instance page should include link to container logs
> --
>
> Key: YARN-7824
> URL: https://issues.apache.org/jira/browse/YARN-7824
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Priority: Major
>
> Steps:
> 1) Launch Httpd example
> 2) Visit component Instance page for httpd-proxy-0
> This page has information regarding httpd-proxy-0 component.
> This page should also include a link to container logs for this component
> h2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7827) Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404

2018-01-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341511#comment-16341511
 ] 

Vrushali C commented on YARN-7827:
--

Does this work if ATSv2 is disabled? 

> Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404
> -
>
> Key: YARN-7827
> URL: https://issues.apache.org/jira/browse/YARN-7827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
>
> Steps:
> 1) Enable Ats v2
> 2) Start Httpd Yarn service
> 3) Go to UI2 attempts page for yarn service 
> 4) Click on setting icon
> 5) Click on stop service
> 6) This action will pop up a box to confirm stop. click on "Yes"
> Expected behavior:
> Yarn service should be stopped
> Actual behavior:
> Yarn UI is not notifying on whether Yarn service is stopped or not.
> On checking network stack trace, the PUT request failed with HTTP error 404
> {code}
> Sorry, got error 404
> Please consult RFC 2616 for meanings of the error code.
> Error Details
> org.apache.hadoop.yarn.webapp.WebAppException: /v1/services/httpd-hrt-qa-n: 
> controller for v1 not found
>   at org.apache.hadoop.yarn.webapp.Router.resolveDefault(Router.java:247)
>   at org.apache.hadoop.yarn.webapp.Router.resolve(Router.java:155)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:143)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>   at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
>   at 
> com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
>   at 
> com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>   at 
> com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:98)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> 

[jira] [Comment Edited] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341507#comment-16341507
 ] 

Jian He edited comment on YARN-7780 at 1/26/18 7:51 PM:


I noticed quite a few newly added configs are not documented, like the ones 
starting with "yarn.resourcemanager.placement-constraints..".

Are we not going to document those?


was (Author: jianhe):
I noticed quite a few newly added configs are not documented like the ones 
related to "RM_PLACEMENT_CONSTRAINTS_ALGORITHM_CLASS"

Are we not going to document those ?

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341507#comment-16341507
 ] 

Jian He commented on YARN-7780:
---

I noticed quite a few newly added configs are not documented like the ones 
related to "RM_PLACEMENT_CONSTRAINTS_ALGORITHM_CLASS"

Are we not going to document those ?

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341505#comment-16341505
 ] 

Hudson commented on YARN-2185:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13566 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13566/])
YARN-2185. Use pipes when localizing archives. Contributed by Miklos (jlowe: 
rev 1b0f265db1a5bfccf1d870912237ea9618bd9c34)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.
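
As an aside, the streaming idea can be illustrated with a small, self-contained 
Java sketch using Apache commons-compress; this is an illustration only, not 
the actual RunJar/FSDownload change in the patch.
{code:java}
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.GZIPInputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

public class StreamingUnpack {
  /** Unpack a .tar.gz from a stream without writing the archive itself to disk. */
  static void unpackTarGz(InputStream in, Path destDir) throws IOException {
    try (TarArchiveInputStream tar =
        new TarArchiveInputStream(new GZIPInputStream(new BufferedInputStream(in)))) {
      TarArchiveEntry entry;
      while ((entry = tar.getNextTarEntry()) != null) {
        // destDir is assumed to be absolute and normalized.
        Path target = destDir.resolve(entry.getName()).normalize();
        if (!target.startsWith(destDir)) {
          throw new IOException("Entry escapes destination: " + entry.getName());
        }
        if (entry.isDirectory()) {
          Files.createDirectories(target);
        } else {
          Files.createDirectories(target.getParent());
          Files.copy(tar, target, StandardCopyOption.REPLACE_EXISTING);
        }
      }
    }
  }
}
{code}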



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341498#comment-16341498
 ] 

genericqa commented on YARN-7781:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-01-26 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341473#comment-16341473
 ] 

Young Chen commented on YARN-7732:
--

The existing code does set up a class map for different simulator types, but 
the set up is very rigid in only supporting mapreduce.

Yes, the additional params in the AMSimulator init are for describing 
AMSimulator implementation specific parameters. The eventual goal is to support 
many different types of AMSimulators to better model a diverse workload. An 
AMSimulator that executes an arbitrary DAG could also be introduced in the 
Synth generator. 

The SLS and Rumen formats are still mapreduce specific - future work could 
possibly extend those to be generic as well, but that's out of the scope of 
this patch.

Thanks for taking a look [~leftnoteasy]! Any suggestions/improvements are very 
welcome!
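
For readers following along, the "class map" idea can be sketched with plain 
Java as below; the simulator types and class names are stand-ins, not the 
actual SLS classes or wiring.
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class SimulatorRegistrySketch {
  interface AMSim { void simulate(); }                     // stand-in for AMSimulator

  static class MapReduceSim implements AMSim {             // stand-in for MRAMSimulator
    public void simulate() { System.out.println("mapreduce job"); }
  }
  static class StreamSim implements AMSim {                // stand-in for StreamAMSimulator
    public void simulate() { System.out.println("long-running streaming job"); }
  }

  private final Map<String, Supplier<AMSim>> byJobType = new HashMap<>();

  SimulatorRegistrySketch() {
    byJobType.put("mapreduce", MapReduceSim::new);
    byJobType.put("stream", StreamSim::new);
  }

  /** Resolve the simulator for a job type instead of hard coding one implementation. */
  AMSim newSimulator(String jobType) {
    Supplier<AMSim> factory = byJobType.get(jobType);
    if (factory == null) {
      throw new IllegalArgumentException("No simulator registered for " + jobType);
    }
    return factory.get();
  }

  public static void main(String[] args) {
    new SimulatorRegistrySketch().newSimulator("stream").simulate();
  }
}
{code}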

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  
> {code:java}
> SourceURL:https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
>  
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size(
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size(
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc..).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required to support the synth generation of generic 
> jobs.
> See syn_generic.json for an equivalent of the previous syn.json in the new 
> format.
> Using the new generic format, we describe a StreamAMSimulator, which simulates 
> a long-running streaming service that maintains N containers for the lifetime 
> of the AM. See syn_stream.json.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7830) If attempt has selected grid view, attempt info page should be opened with grid view

2018-01-26 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7830:


 Summary:  If attempt has selected grid view, attempt info page 
should be opened with grid view
 Key: YARN-7830
 URL: https://issues.apache.org/jira/browse/YARN-7830
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Yesha Vora


Steps:

1) Start an application and visit the attempt page
2) Click on Grid view
3) Click on attempt 1

Current behavior:
Clicking attempt 1 redirects to the attempt info page, which opens in graph view.

Expected behavior:
In this scenario, it should redirect to grid view.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-01-26 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341473#comment-16341473
 ] 

Young Chen edited comment on YARN-7732 at 1/26/18 7:26 PM:
---

The existing code does set up a class map for different simulator types, but 
the actual AM instance set up is very rigid in only supporting mapreduce.

Yes, the additional params in the AMSimulator init are for describing 
AMSimulator implementation specific parameters. The eventual goal is to support 
many different types of AMSimulators to better model a diverse workload. An 
AMSimulator that executes an arbitrary DAG could also be introduced in the 
Synth generator. 

The SLS and Rumen formats are still mapreduce specific - future work could 
possibly extend those to be generic as well, but that's out of the scope of 
this patch.

Thanks for taking a look [~leftnoteasy]! Any suggestions/improvements are very 
welcome!


was (Author: youchen):
The existing code does set up a class map for different simulator types, but 
the set up is very rigid in only supporting mapreduce.

Yes, the additional params in the AMSimulator init are for describing 
AMSimulator implementation specific parameters. The eventual goal is to support 
many different types of AMSimulators to better model a diverse workload. An 
AMSimulator that executes an arbitrary DAG could also be introduced in the 
Synth generator. 

The SLS and Rumen formats are still mapreduce specific - future work could 
possibly extend those to be generic as well, but that's out of the scope of 
this patch.

Thanks for taking a look [~leftnoteasy]! Any suggestions/improvements are very 
welcome!

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  
> {code:java}
> SourceURL:https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
>  
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size(
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>   containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>   nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size(
>   .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>   Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>   (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>   new ContainerSimulator(containerResource, containerLifeTime,
>   hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
>  
> In addition, the syn.json format supported only mapreduce (the parameters 
> were very specific: mtime, rtime, mtasks, rtasks, etc..).
> This patch aims to introduce a new syn.json format that can describe generic 
> jobs, and the SLS setup required 

[jira] [Commented] (YARN-2185) Use pipes when localizing archives

2018-01-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341471#comment-16341471
 ] 

Jason Lowe commented on YARN-2185:
--

The mvn install failure appears to be a hiccup with the SNAPSHOT jars being 
updated during the build.  I cannot reproduce the install failure locally, and 
an almost identical patch did not hit that failure.

+1 for the latest patch.  Committing this.

> Use pipes when localizing archives
> --
>
> Key: YARN-2185
> URL: https://issues.apache.org/jira/browse/YARN-2185
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-2185.000.patch, YARN-2185.001.patch, 
> YARN-2185.002.patch, YARN-2185.003.patch, YARN-2185.004.patch, 
> YARN-2185.005.patch, YARN-2185.006.patch, YARN-2185.007.patch, 
> YARN-2185.008.patch, YARN-2185.009.patch, YARN-2185.010.patch, 
> YARN-2185.011.patch, YARN-2185.012.patch, YARN-2185.012.patch
>
>
> Currently the nodemanager downloads an archive to a local file, unpacks it, 
> and then removes it.  It would be more efficient to stream the data as it's 
> being unpacked to avoid both the extra disk space requirements and the 
> additional disk activity from storing the archive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7822) Constraint satisfaction checker support for composite OR and AND constraints

2018-01-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341468#comment-16341468
 ] 

Arun Suresh commented on YARN-7822:
---

[~cheersyang], thanks for working on this.

A couple of comments:
* I see a TODO stating that nested AND / OR is not supported. Given that the 
children of OR / AND can take any AbstractConstraint, it should be possible, 
right?
* In your testcases, I see references to {{FiCaSchedulerNode}}. Can you replace 
them with SchedulerNode? We would like to keep this module as scheduler 
agnostic as possible.
* Can you also add a testcase to {{TestPlacementProcessor}} to validate an 
end-to-end scenario?

I was thinking we should make changes to the DistributedShell as well. Should 
we do it here, or in another JIRA? I am open to either.




> Constraint satisfaction checker support for composite OR and AND constraints
> 
>
> Key: YARN-7822
> URL: https://issues.apache.org/jira/browse/YARN-7822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7822-YARN-6592.001.patch
>
>
> JIRA to track changes to {{PlacementConstraintsUtil#canSatisfyConstraints}} to 
> handle OR and AND composite constraints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-01-26 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Description: 
Extract the MapReduce specific set-up in the SLSRunner into the MRAMSimulator, 
and enable support for pluggable AMSimulators.

Previously, the AM setup in SLSRunner had the MRAMSimulator type hard-coded; for 
example, startAMFromSynthGenerator() calls this:

 
{code:java}
runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
    jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
    job.getDeadline(), getAMContainerResource(null));
{code}
where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"

The container setup was also only suitable for MapReduce:

 
{code:java}
// From SLSRunner.java (trunk):
// https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java

// map tasks
for (int i = 0; i < job.getNumberMaps(); i++) {
  TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
  RMNode node =
      nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
  String hostname = "/" + node.getRackName() + "/" + node.getHostName();
  long containerLifeTime = tai.getRuntime();
  Resource containerResource =
      Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
          (int) tai.getTaskInfo().getTaskVCores());
  containerList.add(new ContainerSimulator(containerResource,
      containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
}

// reduce tasks
for (int i = 0; i < job.getNumberReduces(); i++) {
  TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
  RMNode node =
      nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
  String hostname = "/" + node.getRackName() + "/" + node.getHostName();
  long containerLifeTime = tai.getRuntime();
  Resource containerResource =
      Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
          (int) tai.getTaskInfo().getTaskVCores());
  containerList.add(
      new ContainerSimulator(containerResource, containerLifeTime,
          hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
}
{code}
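
A generic AM simulator would collapse the two framework-specific loops above into a 
single loop driven by a task description. A minimal sketch, reusing the surrounding 
nmMap/keyAsArray/rand/containerList variables (SynthTaskSpec and its accessors are 
hypothetical placeholders, not part of this patch):
{code:java}
// Hypothetical sketch: one loop over generic task specs instead of
// separate map and reduce loops.
for (SynthTaskSpec task : job.getTasks()) {
  RMNode node =
      nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size()))).getNode();
  String hostname = "/" + node.getRackName() + "/" + node.getHostName();
  Resource res = Resource.newInstance(
      (int) task.getMemory(), (int) task.getVcores());
  containerList.add(new ContainerSimulator(res, task.getRuntime(),
      hostname, task.getPriority(), task.getType()));
}
{code}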
 

In addition, the syn.json format supported only MapReduce (the parameters were 
very specific: mtime, rtime, mtasks, rtasks, etc.).

This patch aims to introduce a new syn.json format that can describe generic 
jobs, and the SLS setup required to support the synth generation of generic 
jobs.

See syn_generic.json for an equivalent of the previous syn.json in the new 
format.

Using the new generic format, we describe a StreamAMSimulator, which simulates a 
long-running streaming service that maintains N containers for the lifetime of 
the AM. See syn_stream.json.

 

  was:
Extract the MapReduce specific set-up in the SLSRunner into the MRAMSimulator, 
and enable support for pluggable AMSimulators.

Previously, the am set up in SLSRunner had the MRAMSimulator type hard coded, 
for example startAMFromSynthGenerator() calls this:

 
{code:java}
runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
    jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
    job.getDeadline(), getAMContainerResource(null));
{code}
where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"

 

In addition, 


> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
> The container set up was also only suitable for mapreduce: 
>  

[jira] [Updated] (YARN-7732) Support Generic AM Simulator from SynthGenerator

2018-01-26 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Summary: Support Generic AM Simulator from SynthGenerator  (was: Support 
Pluggable AM Simulator)

> Support Generic AM Simulator from SynthGenerator
> 
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the am set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
>  
> In addition, 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7732) Support Pluggable AM Simulator

2018-01-26 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Description: 
Extract the MapReduce specific set-up in the SLSRunner into the MRAMSimulator, 
and enable support for pluggable AMSimulators.

Previously, the am set up in SLSRunner had the MRAMSimulator type hard coded, 
for example startAMFromSynthGenerator() calls this:

 
{code:java}
runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
    jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
    job.getDeadline(), getAMContainerResource(null));
{code}
where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"

 

In addition, 

  was:
Extract the MapReduce specific set-up in the SLSRunner into the MRAMSimulator, 
and enable support for pluggable AMSimulators.

Previously, the 


> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the am set up in SLSRunner had the MRAMSimulator type hard coded, 
> for example startAMFromSynthGenerator() calls this:
>  
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
> jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
> job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
>  
> In addition, 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7732) Support Pluggable AM Simulator

2018-01-26 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-7732:
-
Description: 
Extract the MapReduce specific set-up in the SLSRunner into the MRAMSimulator, 
and enable support for pluggable AMSimulators.

Previously, the 

  was:Extract the MapReduce specific set-up in the SLSRunner into the 
MRAMSimulator, and enable support for pluggable AMSimulators


> Support Pluggable AM Simulator
> --
>
> Key: YARN-7732
> URL: https://issues.apache.org/jira/browse/YARN-7732
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Minor
> Attachments: YARN-7732-YARN-7798.01.patch, 
> YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, 
> YARN-7732.03.patch
>
>
> Extract the MapReduce specific set-up in the SLSRunner into the 
> MRAMSimulator, and enable support for pluggable AMSimulators.
> Previously, the 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341430#comment-16341430
 ] 

Hudson commented on YARN-7797:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13565 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13565/])
YARN-7797. Docker host network can not obtain IP address for (billie: rev 
f2fa736f0ab139b5251d115fd75b833d1d7d1dcd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java


> Docker host network can not obtain IP address for RegistryDNS
> -
>
> Key: YARN-7797
> URL: https://issues.apache.org/jira/browse/YARN-7797
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7797.001.patch, YARN-7797.002.patch, 
> YARN-7797.003.patch, YARN-7797.004.patch, YARN-7797.005.patch
>
>
> When docker is configured to use host network, docker inspect command does 
> not return IP address of the container.  This prevents IP information to be 
> collected for RegistryDNS to register a hostname entry for the docker 
> container.
> The proposed solution is to intelligently detect the docker network 
> deployment method, and report back host IP address for RegistryDNS.
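
As a rough sketch of the fallback (assuming the NM can simply report its own address 
when the container shares the host network; this is not the committed code):
{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostNetworkIpSketch {
  /** Returns {ip, hostname} for host-network containers, or null otherwise. */
  public static String[] ipAndHost(String dockerNetwork) throws UnknownHostException {
    if ("host".equals(dockerNetwork)) {
      // `docker inspect` reports no IP for host networking, so use the NM host's.
      InetAddress local = InetAddress.getLocalHost();
      return new String[] {local.getHostAddress(), local.getCanonicalHostName()};
    }
    return null;  // fall back to parsing `docker inspect` for bridged networks
  }
}
{code}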



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3895) Support ACLs in ATSv2

2018-01-26 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341427#comment-16341427
 ] 

Vrushali C commented on YARN-3895:
--

 Hi [~jlowe]  [~jeagles]

I discussed with [~lohit] once again this morning. Based on the scale of domain 
ids, I wanted to revise the storage design. We now propose a domain table whose 
row key is the domain id, with one column for users and another for groups, plus 
columns for the created time and the other fields that exist in the 
TimelineDomain object.

So at read time, just as ATSv1 does, we first get all the entities satisfying 
the query criteria and then look at their domain ids. For each domain id in the 
response, we check the domain table to see whether the user/group has 
permissions.

For the wildcard '*', no check is necessary, since it means all users and groups 
have permissions.

Similarly, if the querying user is an admin, no check is done. None of this is 
executed in non-secure mode.

This will be functionally correct, but it is going to be a bit slow depending on 
the number of domain ids found in the entity response set. If there is only one 
domain id, that is only one more get request to HBase; each additional domain id 
increases the query response time slightly. We can batch the gets to the domain 
table, but even so it could range from a few seconds to minutes depending on the 
number of calls needed, since multiple calls to HBase translate into multiple 
HDFS calls.
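
As a rough sketch of what the per-domain check and the batched variant might look like 
with the HBase 1.2 client (the table name, column family and qualifiers below are 
hypothetical, not an agreed schema):
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DomainAclCheckSketch {
  private static final TableName DOMAIN_TABLE =
      TableName.valueOf("timelineservice.domain");   // hypothetical name
  private static final byte[] INFO = Bytes.toBytes("i");
  private static final byte[] USERS = Bytes.toBytes("users");

  /** Single lookup: may 'user' read entities that belong to 'domainId'? */
  public static boolean canRead(Connection conn, String domainId, String user)
      throws IOException {
    try (Table domainTable = conn.getTable(DOMAIN_TABLE)) {
      Result r = domainTable.get(new Get(Bytes.toBytes(domainId)));
      String readers = Bytes.toString(r.getValue(INFO, USERS));
      return "*".equals(readers)
          || (readers != null && Arrays.asList(readers.split(",")).contains(user));
    }
  }

  /** Batched variant: one multi-get for all domain ids in an entity response. */
  public static Result[] fetchDomains(Connection conn, List<String> domainIds)
      throws IOException {
    try (Table domainTable = conn.getTable(DOMAIN_TABLE)) {
      List<Get> gets = new ArrayList<>();
      for (String id : domainIds) {
        gets.add(new Get(Bytes.toBytes(id)));
      }
      return domainTable.get(gets);
    }
  }
}
{code}
Either way, each call is still an HBase round trip that may fan out to HDFS, which is 
exactly the cost being weighed here.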

I have been scratching my head over this read performance. The only other option 
I see is for the collector to keep the domain id and user/group info in memory 
and write it out with each entity. That way we end up with a denormalized 
dataset, and read queries will be as fast as they can get with HBase. The domain 
table would still exist, and the collector could read from it if it goes down 
and comes back up.

Which way do you think might end up working better for applications like Tez?

Storage-scalability-wise, I think either of the two options would be fine with 
HBase, and the expiration/TTL can be set in either case as well. For optimizing 
read/write performance, we can pre-split the domain table and balance the row 
keys so that they go to different RegionServers, ensuring we don't hot-spot a 
single RS for reads and writes of currently running applications.

thanks

Vrushali

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>Priority: Major
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341417#comment-16341417
 ] 

Arun Suresh commented on YARN-7780:
---

Thanks [~kkaranasos].

A couple of minor nits:
line 28: "This document focuses on" -> "This feature deals with / focuses on"
line 32: "that is, if the constraints for a container cannot be satisfied given 
the current cluster condition": we need to complete the sentence with "... the 
request is rejected and the AM is notified." Also, let's move this line to the 
bottom, since it is specific to the processor.
line 34: I think we should remove this, since allocation == container at the 
moment.

At the end also mention:
"The 'AllocateRequest' has also been modified to include an optional collection 
of 'SchedulingRequest' objects in its 'scheduling_requests' field, in addition 
to the existing 'ask' field (which takes 'ResourceRequest' objects). As 
mentioned earlier, constraints are applied only to requests the AM makes via the 
new SchedulingRequest object.
When the Processor is enabled, constraints are assumed to be **hard**. That is, 
if the constraints for a container cannot be satisfied, either due to 
conflicting constraints or the current cluster / queue condition, those requests 
will be rejected by the Processor. Rejected requests will be reported back to 
the AM in the response to one of the subsequent allocate calls. The 
'AllocateResponse' has been modified to include a 'rejected_requests' field to 
facilitate this."
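
To make the new field concrete, an AM-side request might look roughly like the 
following (a sketch; the builder method names are given as I recall them from the 3.1 
API and should be checked against the final javadoc):
{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import java.util.Collections;

import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

public class SchedulingRequestSketch {
  public static AllocateRequest buildAllocate() {
    // One anti-affine SchedulingRequest: three 2GB/2-vcore containers,
    // no two of them on the same node (via the "hbase-rs" allocation tag).
    SchedulingRequest sr = SchedulingRequest.newBuilder()
        .allocationRequestId(1L)
        .priority(Priority.newInstance(1))
        .allocationTags(Collections.singleton("hbase-rs"))
        .placementConstraintExpression(
            targetNotIn(NODE, allocationTag("hbase-rs")).build())
        .resourceSizing(
            ResourceSizing.newInstance(3, Resource.newInstance(2048, 2)))
        .build();

    // The new scheduling_requests field travels alongside the existing 'ask'.
    return AllocateRequest.newBuilder()
        .schedulingRequests(Collections.singletonList(sr))
        .build();
  }
}
{code}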




> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7829) Rebalance UI2 cluster overview page

2018-01-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7829:

Description: The cluster overview page looks like an upside-down triangle.  It 
would be nice to rebalance the charts so that horizontal real estate is utilized 
properly.  The screenshot attachment includes some suggestions for rebalancing.  
Node Manager status and cluster resources are closely linked, so it would be 
nice to promote that chart to the first row.  Application Status and Resource 
Availability are closely linked, so it would be nice to place Resource usage 
side by side with Application Status to fill up the horizontal real estate.

> Rebalance UI2 cluster overview page
> ---
>
> Key: YARN-7829
> URL: https://issues.apache.org/jira/browse/YARN-7829
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Sunil G
>Priority: Major
> Attachments: ui2-cluster-overview.png
>
>
> The cluster overview page looks like an upside-down triangle.  It would be 
> nice to rebalance the charts so that horizontal real estate is utilized 
> properly.  The screenshot attachment includes some suggestions for rebalancing.  
> Node Manager status and cluster resources are closely linked, so it would be 
> nice to promote that chart to the first row.  Application Status and Resource 
> Availability are closely linked, so it would be nice to place Resource usage 
> side by side with Application Status to fill up the horizontal real estate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7829) Rebalance UI2 cluster overview page

2018-01-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7829:

Description: The cluster overview page looks like a upside down triangle.  
It would be nice to rebalance the charts to ensure horizontal real estate are 
utilized properly.  The screenshot attachment includes some suggestion for 
rebalance.  Node Manager status and cluster resource are closely related, it 
would be nice to promote the chart to first row.  Application Status, and 
Resource Availability are closely related.  It would be nice to promote 
Resource usage to side by side with Application Status to fill up the 
horizontal real estates.  (was: The cluster overview page looks like a upside 
down triangle.  It would be nice to rebalance the charts to ensure horizontal 
real estate are utilized properly.  The screenshot attachment includes some 
suggestion for rebalance.  Node Manager status and cluster resource are closely 
linked, it would be nice to promote the chart to first row.  Application 
Status, and Resource Availability are closely linked.  It would be nice to 
promote Resource usage to side by side with Application Status to fill up the 
horizontal real estates.)

> Rebalance UI2 cluster overview page
> ---
>
> Key: YARN-7829
> URL: https://issues.apache.org/jira/browse/YARN-7829
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Sunil G
>Priority: Major
> Attachments: ui2-cluster-overview.png
>
>
> The cluster overview page looks like a upside down triangle.  It would be 
> nice to rebalance the charts to ensure horizontal real estate are utilized 
> properly.  The screenshot attachment includes some suggestion for rebalance.  
> Node Manager status and cluster resource are closely related, it would be 
> nice to promote the chart to first row.  Application Status, and Resource 
> Availability are closely related.  It would be nice to promote Resource usage 
> to side by side with Application Status to fill up the horizontal real 
> estates.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7829) Rebalance UI2 cluster overview page

2018-01-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7829:

Environment: (was: The cluster overview page looks like a upside down 
triangle.  It would be nice to rebalance the charts to ensure horizontal real 
estate are utilized properly.  The screenshot attachment includes some 
suggestion for rebalance.  Node Manager status and cluster resource are closely 
linked, it would be nice to promote the chart to first row.  Application 
Status, and Resource Availability are closely linked.  It would be nice to 
promote Resource usage to side by side with Application Status to fill up the 
horizontal real estates.)

> Rebalance UI2 cluster overview page
> ---
>
> Key: YARN-7829
> URL: https://issues.apache.org/jira/browse/YARN-7829
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Assignee: Sunil G
>Priority: Major
> Attachments: ui2-cluster-overview.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7829) Rebalance UI2 cluster overview page

2018-01-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7829:

Attachment: ui2-cluster-overview.png

> Rebalance UI2 cluster overview page
> ---
>
> Key: YARN-7829
> URL: https://issues.apache.org/jira/browse/YARN-7829
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
> Environment: The cluster overview page looks like a upside down 
> triangle.  It would be nice to rebalance the charts to ensure horizontal real 
> estate are utilized properly.  The screenshot attachment includes some 
> suggestion for rebalance.  Node Manager status and cluster resource are 
> closely linked, it would be nice to promote the chart to first row.  
> Application Status, and Resource Availability are closely linked.  It would 
> be nice to promote Resource usage to side by side with Application Status to 
> fill up the horizontal real estates.
>Reporter: Eric Yang
>Assignee: Sunil G
>Priority: Major
> Attachments: ui2-cluster-overview.png
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7829) Rebalance UI2 cluster overview page

2018-01-26 Thread Eric Yang (JIRA)
Eric Yang created YARN-7829:
---

 Summary: Rebalance UI2 cluster overview page
 Key: YARN-7829
 URL: https://issues.apache.org/jira/browse/YARN-7829
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.0.0
 Environment: The cluster overview page looks like a upside down 
triangle.  It would be nice to rebalance the charts to ensure horizontal real 
estate are utilized properly.  The screenshot attachment includes some 
suggestion for rebalance.  Node Manager status and cluster resource are closely 
linked, it would be nice to promote the chart to first row.  Application 
Status, and Resource Availability are closely linked.  It would be nice to 
promote Resource usage to side by side with Application Status to fill up the 
horizontal real estates.
Reporter: Eric Yang
Assignee: Sunil G
 Attachments: ui2-cluster-overview.png





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341383#comment-16341383
 ] 

Jian He commented on YARN-7781:
---

Uploaded a patch.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the number of containers (instances) of a component of 
> a service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/
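
For reference, the flex call above can be exercised with a few lines of plain Java 
(a sketch assuming the unsecured endpoint shown in the example):
{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexComponentSketch {
  public static void main(String[] args) throws Exception {
    // Flex the "hello" component of the "hello-world" service to 3 containers.
    String body =
        "{ \"components\": [ { \"name\": \"hello\", \"number_of_containers\": 3 } ] }";
    URL url = new URL("http://localhost:9191/app/v1/services/hello-world");
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    con.setRequestMethod("PUT");
    con.setDoOutput(true);
    con.setRequestProperty("Content-Type", "application/json");
    try (OutputStream out = con.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + con.getResponseCode());  // expect 2xx on success
  }
}
{code}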



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7781:
--
Attachment: YARN-7781.01.patch

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the number of containers (instances) of a component of 
> a service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7781:
-

Assignee: Jian He

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the number of containers (instances) of a component of 
> a service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7797) Docker host network can not obtain IP address for RegistryDNS

2018-01-26 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341368#comment-16341368
 ] 

Shane Kumpf commented on YARN-7797:
---

+1 (non-binding) on the latest patch. Thanks for addressing my comments, 
[~eyang]

> Docker host network can not obtain IP address for RegistryDNS
> -
>
> Key: YARN-7797
> URL: https://issues.apache.org/jira/browse/YARN-7797
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7797.001.patch, YARN-7797.002.patch, 
> YARN-7797.003.patch, YARN-7797.004.patch, YARN-7797.005.patch
>
>
> When docker is configured to use host network, docker inspect command does 
> not return IP address of the container.  This prevents IP information to be 
> collected for RegistryDNS to register a hostname entry for the docker 
> container.
> The proposed solution is to intelligently detect the docker network 
> deployment method, and report back host IP address for RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7828) Clicking on yarn service should take to component tab

2018-01-26 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7828:


 Summary: Clicking on yarn service should take to component tab
 Key: YARN-7828
 URL: https://issues.apache.org/jira/browse/YARN-7828
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Yesha Vora


Steps:

1) Enable ATS 2

2) Start Httpd yarn service

3) Go to UI2 Services tab

4) Click on yarn service

This page redirects to the Attempt-list tab.

However, it should redirect to the Components tab.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


