[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-04-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433185#comment-16433185
 ] 

Jian He commented on YARN-7781:
---

sure, go ahead. thanks

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the number of containers (instances) of a component of 
> a service
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/
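The flex request above can be exercised end to end from a shell. A minimal sketch, assuming a ResourceManager at localhost:8088 with the service API enabled (service name `hello-world` and the endpoint are taken from the description above); the actual `curl` call is left commented out since it needs a live cluster:

```shell
# Write the flex request body from the example above.
cat > flex.json <<'EOF'
{
  "components": [
    { "name": "hello", "number_of_containers": 3 }
  ]
}
EOF

# Sanity-check the JSON locally before sending it.
python3 -m json.tool flex.json

# Send it to the service endpoint (requires a running RM; commented out here):
# curl -X PUT -H 'Content-Type: application/json' -d @flex.json \
#   http://localhost:8088/app/v1/services/hello-world
```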



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7936) Add default service AM Xmx

2018-02-14 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7936:
--
Attachment: YARN-7936.1.patch

> Add default service AM Xmx
> --
>
> Key: YARN-7936
> URL: https://issues.apache.org/jira/browse/YARN-7936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7936.1.patch
>
>







[jira] [Created] (YARN-7936) Add default service AM Xmx

2018-02-14 Thread Jian He (JIRA)
Jian He created YARN-7936:
-

 Summary: Add default service AM Xmx
 Key: YARN-7936
 URL: https://issues.apache.org/jira/browse/YARN-7936
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He









[jira] [Commented] (YARN-7909) YARN service REST API returns charset=null when kerberos enabled

2018-02-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359143#comment-16359143
 ] 

Jian He commented on YARN-7909:
---

+1

> YARN service REST API returns charset=null when kerberos enabled
> 
>
> Key: YARN-7909
> URL: https://issues.apache.org/jira/browse/YARN-7909
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7909.001.patch
>
>
> When kerberos is enabled, the response header will reply with:
> {code:java}
> Content-Type: application/json;charset=null{code}
> It appears that the AuthenticationFilter somehow triggers the charset to 
> become undefined.
> The same problem is not observed when simple security is enabled.






[jira] [Commented] (YARN-7827) Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404

2018-02-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16357941#comment-16357941
 ] 

Jian He commented on YARN-7827:
---

I committed the patch to trunk; we can open a separate JIRA to unblock the 
issue.


> Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404
> -
>
> Key: YARN-7827
> URL: https://issues.apache.org/jira/browse/YARN-7827
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Yesha Vora
>Assignee: Sunil G
>Priority: Critical
> Attachments: Screen Shot 2018-02-07 at 3.51.54 PM.png, 
> YARN-7827.001.patch, YARN-7827.002.patch
>
>
> Steps:
> 1) Enable Ats v2
> 2) Start Httpd Yarn service
> 3) Go to UI2 attempts page for yarn service 
> 4) Click on setting icon
> 5) Click on stop service
> 6) This action will pop up a box to confirm stop. click on "Yes"
> Expected behavior:
> Yarn service should be stopped
> Actual behavior:
> The YARN UI does not indicate whether the YARN service was stopped.
> On checking the network stack trace, the PUT request failed with HTTP error 404
> {code}
> Sorry, got error 404
> Please consult RFC 2616 for meanings of the error code.
> Error Details
> org.apache.hadoop.yarn.webapp.WebAppException: /v1/services/httpd-hrt-qa-n: 
> controller for v1 not found
>   at org.apache.hadoop.yarn.webapp.Router.resolveDefault(Router.java:247)
>   at org.apache.hadoop.yarn.webapp.Router.resolve(Router.java:155)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:143)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>   at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
>   at 
> com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
>   at 
> com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>   at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>   at 
> com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:98)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1578)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> 

[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-02-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354684#comment-16354684
 ] 

Jian He commented on YARN-5428:
---

lgtm, will commit once jenkins comes back

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428.006.patch, YARN-5428.007.patch, YARN-5428.008.patch, 
> YARN-5428.009.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.
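To illustrate the directory-versus-file distinction above, a minimal sketch (the path and registry name are made up for illustration, and the `auth` value is a placeholder; the `docker` invocation is commented out since it needs a Docker install):

```shell
# Docker's --config flag takes the DIRECTORY holding config.json,
# not the config.json file itself.
mkdir -p /tmp/docker-conf
cat > /tmp/docker-conf/config.json <<'EOF'
{
  "auths": {
    "registry.example.com": { "auth": "BASE64-PLACEHOLDER" }
  }
}
EOF

# Pass the directory, so the stored "docker login" credentials are picked up:
# docker --config /tmp/docker-conf pull registry.example.com/app:latest
ls /tmp/docker-conf
```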






[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-02-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354507#comment-16354507
 ] 

Jian He commented on YARN-5428:
---

The added check is a client-side check. But I realize this is a generic issue 
for all request objects; a single size check alone won't cover all scenarios. 
I take back my suggestion; I think we can go with the second-to-last patch.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428.006.patch, YARN-5428.007.patch, YARN-5428.008.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>






[jira] [Commented] (YARN-7873) Revert YARN-6078

2018-02-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354509#comment-16354509
 ] 

Jian He commented on YARN-7873:
---

sounds good to me. 

> Revert YARN-6078
> 
>
> Key: YARN-7873
> URL: https://issues.apache.org/jira/browse/YARN-7873
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
>
> I think we should revert YARN-6078, since it is not working as intended. The 
> NM does not have permission to destroy the process of the ContainerLocalizer.






[jira] [Commented] (YARN-7889) Missing kerberos token when check for RM REST API availability

2018-02-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353154#comment-16353154
 ] 

Jian He commented on YARN-7889:
---

Patch lgtm.

One side issue I noticed: until now, the client libraries YARN exposes 
(YarnClient, AMRMClient) have had built-in retry, and applications may assume 
that retries are handled automatically. But the ApiServiceClient doesn't have 
retry built in, which may be a caveat for applications later. I think we can 
deal with this when such requests come in.
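Until ApiServiceClient grows built-in retry, callers can approximate the old behavior themselves. A hedged sketch of a generic shell-level retry wrapper (the `retry` function and the commented endpoint are illustrative, not part of any YARN API):

```shell
# retry N CMD...: run CMD up to N times, with a growing pause between tries.
retry() {
  attempts=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      return 1   # give up after N failed tries
    fi
    sleep "$n"
    n=$((n + 1))
  done
}

# Example: wrap any flaky CLI or REST call, e.g.
# retry 5 curl -sf http://rm-host:8088/app/v1/services/my-service
retry 3 true && echo "succeeded"
```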

> Missing kerberos token when check for RM REST API availability
> --
>
> Key: YARN-7889
> URL: https://issues.apache.org/jira/browse/YARN-7889
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7889.001.patch, YARN-7889.002.patch
>
>
> When checking for which resource manager can be used for REST API request, 
> client side must send kerberos token to REST API end point.  The checking 
> mechanism is currently missing the kerberos token.






[jira] [Commented] (YARN-7868) Provide improved error message when YARN service is disabled

2018-02-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349880#comment-16349880
 ] 

Jian He commented on YARN-7868:
---

lgtm.
Does it make sense to add one more message saying "Please set 
yarn.webapp.api-service.enable to true"?
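For reference, the yarn-site.xml fragment such a message would point at (a sketch; the property name is taken from the comment above):

```xml
<property>
  <name>yarn.webapp.api-service.enable</name>
  <value>true</value>
</property>
```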

> Provide improved error message when YARN service is disabled
> 
>
> Key: YARN-7868
> URL: https://issues.apache.org/jira/browse/YARN-7868
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7868.001.patch
>
>
> Some YARN CLI commands will throw a verbose error message when YARN service 
> is disabled. The error message looks like this:
> {code}
> Jan 31, 2018 4:24:46 PM com.sun.jersey.api.client.ClientResponse getEntity
> SEVERE: A message body reader for Java class 
> org.apache.hadoop.yarn.service.api.records.ServiceStatus, and Java type class 
> org.apache.hadoop.yarn.service.api.records.ServiceStatus, and MIME media type 
> application/octet-stream was not found
> Jan 31, 2018 4:24:46 PM com.sun.jersey.api.client.ClientResponse getEntity
> SEVERE: The registered message body readers compatible with the MIME media 
> type are:
> application/octet-stream ->
>   com.sun.jersey.core.impl.provider.entity.ByteArrayProvider
>   com.sun.jersey.core.impl.provider.entity.FileProvider
>   com.sun.jersey.core.impl.provider.entity.InputStreamProvider
>   com.sun.jersey.core.impl.provider.entity.DataSourceProvider
>   com.sun.jersey.core.impl.provider.entity.RenderedImageProvider
> */* ->
>   com.sun.jersey.core.impl.provider.entity.FormProvider
>   com.sun.jersey.core.impl.provider.entity.StringProvider
>   com.sun.jersey.core.impl.provider.entity.ByteArrayProvider
>   com.sun.jersey.core.impl.provider.entity.FileProvider
>   com.sun.jersey.core.impl.provider.entity.InputStreamProvider
>   com.sun.jersey.core.impl.provider.entity.DataSourceProvider
>   com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$General
>   com.sun.jersey.core.impl.provider.entity.ReaderProvider
>   com.sun.jersey.core.impl.provider.entity.DocumentProvider
>   com.sun.jersey.core.impl.provider.entity.SourceProvider$StreamSourceReader
>   com.sun.jersey.core.impl.provider.entity.SourceProvider$SAXSourceReader
>   com.sun.jersey.core.impl.provider.entity.SourceProvider$DOMSourceReader
>   com.sun.jersey.json.impl.provider.entity.JSONJAXBElementProvider$General
>   com.sun.jersey.json.impl.provider.entity.JSONArrayProvider$General
>   com.sun.jersey.json.impl.provider.entity.JSONObjectProvider$General
>   com.sun.jersey.core.impl.provider.entity.XMLRootElementProvider$General
>   com.sun.jersey.core.impl.provider.entity.XMLListElementProvider$General
>   com.sun.jersey.core.impl.provider.entity.XMLRootObjectProvider$General
>   com.sun.jersey.core.impl.provider.entity.EntityHolderReader
>   com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$General
>   com.sun.jersey.json.impl.provider.entity.JSONListElementProvider$General
>   com.sun.jersey.json.impl.provider.entity.JacksonProviderProxy
>   com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider
> 2018-01-31 16:24:46,415 ERROR client.ApiServiceClient: 
> {code}






[jira] [Updated] (YARN-7866) [UI2] Kerberizing the UI doesn't give any warning or content when UI is accessed without kinit

2018-02-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7866:
--
Reporter: Sumana Sathish  (was: Sunil G)

> [UI2] Kerberizing the UI doesn't give any warning or content when UI is 
> accessed without kinit
> --
>
> Key: YARN-7866
> URL: https://issues.apache.org/jira/browse/YARN-7866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7866.001.patch
>
>
> Handle 401 error and show in UI
> Credit to [~ssath...@hortonworks.com] for finding this issue.






[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-02-01 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349218#comment-16349218
 ] 

Jian He commented on YARN-7781:
---

Talked with Billie offline; I reverted the changes about the kerberos 
principal, since that needs more verification.
Uploaded patch 03.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch
>






[jira] [Updated] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-02-01 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7781:
--
Attachment: YARN-7781.03.patch

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch
>






[jira] [Updated] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-31 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7781:
--
Attachment: YARN-7781.02.patch

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch
>






[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347820#comment-16347820
 ] 

Jian He commented on YARN-7781:
---

Thanks Gour for the comments.
bq. Don't we support a PUT URL – 
http://localhost:8088/app/v1/services/hello-world where we can pass a single 
JSON and flex 
This was recently added in YARN-7540 and YARN-7605. I updated the document and 
fixed the others too.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>






[jira] [Updated] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM

2018-01-30 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7765:
--
Reporter: Sumana Sathish  (was: Rohith Sharma K S)

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM
> 
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Sumana Sathish
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7765.01.patch, YARN-7765.02.patch
>
>
> Secure cluster is deployed and all YARN services are started successfully. 
> When an application is submitted, the app collectors, which are started as an 
> aux-service, throw the below exception. But this exception is *NOT* observed 
> from the RM TimelineCollector. 
> The cluster is deployed with Hadoop-3.0 and HBase-1.2.6 in secure mode. All 
> the YARN and HBase services are started and working perfectly fine. After 24 
> hours, i.e. when the token lifetime expires, the HBaseClient in the NM and 
> the HDFSClient in HMaster and HRegionServer start getting this error. After 
> some time, the HBase daemons shut down. In the NM, the JVM didn't shut down, 
> but none of the events got published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 






[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM

2018-01-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342467#comment-16342467
 ] 

Jian He commented on YARN-7765:
---

I committed the patch to trunk; what other branches are required?

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM
> 
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch, YARN-7765.02.patch
>






[jira] [Commented] (YARN-7792) Merge work for YARN-6592

2018-01-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342368#comment-16342368
 ] 

Jian He commented on YARN-7792:
---

bq. It is set by the PlacementProcessor in two situations.
Oh, missed this.
sounds good.

> Merge work for YARN-6592
> 
>
> Key: YARN-7792
> URL: https://issues.apache.org/jira/browse/YARN-7792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Priority: Blocker
> Attachments: YARN-6592.001.patch, YARN-7792.002.patch, 
> YARN-7792.003.patch
>
>
> This Jira is to run aggregated YARN-6592 branch patch against trunk and check 
> for any jenkins issues.






[jira] [Commented] (YARN-7792) Merge work for YARN-6592

2018-01-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342355#comment-16342355
 ] 

Jian He commented on YARN-7792:
---

bq. It isnt meant to be read from a file. the spec is to be specified in the 
command line ifself like so:
OK, somehow I got confused into thinking it is a file. 

Also, I have a question about the onRequestsRejected API: is the AM expected 
to keep retrying the requests? (BTW, RejectedSchedulingRequest#reason is not 
set right now, which leaves the AM no way to know why a request was rejected.)

IIUC, previously there wasn't such a notion of rejecting resource requests due 
to transient states - requests that aren't satisfied due to user-limit, 
capacity, etc. just remain pending in the RM, and the AM doesn't need to keep 
retrying them.

> Merge work for YARN-6592
> 
>
> Key: YARN-7792
> URL: https://issues.apache.org/jira/browse/YARN-7792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Priority: Blocker
> Attachments: YARN-6592.001.patch, YARN-7792.002.patch, 
> YARN-7792.003.patch
>






[jira] [Commented] (YARN-7792) Merge work for YARN-6592

2018-01-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342328#comment-16342328
 ] 

Jian He commented on YARN-7792:
---

Some comments regarding the configs (not sure where else to post these, so 
pasting them here):
 - yarn.resourcemanager.placement-constraints.algorithm.pool-size

 - yarn.resourcemanager.placement-constraints.scheduler.pool-size

The naming "pool-size" confused me at first sight; I understood it only after 
looking at the description/code. How about calling it thread-pool-size to be 
more explicit?
 - I must be doing something wrong - I tried to run distributed shell as below, 
but it fails:

{code:java}
hadoop-common jhe$ hadoop jar 
$HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar
 -jar 
$HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar
 -shell_command sleep -shell_args 1 --placement_spec spec
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
18/01/26 23:05:46 INFO distributedshell.Client: Initializing Client
18/01/26 23:05:46 INFO distributedshell.PlacementSpec: Parsing Placement Specs: 
[spec]
18/01/26 23:05:46 INFO distributedshell.PlacementSpec: Parsing Spec: [spec]
18/01/26 23:05:46 ERROR distributedshell.Client: Error running Client
java.lang.ArrayIndexOutOfBoundsException: 1
 at 
org.apache.hadoop.yarn.applications.distributedshell.PlacementSpec.parse(PlacementSpec.java:85)
 at 
org.apache.hadoop.yarn.applications.distributedshell.Client.init(Client.java:431)
 at 
org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:248)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:153)

{code}
where spec is a file whose content was copied from the document:

"zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk" 
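Based on the quoted clarification, the spec string is meant to be passed inline, with entries of the (apparent) form `source=num,type,scope,target` joined by `:`. A minimal Python sketch - an assumed format, not the actual `PlacementSpec` parser - also shows why passing a bare file name like `spec` trips an index error: splitting on `=` yields only one element.

```python
def parse_placement_specs(arg):
    """Parse "src=num,type,scope,target[:...]" specs (format assumed from the doc)."""
    specs = {}
    for spec in arg.split(":"):
        parts = spec.split("=")
        source, rest = parts[0], parts[1]  # IndexError when "=" is missing
        num, ctype, scope, target = rest.split(",")
        specs[source] = {"num": int(num), "type": ctype,
                         "scope": scope, "target": target}
    return specs

print(parse_placement_specs("zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk"))
```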

 

> Merge work for YARN-6592
> 
>
> Key: YARN-7792
> URL: https://issues.apache.org/jira/browse/YARN-7792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Priority: Blocker
> Attachments: YARN-6592.001.patch, YARN-7792.002.patch, 
> YARN-7792.003.patch
>
>
> This Jira is to run the aggregated YARN-6592 branch patch against trunk and 
> check for any Jenkins issues.






[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16342023#comment-16342023
 ] 

Jian He commented on YARN-7765:
---

could you add some logs to print the UGI, for future debugging purposes?

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When 
> an application is submitted, the app collectors, which are started as an 
> aux-service, throw the below exception. But this exception is *NOT* observed 
> from the RM TimelineCollector.
> The cluster is a secure cluster deployed with Hadoop-3.0 and HBase-1.2.6. All 
> the YARN and HBase services start and work perfectly fine. After 24 hours, 
> i.e. when the token lifetime has expired, the HBaseClient in the NM and the 
> HDFSClient in the HMaster and HRegionServer start getting this error. After 
> some time, the HBase daemons shut down. In the NM, the JVM didn't shut down, 
> but none of the events got published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 






[jira] [Commented] (YARN-7765) [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341982#comment-16341982
 ] 

Jian He commented on YARN-7765:
---

patch lgtm

> [Atsv2] GSSException: No valid credentials provided - Failed to find any 
> Kerberos tgt thrown by HBaseClient in NM and HDFSClient in HBase daemons
> -
>
> Key: YARN-7765
> URL: https://issues.apache.org/jira/browse/YARN-7765
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: YARN-7765.01.patch
>
>
> A secure cluster is deployed and all YARN services start successfully. When 
> an application is submitted, the app collectors, which are started as an 
> aux-service, throw the below exception. But this exception is *NOT* observed 
> from the RM TimelineCollector.
> The cluster is a secure cluster deployed with Hadoop-3.0 and HBase-1.2.6. All 
> the YARN and HBase services start and work perfectly fine. After 24 hours, 
> i.e. when the token lifetime has expired, the HBaseClient in the NM and the 
> HDFSClient in the HMaster and HRegionServer start getting this error. After 
> some time, the HBase daemons shut down. In the NM, the JVM didn't shut down, 
> but none of the events got published.
> {noformat}
> 2018-01-17 11:04:48,017 FATAL ipc.RpcClientImpl (RpcClientImpl.java:run(684)) 
> - SASL authentication failed. The most likely cause is missing or invalid 
> credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> cc :/ [~vrushalic] [~varun_saxena] 






[jira] [Comment Edited] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341507#comment-16341507
 ] 

Jian He edited comment on YARN-7780 at 1/26/18 7:51 PM:


I noticed quite a few newly added configs are not documented, like the ones 
starting with "yarn.resourcemanager.placement-constraints..".

Are we not going to document those?


was (Author: jianhe):
I noticed quite a few newly added configs are not documented like the ones 
related to "RM_PLACEMENT_CONSTRAINTS_ALGORITHM_CLASS"

Are we not going to document those ?

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.






[jira] [Commented] (YARN-7780) Documentation for Placement Constraints

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341507#comment-16341507
 ] 

Jian He commented on YARN-7780:
---

I noticed quite a few newly added configs are not documented, like the ones 
related to "RM_PLACEMENT_CONSTRAINTS_ALGORITHM_CLASS".

Are we not going to document those?

> Documentation for Placement Constraints
> ---
>
> Key: YARN-7780
> URL: https://issues.apache.org/jira/browse/YARN-7780
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>Priority: Major
> Attachments: YARN-7780-YARN-6592.001.patch, 
> YARN-7780-YARN-6592.002.patch
>
>
> JIRA to track documentation for the feature.






[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341383#comment-16341383
 ] 

Jian He commented on YARN-7781:
---

uploaded a patch

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/
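The flex request quoted above can be issued from any HTTP client. A minimal sketch using Python's standard library, assuming an unsecured RM REST endpoint at the URL quoted in the description (host and port are the example's, not fixed values):

```python
import json
import urllib.request

# Body flexing the "hello" component to 3 container instances.
body = json.dumps({
    "components": [{"name": "hello", "number_of_containers": 3}]
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:9191/app/v1/services/hello-world",
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # uncomment to send against a live cluster
```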






[jira] [Updated] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7781:
--
Attachment: YARN-7781.01.patch

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/






[jira] [Assigned] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-01-26 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7781:
-

Assignee: Jian He

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7781.01.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the no of containers (instances) of a component of a 
> service
> PUT URL – http://localhost:9191/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/






[jira] [Commented] (YARN-7801) AmFilterInitializer should addFilter after fill all parameters

2018-01-24 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338470#comment-16338470
 ] 

Jian He commented on YARN-7801:
---

committed to branch-2.9 too

> AmFilterInitializer should addFilter after fill all parameters
> --
>
> Key: YARN-7801
> URL: https://issues.apache.org/jira/browse/YARN-7801
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7801.001.patch
>
>
> The existing AmFilterInitializer cannot successfully pass the RM_HA_URLS 
> parameter to AmIpFilter because of this issue.
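The ordering bug can be pictured with a toy sketch (Python stand-ins, not the actual FilterContainer/AmFilterInitializer API): the parameter map is effectively snapshotted when the filter is registered, so any parameter filled in afterwards never reaches the filter.

```python
def add_filter(container, name, params):
    # The container copies the parameter map at registration time,
    # like a servlet filter container does.
    container.append((name, dict(params)))

filters = []
params = {"PROXY_HOSTS": "rm1,rm2"}

# Buggy order: register first, fill a parameter afterwards -- it is lost.
add_filter(filters, "AmIpFilter", params)
params["RM_HA_URLS"] = "http://rm1:8088,http://rm2:8088"
missing = "RM_HA_URLS" not in filters[0][1]

# Fixed order: fill all parameters first, then register.
filters.clear()
add_filter(filters, "AmIpFilter", params)
present = "RM_HA_URLS" in filters[0][1]
```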






[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-24 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-:
--
Attachment: YARN-.05.patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch, YARN-.02.patch, 
> YARN-.03.patch, YARN-.04.patch, YARN-.05.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_".
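The conversion described can be sketched as follows. This `convert_username` is hypothetical and illustrative only, not the actual registry implementation - the real handling of dots and other characters may differ.

```python
import re

def convert_username(username):
    """Map a user name to a DNS-safe label: lowercase, with "_" (and any
    other character outside [a-z0-9-]) replaced by "-". Assumed behavior."""
    return re.sub(r"[^a-z0-9-]", "-", username.lower())

print(convert_username("first_last"))
```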






[jira] [Commented] (YARN-7801) AmFilterInitializer should addFilter after fill all parameters

2018-01-24 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338093#comment-16338093
 ] 

Jian He commented on YARN-7801:
---

committed to trunk, branch-2, branch-3.0

are there other branches we want to commit to?

> AmFilterInitializer should addFilter after fill all parameters
> --
>
> Key: YARN-7801
> URL: https://issues.apache.org/jira/browse/YARN-7801
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7801.001.patch
>
>
> The existing AmFilterInitializer cannot successfully pass the RM_HA_URLS 
> parameter to AmIpFilter because of this issue.






[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16336693#comment-16336693
 ] 

Jian He commented on YARN-:
---

oh, sorry...  uploading a new one. 

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch, YARN-.02.patch, 
> YARN-.03.patch, YARN-.04.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_".






[jira] [Updated] (YARN-7782) Enable user re-mapping for Docker containers in yarn-default.xml

2018-01-23 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7782:
--
Fix Version/s: (was: 2.10.0)
   3.0.1

> Enable user re-mapping for Docker containers in yarn-default.xml
> 
>
> Key: YARN-7782
> URL: https://issues.apache.org/jira/browse/YARN-7782
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7782.001.patch
>
>
> In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
> enforce user and group for the running user. In YARN-7430, the user remapping 
> defaults to true, but yarn-default.xml is still set to false.






[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16336455#comment-16336455
 ] 

Jian He commented on YARN-:
---

Changed RegistryUtils.currentUser to call convertUsername.

Made the validation pass if the user name has a dot.

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch, YARN-.02.patch, 
> YARN-.03.patch, YARN-.04.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_".






[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-23 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-:
--
Attachment: YARN-.04.patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch, YARN-.02.patch, 
> YARN-.03.patch, YARN-.04.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_".






[jira] [Commented] (YARN-7782) Enable user re-mapping for Docker containers in yarn-default.xml

2018-01-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16336453#comment-16336453
 ] 

Jian He commented on YARN-7782:
---

Committed to branch-2, branch-2.9, branch-3.0 too

> Enable user re-mapping for Docker containers in yarn-default.xml
> 
>
> Key: YARN-7782
> URL: https://issues.apache.org/jira/browse/YARN-7782
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Fix For: 3.1.0, 2.10.0, 2.9.1
>
> Attachments: YARN-7782.001.patch
>
>
> In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
> enforce user and group for the running user. In YARN-7430, the user remapping 
> defaults to true, but yarn-default.xml is still set to false.






[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2018-01-23 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7605:
--
Attachment: YARN-7605.020.patch

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch, YARN-7605.018.patch, YARN-7605.019.patch, 
> YARN-7605.020.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls. This change helped centralize yarn metadata under the yarn user's 
> ownership, instead of crawling through every user's home directory to find 
> metadata. The next step is to make sure "doAs" calls work properly for the 
> API Service. The metadata is stored by the YARN user, but the actual workload 
> still needs to be performed as the end user, hence the API service must 
> authenticate the end user's kerberos credentials and perform a doAs call when 
> requesting containers via ServiceClient.
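The doAs pattern described above - metadata owned by the yarn user, workload executed as the authenticated end user - can be sketched abstractly. This is a toy stand-in for Hadoop's UGI.doAs, not the actual API:

```python
import contextlib

@contextlib.contextmanager
def do_as(ctx, user):
    """Run the enclosed block as `user`, restoring the previous identity."""
    prev = ctx["current_user"]
    ctx["current_user"] = user
    try:
        yield
    finally:
        ctx["current_user"] = prev

ctx = {"current_user": "yarn"}       # the API service's own identity
with do_as(ctx, "alice"):            # authenticated end user
    acting_as = ctx["current_user"]  # containers would be requested as "alice"
```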






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16336273#comment-16336273
 ] 

Jian He commented on YARN-7605:
---

new patch uploaded

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch, YARN-7605.018.patch, YARN-7605.019.patch, 
> YARN-7605.020.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls. This change helped centralize yarn metadata under the yarn user's 
> ownership, instead of crawling through every user's home directory to find 
> metadata. The next step is to make sure "doAs" calls work properly for the 
> API Service. The metadata is stored by the YARN user, but the actual workload 
> still needs to be performed as the end user, hence the API service must 
> authenticate the end user's kerberos credentials and perform a doAs call when 
> requesting containers via ServiceClient.






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16336265#comment-16336265
 ] 

Jian He commented on YARN-7605:
---

sounds good, let me revert the doc

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch, YARN-7605.018.patch, YARN-7605.019.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager 
> RPC calls. This change helped centralize yarn metadata under the yarn user's 
> ownership, instead of crawling through every user's home directory to find 
> metadata. The next step is to make sure "doAs" calls work properly for the 
> API Service. The metadata is stored by the YARN user, but the actual workload 
> still needs to be performed as the end user, hence the API service must 
> authenticate the end user's kerberos credentials and perform a doAs call when 
> requesting containers via ServiceClient.






[jira] [Commented] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16336246#comment-16336246
 ] 

Jian He commented on YARN-7766:
---

I'm wondering if the following makes sense as a follow-up item.

Currently, if the user doesn't supply a location when running yarn app 
-enableFastLaunch, the jars are put under this location:
{code}
hdfs:///yarn-services//service-dep.tar.gz
{code}
Since the API server is embedded in the RM, should the RM look in this 
location too if "yarn.service.framework.path" is not specified?

And if "yarn.service.framework.path" is not specified and the file still 
doesn't exist at the above default location, I think the RM could upload the 
jars to that default location instead; currently the RM uploads the jars to 
the location defined by the code below. This folder is per-app and also 
inconsistent with the CLI location. 
{code}
  protected Path addJarResource(String serviceName,
      Map<String, LocalResource> localResources)
      throws IOException, SliderException {
    Path libPath = fs.buildClusterDirPath(serviceName);
{code}
By doing this, the next time a submission request comes in, the RM doesn't 
need to upload the jars again.
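The proposed fallback behaviour can be sketched as follows. All names here (`resolve_service_dep_tarball` and the `fs` helpers) are illustrative, not the actual ServiceClient API: the config wins when set; otherwise the default location is used, with a one-time upload if the tarball is missing.

```python
def resolve_service_dep_tarball(conf, fs, default_path):
    """Pick the service dependency tarball location (sketch of the proposal)."""
    path = conf.get("yarn.service.framework.path")
    if path:
        return path
    if not fs.exists(default_path):
        fs.upload_service_jars(default_path)  # one-time upload, reused later
    return default_path
```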


> Introduce a new config property for YARN Service dependency tarball location
> 
>
> Key: YARN-7766
> URL: https://issues.apache.org/jira/browse/YARN-7766
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, client, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7766.001.patch, YARN-7766.002.patch, 
> YARN-7766.003.patch, YARN-7766.004.patch
>
>
> Introduce a new config property (something like _yarn.service.framework.path_ 
> in-line with _mapreduce.application.framework.path_) for YARN Service 
> dependency tarball location. This will provide flexibility to the 
> user/cluster-admin to upload the dependency tarball to a location of their 
> choice. If this config property is not set, YARN Service client will default 
> to uploading all dependency jars from the client-host's classpath for every 
> service launch request (as it does today).
> Also, accept an optional destination HDFS location for *-enableFastLaunch* 
> command, to specify the location where user/cluster-admin wants to upload the 
> tarball. If not specified, let's default it to the location we use today. The 
> cluster-admin still needs to set _yarn.service.framework.path_ to this 
> default location otherwise it will not be used. So the command-line will 
> become something like this -
> {code:java}
> yarn app -enableFastLaunch []{code}






[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-22 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-:
--
Attachment: YARN-.03.patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch, YARN-.02.patch, 
> YARN-.03.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_".






[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-22 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335226#comment-16335226
 ] 

Jian He commented on YARN-:
---

Thanks for the review, fixed the comments

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-
> URL: https://issues.apache.org/jira/browse/YARN-
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-.01.patch, YARN-.02.patch, 
> YARN-.03.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_".






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1663#comment-1663
 ] 

Jian He commented on YARN-7605:
---

I combined this with YARN-7540, rebased it on the latest trunk, and made some 
edits to the documentation.

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch, YARN-7605.018.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager RPC 
> calls. This change helped centralize YARN metadata under the yarn user instead 
> of crawling through every user's home directory to find it. The next step is 
> to make sure "doAs" calls work properly for the API service. The metadata is 
> stored by the yarn user, but the actual workload still needs to be performed 
> as the end user, so the API service must authenticate the end user's Kerberos 
> credentials and perform a doAs call when requesting containers via 
> ServiceClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2018-01-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7605:
--
Attachment: YARN-7605.018.patch

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch, YARN-7605.018.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager RPC 
> calls. This change helped centralize YARN metadata under the yarn user instead 
> of crawling through every user's home directory to find it. The next step is 
> to make sure "doAs" calls work properly for the API service. The metadata is 
> stored by the yarn user, but the actual workload still needs to be performed 
> as the end user, so the API service must authenticate the end user's Kerberos 
> credentials and perform a doAs call when requesting containers via 
> ServiceClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2018-01-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7605:
--
Attachment: YARN-7605.018.patch

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager RPC 
> calls. This change helped centralize YARN metadata under the yarn user instead 
> of crawling through every user's home directory to find it. The next step is 
> to make sure "doAs" calls work properly for the API service. The metadata is 
> stored by the yarn user, but the actual workload still needs to be performed 
> as the end user, so the API service must authenticate the end user's Kerberos 
> credentials and perform a doAs call when requesting containers via 
> ServiceClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7605) Implement doAs for Api Service REST API

2018-01-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7605:
--
Attachment: (was: YARN-7605.018.patch)

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch, YARN-7605.015.patch, YARN-7605.016.patch, 
> YARN-7605.017.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to 
> use the REST API instead of making direct file system and resource manager RPC 
> calls. This change helped centralize YARN metadata under the yarn user instead 
> of crawling through every user's home directory to find it. The next step is 
> to make sure "doAs" calls work properly for the API service. The metadata is 
> stored by the yarn user, but the actual workload still needs to be performed 
> as the end user, so the API service must authenticate the end user's Kerberos 
> credentials and perform a doAs call when requesting containers via 
> ServiceClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7766) Introduce a new config property for YARN Service dependency tarball location

2018-01-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333002#comment-16333002
 ] 

Jian He commented on YARN-7766:
---

{code}
  public int actionDependency(String destinationFolder, boolean overwrite)
  throws IOException, YarnException {
String currentUser = RegistryUtils.currentUser();
LOG.info("Running command as user {}", currentUser);

Path dependencyLibTarGzip = fs.getDependencyTarGzip(true,
destinationFolder);
{code}
- For the above code, it feels like we don't need to pass in a 'true' flag 
(this avoids the chain of caller changes that pass in "false"). We can do the 
special-case handling right here, something like this:
{code}
if (destinationFolder == null) {
  destinationFolder = String.format(YarnServiceConstants.DEPENDENCY_DIR,
  VersionInfo.getVersion());
}
Path dependencyLibTarGzip = new Path(destinationFolder,
YarnServiceConstants.DEPENDENCY_TAR_GZ_FILE_NAME
+ YarnServiceConstants.DEPENDENCY_TAR_GZ_FILE_EXT);
{code}

> Introduce a new config property for YARN Service dependency tarball location
> 
>
> Key: YARN-7766
> URL: https://issues.apache.org/jira/browse/YARN-7766
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, client, yarn-native-services
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7766.001.patch, YARN-7766.002.patch
>
>
> Introduce a new config property (something like _yarn.service.framework.path_, 
> in line with _mapreduce.application.framework.path_) for the YARN Service 
> dependency tarball location. This will give the user/cluster-admin the 
> flexibility to upload the dependency tarball to a location of their choice. If 
> this config property is not set, the YARN Service client will default to 
> uploading all dependency jars from the client host's classpath for every 
> service launch request (as it does today).
> Also, accept an optional destination HDFS location for the *-enableFastLaunch* 
> command, to specify the location where the user/cluster-admin wants to upload 
> the tarball. If not specified, default it to the location we use today. The 
> cluster-admin still needs to set _yarn.service.framework.path_ to this default 
> location, otherwise it will not be used. So the command line will become 
> something like this -
> {code:java}
> yarn app -enableFastLaunch [<destination-folder>]{code}
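The lookup order described above — use the configured tarball path if set, otherwise fall back to uploading from the client classpath — can be sketched as follows. This is an illustrative sketch; the property key comes from the description, but the helper class and null-means-classpath convention are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class FrameworkPathLookup {
    static final String FRAMEWORK_PATH_KEY = "yarn.service.framework.path";

    // Returns the configured dependency tarball location, or null to signal
    // that the client should upload dependency jars from its own classpath
    // (today's default behavior).
    static String resolveDependencyTarball(Map<String, String> conf) {
        String configured = conf.get(FRAMEWORK_PATH_KEY);
        return (configured == null || configured.isEmpty()) ? null : configured;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(resolveDependencyTarball(conf)); // null
        conf.put(FRAMEWORK_PATH_KEY, "hdfs:///yarn-services/service-dep.tar.gz");
        System.out.println(resolveDependencyTarball(conf));
    }
}
```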



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2018-01-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332828#comment-16332828
 ] 

Jian He commented on YARN-5428:
---

Patch looks good to me overall. I am wondering whether we should check the 
credential object's size, because a user may accidentally point to a wrong, 
very large file. The credentials will then stay in RM memory, the znode, etc. 
for a long time.
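A minimal sketch of the kind of guard being suggested. The limit, class, and method names are hypothetical, not part of the patch:

```java
public class CredentialSizeCheck {
    // Hypothetical cap on the serialized docker client config stored with the
    // app; a wrong file (e.g. a large binary) would otherwise sit in RM memory
    // and the ZooKeeper znode for the lifetime of the application.
    static final int MAX_CREDENTIAL_BYTES = 64 * 1024;

    static void checkSize(byte[] serializedConfig) {
        if (serializedConfig.length > MAX_CREDENTIAL_BYTES) {
            throw new IllegalArgumentException(
                "docker client config is " + serializedConfig.length
                    + " bytes, exceeds limit of " + MAX_CREDENTIAL_BYTES);
        }
    }

    public static void main(String[] args) {
        checkSize(new byte[1024]); // a small config passes the check
        System.out.println("ok");
    }
}
```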

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-medium
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, YARN-5428.005.patch, 
> YARN-5428.006.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332727#comment-16332727
 ] 

Jian He commented on YARN-7777:
---

Yeah, good point, I missed that. Updated the patch.

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch, YARN-7777.02.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Attachment: YARN-7777.02.patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch, YARN-7777.02.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Description: user name that has "\_" should be converted to user "-", 
because DNS name doesn't allow "_"  (was: user name that has "_" should be 
converted to user "-", because DNS name doesn't allow "_")

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch
>
>
> A user name that contains "\_" should be converted to use "-", because DNS 
> names don't allow "_"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7777:
-

Assignee: Jian He

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch
>
>
> A user name that contains "_" should be converted to use "-", because DNS 
> names don't allow "_"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Attachment: YARN-7777.01.patch

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: user name that has "_" should be converted to user "-", 
> because DNS name doesn't allow "_"
>Reporter: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Environment: (was: user name that has "_" should be converted to user 
"-", because DNS name doesn't allow "_")

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Description: user name that has "_" should be converted to user "-", 
because DNS name doesn't allow "_"

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch
>
>
> A user name that contains "_" should be converted to use "-", because DNS 
> names don't allow "_"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7777) Fix user name format in YARN Registry DNS name

2018-01-18 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7777:
--
Summary: Fix user name format in YARN Registry DNS name   (was: Fix user 
name format in YARN DNS name )

> Fix user name format in YARN Registry DNS name 
> ---
>
> Key: YARN-7777
> URL: https://issues.apache.org/jira/browse/YARN-7777
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: user name that has "_" should be converted to user "-", 
> because DNS name doesn't allow "_"
>Reporter: Jian He
>Priority: Major
> Attachments: YARN-7777.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7777) Fix user name format in YARN DNS name

2018-01-18 Thread Jian He (JIRA)
Jian He created YARN-7777:
-

 Summary: Fix user name format in YARN DNS name 
 Key: YARN-7777
 URL: https://issues.apache.org/jira/browse/YARN-7777
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: user name that has "_" should be converted to user "-", 
because DNS name doesn't allow "_"
Reporter: Jian He






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7767) Excessive logging in scheduler

2018-01-17 Thread Jian He (JIRA)
Jian He created YARN-7767:
-

 Summary: Excessive logging in scheduler 
 Key: YARN-7767
 URL: https://issues.apache.org/jira/browse/YARN-7767
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


The following logs are printed every few seconds in the RM log:
{code}
2018-01-17 21:17:57,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:12,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:27,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:42,077 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:18:57,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:12,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:27,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:42,076 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
2018-01-17 21:19:57,077 INFO  capacity.QueuePriorityContainerCandidateSelector 
(QueuePriorityContainerCandidateSelector.java:intializePriorityDigraph(121)) - 
Initializing priority preemption directed graph:
{code}
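One plausible remedy, sketched here with hypothetical names (the actual fix may instead demote the message to DEBUG), is to emit the INFO line only when the preemption digraph actually has something to initialize:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PreemptionLogGuard {
    // Illustrative stand-in for the queue-priority edges computed by
    // QueuePriorityContainerCandidateSelector; these names are hypothetical.
    public static boolean shouldLogInit(List<String> priorityEdges) {
        // Only log when there are edges to initialize; otherwise the message
        // repeats on every scheduling/preemption cycle with no content.
        return !priorityEdges.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(shouldLogInit(Collections.emptyList())); // false
        System.out.println(shouldLogInit(Arrays.asList("root.a -> root.b")));
    }
}
```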



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-17 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.06.patch

> Fix logging for destroy yarn service cli when app does not exist and some 
> minor bugs
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.06.patch, YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328172#comment-16328172
 ] 

Jian He commented on YARN-7740:
---

Added one more fix: if the retrieved IPs string is empty, it should not be set 
into the IPs list.
{code}
-  status.setIPs(ips == null ? null : Arrays.asList(ips.split(",")));
+  status.setIPs(StringUtils.isEmpty(ips) ? null :
+  Arrays.asList(ips.split(",")));
{code}
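The empty-string check matters because Java's String.split does not return an empty array for an empty input. A quick illustration:

```java
import java.util.Arrays;
import java.util.List;

public class SplitEmpty {
    public static void main(String[] args) {
        // "".split(",") returns [""]: a one-element array holding the empty
        // string, not an empty array, so a null/empty guard is needed before
        // building the container's IP list.
        List<String> ips = Arrays.asList("".split(","));
        System.out.println(ips.size());           // 1
        System.out.println(ips.get(0).isEmpty()); // true

        // A non-empty string splits as expected.
        System.out.println("10.0.0.1,10.0.0.2".split(",").length); // 2
    }
}
```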

> Fix logging for destroy yarn service cli when app does not exist and some 
> minor bugs
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Summary: Fix logging for destroy yarn service cli when app does not exist 
and some minor bugs  (was: Fix logging for destroy yarn service cli when app 
does not exist)

> Fix logging for destroy yarn service cli when app does not exist and some 
> minor bugs
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.05.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328064#comment-16328064
 ] 

Jian He commented on YARN-7740:
---

I made one more change to make the status return in JSON format rather than 
the object.toString() format:

{code}

--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -931,7 +931,7 @@ public String getStatusString(String appIdOrName)
 } catch (IllegalArgumentException e) {
 // not appId format, it could be appName.
 Service status = getStatus(appIdOrName);
- return status.toString();
+ return ServiceApiUtil.jsonSerDeser.toJson(status);
 }

{code}

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.1.patch, 
> YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.04.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.1.patch, 
> YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.3.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.2.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7740.1.patch, YARN-7740.2.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7740:
-

Assignee: Jian He

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7740.1.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message like "Application does not exist"; 
> instead it returns the message "Destroyed cluster httpd-xxx".






[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.1.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
> Attachments: YARN-7740.1.patch
>
>
> Scenario:
> Run the "yarn app -destroy" CLI with an application name that does not exist.
> Here, the CLI should return a message "Application does not exist"; instead,
> it returns the message "Destroyed cluster httpd-xxx"






[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Attachment: YARN-7724.04.patch

Thanks Billie for the review, uploaded a new patch

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch, 
> YARN-7724.03.patch, YARN-7724.04.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323030#comment-16323030
 ] 

Jian He commented on YARN-7724:
---

Patch 03 improved the error msg if the app name is not found in RM 

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch, 
> YARN-7724.03.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-11 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Attachment: YARN-7724.03.patch

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch, 
> YARN-7724.03.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321615#comment-16321615
 ] 

Jian He commented on YARN-7724:
---

bq. If RM is rebooted, the application status shows errors:
I think you didn't enable RM recovery? If it's not enabled, this is expected.
This patch aims to make "yarn app -status" also support appName.

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Assigned] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7724:
-

Assignee: Jian He

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Attachment: YARN-7724.02.patch

Yeah, makes sense, updated accordingly 

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-7724.01.patch, YARN-7724.02.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Commented] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16321293#comment-16321293
 ] 

Jian He commented on YARN-7724:
---

yarn app -status  will print both the generic YARN status and the app-specific
status
yarn app -status  will print the app-specific status only
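The distinction above (generic status for an application ID, app-specific status for a service name) implies the CLI must first decide which kind of argument it was given. A minimal sketch of that decision, assuming YARN's standard application_&lt;timestamp&gt;_&lt;sequence&gt; ID format; the class and method names are illustrative, not the actual implementation:

```java
import java.util.regex.Pattern;

public class StatusArgResolver {
    // YARN application IDs follow the pattern application_<timestamp>_<seq>.
    private static final Pattern APP_ID =
        Pattern.compile("application_\\d+_\\d+");

    // True if the argument looks like an application ID;
    // otherwise treat it as a service/app name.
    static boolean isApplicationId(String arg) {
        return APP_ID.matcher(arg).matches();
    }

    public static void main(String[] args) {
        System.out.println(isApplicationId("application_1515550862236_0001")); // true
        System.out.println(isApplicationId("hello-world"));                    // false
    }
}
```

Anything that does not match the ID pattern would then be resolved by name.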

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
> Attachments: YARN-7724.01.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Attachment: YARN-7724.01.patch

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
> Attachments: YARN-7724.01.patch
>
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Updated] (YARN-7724) yarn application status should support application name

2018-01-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7724:
--
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-7054

> yarn application status should support application name
> ---
>
> Key: YARN-7724
> URL: https://issues.apache.org/jira/browse/YARN-7724
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Yesha Vora
>
> Yarn Service applications are tied to an app name. Thus, yarn application
> -status should be able to take a yarn service name as an argument, such as
> yarn application -status 






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319497#comment-16319497
 ] 

Jian He commented on YARN-7605:
---

ah, that is the issue, makes sense, thanks 

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to
> use the REST API instead of making direct file system and resource manager
> RPC calls.  This change helped centralize yarn metadata to be owned by the
> yarn user instead of crawling through every user's home directory to find
> metadata.  The next step is to make sure "doAs" calls work properly for the
> API service.  The metadata is stored by the YARN user, but the actual
> workload still needs to be performed as the end user; hence the API service
> must authenticate the end user's Kerberos credential and perform a doAs call
> when requesting containers via ServiceClient.






[jira] [Commented] (YARN-7717) Add configuration consistency for module.enabled and docker.privileged-containers.enabled

2018-01-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319448#comment-16319448
 ] 

Jian He commented on YARN-7717:
---

Also, as discussed with Shane offline, it is better to make it
case-insensitive, so that both "True" and "true" work.

> Add configuration consistency for module.enabled and 
> docker.privileged-containers.enabled
> -
>
> Key: YARN-7717
> URL: https://issues.apache.org/jira/browse/YARN-7717
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Eric Badger
>
> container-executor.cfg has two properties related to dockerization:
> 1) module.enabled = true/false
> 2) docker.privileged-containers.enabled = 1/0
> Here, the two properties take different values to enable/disable the
> feature: module.enabled takes a true/false string, while
> docker.privileged-containers.enabled takes a 1/0 integer value.
> These properties' behavior should be consistent. Both should take a true or
> false string as the value to enable or disable the feature.






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319443#comment-16319443
 ] 

Jian He commented on YARN-7605:
---

bq. Life time remaining calculated from RM doesn't appear to be real time. When 
3600 is set during application launching, the value seems to stay constant. 
When retrieving it via ApplicationTimeout, and the service is already stopped, 
the value does not change. Hence, retrieving the value from RM and merging it 
with the service object doesn't compute the time more accurately. The AM copy 
of the service object contains the same initial lifetime value, because the 
service object from the AM contains the initial lifetime value set during 
service creation. While I am not an expert in the ApplicationTimeout code, I 
haven't found the time ever change from its original value. I will put the 
code back in the next patch in case this was the result of a bug in the 
ApplicationTimeout code.
It should not stay constant; the remaining time indicates how much time the 
app has left to run, and it is constantly updated. The ServiceClient gets the 
remaining time from RM via the application report. Last time I checked, it was 
working properly. Did you test that this stays constant?

bq. Back to the comment about the getStatus structure, do you still want the 
returned value of a stopped service to be partial information, or similar to a 
running application?
If HDFS instability can cause RM instability, that sort of cluster downtime 
issue sounds more critical to me than this partial information. Because this 
endpoint can be called very frequently while the app is accepted (clients like 
to poll every second or so while waiting for the app to be running), RM would 
essentially hit HDFS for every getStatus call before the app gets running. 
Unless a concrete use case asks for complete information while the app is 
accepted or completed, I prefer adding this later with a proper caching 
implementation built in. Just my opinion, [~gsaha], [~billie.rina...@gmail.com]?

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to
> use the REST API instead of making direct file system and resource manager
> RPC calls.  This change helped centralize yarn metadata to be owned by the
> yarn user instead of crawling through every user's home directory to find
> metadata.  The next step is to make sure "doAs" calls work properly for the
> API service.  The metadata is stored by the YARN user, but the actual
> workload still needs to be performed as the end user; hence the API service
> must authenticate the end user's Kerberos credential and perform a doAs call
> when requesting containers via ServiceClient.






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319181#comment-16319181
 ] 

Jian He commented on YARN-7605:
---

bq. It appears that the application master stores the information to update 
the status of the service. I am not sure how it was computed. The remaining 
time is also updated. One could argue that it may have a delay compared to the 
current time; if this is the case, I can put the code back. However, I didn't 
find the remaining time to be stale when retrieved via the AM proxy.
Are you saying the remaining time is still shown after this patch removed the 
corresponding code? Did you test this? I'm not sure how that is possible; the 
remaining time is only known by RM. It's not about a slight delay: after this 
patch, I think the remaining time will not be set any more.

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for API service is centralized to use 
> REST API instead of having direct file system and resource manager rpc calls. 
>  This change helped to centralize yarn metadata to be owned by yarn user 
> instead of crawling through every user's home directory to find metadata.  
> The next step is to make sure "doAs" calls work properly for API Service.  
> The metadata is stored by YARN user, but the actual workload still need to be 
> performed as end users, hence API service must authenticate end user kerberos 
> credential, and perform doAs call when requesting containers via 
> ServiceClient.






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16317647#comment-16317647
 ] 

Jian He commented on YARN-7605:
---

bq. I added a check to return EXIT_NOT_FOUND in actionDestroy; if the file is 
not found in HDFS, it will return the proper application-not-found information.
Still, the logic that "result != 0" means the application is not found is 
fragile: the next time a user adds a new exit code, this error message breaks. 
And it is currently inconsistent between the CLI and the REST API; the CLI 
ignores the error code whereas the REST API throws an exception.
It's better to make them consistent. Also, throwing ApplicationNotFound is 
confusing: ApplicationNotFound usually means not found in RM, but here it is 
the app folder not existing in HDFS. I think we can explicitly return the fact 
that the app doesn't exist in HDFS, like the CLI does, instead of throwing an 
exception.
bq. I refactored the code to load only if RM doesn't find the app.
IIUC, the code loads if the app is not running. This can be when the app is 
pending or finished, so it may still hit HDFS a lot. For a getStatus kind of 
API, the consumer is always calling it in a loop. This will get very bad if 
HDFS is down or during failover: the API call will keep spinning and quickly 
overload RM. IMO, we may need a better solution for serving persistent app 
status; the current approach may accidentally create a bottleneck.
bq. Redundant fetch of information. If it is running, the remaining time is in 
the copy retrieved from the AM.
How is it retrieving it from the AM? Only RM knows the app's remaining time.
bq. Patch failed due to the revert of YARN-7540. Please commit YARN-7540 so 
this patch can be applied. Thank you.
Once this patch review is done, we can combine both patches, run Jenkins, and 
commit them together.
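The exit-code discussion above suggests mapping each distinct code to an explicit message rather than treating any nonzero result as "not found". A small sketch; the EXIT_NOT_FOUND value and class name are illustrative assumptions, not the actual YARN constants:

```java
public class DestroyResult {
    static final int EXIT_SUCCESS = 0;
    static final int EXIT_NOT_FOUND = 2; // assumed code, for illustration

    // Each known exit code gets its own message; unknown codes are
    // surfaced verbatim instead of being mislabeled as "not found".
    static String message(String appName, int result) {
        switch (result) {
            case EXIT_SUCCESS:
                return "Successfully destroyed service " + appName;
            case EXIT_NOT_FOUND:
                return "Service " + appName + " does not exist in HDFS";
            default:
                return "Destroy failed for " + appName + " (exit code " + result + ")";
        }
    }

    public static void main(String[] args) {
        System.out.println(message("httpd", EXIT_NOT_FOUND));
    }
}
```

Sharing this mapping between the CLI and the REST layer would also resolve the CLI/REST inconsistency mentioned above.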

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch, 
> YARN-7605.014.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to
> use the REST API instead of making direct file system and resource manager
> RPC calls.  This change helped centralize yarn metadata to be owned by the
> yarn user instead of crawling through every user's home directory to find
> metadata.  The next step is to make sure "doAs" calls work properly for the
> API service.  The metadata is stored by the YARN user, but the actual
> workload still needs to be performed as the end user; hence the API service
> must authenticate the end user's Kerberos credential and perform a doAs call
> when requesting containers via ServiceClient.






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16317313#comment-16317313
 ] 

Jian He commented on YARN-7605:
---

- Previous comment: Here it still assumes that if {{result \!= 0}}, it is an
ApplicationNotFoundException. I think this is inappropriate; in fact, the
implementation will not return result != 0, so this condition can just be
removed:
{code}
if (result == 0) {
  ServiceStatus serviceStatus = new ServiceStatus();
  serviceStatus.setDiagnostics("Successfully stopped service " +
  appName);
  return formatResponse(Status.OK, serviceStatus);
} else {
  throw new ApplicationNotFoundException("Service " + appName +
  " is not found in YARN.");
}
{code}
- Previous comment: It is inappropriate to assume all exceptions are caused by
"Permission denied"; this will confuse users. And why not just use
LOG.warn("...", e), instead of a separate log statement for the exception?
{code}
try {
  proxy.flexComponents(requestBuilder.build());
} catch (YarnException | IOException e) {
  LOG.warn("Exception caught during flex operation.");
  LOG.warn(ExceptionUtils.getFullStackTrace(e));
  throw new YarnException("Permission denied to perform flex operation.");
}
{code}
- In getStatus, it is changed to load from HDFS every time; won't this hit
HDFS too much?
{code}
Service appSpec = ServiceApiUtil.loadService(fs, serviceName);
{code} 

- Why is the code below removed from getStatus?
{code}
ApplicationTimeout lifetime =
appReport.getApplicationTimeouts().get(ApplicationTimeoutType.LIFETIME);
if (lifetime != null) {
  appSpec.setLifetime(lifetime.getRemainingTime());
}
{code}
- bq. In ServiceClient, several methods shouldn't throw InterruptedException
because the AppAdminClient definition doesn't throw InterruptedException. This
is the reason the InterruptedExceptions were converted to YarnExceptions.
I meant, why did this patch add InterruptedException to the submitApp
interface? It wasn't throwing InterruptedException before:
{code}
  private ApplicationId submitApp(Service app)
  throws IOException, YarnException, InterruptedException {
{code}
- bq. Formatting for createAMProxy and actionSave was to remove some
checkstyle issues that exist in ServiceClient.
Looks like the methods do not exceed the 80-column limit but are broken onto
separate lines unnecessarily. What were the checkstyle issues about?
- There's always this line printed when trying the CLI; is there any way to
get rid of it?
{code}18/01/08 14:31:35 INFO util.log: Logging initialized @1287ms
{code}
- For flexing, it doesn't print the flexed from/to numbers; I think it is
useful to print the component name and the from/to numbers:
{code}
18/01/08 14:33:44 INFO client.ApiServiceClient: Service jian2 is successfully 
flexed.
{code}

- Both stop and destroy print the same logging as below; I think we can
differentiate them:
"Successfully stopped service"
- "yarn app -save" doesn't log anything to indicate whether it was successful
or not

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch, YARN-7605.010.patch, 
> YARN-7605.011.patch, YARN-7605.012.patch, YARN-7605.013.patch
>
>
> In YARN-7540, all client entry points for the API service were centralized to
> use the REST API instead of making direct file system and resource manager
> RPC calls.  This change helped centralize yarn metadata to be owned by the
> yarn user instead of crawling through every user's home directory to find
> metadata.  The next step is to make sure "doAs" calls work properly for the
> API service.  The metadata is stored by the YARN user, but the actual
> workload still needs to be performed as the end user; hence the API service
> must authenticate the end user's Kerberos credential and perform a doAs call
> when requesting containers via ServiceClient.






[jira] [Updated] (YARN-7704) Document improvement for registry dns

2018-01-08 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7704:
--
Attachment: YARN-7704.02.patch

also updated the outdated "/ws/v1/services" to "/app/v1/services"

> Document improvement for registry dns
> -
>
> Key: YARN-7704
> URL: https://issues.apache.org/jira/browse/YARN-7704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7704.01.patch, YARN-7704.02.patch
>
>
> Add document for how to point the cluster to use the registry dns






[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2018-01-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314311#comment-16314311
 ] 

Jian He commented on YARN-7540:
---

Since this is partially completed and blocking a downstream project, I'm 
reverting this patch for now. Will commit this and YARN-7605 together once it 
is ready.

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch, 
> YARN-7540.003.patch, YARN-7540.004.patch, YARN-7540.005.patch, 
> YARN-7540.006.patch
>
>
> Launching a YARN docker application through the CLI works differently from 
> launching through the REST API.  All applications launched through the REST 
> API are currently stored in the yarn user's HDFS home directory.  
> Applications managed through the CLI are stored in individual users' HDFS 
> home directories.  For consistency, we want the yarn app CLI to interact 
> with the API service to manage applications.  For performance reasons, it is 
> easier to list all applications from one user's home directory instead of 
> crawling all users' home directories.  For security reasons, it is safer to 
> access only one user's home directory instead of all users'.  Given the 
> reasons above, the proposal is to change how {{yarn app -launch}}, {{yarn 
> app -list}} and {{yarn app -destroy}} work.  Instead of calling the HDFS API 
> and RM API to launch containers, the CLI will be converted to call the API 
> service REST API that resides in RM.  RM performs the persistence and the 
> operations to launch the actual application.
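Under this proposal, a CLI operation such as a flex becomes a single REST call to the API service hosted in RM. A rough sketch with plain JDK HTTP, following the documented /app/v1/services endpoint and request JSON; the address handling, authentication, and error handling are simplified assumptions:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexViaRest {
    // Build the PUT body documented for /app/v1/services/<service>.
    static String buildFlexBody(String component, int containers) {
        return "{\"components\":[{\"name\":\"" + component
            + "\",\"number_of_containers\":" + containers + "}]}";
    }

    // PUT the flex request to the API service in RM. Real code would also
    // handle Kerberos/doAs authentication and non-2xx responses.
    static int flex(String rmWebAddress, String service,
                    String component, int containers) throws Exception {
        URL url = new URL("http://" + rmWebAddress + "/app/v1/services/" + service);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(buildFlexBody(component, containers)
                .getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }
}
```

For example, flex("localhost:8088", "hello-world", "hello", 3) would issue the same PUT request the examples document shows for flexing the hello component to 3 containers.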






[jira] [Updated] (YARN-7704) Document improvement for registry dns

2018-01-04 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7704:
--
Description: 
Add document for how to point the cluster to use the registry dns


> Document improvement for registry dns
> -
>
> Key: YARN-7704
> URL: https://issues.apache.org/jira/browse/YARN-7704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7704.01.patch
>
>
> Add document for how to point the cluster to use the registry dns






[jira] [Created] (YARN-7704) Document improvement for registry dns

2018-01-04 Thread Jian He (JIRA)
Jian He created YARN-7704:
-

 Summary: Document improvement for registry dns
 Key: YARN-7704
 URL: https://issues.apache.org/jira/browse/YARN-7704
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He









[jira] [Updated] (YARN-7704) Document improvement for registry dns

2018-01-04 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7704:
--
Attachment: YARN-7704.01.patch

> Document improvement for registry dns
> -
>
> Key: YARN-7704
> URL: https://issues.apache.org/jira/browse/YARN-7704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7704.01.patch
>
>







[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-03 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310526#comment-16310526
 ] 

Jian He commented on YARN-7605:
---

- the addInternalServlet has an unused parameter clazz

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch
>
>
> In YARN-7540, all client entry points for API service are centralized to use 
> the REST API instead of having direct file system and resource manager RPC 
> calls. This change helped to centralize YARN metadata to be owned by the yarn 
> user instead of crawling through every user's home directory to find metadata. 
> The next step is to make sure "doAs" calls work properly for API Service. 
> The metadata is stored by the yarn user, but the actual workload still needs 
> to be performed as end users, hence API service must authenticate the end 
> user's Kerberos credentials and perform a doAs call when requesting containers 
> via ServiceClient.






[jira] [Commented] (YARN-7605) Implement doAs for Api Service REST API

2018-01-03 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310519#comment-16310519
 ] 

Jian He commented on YARN-7605:
---

- we can't assume that if result != 0, it is an ApplicationNotFoundException. 
And instead of "result.intValue() == 0", it can simply be "result == 0".
{code}
  if (result.intValue() == 0) {
ServiceStatus serviceStatus = new ServiceStatus();
serviceStatus.setDiagnostics("Successfully stopped service " +
appName);
return Response.status(Status.OK).entity(serviceStatus).build();
  } else {
throw new ApplicationNotFoundException("Service " + appName +
" is not found in YARN.");
  }
{code}
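A minimal sketch of that suggestion (hypothetical names, not the actual ServiceClient code): auto-unboxing lets the boxed result be compared with a plain `== 0`, and a non-zero exit code is reported generically instead of being assumed to mean "not found".

```java
// Hypothetical sketch of the review suggestion above, not the patch itself.
public class StopResultSketch {
    static String describe(Integer result) {
        if (result == 0) {  // auto-unboxed; no need for result.intValue()
            return "Successfully stopped service";
        }
        // Don't guess the cause; surface the raw exit code instead.
        return "Stop failed with exit code " + result;
    }

    public static void main(String[] args) {
        System.out.println(describe(0));
        System.out.println(describe(56));
    }
}
```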
- what is this NullPointerException for? We should fix the NPE itself instead.
{code}
} catch (NullPointerException | IOException e) {
  LOG.error("Fail to stop service:", e);
{code}
- similarly, I think it is inappropriate to assume the cause of all these 
exceptions is service not found
{code}
} catch (InterruptedException | ApplicationNotFoundException |
UndeclaredThrowableException e) {
  ServiceStatus serviceStatus = new ServiceStatus();
  serviceStatus.setDiagnostics("Service " + appName +
  " is not found in YARN.");
{code}
- it is fine to just return the result 
{code}
return Integer.valueOf(result);
{code}
- create a common method for these ?
{code}
  UserGroupInformation proxyUser = UserGroupInformation.getLoginUser();
  UserGroupInformation ugi = UserGroupInformation
  .createProxyUser(request.getRemoteUser(), proxyUser);
{code}
- Again, it is inappropriate to assume all exceptions are caused by "Permission 
denied"; this will confuse the users. And why not just use LOG.warn("...", e) 
instead of a separate log statement for the exception?
{code}
try {
  proxy.flexComponents(requestBuilder.build());
} catch (YarnException | IOException e) {
  LOG.warn("Exception caught during flex operation.");
  LOG.warn(ExceptionUtils.getFullStackTrace(e));
  throw new YarnException("Permission denied to perform flex operation.");
}
{code}
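An illustrative sketch of the suggested logging style, with java.util.logging standing in for the project's logger (the class and method names here are hypothetical): log the message and stack trace in one call, and preserve the original cause in the rethrown exception instead of claiming "permission denied" for every failure.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative only: java.util.logging stands in for the logger used in
// the patch, and FlexErrorSketch/wrap are hypothetical names.
public class FlexErrorSketch {
    private static final Logger LOG =
        Logger.getLogger(FlexErrorSketch.class.getName());

    static RuntimeException wrap(Exception e) {
        // One call logs both the message and the stack trace.
        LOG.log(Level.WARNING, "Exception caught during flex operation.", e);
        // Rethrow with a neutral message and the original cause attached.
        return new RuntimeException("Flex operation failed.", e);
    }

    public static void main(String[] args) {
        RuntimeException wrapped = wrap(new java.io.IOException("boom"));
        System.out.println(wrapped.getCause().getMessage()); // cause survives
    }
}
```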
- why is it necessary to add InterruptedException to the method signature and 
make callers catch this exception and then convert it to YarnException?
{code}
  private ApplicationId submitApp(Service app)
  throws IOException, YarnException, InterruptedException {
{code}
- unnecessary formatting change in createAMProxy, actionSave

> Implement doAs for Api Service REST API
> ---
>
> Key: YARN-7605
> URL: https://issues.apache.org/jira/browse/YARN-7605
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7605.001.patch, YARN-7605.004.patch, 
> YARN-7605.005.patch, YARN-7605.006.patch, YARN-7605.007.patch, 
> YARN-7605.008.patch, YARN-7605.009.patch
>
>
> In YARN-7540, all client entry points for API service are centralized to use 
> the REST API instead of having direct file system and resource manager RPC 
> calls. This change helped to centralize YARN metadata to be owned by the yarn 
> user instead of crawling through every user's home directory to find metadata. 
> The next step is to make sure "doAs" calls work properly for API Service. 
> The metadata is stored by the yarn user, but the actual workload still needs 
> to be performed as end users, hence API service must authenticate the end 
> user's Kerberos credentials and perform a doAs call when requesting containers 
> via ServiceClient.






[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7565:
--
Fix Version/s: 3.1.0

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master immediately releases the containers that are 
> not reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in 
> the heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.






[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7565:
--
Fix Version/s: (was: yarn-native-services)

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master immediately releases the containers that are 
> not reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in 
> the heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.






[jira] [Commented] (YARN-7555) Support multiple resource types in YARN native services

2017-12-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288123#comment-16288123
 ] 

Jian He commented on YARN-7555:
---

- unused imports in Resource.java and TestServiceAM
- accidental change in first line of YarnServiceAPI.md
- and findbugs warnings
other than those, lgtm

> Support multiple resource types in YARN native services
> ---
>
> Key: YARN-7555
> URL: https://issues.apache.org/jira/browse/YARN-7555
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7555.003.patch, YARN-7555.004.patch, 
> YARN-7555.wip-001.patch
>
>
> We need to support specifying multiple resource types in addition to 
> memory/cpu in YARN native services






[jira] [Commented] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286806#comment-16286806
 ] 

Jian He commented on YARN-7543:
---

[~billie.rinaldi], [~gsaha], can you review 

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the 
> FNFE below. Ideally it should be handled and the app should be submitted 
> successfully, failing later only if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}






[jira] [Commented] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286802#comment-16286802
 ] 

Jian He commented on YARN-7543:
---

Changes:
- continue if the jar file doesn't exist
- a side change that checks if the specified cpu resource is within the max cpu 
limit
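The cpu-limit side change could look roughly like the sketch below. All names here (validateVcores, clusterMaxVcores) are illustrative assumptions, not taken from the patch: the idea is simply to reject a request whose vcores exceed the scheduler's maximum allocation.

```java
// Hypothetical sketch of the side change: reject a service spec whose
// requested vcores exceed the maximum allocation. Names are illustrative.
public class CpuLimitSketch {
    static void validateVcores(int requestedVcores, int clusterMaxVcores) {
        if (requestedVcores > clusterMaxVcores) {
            throw new IllegalArgumentException(
                "Requested " + requestedVcores + " vcores, but the maximum"
                + " allocation is " + clusterMaxVcores);
        }
    }

    public static void main(String[] args) {
        validateVcores(4, 8);          // within limit: no exception
        try {
            validateVcores(16, 8);     // over limit: rejected
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```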

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the 
> FNFE below. Ideally it should be handled and the app should be submitted 
> successfully, failing later only if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}




[jira] [Updated] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7543:
--
Attachment: YARN-7543.01.patch

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the 
> FNFE below. Ideally it should be handled and the app should be submitted 
> successfully, failing later only if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}






[jira] [Assigned] (YARN-7543) FileNotFoundException when creating a yarn service due to broken link under hadoop lib directory

2017-12-11 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7543:
-

Assignee: Jian He

> FileNotFoundException when creating a yarn service due to broken link under 
> hadoop lib directory
> 
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
>
> The hadoop lib dir had a broken link to an ojdbc jar which was not really 
> required for YARN service creation. The app submission failed with the 
> FNFE below. Ideally it should be handled and the app should be submitted 
> successfully, failing later only if it actually needs the jar behind the 
> broken link -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:447)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:111)
> {code}






[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285568#comment-16285568
 ] 

Jian He commented on YARN-7565:
---

bq. In regards to the last race condition, I assumed that the events in a 
component are processed sequentially instead of parallel. 
Yes, they are processed sequentially (though the patch doesn't process events 
asynchronously; we need to call "dispatcher.getEventHandler().handle(event);" 
instead of calling handle(event) directly).
But even then, couldn't it still occur? The event for 3) could be sent before 
the event for 5).

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master immediately releases the containers that are 
> not reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in 
> the heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.






[jira] [Comment Edited] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285539#comment-16285539
 ] 

Jian He edited comment on YARN-7565 at 12/11/17 6:00 AM:
-

- yarn.service.container.expiry-interval-ms -> yarn.service.am-recovery.timeout 
?
- better move ServiceScheduler.YARN_COMPONENT to YarnRegistryAttributes as well
- this logic seems a bit confusing. It sends a recover event, but adds to 
unRecoveredInstances, which looks contradictory by naming. 
All that's needed is to call "component.pendingInstances.remove(instance);". How 
about just exposing this method as an API rather than sending the event?
{code}
  if (componentName != null) {
Component component = componentsByName.get(componentName);
ComponentInstance compInstance = component.getComponentInstance(
record.description);
ContainerId containerId = ContainerId.fromString(record.get(
YarnRegistryAttributes.YARN_ID));
ComponentEvent event = new ComponentEvent(componentName,
CONTAINER_RECOVERED)
.setInstance(compInstance)
.setContainerId(containerId);
unRecoveredInstances.put(containerId, compInstance);
component.handle(event);
  }
{code}
- Similarly, ContainerRecoveryFailedTransition can be removed; just call the 
existing reInsertPendingInstance API instead (rename it to 
insertPendingInstance)
- add componentInstanceId to the log like below 
{code}
  LOG.info("{}, Wait on container {} expired", 
entry.getValue().getCompInstanceId(), entry.getKey());
{code}
- Looks like it only adds to the pending instance list when recovery times out; 
do we need to re-request the container? 
{code}
if (unRecoveredInstances.size() > 0) {
  executorService.schedule(() -> {
// after containerWaitMs, all the containers that haven't been recovered
// by the RM will be released. The corresponding ComponentInstances will
// be added to the pending queues of their respective components.
Iterator> iterator =
unRecoveredInstances.entrySet().iterator();
while (iterator.hasNext()) {
  Map.Entry entry = iterator.next();
  iterator.remove();
  LOG.info("{} Wait on container {} expired", 
entry.getValue().getCompInstanceId(), entry.getKey());
  String componentName = entry.getValue().getCompName();
  ComponentEvent event = new ComponentEvent(
  componentName, CONTAINER_RECOVERY_FAILED)
  .setInstance(entry.getValue())
  .setContainerId(entry.getKey());
  componentsByName.get(componentName).handle(event);
  amRMClient.releaseAssignedContainer(entry.getKey());
}
  }, containerWaitMs, TimeUnit.MILLISECONDS);
{code} 
- Clear unRecoveredInstances at the end of this executor thread; otherwise it 
will remain in memory forever.
- I think there may be a race condition where an additional pending instance 
gets added: 
1. In the executor thread loop, get the unrecovered instance object (say, 
instance1)
2. In onContainersReceivedFromPreviousAttempts: remove unrecovered instance1 
from the map
3. In onContainersReceivedFromPreviousAttempts: send the recover event
4. In Component: handle the recover event, removing from the pending instance 
list
5. But then the executor thread adds (instance1) back to the pending list 
We probably need to synchronize on the entire unRecoveredInstances map? 
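One way to close this race is to do both the recovery removal and the expiry sweep under a lock on the map itself. Below is a minimal, self-contained sketch of that idea; the class and method names are hypothetical and plain Strings stand in for ContainerId/ComponentInstance, so this is not the actual service code:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

class RecoveryTracker {
  // containerId -> instance name; stand-in for unRecoveredInstances
  private final Map<String, String> unRecovered = new HashMap<>();

  void addUnRecovered(String containerId, String instance) {
    synchronized (unRecovered) {
      unRecovered.put(containerId, instance);
    }
  }

  // Called from the recovery callback (onContainersReceivedFromPreviousAttempts).
  // Removing under the lock guarantees the expiry sweep cannot also act on
  // this container.
  boolean markRecovered(String containerId) {
    synchronized (unRecovered) {
      return unRecovered.remove(containerId) != null;
    }
  }

  // Called once from the scheduled expiry thread after containerWaitMs.
  // Returns how many containers were expired; the map is left empty so
  // nothing lingers in memory.
  int expireRemaining() {
    synchronized (unRecovered) {
      int expired = unRecovered.size();
      Iterator<Map.Entry<String, String>> it = unRecovered.entrySet().iterator();
      while (it.hasNext()) {
        Map.Entry<String, String> e = it.next();
        it.remove();
        // here: send CONTAINER_RECOVERY_FAILED for e.getValue() and
        // release the container e.getKey()
      }
      return expired;
    }
  }
}
```

With this shape, once markRecovered() has removed instance1 under the lock, the sweep never sees it and cannot re-add it to the pending list.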



was (Author: jianhe):
- yarn.service.container.expiry-interval-ms -> yarn.service.am-recovery.timeout 
?
- better move ServiceScheduler.YARN_COMPONENT to YarnRegistryAttributes as well
- this logic seems a bit confusing. It sends a recover event, but adds to 
unRecoveredInstances, which looks contradictory given the naming. 
All that is needed is to call "component.pendingInstances.remove(instance);". How 
about just exposing this method as an API rather than sending the event?
{code}
  if (componentName != null) {
Component component = componentsByName.get(componentName);
ComponentInstance compInstance = component.getComponentInstance(
record.description);
ContainerId containerId = ContainerId.fromString(record.get(
YarnRegistryAttributes.YARN_ID));
ComponentEvent event = new ComponentEvent(componentName,
CONTAINER_RECOVERED)
.setInstance(compInstance)
.setContainerId(containerId);
unRecoveredInstances.put(containerId, compInstance);
component.handle(event);
  }
{code}
- Similarly, ContainerRecoveryFailedTransition can be removed; just call the 
above addPendingInstance API instead
- add componentInstanceId to the log like below 
{code}
  LOG.info("{}, Wait on container {} expired", 
entry.getValue().getCompInstanceId(), entry.getKey());
{code}
- 

[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285539#comment-16285539
 ] 

Jian He commented on YARN-7565:
---

- yarn.service.container.expiry-interval-ms -> yarn.service.am-recovery.timeout 
?
- better move ServiceScheduler.YARN_COMPONENT to YarnRegistryAttributes as well
- this logic seems a bit confusing. It sends a recover event, but adds to 
unRecoveredInstances, which looks contradictory given the naming. 
All that is needed is to call "component.pendingInstances.remove(instance);". How 
about just exposing this method as an API rather than sending the event?
{code}
  if (componentName != null) {
Component component = componentsByName.get(componentName);
ComponentInstance compInstance = component.getComponentInstance(
record.description);
ContainerId containerId = ContainerId.fromString(record.get(
YarnRegistryAttributes.YARN_ID));
ComponentEvent event = new ComponentEvent(componentName,
CONTAINER_RECOVERED)
.setInstance(compInstance)
.setContainerId(containerId);
unRecoveredInstances.put(containerId, compInstance);
component.handle(event);
  }
{code}
- Similarly, ContainerRecoveryFailedTransition can be removed; just call the 
above addPendingInstance API instead
- add componentInstanceId to the log like below 
{code}
  LOG.info("{}, Wait on container {} expired", 
entry.getValue().getCompInstanceId(), entry.getKey());
{code}
- Looks like it only adds to the pending instance list when recovery times out; 
do we need to re-request the container? 
{code}
if (unRecoveredInstances.size() > 0) {
  executorService.schedule(() -> {
// after containerWaitMs, all the containers that haven't been recovered
// by the RM will be released. The corresponding ComponentInstances will
// be added to the pending queues of their respective components.
Iterator> iterator =
unRecoveredInstances.entrySet().iterator();
while (iterator.hasNext()) {
  Map.Entry entry = iterator.next();
  iterator.remove();
  LOG.info("{} Wait on container {} expired", 
entry.getValue().getCompInstanceId(), entry.getKey());
  String componentName = entry.getValue().getCompName();
  ComponentEvent event = new ComponentEvent(
  componentName, CONTAINER_RECOVERY_FAILED)
  .setInstance(entry.getValue())
  .setContainerId(entry.getKey());
  componentsByName.get(componentName).handle(event);
  amRMClient.releaseAssignedContainer(entry.getKey());
}
  }, containerWaitMs, TimeUnit.MILLISECONDS);
{code} 
- Clear unRecoveredInstances at the end of this executor thread; otherwise it 
will remain in memory forever.
- I think there may be a race condition where an additional pending instance 
gets added: 
1. In the executor thread loop, get the unrecovered instance object (say, 
instance1)
2. In onContainersReceivedFromPreviousAttempts: remove unrecovered instance1 
from the map
3. In onContainersReceivedFromPreviousAttempts: send the recover event
4. In Component: handle the recover event, removing from the pending instance 
list
5. But then the executor thread adds (instance1) back to the pending list 
We probably need to synchronize on the entire unRecoveredInstances map? 


> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master immediately releases any containers that are 
> not reported in the AM registration response.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by the RM. These containers are sent to the AM in 
> the heartbeat response. If a container is not reported within the configured 
> interval, it can be released by the master.






[jira] [Commented] (YARN-7540) Convert yarn app cli to call yarn api services

2017-12-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284309#comment-16284309
 ] 

Jian He commented on YARN-7540:
---

- the config name for https is "webapp.https.address"
{code}
 String rmAddress = conf.get("yarn.resourcemanager.webapp.address");
 if (conf.getBoolean("hadoop.ssl.enabled", false)) {
   scheme = "https://";
 }
{code}

> Convert yarn app cli to call yarn api services
> --
>
> Key: YARN-7540
> URL: https://issues.apache.org/jira/browse/YARN-7540
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7540.001.patch, YARN-7540.002.patch, 
> YARN-7540.003.patch
>
>
> For YARN docker application to launch through CLI, it works differently from 
> launching through REST API.  All application launched through REST API is 
> currently stored in yarn user HDFS home directory.  Application managed 
> through CLI are stored into individual user's HDFS home directory.  For 
> consistency, we want to have yarn app cli to interact with API service to 
> manage applications.  For performance reason, it is easier to implement list 
> all applications from one user's home directory instead of crawling all 
> user's home directories.  For security reason, it is safer to access only one 
> user home directory instead of all users.  Given the reasons above, the 
> proposal is to change how {{yarn app -launch}}, {{yarn app -list}} and {{yarn 
> app -destroy}} work.  Instead of calling HDFS API and RM API to launch 
> containers, CLI will be converted to call API service REST API resides in RM. 
>  RM perform the persist and operations to launch the actual application.





