[ https://issues.apache.org/jira/browse/PIG-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14949904#comment-14949904 ]

Srikanth Sundarrajan commented on PIG-4698:
-------------------------------------------

There are a couple of options for how we can go about this:

1. Spark supports [dynamic 
allocation|http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation]
 and the same can be 
[configured|http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation]
 for YARN backends. As a first step we can simply enable this, which lets the 
number of executors expand and shrink between a configured min and max bound 
(a sample configuration is sketched further below).

2. As a subsequent effort we can use SparkContext::requestExecutors() and 
SparkContext::killExecutors() to control this in a fine-grained fashion, 
depending on the stage of execution and the resources required for that stage 
(see the API sketch right after this list).
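
For illustration, a minimal sketch of how #2 could be driven, assuming Pig's 
Spark launcher keeps a handle to the live SparkContext. The wrapper class, its 
entry points, and the choice of which executor ids to release are hypothetical, 
not existing Pig code; SparkContext::requestExecutors()/killExecutor() are the 
developer-API methods Spark exposes for this and are only honoured by cluster 
managers that support executor allocation (i.e. YARN).

{code:java}
import org.apache.spark.SparkContext;

// Hypothetical helper: grows/shrinks the executor pool per stage.
// Only requestExecutors()/killExecutor() are real Spark API calls.
public class ExecutorScaler {
    private final SparkContext sc;

    public ExecutorScaler(SparkContext sc) {
        this.sc = sc;
    }

    /** Grow the executor pool ahead of a resource-heavy stage. */
    public void scaleUp(int additionalExecutors) {
        sc.requestExecutors(additionalExecutors);
    }

    /** Release executors that an upcoming light stage no longer needs;
     *  deciding which ids are idle is left to the scheduler integration. */
    public void scaleDown(Iterable<String> idleExecutorIds) {
        for (String executorId : idleExecutorIds) {
            sc.killExecutor(executorId);
        }
    }
}
{code}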

Would prefer that we go with approach #1 for now. [~xuefuz] suggested the same 
in an offline conversation as well. Thoughts?
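
For reference, a minimal sketch of what enabling #1 would amount to, assuming 
the properties are set on the SparkConf from which Pig's Spark launcher builds 
its context (the class name and the numeric bounds are illustrative, not 
existing Pig code). Note that dynamic allocation on YARN also requires Spark's 
external shuffle service to be registered as a NodeManager auxiliary service.

{code:java}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class DynamicAllocationExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("PigOnSpark")
            .setMaster("yarn-client")                          // or yarn-cluster
            // Prerequisite on YARN: the external shuffle service keeps shuffle
            // files available after an executor is released.
            .set("spark.shuffle.service.enabled", "true")
            .set("spark.dynamicAllocation.enabled", "true")
            // Executors expand and shrink between these bounds (values illustrative).
            .set("spark.dynamicAllocation.minExecutors", "2")
            .set("spark.dynamicAllocation.maxExecutors", "50");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... run the Pig-on-Spark plan with the elastic executor pool ...
        sc.stop();
    }
}
{code}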

> Enable dynamic resource allocation/de-allocation on Yarn backends
> -----------------------------------------------------------------
>
>                 Key: PIG-4698
>                 URL: https://issues.apache.org/jira/browse/PIG-4698
>             Project: Pig
>          Issue Type: Sub-task
>          Components: spark
>    Affects Versions: spark-branch
>            Reporter: Srikanth Sundarrajan
>            Assignee: Srikanth Sundarrajan
>              Labels: spork
>             Fix For: spark-branch
>
>
> Resource elasticity needs to be enabled on the YARN backend to allow jobs to 
> scale out better and provide better wall-clock execution times, while unused 
> resources should be released back to the RM for use.


