[ https://issues.apache.org/jira/browse/SPARK-24615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16563730#comment-16563730 ]

Saisai Shao edited comment on SPARK-24615 at 7/31/18 2:16 PM:
--------------------------------------------------------------

Hi [~tgraves], I think eval() might unnecessarily break a lineage that could 
otherwise execute in one stage. For example, data transforming -> training -> 
transforming could possibly run in a single stage, but using eval would break 
it into several stages, and I'm not sure that is the right choice. Also, if we 
use eval to break the lineage, how do we store the intermediate data: via 
shuffle, in memory, or on disk?
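
To make the single-stage lineage concrete, here is a minimal sketch (assuming a 
local job; trainOnPartition is a hypothetical stand-in for a per-partition 
training step, not a Spark API) of three chained narrow transformations that 
Spark pipelines into one stage today. An explicit eval()-style barrier between 
them would split this into several stages and force the intermediate data to be 
stored somewhere:

{code:scala}
import org.apache.spark.sql.SparkSession

object LineageSketch {
  // Hypothetical per-partition "training" step, standing in for an ML kernel.
  def trainOnPartition(it: Iterator[Double]): Iterator[Double] = it.map(_ * 2.0)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("lineage-sketch")
      .master("local[*]")
      .getOrCreate()

    val raw = spark.sparkContext.parallelize(1 to 1000, numSlices = 4)

    // Three chained narrow transformations are pipelined into a single stage:
    val prepared = raw.map(_.toDouble / 1000.0)             // data transforming
    val trained  = prepared.mapPartitions(trainOnPartition) // training
    val result   = trained.map(_ + 1.0)                     // transforming again

    // All three steps run as one stage of pipelined tasks; an eval()-style
    // barrier between them would break this into multiple stages.
    println(result.count())

    spark.stop()
  }
}
{code}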

Yes, it is hard for users to know where to break the boundaries, but currently 
I cannot think of a good solution other than using eval() to separate them 
explicitly. To resolve conflicting resource requests, failing the job might be 
one option. In the SQL or DataFrame area, I don't think we have to expose such 
low-level RDD APIs to users; some hints should probably be enough (though I 
haven't thought this through).

Currently in my design, withResources only applies to the stage in which the 
RDD will be executed; the following stages will still be ordinary stages 
without additional resources.
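
For clarity, here is a rough sketch of how that scoping is intended to behave. 
ResourceRequest and the RDD.withResources method follow the design discussion 
rather than any existing Spark API, so they are stubbed out below purely to 
keep the snippet self-contained:

{code:scala}
import org.apache.spark.rdd.RDD

object WithResourcesSketch {
  // Hypothetical resource request describing what each task needs; not a
  // Spark 2.4 class.
  final case class ResourceRequest(resourceName: String, amountPerTask: Int)

  // Stub of the proposed RDD.withResources. In the proposal the request is
  // honored only for the single stage that executes this RDD; the stub simply
  // returns the RDD unchanged to illustrate the call pattern.
  implicit class RichRDD[T](rdd: RDD[T]) {
    def withResources(req: ResourceRequest): RDD[T] = rdd
  }

  def pipeline(prepared: RDD[Double]): RDD[Double] = {
    val trained = prepared
      .withResources(ResourceRequest("gpu", amountPerTask = 1))
      .mapPartitions(_.map(_ * 2.0)) // this stage's tasks would go to GPU executors

    // Downstream stages fall back to ordinary CPU-only scheduling:
    trained.map(_ + 1.0)
  }
}
{code}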


> Accelerator-aware task scheduling for Spark
> -------------------------------------------
>
>                 Key: SPARK-24615
>                 URL: https://issues.apache.org/jira/browse/SPARK-24615
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Saisai Shao
>            Assignee: Saisai Shao
>            Priority: Major
>              Labels: Hydrogen, SPIP
>
> In the machine learning area, accelerator cards (GPU, FPGA, TPU) are 
> predominant compared to CPUs. To make the current Spark architecture work 
> with accelerator cards, Spark itself should understand the existence of 
> accelerators and know how to schedule tasks onto the executors equipped with 
> them.
> Spark's current scheduler schedules tasks based on data locality plus the 
> availability of CPUs. This introduces several problems when scheduling tasks 
> that require accelerators.
>  # There are usually more CPU cores than accelerators on one node, so using 
> CPU cores to schedule accelerator-required tasks introduces a mismatch.
>  # In a cluster, we can always assume that every node has CPUs, but the same 
> is not true of accelerator cards.
>  # The mix of heterogeneous tasks (accelerator-required or not) requires the 
> scheduler to schedule tasks in a smart way.
> So here we propose to improve the current scheduler to support heterogeneous 
> tasks (accelerator-required or not). This can be part of the work of Project 
> Hydrogen.
> Details are attached in a Google doc. It doesn't cover all the implementation 
> details, just highlights the parts that should be changed.
>  
> CC [~yanboliang] [~merlintang]


