[ https://issues.apache.org/jira/browse/SPARK-24615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549276#comment-16549276 ]
Saisai Shao commented on SPARK-24615:
-------------------------------------

Thanks [~tgraves] for the suggestion.

{quote}Once I get to the point I want to do the ML I want to ask for the gpu's as well as ask for more memory during that stage because I didn't need more before this stage for all the etl work. I realize you already have executors, but ideally spark with the cluster manager could potentially release the existing ones and ask for new ones with those requirements.{quote}

Yes, I have already discussed this with my colleague offline. It is a valid scenario, but to achieve it we would have to change the current dynamic resource allocation mechanism. For now I have marked it as a Non-Goal in this proposal, which focuses only on static resource requests (--executor-cores, --executor-gpus). I think we should support it later.

> Accelerator-aware task scheduling for Spark
> -------------------------------------------
>
>                 Key: SPARK-24615
>                 URL: https://issues.apache.org/jira/browse/SPARK-24615
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Saisai Shao
>            Assignee: Saisai Shao
>            Priority: Major
>              Labels: Hydrogen, SPIP
>
> In the machine learning area, accelerator cards (GPU, FPGA, TPU) are
> predominant compared to CPUs. To make the current Spark architecture work
> with accelerator cards, Spark itself should understand the existence of
> accelerators and know how to schedule tasks onto the executors where
> accelerators are equipped.
> Spark's current scheduler schedules tasks based on the locality of the data
> plus the availability of CPUs. This introduces some problems when scheduling
> tasks that require accelerators.
> # CPU cores usually outnumber accelerators on a node, so using CPU cores
> to schedule accelerator-required tasks introduces a mismatch.
> # In one cluster, we always assume that CPUs are equipped in each node, but
> this is not true of accelerator cards.
> # The existence of heterogeneous tasks (accelerator-required or not)
> requires the scheduler to schedule tasks in a smart way.
> So here we propose to improve the current scheduler to support heterogeneous
> tasks (accelerator-required or not). This can be part of the work of Project
> Hydrogen.
> Details are attached in a Google doc. It doesn't cover all the implementation
> details, just highlights the parts that should be changed.
>
> CC [~yanboliang] [~merlintang]

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
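The heterogeneous scheduling constraint described in the issue — tasks that may or may not require an accelerator, running on a cluster where only some nodes carry accelerator cards — can be sketched as a toy resource-matching loop. This is purely illustrative (it is not Spark's scheduler, and the class and function names are invented for this sketch); it only shows why matching on CPU cores alone mismatches GPU-required tasks:

```python
# Illustrative sketch only, not Spark's implementation: a scheduler must
# check every resource dimension (cores AND accelerators), because CPU
# availability alone says nothing about accelerator availability.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Executor:
    cores_free: int
    gpus_free: int  # 0 on nodes without accelerator cards

@dataclass
class Task:
    cores: int = 1
    gpus: int = 0   # heterogeneous workload: most ETL tasks need no GPU

def can_run(task: Task, ex: Executor) -> bool:
    # Scheduling on free CPU cores alone would wrongly place a GPU task
    # on a CPU-only node; both dimensions must be satisfied.
    return ex.cores_free >= task.cores and ex.gpus_free >= task.gpus

def schedule(task: Task, executors: List[Executor]) -> Optional[Executor]:
    for ex in executors:
        if can_run(task, ex):
            ex.cores_free -= task.cores
            ex.gpus_free -= task.gpus
            return ex
    return None  # no executor satisfies the accelerator requirement

executors = [Executor(cores_free=8, gpus_free=0),   # CPU-only node
             Executor(cores_free=8, gpus_free=1)]   # GPU-equipped node

etl_task = Task(cores=2)          # accelerator not required
ml_task = Task(cores=2, gpus=1)   # accelerator required

assert schedule(ml_task, executors) is executors[1]   # lands on GPU node
assert schedule(etl_task, executors) is executors[0]  # CPU node suffices
```

The mismatch in point 1 of the issue shows up directly: the CPU-only node has eight free cores, yet the ML task must skip it entirely, which a cores-only scheduler would get wrong.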