Ngone51 commented on a change in pull request #33615:
URL: https://github.com/apache/spark/pull/33615#discussion_r681563786



##########
File path: docs/configuration.md
##########
@@ -3134,3 +3134,119 @@ The stage level scheduling feature allows users to specify task and executor resources
 This is only available for the RDD API in Scala, Java, and Python.  It is 
available on YARN and Kubernetes when dynamic allocation is enabled. See the 
[YARN](running-on-yarn.html#stage-level-scheduling-overview) page or 
[Kubernetes](running-on-kubernetes.html#stage-level-scheduling-overview) page 
for more implementation details.
 
 See the `RDD.withResources` and `ResourceProfileBuilder` APIs for using this feature. The current implementation acquires new executors for each `ResourceProfile` created, and the profiles currently have to be an exact match: Spark does not try to fit tasks that require a different ResourceProfile into an executor created with another one. Executors that are no longer in use are released via the dynamic allocation idle timeout. By default this feature only allows one ResourceProfile per stage; if the user associates more than one ResourceProfile with an RDD, Spark throws an exception. See the config `spark.scheduler.resource.profileMergeConflicts` to control that behavior. When `spark.scheduler.resource.profileMergeConflicts` is enabled, the current merge strategy is a simple maximum of each resource across the conflicting ResourceProfiles: Spark creates a new ResourceProfile containing the maximum value of each resource.
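
For illustration (not part of this diff), a minimal sketch of attaching a ResourceProfile to an RDD might look like the following; it assumes an existing SparkContext named `sc` on a YARN or Kubernetes cluster with dynamic allocation enabled:

```scala
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

// Per-executor requirements for stages computing this RDD.
val execReqs = new ExecutorResourceRequests().cores(4).memory("6g")
// Per-task requirements.
val taskReqs = new TaskResourceRequests().cpus(2)

// Build the profile and attach it to the RDD; Spark acquires new executors
// that exactly match this profile when the corresponding stages run.
val profile = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()

val count = sc.parallelize(1 to 1000, 10)
  .withResources(profile)
  .map(_ * 2)
  .count()
```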
+
+# Push-based shuffle overview
+
+Push-based shuffle is an improved shuffle architecture that optimizes the 
reliability and performance of the shuffle step in Spark. Complementing the 
existing shuffle mechanism, push-based shuffle takes a best-effort approach to 
push the shuffle blocks generated by the map tasks to remote shuffle services 
to be merged per shuffle partition. When the reduce tasks start running, they 
fetch a combination of the merged shuffle partitions and some of the original 
shuffle blocks to get their input data. As a result, push-based shuffle 
converts shuffle services’ small random disk reads into large sequential reads. 
The reduce tasks are also scheduled with locality preferences for the locations of their corresponding merged shuffle partitions, which significantly improves shuffle fetch data locality.

Review comment:
       `remote shuffle services` -> `remote external shuffle services`
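
       As context for readers, a minimal sketch of opting in to push-based shuffle from the client side could look like the snippet below. It assumes a YARN cluster whose external shuffle service supports block merging; the exact server-side setup and the full list of `spark.shuffle.push.*` options are documented in the rest of this section of the diff.

       ```scala
       import org.apache.spark.SparkConf
       import org.apache.spark.sql.SparkSession

       // Client-side opt-in; push-based shuffle also relies on the external shuffle service.
       val conf = new SparkConf()
         .set("spark.shuffle.service.enabled", "true")
         .set("spark.shuffle.push.enabled", "true")

       val spark = SparkSession.builder().config(conf).getOrCreate()
       ```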




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


