guihuawen created SPARK-48210:
---------------------------------

             Summary: Modify the description of whether dynamic allocation is 
enabled in the “Stage Level Scheduling Overview”
                 Key: SPARK-48210
                 URL: https://issues.apache.org/jira/browse/SPARK-48210
             Project: Spark
          Issue Type: Documentation
          Components: Documentation
    Affects Versions: 4.0.0
            Reporter: guihuawen
             Fix For: 4.0.0


In the “Stage Level Scheduling Overview” section of running-on-yarn and 
running-on-kubernetes, the description of the behavior when dynamic allocation 
is disabled is inconsistent with the actual code implementation.

In running-on-yarn, the documentation says:

 * When dynamic allocation is disabled: It allows users to specify different 
   task resource requirements at the stage level and will use the same 
   executors requested at startup.
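For context, the scenario that bullet describes corresponds to running on YARN 
with dynamic allocation turned off. A minimal sketch of such a setup (the 
application name and executor count are illustrative, not taken from the 
documentation):

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Illustrative YARN session with dynamic allocation disabled, i.e. the case
// the quoted bullet is about. App name and executor count are placeholders.
val conf = new SparkConf()
  .setAppName("stage-level-scheduling-demo")
  .setMaster("yarn")
  .set("spark.dynamicAllocation.enabled", "false")
  .set("spark.executor.instances", "4")

val spark = SparkSession.builder().config(conf).getOrCreate()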

But the implementation is:

Class: ResourceProfileManager

Method: isSupported

private[spark] def isSupported(rp: ResourceProfile): Boolean = {
  assert(master != null)
  if (rp.isInstanceOf[TaskResourceProfile] && !dynamicEnabled) {
    if ((notRunningUnitTests || testExceptionThrown) &&
        !(isStandaloneOrLocalCluster || isYarn || isK8s)) {
      throw new SparkException("TaskResourceProfiles are only supported for Standalone, " +
        "Yarn and Kubernetes cluster for now when dynamic allocation is disabled.")
    }
  }
  // ...
}

According to this check, TaskResourceProfile is not supported on Yarn and K8s 
when dynamic allocation is disabled.

The description in the documentation does not match this behavior, so the 
documentation needs to be updated.
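For reference, a minimal sketch of how a job reaches this check, continuing the 
YARN setup sketched above (the task CPU amount and the RDD are illustrative): a 
profile that only declares task requirements is, in recent Spark versions, 
treated as a task-only TaskResourceProfile, and applying it to an RDD routes it 
through ResourceProfileManager.isSupported.

import org.apache.spark.resource.{ResourceProfileBuilder, TaskResourceRequests}

// Profile that only declares task requirements; with dynamic allocation off
// this is the task-only (TaskResourceProfile) case checked above.
val taskReqs = new TaskResourceRequests().cpus(2)
val taskOnlyProfile = new ResourceProfileBuilder().require(taskReqs).build()

// Applying the profile to an RDD is what sends it through the validation.
val rdd = spark.sparkContext.parallelize(1 to 100, 10)
rdd.withResources(taskOnlyProfile).map(_ * 2).collect()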

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
