Cherry Zhang created SPARK-20905:
------------------------------------

             Summary: When running Spark in yarn-client mode, a large 
executor-cores value leads to poor performance. 
                 Key: SPARK-20905
                 URL: https://issues.apache.org/jira/browse/SPARK-20905
             Project: Spark
          Issue Type: Question
          Components: Examples
    Affects Versions: 2.0.0
            Reporter: Cherry Zhang


Hi, all:
 When I run a training job on Spark in yarn-client mode with 
executor-cores=20 (less than the 24 vcores per node) and num-executors=4 (my 
cluster has 4 slave nodes), there is always one node whose computation time 
is much larger than the others'.

I read some blog posts saying that executor-cores should be set below 5 when 
there are many concurrent threads. I tried executor-cores=4 and 
num-executors=20, and that worked.
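
For reference, the two configurations can be sketched as spark-submit invocations (the flag names are the standard Spark CLI options; the application jar name is a placeholder, and memory settings are omitted):

```shell
# Original configuration: 4 fat executors with 20 cores each (80 cores total).
# In this setup one node's task time dominated the job.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 4 \
  --executor-cores 20 \
  my-training-job.jar   # placeholder application jar

# Revised configuration: 20 slim executors with 4 cores each.
# Same 80 cores in total, but spread across more executor JVMs.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 20 \
  --executor-cores 4 \
  my-training-job.jar   # placeholder application jar
```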

But I don't know why this helps. Can you explain? Thank you very much.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
