[ 
https://issues.apache.org/jira/browse/SPARK-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Pal updated SPARK-11005:
--------------------------------
    Environment: 
6 node cluster with 1 master and 5 worker nodes.
Memory > 100 GB each
Cores = 72 each
Input data ~94 GB

  was:
6 node cluster with 1 master and 5 worker nodes.
Memory > 100 GB each
Cores = 72 each
Input data ~496 GB


> Spark 1.5 Shuffle performance - (sort-based shuffle)
> ----------------------------------------------------
>
>                 Key: SPARK-11005
>                 URL: https://issues.apache.org/jira/browse/SPARK-11005
>             Project: Spark
>          Issue Type: Question
>          Components: Shuffle, SQL
>    Affects Versions: 1.5.0
>         Environment: 6 node cluster with 1 master and 5 worker nodes.
> Memory > 100 GB each
> Cores = 72 each
> Input data ~94 GB
>            Reporter: Sandeep Pal
>
> In the case of terasort via Spark SQL with 20 total cores (4 cores per executor), the 
> map stage completes in 14 minutes (around 26-30 s per task), whereas if I increase the 
> number of cores to 60 (12 cores per executor), the map stage degrades to 30 minutes 
> (~2.3 minutes per task). I believe the map tasks are independent of each other in the 
> shuffle. 
> Each map task has 128 MB of input (the HDFS block size) in both of the above cases. 
> So what causes the performance degradation as the number of cores increases? (See the 
> configuration sketch below the quoted description.)
>       
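A minimal sketch of the two submissions being compared, assuming a YARN deployment and a job launched via spark-submit; the class name, jar, memory setting, and HDFS paths are illustrative placeholders, not taken from the reporter's actual job:

  # 20 total cores: 5 executors x 4 cores each (one executor per worker node)
  spark-submit --class TeraSortSQL \
    --num-executors 5 --executor-cores 4 --executor-memory 80g \
    terasort-sql.jar hdfs:///terasort/input hdfs:///terasort/output

  # 60 total cores: 5 executors x 12 cores each
  spark-submit --class TeraSortSQL \
    --num-executors 5 --executor-cores 12 --executor-memory 80g \
    terasort-sql.jar hdfs:///terasort/input hdfs:///terasort/output

With ~94 GB of input and 128 MB HDFS blocks, both runs process roughly the same ~750 map tasks; only the number of tasks running concurrently per executor changes between the two configurations.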


