[ https://issues.apache.org/jira/browse/SPARK-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945642#comment-14945642 ]

Sean Owen commented on SPARK-10940:
-----------------------------------

I mean, just for the sake of testing, try 100K? If that's not enough, it seems 
like this can't simply be normal behavior. That's not to say the job should 
require so many open files, but it would help narrow down the problem.
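
One way to gather that evidence without touching the limit is to sample the 
executors' descriptor usage while the terasort shuffle is running. Below is a 
minimal sketch, not from this ticket: it assumes a spark-shell with the usual 
sc and a HotSpot JVM that exposes com.sun.management.UnixOperatingSystemMXBean, 
and simply reports how close each executor JVM is to its per-process limit.

{code:scala}
import java.lang.management.ManagementFactory
import com.sun.management.UnixOperatingSystemMXBean

// Rough diagnostic sketch (not from the ticket): each partition reports the
// open/max file-descriptor counts of whichever executor JVM runs it.
val fdReports = sc.parallelize(1 to 1000, 1000).mapPartitions { _ =>
  val report = ManagementFactory.getOperatingSystemMXBean match {
    case os: UnixOperatingSystemMXBean =>
      s"open=${os.getOpenFileDescriptorCount} max=${os.getMaxFileDescriptorCount}"
    case _ =>
      "file-descriptor counts not exposed by this JVM"
  }
  Iterator(report)
}.distinct().collect()

fdReports.foreach(println)
{code}

If the open count sits near the max on the failing 120-core run but well below 
it on the 40-core run, that points at per-executor concurrency rather than a 
descriptor leak.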

> Too many open files Spark Shuffle
> ---------------------------------
>
>                 Key: SPARK-10940
>                 URL: https://issues.apache.org/jira/browse/SPARK-10940
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle, SQL
>    Affects Versions: 1.5.0
>         Environment: 6-node standalone Spark cluster with 1 master and 5 
> worker nodes, all running CentOS 6.6. Each node has >100 GB of memory and 
> 36 cores.
>            Reporter: Sandeep Pal
>
> Executing terasort via Spark SQL on data generated by teragen in Hadoop. 
> The generated data size is ~456 GB. 
> Terasort passes with --total-executor-cores = 40, whereas it fails with 
> --total-executor-cores = 120. 
> I have tried increasing the ulimit to 10k, but the problem persists.
> Below is the error message from one of the executor nodes:
> java.io.FileNotFoundException: 
> /tmp/spark-e15993e8-51a4-452a-8b86-da0169445065/executor-0c661152-3837-4711-bba2-2abf4fd15240/blockmgr-973aab72-feb8-4c60-ba3d-1b2ee27a1cc2/3f/temp_shuffle_7741538d-3ccf-4566-869f-265655ca9c90
>  (Too many open files)
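
A rough reading of the quoted numbers (an assumption on my part, not something 
established in this ticket): with 5 workers and one executor per worker, 
--total-executor-cores = 120 means roughly 24 concurrently running tasks per 
executor versus roughly 8 at 40 cores, and each running shuffle task holds its 
own temp_shuffle file plus any spill files and fetch connections. A tiny sketch 
of that arithmetic:

{code:scala}
// Back-of-the-envelope only: per-executor descriptor pressure during a shuffle
// scales roughly with how many tasks an executor runs at once (each running
// task keeps its own temp_shuffle file, spill files and fetch connections open).
object ShuffleConcurrency extends App {
  val workers = 5                               // worker nodes from the ticket
  for (totalCores <- Seq(40, 120)) {
    val tasksPerExecutor = totalCores / workers // assumes one executor per worker
    println(s"--total-executor-cores=$totalCores -> ~$tasksPerExecutor concurrent tasks per executor")
  }
  // 120 vs 40 cores is ~3x the concurrency, so whatever the per-task descriptor
  // cost is, the same 10k ulimit leaves roughly a third of the headroom per task.
}
{code}

Whether 10k is simply too low for this workload, or something is leaking 
descriptors, is exactly what a test run with a much higher limit (as suggested 
in the comment above) would distinguish.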


