[ https://issues.apache.org/jira/browse/SPARK-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641763#comment-14641763 ]

Samphel Norden commented on SPARK-9347:
---------------------------------------

Consider a top-level folder with a partition hierarchy as follows:
root/parquet_date=20150717/parquet_hour_of_day=00

For each date there are 24 hour folders, and each hour folder contains roughly
400 parquet (snappy-compressed) files. We are currently looking at 8 days'
worth of data, so roughly 400 * 24 * 8 = ~77,000 files in total.

After launching the Spark shell, the following almost always hangs:

scala> sqlContext.parquetFile(<top-level folder>)

The load time appears to be directly proportional to the number of files in the
folder, and it appears that the metadata load is reading every single file.
I verified this by running the same command against a folder with an order of
magnitude fewer files, which returns quickly.
Is there a better way to load the data into a DataFrame?
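For context, a minimal sketch of the kind of workaround I am considering: hand the reader only the partition directories that are actually needed instead of the root, and skip schema merging when all files share one schema. This assumes the sqlContext.read API and the parquet "mergeSchema" option from newer Spark releases (not 1.3.1), and the paths below are placeholders for illustration only.

// Sketch only: enumerate the specific partition directories needed so that
// metadata discovery touches far fewer files than pointing at the root.
// Assumes Spark 1.4+ for sqlContext.read; days/paths are hypothetical.
val days = Seq("20150717", "20150718")
val paths = for {
  d <- days
  h <- 0 until 24
} yield f"root/parquet_date=$d/parquet_hour_of_day=$h%02d"

// "mergeSchema" = false (supported by the parquet data source in newer
// releases) avoids reconciling schemas across every file when they all
// share the same schema.
val df = sqlContext.read
  .option("mergeSchema", "false")
  .parquet(paths: _*)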

> spark load of existing parquet files extremely slow if large number of files
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-9347
>                 URL: https://issues.apache.org/jira/browse/SPARK-9347
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.3.1
>            Reporter: Samphel Norden
>
> When the Spark SQL shell is launched and we point it to a folder containing a 
> large number of parquet files, the sqlContext.parquetFile() command takes a 
> very long time to load the tables. 


