[ https://issues.apache.org/jira/browse/SPARK-32582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175981#comment-17175981 ]

Lantao Jin commented on SPARK-32582:
------------------------------------

{quote}
 I remember I investigated this issue, and the Hadoop API itself lists in batch. 
A streaming way of listing isn't possible.
{quote}

Yes, we can list statuses within a single partition if the table is partitioned. For a 
non-partitioned table, it still lists all files. We assume that having too many files 
in a non-partitioned table is bad design in a data warehouse.
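
For example (a minimal sketch; the table path and partition layout are hypothetical), 
a user can bound the listing by inferring from one partition and then reusing that 
schema for the whole table:

{code:scala}
// Hypothetical layout: a table partitioned by dt, e.g. /warehouse/events/dt=2020-08-01/.
// Pointing the reader at a single partition bounds the listing that schema
// inference triggers, instead of listing every file under the table root.
val sampleSchema = spark.read
  .orc("/warehouse/events/dt=2020-08-01")
  .schema

// Reuse the inferred schema for the full table so no further inference (and thus
// no table-wide listing for inference) is needed; partition columns such as dt
// are still discovered from the directory layout.
val df = spark.read.schema(sampleSchema).orc("/warehouse/events")
{code}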

{quote}
We can add one more mode "INFER_WITH_SAMPLE".
{quote}

I am not sure it would be helpful, since Hadoop has no API to list only a subset of 
the files in a folder; a rough sketch of the constraint follows.
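
A minimal sketch (the path is hypothetical): even if we sampled files, the listing 
itself is still one batch call, so the full listing cost is paid either way:

{code:scala}
import org.apache.hadoop.fs.{FileSystem, Path}

// FileSystem.listStatus returns the whole directory contents in one call, so an
// "INFER_WITH_SAMPLE" mode would still pay the full listing cost up front.
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val allFiles = fs.listStatus(new Path("/warehouse/events"))  // one batch call
val sample = allFiles.take(1)  // sampling only happens after the full listing
{code}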

> Spark SQL Infer Schema Performance
> ----------------------------------
>
>                 Key: SPARK-32582
>                 URL: https://issues.apache.org/jira/browse/SPARK-32582
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.6, 3.0.0
>            Reporter: Jarred Li
>            Priority: Major
>
> When schema inference is enabled, Spark tries to list all the files in the table, 
> but only one of the files is actually read for schema information. Performance 
> suffers from listing all the files in the table when the number of partitions 
> is large.
>  
> See the code in 
> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala#L88 :
> all the files in the table are passed in, but only one file's schema is used 
> for inference.
>  
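
For illustration, the behavior described above amounts to a first-file-wins pattern. 
A hedged sketch, not the actual OrcUtils code (readSingleFileSchema is a hypothetical 
helper standing in for reading one ORC footer):

{code:scala}
import org.apache.hadoop.fs.FileStatus
import org.apache.spark.sql.types.StructType

// Every listed file is passed in, but inference stops at the first file whose
// schema can be read, so most of the up-front listing is wasted work.
def inferSchema(files: Seq[FileStatus]): Option[StructType] =
  files.iterator
    .map(readSingleFileSchema)  // hypothetical helper: reads a single file's schema
    .collectFirst { case Some(schema) => schema }

// Hypothetical helper signature assumed by the sketch.
def readSingleFileSchema(file: FileStatus): Option[StructType] = ???
{code}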


