[ https://issues.apache.org/jira/browse/SPARK-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349649#comment-16349649 ]

Dilip Biswal commented on SPARK-23271:
--------------------------------------

[~hyukjin.kwon]
I took a look at this. To the best of my knowledge, the difference in behaviour between the two cases comes down to the following:
Case 1
{code:java}
scala> List.empty[String].toDF().rdd.partitions.length
res18: Int = 1
{code}
Case 2
{code:java}
scala> val anySchema = StructType(StructField("anyName", StringType, nullable = false) :: Nil)
anySchema: org.apache.spark.sql.types.StructType = StructType(StructField(anyName,StringType,false))

scala> spark.read.schema(anySchema).csv("/tmp/empty_folder").rdd.partitions.length
res22: Int = 0
{code}
In the second case, since the number of partitions is 0, the write task is never invoked, and it is the write task that contains the logic to create the empty Parquet file.
I tried repartitioning the input RDD [here|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala#L180] when the input number of partitions is 0, before setting up the write job, and things seem to work well.
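Roughly, the idea is as follows (a minimal sketch only; the variable names and the exact code surrounding that line in FileFormatWriter are my approximation, not a final patch):
{code:java}
// Sketch: inside FileFormatWriter.write, after the physical plan has
// produced its RDD. If the RDD has no partitions, substitute a single
// empty partition so that exactly one write task runs and emits a
// Parquet file carrying the schema.
val rdd = plan.execute()
val rddWithNonEmptyPartitions = if (rdd.partitions.isEmpty) {
  sparkSession.sparkContext.parallelize(Seq.empty[InternalRow], numSlices = 1)
} else {
  rdd
}
{code}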

Would this be a reasonable way to fix this? I would appreciate your feedback.
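In the meantime, a possible user-side workaround (assuming the extra shuffle is acceptable) would be to force at least one partition before writing, reusing the reporter's snippet from below:
{code:java}
// repartition(1) always yields exactly one (possibly empty) partition,
// so the write task runs and produces an empty Parquet file with the schema.
inputDF.repartition(1).write.parquet(outputFolderName)
{code}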

> Parquet output contains only "_SUCCESS" file after empty DataFrame saving 
> --------------------------------------------------------------------------
>
>                 Key: SPARK-23271
>                 URL: https://issues.apache.org/jira/browse/SPARK-23271
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Pavlo Z.
>            Priority: Minor
>         Attachments: parquet-empty-output.zip
>
>
> A sophisticated case, reproduced only when reading an empty CSV file that has
> no header but an explicitly assigned schema.
> Steps to reproduce (Scala):
> {code:java}
> val anySchema = StructType(StructField("anyName", StringType, nullable = false) :: Nil)
> val inputDF = spark.read.schema(anySchema).csv(inputFolderWithEmptyCSVFile)
> inputDF.write.parquet(outputFolderName)
> // Exception: org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
> val actualDF = spark.read.parquet(outputFolderName)
> {code}
> *Actual:* only a "_SUCCESS" file in the output directory.
> *Expected:* at least one Parquet file with the schema.
> A project that reproduces the issue is attached.


