GitHub user fangshil opened a pull request:

    https://github.com/apache/spark/pull/20931

    [SPARK-23815][Core]Spark writer dynamic partition overwrite mode may fail 
to write output on multi level partition

    
    ## What changes were proposed in this pull request?
    
    SPARK-20236 introduced a new writer mode that overwrites only the affected partitions. While using this feature on our production cluster, we found a bug when writing multi-level partitions to HDFS.
    
    A simple test case to reproduce this issue:

    ```scala
    val df = Seq(("1", "2", "3")).toDF("col1", "col2", "col3")
    df.write.partitionBy("col1", "col2").mode("overwrite").save("/my/hdfs/location")
    ```
    
    If the HDFS location "/my/hdfs/location" does not exist, no output is written.
    
    This appears to be caused by the job-commit change that SPARK-20236 made in HadoopMapReduceCommitProtocol.
    
    During job commit, the output has been written to the staging directory /my/hdfs/location/.spark-staging.xxx/col1=1/col2=2, and the code then calls fs.rename to move /my/hdfs/location/.spark-staging.xxx/col1=1/col2=2 to /my/hdfs/location/col1=1/col2=2. In our case, however, the operation fails on HDFS because /my/hdfs/location/col1=1 does not exist: HDFS rename does not create missing parent directories for the destination.
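    This rename behavior can be illustrated with a small, self-contained sketch. Note this uses `java.nio.file` on the local file system as an analogue of the Hadoop FileSystem API (the object and method names below are illustrative, not part of Spark or Hadoop): a move fails when the destination's parent directory is missing, just as fs.rename does on HDFS.

    ```scala
    import java.io.IOException
    import java.nio.file.Files

    object RenameDemo {
      // Returns true if moving the staged partition fails because the
      // destination's parent directory does not exist.
      def renameWithoutParentFails(): Boolean = {
        val base = Files.createTempDirectory("rename-demo")
        // Simulate the staged output: .spark-staging.xxx/col1=1/col2=2
        val staging = base.resolve(".spark-staging.xxx").resolve("col1=1").resolve("col2=2")
        Files.createDirectories(staging)
        // Destination parent col1=1 does NOT exist under base
        val dest = base.resolve("col1=1").resolve("col2=2")
        try {
          Files.move(staging, dest)
          false // move succeeded (would not happen on HDFS in this scenario)
        } catch {
          // Typically NoSuchFileException: the parent of dest is missing
          case _: IOException => true
        }
      }

      def main(args: Array[String]): Unit =
        println(s"rename without parent fails: ${renameWithoutParentFails()}")
    }
    ```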
    
    This is not caught by the unit tests added with SPARK-20236, which run against the local file system.
    
    We propose the following fix: when cleaning the current partition directory /my/hdfs/location/col1=1/col2=2 before the rename, if the delete fails (because /my/hdfs/location/col1=1/col2=2 may not exist), we call mkdirs to create the parent directory /my/hdfs/location/col1=1 (if it does not already exist) so that the subsequent rename can succeed.
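    A minimal sketch of the proposed delete -> mkdirs(parent) -> rename sequence, again using `java.nio.file` locally in place of the Hadoop FileSystem calls (the names here are illustrative and are not the actual patch to HadoopMapReduceCommitProtocol):

    ```scala
    import java.nio.file.{Files, Path}

    object CommitFix {
      /** Move the staged partition dir to its final location. If the cleanup
        * delete finds nothing to remove and the destination's parent is
        * missing (the multi-level partition case), create the parent first
        * so the rename has somewhere to land. */
      def commitPartition(staging: Path, dest: Path): Unit = {
        // Clean the current partition dir; deleteIfExists returns false
        // when there was nothing to delete (the failing-delete case).
        val deleted = Files.deleteIfExists(dest)
        if (!deleted && !Files.exists(dest.getParent)) {
          // Parent (e.g. /my/hdfs/location/col1=1) is missing: create it.
          Files.createDirectories(dest.getParent)
        }
        Files.move(staging, dest) // now succeeds
      }
    }
    ```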
    
    
    
    
    
    ## How was this patch tested?
    
    We have tested this patch on our production cluster, and it fixed the problem.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/fangshil/spark master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/20931.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #20931
    
----
commit da63c17d7ae7fbf04cc474d946d61a098b3e1ade
Author: Fangshi Li <fli@...>
Date:   2018-03-28T04:25:54Z

    Spark writer dynamic partition overwrite mode may fail to write output on 
multi level partition

----


---
