Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19269#discussion_r139705608
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java ---
    @@ -0,0 +1,71 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.sources.v2.writer;
    +
    +import org.apache.spark.annotation.InterfaceStability;
    +import org.apache.spark.sql.Row;
    +import org.apache.spark.sql.SaveMode;
    +import org.apache.spark.sql.sources.v2.DataSourceV2Options;
    +import org.apache.spark.sql.sources.v2.WriteSupport;
    +import org.apache.spark.sql.types.StructType;
    +
    +/**
    + * A data source writer that is returned by
    + * {@link WriteSupport#createWriter(StructType, SaveMode, DataSourceV2Options)}.
    + * It can mix in various writing optimization interfaces to speed up data saving. The actual
    + * writing logic is delegated to the {@link WriteTask} that is returned by
    + * {@link #createWriteTask()}.
    + *
    + * The writing procedure is:
    + *   1. Create a write task with {@link #createWriteTask()}, then serialize it and send it to
    + *      all the partitions of the input data (RDD).
    + *   2. For each partition, create a data writer with the write task, and write the data of
    + *      the partition with this writer. If all the data are written successfully, call
    + *      {@link DataWriter#commit()}. If an exception happens during writing, call
    + *      {@link DataWriter#abort()}. This step may repeat several times, as Spark will retry
    + *      failed tasks.
    + *   3. Wait until all the writers/partitions are finished, i.e., either committed or aborted.
    + *      If all partitions were written successfully, call {@link #commit(WriterCommitMessage[])}.
    + *      If some partitions failed and aborted, call {@link #abort()}.
    --- End diff ---
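
    The procedure above, as a self-contained sketch in plain Java. The interface shapes are
    inferred from the names in this diff (and generified for illustration); the parameter lists
    and the `runPartition`/`runJob` helpers are assumptions, not the PR's exact signatures:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        // Hypothetical stand-ins for the PR's interfaces, reduced to what the
        // Javadoc states. Method signatures here are assumptions.
        interface WriterCommitMessage extends Serializable {}

        interface DataWriter<T> {
          void write(T record);          // called once per record of the partition
          WriterCommitMessage commit();  // step 2: all records written successfully
          void abort();                  // step 2: an exception happened while writing
        }

        interface WriteTask<T> extends Serializable {
          DataWriter<T> createDataWriter(int partitionId, int attemptNumber);
        }

        interface DataSourceV2Writer<T> {
          WriteTask<T> createWriteTask();               // step 1: created once on the driver
          void commit(WriterCommitMessage[] messages);  // step 3: every partition committed
          void abort();                                 // step 3: some partition failed for good
        }

        final class WriteProtocolSketch {
          // Step 2: what each task attempt on an executor does for its partition.
          static <T> WriterCommitMessage runPartition(
              WriteTask<T> task, int partitionId, int attemptNumber, Iterator<T> rows) {
            DataWriter<T> writer = task.createDataWriter(partitionId, attemptNumber);
            try {
              while (rows.hasNext()) {
                writer.write(rows.next());
              }
              return writer.commit();  // all data of this partition written successfully
            } catch (RuntimeException e) {
              writer.abort();          // clean up partial output; Spark may retry the task
              throw e;
            }
          }

          // Steps 1 and 3: the driver creates one write task, fans it out to all
          // partitions, then commits or aborts the job as a whole.
          static <T> void runJob(DataSourceV2Writer<T> writer, List<Iterator<T>> partitions) {
            WriteTask<T> task = writer.createWriteTask();  // serialized and sent to executors
            List<WriterCommitMessage> messages = new ArrayList<>();
            try {
              for (int p = 0; p < partitions.size(); p++) {
                messages.add(runPartition(task, p, 0, partitions.get(p)));
              }
              writer.commit(messages.toArray(new WriterCommitMessage[0]));
            } catch (RuntimeException e) {
              writer.abort();  // some partition failed and could not be retried
              throw e;
            }
          }
        }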
    
    cc @rdblue @rxin do we really need an individual SPIP for the write path? I think this
    procedure is the only part that needs some high-level discussion; the other parts are very
    similar to the read path, e.g. `WriteSupport` -> `DataSourceV2Writer` -> `WriteTask` ->
    `DataWriter`. A toy example of these commit semantics is sketched below.
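
    Concretely, a toy sink built on the sketch interfaces above (reusing its imports): each
    write attempt buffers rows locally and publishes them only through its commit message, so an
    aborted or retried attempt leaves nothing behind. The class names (`RowsMessage`,
    `BufferingWriteTask`, `ListSink`) are hypothetical.

        // Commit message carrying the rows a successful attempt produced.
        final class RowsMessage implements WriterCommitMessage {
          final List<String> rows;
          RowsMessage(List<String> rows) { this.rows = rows; }
        }

        final class BufferingWriteTask implements WriteTask<String> {
          @Override
          public DataWriter<String> createDataWriter(int partitionId, int attemptNumber) {
            return new DataWriter<String>() {
              private final List<String> buffer = new ArrayList<>();

              @Override public void write(String record) { buffer.add(record); }

              @Override public WriterCommitMessage commit() {
                return new RowsMessage(new ArrayList<>(buffer));  // publish via the message
              }

              @Override public void abort() { buffer.clear(); }   // drop partial output
            };
          }
        }

        final class ListSink implements DataSourceV2Writer<String> {
          final List<String> committed = new ArrayList<>();

          @Override public WriteTask<String> createWriteTask() {
            return new BufferingWriteTask();
          }

          @Override public void commit(WriterCommitMessage[] messages) {
            for (WriterCommitMessage m : messages) {
              committed.addAll(((RowsMessage) m).rows);  // job-level, all-or-nothing publish
            }
          }

          @Override public void abort() { /* nothing was published, nothing to clean up */ }
        }

    A real sink would likewise do its job-level publish inside `commit(WriterCommitMessage[])`,
    e.g. by moving staged files into their final location, which is what makes the protocol
    tolerant of task retries.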

