GitHub user lw-lin opened a pull request:

    https://github.com/apache/spark/pull/13705

    [SPARK-15472][SQL] Add support for writing in `csv` format in Structured 
Streaming

    ## What changes were proposed in this pull request?
    
    This patch adds support for writing in `csv` format in Structured Streaming:
    
    **1. At a high level, this patch forms the following class hierarchy**:
    ```
                                <OutputWriter>
                                      ↑
                             CSVOutputWriterBase
                                 ↗          ↖
    (anonymous batch) CSVOutputWriter    (anonymous streaming) CSVOutputWriter
                                               [write data without using
                                                  an OutputCommitter]
    ```
    ```
                             <OutputWriterFactory>
                                 ↗          ↖
           BatchCSVOutputWriterFactory   StreamingCSVOutputWriterFactory
    ```
    The streaming `CSVOutputWriter` writes data **without** using an 
`OutputCommitter`, the same approach taken by 
[SPARK-14716](https://github.com/apache/spark/pull/12409).
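    As a rough illustration, the hierarchy in the diagrams might be skeletonized as 
follows (class names are taken from the diagrams; the method signatures are 
simplified assumptions for illustration, not Spark's actual `OutputWriter` API):
    
    ```scala
    // Simplified sketch of the proposed hierarchy. Names follow the diagrams;
    // signatures are illustrative assumptions, not Spark's real interfaces.
    abstract class OutputWriter {
      def write(row: Seq[String]): Unit
      def close(): Unit
    }
    
    // Shared CSV logic lives in the base class; the batch and streaming
    // variants differ in how (and whether) they go through an OutputCommitter.
    abstract class CSVOutputWriterBase extends OutputWriter {
      protected def toCsvLine(row: Seq[String]): String = row.mkString(",")
    }
    
    // Each mode supplies its own factory for constructing writers.
    abstract class OutputWriterFactory {
      def newWriter(path: String): OutputWriter
    }
    ```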
    
    **2. To support compression, this patch attaches an extension to the path 
assigned by `FileStreamSink`**.
    
    E.g., if we write out using `gzip` compression and `FileStreamSink` 
assigns path `${uuid}` to the output writer, then the file finally written 
out will be `${uuid}.csv.gz`. This way, when we read the file back, we can 
correctly interpret it as `gzip`-compressed.
    
    This is slightly different from 
[SPARK-14716](https://github.com/apache/spark/pull/12409).
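    The extension-attaching step can be sketched as a small helper (a 
hypothetical function for illustration only; the names and placement are 
assumptions, not the patch's actual code):
    
    ```scala
    // Hypothetical helper illustrating the naming scheme: the sink assigns a
    // bare path (e.g. a UUID), and the writer appends the format and codec
    // extensions so readers can detect the compression from the file name.
    object CsvFileNaming {
      def finalPath(sinkPath: String, codecExt: Option[String]): String =
        codecExt match {
          case Some(ext) => s"$sinkPath.csv.$ext" // e.g. ${uuid}.csv.gz
          case None      => s"$sinkPath.csv"      // uncompressed output
        }
    }
    ```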
    
    ## How was this patch tested?
    
    `FileStreamSinkSuite` has been expanded to cover the `csv` format:
    
    ```scala
    test("csv - unpartitioned data - codecs: none/gzip")
    test("csv - partitioned data - codecs: none/gzip")
    test("csv - unpartitioned writing and batch reading - codecs: none/gzip")
    test("csv - partitioned writing and batch reading - codecs: none/gzip")
    ```

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/lw-lin/spark csv-for-ss

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/13705.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #13705
    
----
commit 9869f9885e4fdc7364cd46ab05b1f332921ff8d7
Author: Liwei Lin <lwl...@gmail.com>
Date:   2016-06-16T05:38:13Z

    Add support for writing in `csv` format

----

