[ 
https://issues.apache.org/jira/browse/FLINK-38132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huyuliang updated FLINK-38132:
------------------------------
           Component/s: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
                            (was: API / DataSet)
     Affects Version/s: 1.20.2
                        1.19.3
                        2.0.0
    Remaining Estimate: 360h
     Original Estimate: 360h

> CLONE - Improve the CSV reading process
> ---------------------------------------
>
>                 Key: FLINK-38132
>                 URL: https://issues.apache.org/jira/browse/FLINK-38132
>             Project: Flink
>          Issue Type: Improvement
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>    Affects Versions: 2.0.0, 1.19.3, 1.20.2
>            Reporter: huyuliang
>            Priority: Minor
>              Labels: CSV, auto-deprioritized-major, auto-deprioritized-minor
>   Original Estimate: 360h
>  Remaining Estimate: 360h
>
> CSV is one of the most commonly used file formats in data wrangling. To load 
> records from CSV files, Flink provides the basic {{CsvInputFormat}}, as well 
> as some variants (e.g., {{RowCsvInputFormat}} and {{PojoCsvInputFormat}}). 
> However, the reading process can still be improved. For example, we could add 
> a built-in utility that automatically infers schemas from CSV headers and 
> samples of the data. Bad record handling could also be improved by keeping 
> the invalid lines (and the reasons parsing failed), instead of logging only 
> their total count.
>
> This is an umbrella issue for all the improvements and bug fixes for the CSV 
> reading process.
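The two improvements proposed in the description could be sketched roughly as follows. This is a minimal, self-contained illustration only; every name in it (`CsvReadingSketch`, `inferType`, `parse`) is hypothetical and not part of Flink's {{CsvInputFormat}} API.

```java
import java.util.*;

// Hypothetical sketch of the two ideas above; not Flink API.
public class CsvReadingSketch {

    // Schema inference: pick the narrowest type ("INT", "DOUBLE", "STRING")
    // that every sampled value of a column satisfies.
    static String inferType(List<String> samples) {
        boolean allInt = true, allDouble = true;
        for (String s : samples) {
            if (!s.matches("-?\\d+")) allInt = false;
            if (!s.matches("-?\\d+(\\.\\d+)?")) allDouble = false;
        }
        return allInt ? "INT" : allDouble ? "DOUBLE" : "STRING";
    }

    // Bad-record handling: instead of only counting failures, keep each
    // invalid line together with the reason parsing rejected it.
    static List<String[]> parse(List<String> lines, int expectedCols,
                                List<String> badRecords) {
        List<String[]> rows = new ArrayList<>();
        for (String line : lines) {
            String[] fields = line.split(",", -1);
            if (fields.length != expectedCols) {
                badRecords.add(line + " -> expected " + expectedCols
                        + " fields, got " + fields.length);
            } else {
                rows.add(fields);
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(inferType(Arrays.asList("1", "9.99"))); // DOUBLE
        List<String> bad = new ArrayList<>();
        parse(Arrays.asList("1,apple", "2"), 2, bad);
        System.out.println(bad); // the short line, kept with its reason
    }
}
```

A production version would plug the bad-record list into a side output or metrics sink rather than an in-memory list, but the shape of the API (records out, rejects with reasons kept separately) is the point of the sketch.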



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
