[ 
https://issues.apache.org/jira/browse/FLINK-24728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-24728.
--------------------------------
    Resolution: Fixed

master:

6af0b9965293cb732a540b9364b6aae76a9b356a

2e9f9ad166f472edd693c8a47857e14e76928dc9

release-1.14:

9a0c5e00839983de23f662e337dfc626d0bdaad9

11d24708be32605243bc404679b17758c4e76e79

release-1.13:

3200e8ef43b3024b0b44f184dfa833d1aa7d7d75

> Batch SQL file sink forgets to close the output stream
> ------------------------------------------------------
>
>                 Key: FLINK-24728
>                 URL: https://issues.apache.org/jira/browse/FLINK-24728
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>    Affects Versions: 1.11.4, 1.14.0, 1.12.5, 1.13.3
>            Reporter: Caizhi Weng
>            Assignee: Caizhi Weng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.15.0, 1.13.6, 1.14.3
>
>
> I tried to write a large Avro file into HDFS and discovered that the file 
> size displayed in HDFS is extremely small, yet copying that file to the 
> local file system yields the correct size. If we create another Flink job 
> that reads this Avro file from HDFS, the job finishes without outputting 
> any records, because the file size Flink obtains from HDFS is that very 
> small size.
> This happens because the output format created in 
> {{FileSystemTableSink#createBulkWriterOutputFormat}} only finishes the 
> {{BulkWriter}} but never closes the output stream. According to the Javadoc 
> of {{BulkWriter#finish}}, bulk writers must not close the output stream; 
> closing it is left to the framework.
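The finish-versus-close contract described above can be sketched as follows. This is a hedged illustration only: it uses a simplified stand-in interface, not Flink's actual BulkWriter or FileSystemTableSink classes, and the names SimpleBulkWriter and finishAndClose are hypothetical. It shows the framework-side pattern the fix follows: finish the writer, then close the stream separately.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Simplified stand-in for a bulk writer: finish() flushes any buffered
// state but, per the contract described in the issue, must NOT close
// the underlying stream.
interface SimpleBulkWriter {
    void addElement(byte[] record) throws IOException;
    void finish() throws IOException;
}

public class CloseStreamDemo {
    // Framework-side logic: finish the writer, then close the stream.
    // Omitting the close() call is the shape of the bug in this issue --
    // bytes buffered by the stream never reach the file system, so the
    // reported file size stays small.
    static void finishAndClose(SimpleBulkWriter writer, OutputStream out)
            throws IOException {
        writer.finish();
        out.close(); // the missing step: the framework closes the stream
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        SimpleBulkWriter writer = new SimpleBulkWriter() {
            @Override
            public void addElement(byte[] record) throws IOException {
                out.write(record);
            }
            @Override
            public void finish() throws IOException {
                out.flush(); // flush only; closing is the caller's job
            }
        };
        writer.addElement("hello".getBytes());
        finishAndClose(writer, out);
        System.out.println(out.size()); // prints 5
    }
}
```

For a ByteArrayOutputStream the close() is a no-op, but for an HDFS output stream it is what flushes buffered data and finalizes the file length reported by the NameNode.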



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
