[ https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Kostas Kloudas closed FLINK-16371.
----------------------------------
    Resolution: Fixed

Merged on master with 79de2ea5ab64de03b46e8ad6a0df3bbde986d124 and on release-1.10 with ea5197eeefcf72eb9aa33ad83e591faf856bbda9

> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -------------------------------------------------------------------------
>
>                 Key: FLINK-16371
>                 URL: https://issues.apache.org/jira/browse/FLINK-16371
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem
>    Affects Versions: 1.10.0
>            Reporter: Sivaprasanna Sethuraman
>            Assignee: Sivaprasanna Sethuraman
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.10.1, 1.11.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> When using CompressWriterFactory with a Hadoop compression codec, the execution fails with java.io.NotSerializableException.
> I suspect this is because the instance of Hadoop's CompressionCodec is created eagerly at [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59], so the codec has to be sent over the wire, causing the exception to be thrown.
> I did a quick test by changing the way the CompressionCodec is initialised and ran it on a Hadoop cluster, and it has been working fine. Will raise a PR in a day or so.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
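The failure mode described above can be illustrated with a minimal, self-contained sketch (not the actual Flink or Hadoop code; `Codec`, `BadFactory`, and `GoodFactory` are hypothetical names standing in for Hadoop's `CompressionCodec` and the writer factory). Holding a non-serializable object directly in a `Serializable` factory makes Java serialization throw `NotSerializableException`; storing only a serializable handle (such as the codec name) and instantiating the object lazily on the worker avoids it:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for Hadoop's CompressionCodec: it is NOT Serializable.
class Codec { }

// Anti-pattern: the codec instance is created eagerly, so serializing
// the factory (to ship it to the cluster) fails.
class BadFactory implements Serializable {
    private final Codec codec = new Codec(); // triggers NotSerializableException
}

// Pattern resembling the fix: keep only a serializable handle and
// build the codec lazily after deserialization on the worker.
class GoodFactory implements Serializable {
    private final String codecName;   // serializable handle (e.g. "Gzip")
    private transient Codec codec;    // excluded from serialization

    GoodFactory(String codecName) {
        this.codecName = codecName;
    }

    Codec getCodec() {
        if (codec == null) {
            // In the real fix this would resolve codecName against the
            // Hadoop Configuration; here we just instantiate the stand-in.
            codec = new Codec();
        }
        return codec;
    }
}

public class SerializationDemo {
    static byte[] roundTrip(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(o);
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        try {
            roundTrip(new BadFactory());
            System.out.println("BadFactory: serialized");
        } catch (NotSerializableException e) {
            System.out.println("BadFactory: java.io.NotSerializableException");
        }
        roundTrip(new GoodFactory("Gzip"));
        System.out.println("GoodFactory: serialized");
    }
}
```

This mirrors the general shape of the change described in the issue: the factory ships a name/configuration rather than the live codec object, and the codec is materialized only where the writer actually runs.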