[ https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sivaprasanna Sethuraman updated FLINK-16371:
--------------------------------------------
    Comment: was deleted

(was: Can someone please grant me access to Flink's Jira? I want to assign this ticket to myself.)

> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -------------------------------------------------------------------------
>
>                 Key: FLINK-16371
>                 URL: https://issues.apache.org/jira/browse/FLINK-16371
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.10.0
>            Reporter: Sivaprasanna Sethuraman
>            Assignee: Sivaprasanna Sethuraman
>            Priority: Major
>
> When using CompressWriterFactory with a Hadoop compression codec, the execution fails with java.io.NotSerializableException.
> I suspect this is because the instance of Hadoop's CompressionCodec is created eagerly at [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59], so the codec becomes part of the factory's state and has to be sent over the wire, which throws the exception since CompressionCodec is not Serializable.
> I did a quick test on my end, changing the way the CompressionCodec is initialised, and ran it on a Hadoop cluster; it has been working fine. Will raise a PR in a day or so.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
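The failure mode described in the issue can be sketched with plain JDK serialization. This is a minimal, illustrative example only: the class names (Codec, EagerFactory, LazyFactory) are hypothetical stand-ins, not Flink's or Hadoop's actual API, and the eventual PR may fix it differently. The point is that a factory holding a non-serializable codec instance cannot be serialized, while a factory that keeps only the codec's name and creates the codec lazily (after deserialization) can.

```java
import java.io.*;

// Stand-in for Hadoop's CompressionCodec, which does not implement Serializable.
class Codec { }

// Eager variant: the codec instance is part of the serialized state,
// so writeObject(...) throws java.io.NotSerializableException.
class EagerFactory implements Serializable {
    final Codec codec = new Codec();
}

// Lazy variant: only the codec's name (a String) travels over the wire;
// the codec itself is marked transient and created on first use.
class LazyFactory implements Serializable {
    private final String codecName;
    private transient Codec codec;

    LazyFactory(String codecName) { this.codecName = codecName; }

    Codec getCodec() {
        if (codec == null) {
            // Real code would resolve codecName via Hadoop's codec lookup.
            codec = new Codec();
        }
        return codec;
    }
}

public class SerializationSketch {
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static LazyFactory roundTrip(LazyFactory f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(serialize(f)))) {
            return (LazyFactory) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            serialize(new EagerFactory());
            System.out.println("eager: serialized (unexpected)");
        } catch (NotSerializableException e) {
            System.out.println("eager: java.io.NotSerializableException");
        }
        LazyFactory restored = roundTrip(new LazyFactory("Bzip2Codec"));
        System.out.println("lazy: restored, codec created on demand: " + (restored.getCodec() != null));
    }
}
```

Run with any JDK; the eager factory fails to serialize while the lazy one round-trips and recreates its codec on the worker side.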