[ https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049004#comment-17049004 ]
Sivaprasanna Sethuraman commented on FLINK-16371:
-------------------------------------------------

Can someone please grant me access to this Jira? I want to assign it to myself.

> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -------------------------------------------------------------------------
>
>                 Key: FLINK-16371
>                 URL: https://issues.apache.org/jira/browse/FLINK-16371
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.10.0
>            Reporter: Sivaprasanna Sethuraman
>            Priority: Major
>
> When using CompressWriterFactory with a Hadoop compression codec, the execution
> fails with java.io.NotSerializableException.
> This is most likely because the instance of Hadoop's CompressionCodec is created
> eagerly at
> [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
> and stored in the factory, so it has to be sent over the wire, which causes the
> exception to be thrown. I did a quick test on my end, changing the way the
> CompressionCodec is created, and ran it on a Hadoop cluster; it has been working
> just fine. Will raise a PR in a day or so.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
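The failure mode described above can be sketched in plain JDK Java, without Hadoop. This is not Flink's actual code: `FakeCodec`, `EagerWriterFactory`, and `LazyWriterFactory` are hypothetical names standing in for the real classes. The idea is that a Serializable factory holding a live, non-serializable codec instance fails Java serialization, while a factory that stores only the codec's class name and instantiates it lazily (after deserialization, on the worker) serializes fine.

```java
import java.io.*;

// Stand-in for Hadoop's CompressionCodec: a live object that is NOT Serializable.
class FakeCodec {
    String compress(String s) { return "compressed(" + s + ")"; }
}

// Broken variant: stores the codec instance itself, so serializing the factory
// (as Flink does when shipping it to task managers) hits the non-serializable field.
class EagerWriterFactory implements Serializable {
    final FakeCodec codec = new FakeCodec();
}

// Fixed variant: stores only the codec's class name; the instance is created
// lazily on first use, i.e. after the factory has been deserialized.
class LazyWriterFactory implements Serializable {
    final String codecClassName;
    transient FakeCodec codec; // transient: never part of the serialized form

    LazyWriterFactory(String codecClassName) { this.codecClassName = codecClassName; }

    FakeCodec getCodec() throws Exception {
        if (codec == null) {
            codec = (FakeCodec) Class.forName(codecClassName)
                    .getDeclaredConstructor().newInstance();
        }
        return codec;
    }
}

public class SerializationDemo {
    // Returns true if the object survives Java serialization, false on
    // NotSerializableException (the exception from the bug report).
    static boolean roundTrips(Object o) {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrips(new EagerWriterFactory()));           // eager: fails
        System.out.println(roundTrips(new LazyWriterFactory("FakeCodec"))); // lazy: succeeds
        System.out.println(new LazyWriterFactory("FakeCodec").getCodec().compress("data"));
    }
}
```

The `transient` keyword plus lazy instantiation is the standard Java pattern for this class of bug; the reported fix presumably does the equivalent by constructing the real CompressionCodec from its configuration on the task-manager side rather than in the factory's constructor.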