[ https://issues.apache.org/jira/browse/HADOOP-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950864#comment-16950864 ]
Victor Zhang commented on HADOOP-12007:
---------------------------------------

I have the same problem when a Spark Streaming program saves data to HDFS with gzip.

> GzipCodec native CodecPool leaks memory
> ---------------------------------------
>
>                 Key: HADOOP-12007
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12007
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.7.0
>            Reporter: Yejun Yang
>            Priority: Major
>
> org/apache/hadoop/io/compress/GzipCodec.java calls
> CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool, but
> compressor objects are never actually returned to the pool, which causes a
> memory leak.
> HADOOP-10591 uses CompressionOutputStream.close() to return the Compressor
> object to the pool. But CompressionCodec.Util.createOutputStreamWithCodecPool
> actually returns a CompressorStream, which overrides close().
> As a result, CodecPool.returnCompressor is never called. In my log file I
> can see lots of "Got brand-new compressor [.gz]" entries but no "Got recycled
> compressor" entries.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
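The leak described in the report is an override pitfall: the pool-returning logic lives in the base class's close(), but the subclass actually handed to callers overrides close() without it. A minimal sketch of that pattern, using stand-in classes rather than Hadoop's real CodecPool/CompressionOutputStream/CompressorStream:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified illustration of the HADOOP-12007 leak pattern.
// These are stand-in classes, NOT Hadoop's actual implementations.
public class CodecPoolLeakSketch {

    // Stand-in for CodecPool: recycled compressors land here.
    static final Deque<Object> POOL = new ArrayDeque<>();

    static class Compressor {}

    // Stand-in for CompressionOutputStream: per HADOOP-10591,
    // close() returns the compressor to the pool.
    static class CompressionOutputStream {
        final Compressor compressor;
        CompressionOutputStream(Compressor c) { this.compressor = c; }
        public void close() {
            POOL.push(compressor); // compressor recycled
        }
    }

    // Stand-in for CompressorStream: overrides close() to finish its
    // own buffers but never returns the compressor, so it leaks.
    static class CompressorStream extends CompressionOutputStream {
        CompressorStream(Compressor c) { super(c); }
        @Override
        public void close() {
            // flush/finish internal state only; no POOL.push(compressor)
        }
    }

    public static void main(String[] args) {
        // Callers receive the subclass, so the base close() never runs.
        CompressionOutputStream stream = new CompressorStream(new Compressor());
        stream.close();
        System.out.println("pool size after close: " + POOL.size()); // 0, i.e. leaked
    }
}
```

This matches the reported symptom: every new stream triggers "Got brand-new compressor" because the pool stays empty, and "Got recycled compressor" never appears.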