[ https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12925875#action_12925875 ]
Tom White commented on HADOOP-6663:
-----------------------------------

I ran the tests and test-patch manually:
{noformat}
     [exec] +1 overall.
     [exec]
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec]
     [exec]     +1 tests included.  The patch appears to include 4 new or modified tests.
     [exec]
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
     [exec]
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
     [exec]
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
     [exec]
     [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
     [exec]
     [exec]     +1 system tests framework.  The patch passed system tests framework compile.
{noformat}

> BlockDecompressorStream get EOF exception when decompressing the file
> compressed from empty file
> ------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6663
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6663
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.20.2
>            Reporter: Kang Xiao
>            Assignee: Kang Xiao
>             Fix For: 0.22.0
>
>         Attachments: BlockDecompressorStream.java.patch,
>                      BlockDecompressorStream.java.patch,
>                      BlockDecompressorStream.patch,
>                      HADOOP-6663.patch
>
>
> An empty file can be compressed using BlockCompressorStream, which is used for
> block-based compression algorithms such as LZO. However, when decompressing the
> resulting compressed file, BlockDecompressorStream gets an EOF exception.
> Here is a typical exception stack:
> java.io.EOFException
> 	at org.apache.hadoop.io.compress.BlockDecompressorStream.rawReadInt(BlockDecompressorStream.java:125)
> 	at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:96)
> 	at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:82)
> 	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
> 	at java.io.InputStream.read(InputStream.java:85)
> 	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:134)
> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:39)
> 	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:186)
> 	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:170)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
> 	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:18)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:196)

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
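The stack trace shows the failure starting in rawReadInt, which reads the 4-byte block-length header preceding each compressed block; on a file produced from empty input there are no blocks, so the very first header read hits end-of-stream and throws. The sketch below is a minimal, self-contained illustration of that failure mode and of the shape of the fix (treating clean EOF at a block boundary as end-of-data rather than an error). It is not Hadoop's actual code; `readBlockLength` and `EmptyBlockDemo` are hypothetical names for this example only.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class EmptyBlockDemo {
    // Hypothetical header reader. Returning -1 when EOF falls exactly on a
    // block boundary (before any header byte) mirrors the essence of the
    // HADOOP-6663 fix; unconditionally throwing EOFException there is what
    // the pre-patch rawReadInt effectively did for empty inputs.
    static int readBlockLength(InputStream in) throws IOException {
        int b1 = in.read();
        if (b1 == -1) {
            return -1; // clean EOF at a block boundary: no more blocks
        }
        int b2 = in.read(), b3 = in.read(), b4 = in.read();
        if ((b2 | b3 | b4) < 0) {
            throw new EOFException(); // header truncated mid-block: real error
        }
        // Big-endian 4-byte int, as written by DataOutputStream.writeInt.
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
    }

    public static void main(String[] args) throws IOException {
        // A "compressed empty file" contains zero block headers.
        InputStream empty = new ByteArrayInputStream(new byte[0]);
        System.out.println(readBlockLength(empty)); // prints -1, not EOFException
    }
}
```

Under this sketch, a decompressor loop that checks for -1 before decoding a block terminates normally on an empty input instead of propagating EOFException up through LineRecordReader as in the stack above.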