[ https://issues.apache.org/jira/browse/HADOOP-9665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13721176#comment-13721176 ]
Chris Nauroth commented on HADOOP-9665:
---------------------------------------

I intend to merge this patch to branch-1-win later today. The branch-1 patch applies cleanly to branch-1-win.

> BlockDecompressorStream#decompress will throw EOFException instead of return
> -1 when EOF
> ----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9665
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9665
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 1.1.2, 2.1.0-beta, 2.3.0
>            Reporter: Zhijie Shen
>            Assignee: Zhijie Shen
>            Priority: Critical
>             Fix For: 2.1.0-beta, 1.2.1
>
>         Attachments: HADOOP-9665.1.patch, HADOOP-9665.2.patch, HADOOP-9665-branch-1.1.patch
>
>
> BlockDecompressorStream#decompress ultimately calls rawReadInt, which throws EOFException instead of returning -1 when it encounters the end of a stream. decompress is in turn called by read, but InputStream#read is supposed to return -1, not throw EOFException, to indicate the end of a stream. This explains why LineReader checks for -1 instead of catching EOFException:
> {code}
> if (bufferPosn >= bufferLength) {
>   startPosn = bufferPosn = 0;
>   if (prevCharCR)
>     ++bytesConsumed; //account for CR from previous read
>   bufferLength = in.read(buffer);
>   if (bufferLength <= 0)
>     break; // EOF
> }
> {code}
> The problem shows up with SnappyCodec. An input file compressed with SnappyCodec is decompressed through BlockDecompressorStream when it is read. If the file is empty, EOFException is thrown from rawReadInt and breaks LineReader.
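To illustrate the contract mismatch described above, here is a minimal, self-contained Java sketch (not the actual Hadoop code): `rawReadIntOldBehavior` mirrors the old behavior of throwing EOFException when the stream is already exhausted, while `readLengthOrEof` sketches the idea behind the fix, peeking at the first byte and returning -1 on a clean EOF so that callers following the InputStream#read contract (such as LineReader) see end-of-stream instead of an exception. Method names are illustrative, not the patch's actual identifiers.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class EofContractDemo {

    // Old behavior: reading a 4-byte length at EOF throws EOFException,
    // which propagates out of read() and breaks LineReader on empty files.
    static int rawReadIntOldBehavior(InputStream in) throws IOException {
        int b1 = in.read(), b2 = in.read(), b3 = in.read(), b4 = in.read();
        if ((b1 | b2 | b3 | b4) < 0) {
            throw new EOFException();
        }
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
    }

    // Sketch of the fix's idea: a clean EOF before the first byte of the
    // length field returns -1 (the InputStream#read convention); only a
    // *truncated* length field is treated as an error.
    static int readLengthOrEof(InputStream in) throws IOException {
        int b1 = in.read();
        if (b1 < 0) {
            return -1; // e.g. an empty Snappy-compressed file
        }
        int b2 = in.read(), b3 = in.read(), b4 = in.read();
        if ((b2 | b3 | b4) < 0) {
            throw new EOFException("truncated length field");
        }
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
    }

    public static void main(String[] args) throws IOException {
        InputStream empty = new ByteArrayInputStream(new byte[0]);
        System.out.println(readLengthOrEof(empty)); // prints -1

        try {
            rawReadIntOldBehavior(new ByteArrayInputStream(new byte[0]));
        } catch (EOFException e) {
            System.out.println("old behavior: EOFException"); // what broke LineReader
        }
    }
}
```

With the -1 convention, LineReader's existing `if (bufferLength <= 0) break;` check handles an empty compressed file without any change on the caller's side.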