[
https://issues.apache.org/jira/browse/HADOOP-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Owen O'Malley updated HADOOP-3562:
----------------------------------
Status: Open (was: Patch Available)
I'd forgotten what a mess the streaming input format stuff is. *Sigh*
Changing the signature of the constructors is a problem because it isn't
backward compatible: if other people have defined their own
StreamBaseRecordReader subclasses, StreamInputFormat won't find their constructors.
*Heavy sigh*
I'd suggest keeping the input stream handles separate from each other.
{code}
InputStream in_;               // the decompressed stream
FSDataInputStream underlying;  // the underlying (raw) stream
{code}
so for uncompressed streams they are the same, but for compressed streams they
differ. If you switch over to using the underlying stream for position
tracking, I believe it will remove the need for ignoreEnd, which would be
dangerous if there were a splittable compressed format.
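The two-handle idea can be sketched outside of Hadoop with plain java.util.zip classes (the CountingStream wrapper and TwoHandles class below are illustrative stand-ins, not Hadoop API): the reader keeps one handle on the raw byte stream, whose position drives split/progress bookkeeping, and a second handle on the decompressed view it actually reads records from.

```java
import java.io.*;
import java.util.zip.*;

// Counting wrapper plays the role of the underlying FSDataInputStream:
// its byte position is what split/progress logic should consult.
class CountingStream extends FilterInputStream {
    long pos = 0;
    CountingStream(InputStream in) { super(in); }
    @Override public int read() throws IOException {
        int b = super.read();
        if (b >= 0) pos++;
        return b;
    }
    @Override public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) pos += n;
        return n;
    }
}

public class TwoHandles {
    // Drains a gzipped byte array; returns {decompressedBytes, underlyingPos}.
    static long[] drain(byte[] gzipped) throws IOException {
        CountingStream underlying = new CountingStream(new ByteArrayInputStream(gzipped));
        InputStream in = new GZIPInputStream(underlying); // decompressed view
        long decompressed = 0;
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) > 0; ) decompressed += n;
        // Position comes from the underlying (compressed) stream, not from
        // how many decompressed bytes the record reader happened to consume.
        return new long[] { decompressed, underlying.pos };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (OutputStream gz = new GZIPOutputStream(bos)) {
            for (int i = 0; i < 1000; i++) gz.write("<record>x</record>\n".getBytes());
        }
        long[] r = drain(bos.toByteArray());
        System.out.println("decompressed=" + r[0] + " underlying=" + r[1]);
    }
}
```

For an uncompressed file the two positions coincide, so nothing changes for the existing readers; for a gzipped file the underlying position stays bounded by the file length, which is what makes an end-of-split check safe without ignoreEnd.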
> StreamXMLRecordReader does not support gzipped files
> ----------------------------------------------------
>
> Key: HADOOP-3562
> URL: https://issues.apache.org/jira/browse/HADOOP-3562
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/streaming
> Affects Versions: 0.17.0
> Reporter: Bo Adler
> Assignee: Bo Adler
> Fix For: 0.19.0
>
> Attachments: 0001-test-to-demonstrate-problem.patch,
> 0002-support-for-gzip-d-xml-records.patch, HADOOP-3562.combined.patch,
> HADOOP-3562.combined.patch
>
>
> I am using Hadoop Streaming to analyze Wikipedia data files, which are in XML
> format and are compressed because they are so large. While doing some
> preliminary tests, I discovered that you cannot use StreamXMLRecordReader
> with gzipped data files -- the data is fed into the mapper script as raw data.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.