[ https://issues.apache.org/jira/browse/HADOOP-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12977446#action_12977446 ]

Greg Roelofs commented on HADOOP-7076:
--------------------------------------

See also PIG-42.  That died when Pig stopped doing compression itself, but the 
approach may be worth considering.

Alternatively, an LZO-style side index could be generated for any concatenated 
gzip stream.
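
To make that concrete, here is a rough, hypothetical sketch of how such a side
index could be built (plain java.util.zip, not a Hadoop codec; the
GzipMemberIndexer name and the plain-text offset output are my own, not part of
this issue or the attached patch): walk the concatenated stream one gzip member
at a time with an Inflater and record the byte offset at which each member
starts.  A real implementation would stream the file instead of reading it
fully into memory and would store the offsets in a binary side file, the way
the LZO indexer does.

    import java.io.EOFException;
    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    public class GzipMemberIndexer {

        /** Returns the byte offset at which each gzip member of the file starts. */
        public static List<Long> indexMembers(File file)
                throws IOException, DataFormatException {
            byte[] data = Files.readAllBytes(file.toPath());   // sketch only: whole file in memory
            List<Long> offsets = new ArrayList<Long>();
            byte[] sink = new byte[64 * 1024];
            int pos = 0;
            while (pos < data.length) {
                offsets.add((long) pos);                        // start of this member
                pos = skipHeader(data, pos);                    // advance to the deflate data
                Inflater inf = new Inflater(true);              // raw deflate; header handled above
                inf.setInput(data, pos, data.length - pos);
                while (!inf.finished()) {
                    if (inf.inflate(sink) == 0 && inf.needsInput()) {
                        throw new EOFException("truncated gzip member at offset " + pos);
                    }
                }
                pos = data.length - inf.getRemaining();         // first byte after the deflate data
                inf.end();
                pos += 8;                                       // skip the CRC32 + ISIZE trailer
            }
            return offsets;
        }

        /** Skips one RFC 1952 member header at off; returns the offset of the deflate data. */
        private static int skipHeader(byte[] b, int off) throws IOException {
            if ((b[off] & 0xff) != 0x1f || (b[off + 1] & 0xff) != 0x8b) {
                throw new IOException("not a gzip member at offset " + off);
            }
            int flg = b[off + 3] & 0xff;
            int p = off + 10;                                   // fixed-size part of the header
            if ((flg & 0x04) != 0) {                            // FEXTRA
                p += 2 + ((b[p] & 0xff) | ((b[p + 1] & 0xff) << 8));
            }
            if ((flg & 0x08) != 0) { while (b[p++] != 0) { } }  // FNAME (zero-terminated)
            if ((flg & 0x10) != 0) { while (b[p++] != 0) { } }  // FCOMMENT (zero-terminated)
            if ((flg & 0x02) != 0) { p += 2; }                  // FHCRC
            return p;
        }

        public static void main(String[] args) throws Exception {
            for (long off : indexMembers(new File(args[0]))) {
                System.out.println(off);                        // one member start offset per line
            }
        }
    }

With such an index in hand, a split could start decompressing at the nearest
member boundary instead of at byte 0, which is the same trade the LZO index
makes.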

> Splittable Gzip
> ---------------
>
>                 Key: HADOOP-7076
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7076
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: io
>            Reporter: Niels Basjes
>         Attachments: HADOOP-7076.patch
>
>
> Files compressed with the gzip codec are not splittable due to the nature of 
> the codec.
> This limits your options for scaling out when reading large gzipped input 
> files.
> Given that gunzipping a 1 GiB file usually takes only about 2 minutes, I 
> figured that for some use cases deliberately wasting some CPU can result in a 
> shorter overall job time.
> So reading the entire input file from the start for each split (wasting 
> resources!!) can still improve scalability.
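
To illustrate what "reading the entire input file from the start for each
split" means in practice, here is a minimal, hypothetical sketch (plain
java.util.zip, not the attached patch; the ReadFromStartSplit name and the
line-oriented-records assumption are mine): every task gunzips the whole file
from byte 0, silently discards everything before its split's start in the
uncompressed stream, processes the records inside its split, and stops once it
passes the split's end.  The decompression done before the split start is
exactly the "wasted" work traded for parallelism.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;

    public class ReadFromStartSplit {

        /** Prints the lines whose first byte falls in [splitStart, splitEnd) of the uncompressed stream. */
        public static void readSplit(File gz, long splitStart, long splitEnd) throws IOException {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new GZIPInputStream(new FileInputStream(gz)), StandardCharsets.UTF_8))) {
                long pos = 0;                                  // uncompressed bytes consumed so far
                String line;
                while ((line = in.readLine()) != null) {
                    long lineStart = pos;
                    pos += line.getBytes(StandardCharsets.UTF_8).length + 1;  // +1 for '\n' (approximate)
                    if (lineStart >= splitEnd) {
                        break;                                 // past our split: stop decompressing
                    }
                    if (lineStart >= splitStart) {
                        System.out.println(line);              // "process" a record that belongs to us
                    }
                    // Everything before splitStart is decompressed and thrown away; that is
                    // the wasted work the description accepts in exchange for running
                    // several of these tasks on the same file in parallel.
                }
            }
        }

        public static void main(String[] args) throws Exception {
            readSplit(new File(args[0]), Long.parseLong(args[1]), Long.parseLong(args[2]));
        }
    }

The real codec would of course plug into Hadoop's InputFormat/RecordReader
machinery rather than print to stdout, but the cost model is the same: split N
pays for decompressing splits 0..N-1 before it reaches its own data.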

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
