[ http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422994 ]

Doug Cutting commented on HADOOP-54:
------------------------------------

I suggest adding the binary append API suggested by Owen and deprecating the 
old binary append API, but making it work back-compatibly.  Thus the deprecated 
method should accept pre-compressed values (if compression is enabled), 
de-compress them, then call the new append method.  We should update all 
existing binary appends in Hadoop and prepare a patch for Nutch to do the 
same.  Then we should file a bug to remove the deprecated method in the next 
release.
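The back-compatible shim described above could look roughly like this sketch. The names Writer, append, and appendCompressed are hypothetical stand-ins, not the actual SequenceFile API, and java.util.zip stands in for whatever codec the file was written with:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

class ShimSketch {

    static class Writer {
        final ByteArrayOutputStream buffered = new ByteArrayOutputStream();
        boolean compress = true;  // whether per-value compression was enabled

        /** The new append: takes uncompressed key/value bytes. */
        void append(byte[] key, byte[] value) {
            buffered.write(value, 0, value.length);  // stand-in for block buffering
        }

        /** Deprecated binary append: may receive a pre-compressed value. */
        @Deprecated
        void appendCompressed(byte[] key, byte[] value) throws Exception {
            byte[] plain = compress ? inflate(value) : value;
            append(key, plain);  // delegate to the new method
        }
    }

    static byte[] deflate(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[64];
        while (!d.finished()) {
            out.write(buf, 0, d.deflate(buf));
        }
        return out.toByteArray();
    }

    static byte[] inflate(byte[] data) throws Exception {
        Inflater i = new Inflater();
        i.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[64];
        while (!i.finished()) {
            out.write(buf, 0, i.inflate(buf));
        }
        return out.toByteArray();
    }
}
```

The key point is that the old method round-trips through decompression so all data flows into the new append path, whatever the eventual block format is.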

We unfortunately lose the ability to move individual compressed values around.  
If a mapper does not touch values, it would be best to only decompress values 
on reduce nodes, rather than decompress and recompress them on map nodes, since 
compression can be computationally expensive.  But I don't see how to avoid 
this if we want to compress multiple values together.  I think this argues that 
we might still permit the existing single-value compression, since that might 
be most efficient for large-valued files that are not touched during maps.
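The two modes being weighed here could coexist behind a simple switch. A sketch, where the enum name, values, and the size threshold are assumptions rather than any committed design:

```java
// Sketch: keeping both per-record and per-block compression as options.
class CompressionChoice {
    enum CompressionType { NONE, RECORD, BLOCK }

    /**
     * Heuristic from the discussion: large values that maps pass through
     * untouched are cheapest to leave individually compressed, since they
     * need not be decompressed and recompressed on map nodes; everything
     * else compresses better in blocks.
     */
    static CompressionType choose(long avgValueBytes, boolean mapTouchesValues) {
        if (!mapTouchesValues && avgValueBytes > 64 * 1024) {
            return CompressionType.RECORD;
        }
        return CompressionType.BLOCK;
    }
}
```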

Also, please add a public SequenceFile.Writer() constructor that accepts a 
Configuration.  We should probably also deprecate the unconfigured constructor 
and remove it in the next release.  I agree with Eric that things can be 
over-configurable, but it's easier to make them configurable in the code from 
the start, and to add them to hadoop-default.xml only as needed, so that folks 
who have not read the code can tweak them.
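The pairing of a configured constructor with a deprecated unconfigured one could be sketched like this. Configuration here is a stand-in map, not Hadoop's actual Configuration class, and the property name and default are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

class WriterCtorSketch {

    /** Stand-in for Hadoop's Configuration. */
    static class Configuration {
        private final Map<String, String> props = new HashMap<>();
        String get(String key, String dflt) {
            return props.getOrDefault(key, dflt);
        }
        void set(String key, String value) { props.put(key, value); }
    }

    static class Writer {
        final int blockSize;

        /** Preferred: all tunables come from the Configuration. */
        Writer(Configuration conf) {
            // Configurable in code from the start; exposed in
            // hadoop-default.xml only as needed.
            this.blockSize = Integer.parseInt(
                conf.get("io.seqfile.compress.blocksize", "1000000"));
        }

        /** Deprecated: falls back to a default Configuration. */
        @Deprecated
        Writer() { this(new Configuration()); }
    }
}
```

The deprecated constructor delegates to the configured one, so callers who have not migrated still get sensible defaults while tunables live in one place.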

I also agree that flush should not be public.

> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
>                 Key: HADOOP-54
>                 URL: http://issues.apache.org/jira/browse/HADOOP-54
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>         Assigned To: Arun C Murthy
>             Fix For: 0.5.0
>
>         Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values.  But both 
> compression and performance would be much better if sequences of keys and 
> values are compressed together.  Sync marks should only be placed between 
> blocks.  This will require some changes to MapFile too, so that all file 
> positions stored there are the positions of blocks, not entries within 
> blocks.  Probably this can be accomplished by adding a 
> getBlockStartPosition() method to SequenceFile.Writer.
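The getBlockStartPosition() idea from the description could be sketched as follows. Only that method name comes from the issue; the buffering scaffolding is hypothetical, and compression of the flushed block is elided:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch of indexing block starts instead of per-entry positions,
// so MapFile can store positions that remain valid under block compression.
class BlockWriterSketch {
    final ByteArrayOutputStream file = new ByteArrayOutputStream();
    final List<byte[]> pending = new ArrayList<>();
    long blockStart = 0;  // file offset where the current block begins

    /** The position MapFile should index: the start of the current block. */
    long getBlockStartPosition() { return blockStart; }

    /** Buffer a record; it is not written until the block is flushed. */
    void append(byte[] record) {
        pending.add(record);
    }

    /** Write the buffered records out as one block (sync mark and
        compression elided), then start a new block. */
    void flushBlock() {
        for (byte[] r : pending) file.write(r, 0, r.length);
        pending.clear();
        blockStart = file.size();  // the next block begins here
    }
}
```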

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira