[ https://issues.apache.org/jira/browse/HADOOP-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12669748#action_12669748 ]

Hong Tang commented on HADOOP-3315:
-----------------------------------

bq. Here's one issue in TFile that was causing grief (tests explicitly set this 
size).
{code}
static int getFSInputBufferSize(Configuration conf) {
    // Note: '&' is a bitwise AND, so this default evaluates to 0 rather than the intended 256K.
    return conf.getInt(FS_INPUT_BUF_SIZE_ATTR, 256 & 1024);
}
{code}
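
For reference (my own illustration, not part of any patch): 256 and 1024 share no set bits, so the bitwise AND is 0 and the default buffer is silently disabled. The intended 256K default would presumably use multiplication:
{code}
// 256 & 1024 == 0, whereas 256 * 1024 == 262144 (256K)
return conf.getInt(FS_INPUT_BUF_SIZE_ATTR, 256 * 1024);
{code}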

The configuration parameters tfile.fs.output.buffer.size and 
tfile.fs.input.buffer.size control the buffering between the 
compression/decompression codec and the underlying file IO stream. Larger 
values reduce the number of IO calls to the underlying stream, but pay for it 
in buffer copying and memory overhead. The choice of 256K in the code is a 
safe one, but not the best one.
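
As a rough sketch of how these two knobs could be raised (the key names are taken from above; the 1 MB value is just an arbitrary example, not a recommendation):
{code}
// org.apache.hadoop.conf.Configuration
Configuration conf = new Configuration();
// Larger buffers between the codec and the file stream mean fewer IO calls,
// at the cost of extra copying and memory.
conf.setInt("tfile.fs.input.buffer.size", 1024 * 1024);
conf.setInt("tfile.fs.output.buffer.size", 1024 * 1024);
{code}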

The performance numbers I posted do not tweak these two settings (i.e., they 
are left as set by the test, at 1 and 0 respectively); the aim is to rely on 
the buffering in the fs stream and bypass internal buffering. That seems to 
work well with the version of hadoop (a 0.19-dev version) I used.
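
Conversely, a sketch of the "bypass internal buffering" setup described above (assuming the 1/0 values map to the output/input keys in the order they were listed; the actual test may differ):
{code}
Configuration conf = new Configuration();
// Keep the internal buffers minimal so IO relies on the buffering
// already done by the underlying fs stream.
conf.setInt("tfile.fs.output.buffer.size", 1);
conf.setInt("tfile.fs.input.buffer.size", 0);
{code}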

> New binary file format
> ----------------------
>
>                 Key: HADOOP-3315
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3315
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>            Reporter: Owen O'Malley
>            Assignee: Amir Youssefi
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-3315_20080908_TFILE_PREVIEW_WITH_LZO_TESTS.patch, 
> HADOOP-3315_20080915_TFILE.patch, hadoop-trunk-tfile.patch, 
> hadoop-trunk-tfile.patch, TFile Specification 20081217.pdf
>
>
> SequenceFile's block compression format is too complex and requires 4 codecs 
> to compress or decompress. It would be good to have a file format that only 
> needs 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
