Hey Pierre,

These are not traditional filesystem blocks - if you save a file smaller than 
64MB, you don't lose 64MB of disk space.

Hadoop will use 32KB to store a 32KB file (ok, plus a KB of metadata or so), 
not 64MB.
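
You can check this yourself on a cluster. A rough sketch - the property name 
(dfs.block.size) is the 0.20-era one and may differ in your version, and the 
paths are made up:

```shell
# Write a file with a 1MB block size instead of the cluster default:
hadoop fs -D dfs.block.size=1048576 -put small.txt /user/pierre/small.txt

# Show the bytes actually consumed (roughly the file's size, not the block size):
hadoop fs -du /user/pierre/small.txt

# Inspect how many blocks the file really occupies:
hadoop fsck /user/pierre/small.txt -files -blocks
```

The real cost of many small files is NameNode memory (one object per file and 
block), not wasted datanode disk.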

Brian

On May 18, 2010, at 7:06 AM, Pierre ANCELOT wrote:

> Hi,
> I'm porting a legacy application to hadoop and it uses a bunch of small
> files.
> I'm aware that having such small files isn't a good idea, but I'm not making
> the technical decisions, and the port had to be done yesterday...
> Of course such small files are a problem; loading 64MB blocks for a few
> lines of text is an obvious waste.
> What will happen if I set a smaller, or even way smaller (32kB), block size?
> 
> Thank you.
> 
> Pierre ANCELOT.
