Anurag,

The way to do this would be to rewrite the files with the desired block size. For each file, do:

$ hadoop fs -Ddfs.block.size=134217728 -cp <source file> <temp destination file>
$ hadoop fs -rm <source file>
$ hadoop fs -mv <temp destination file> <source file>
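
If you want to sanity-check the result, the FileSystem API exposes the
per-file block size. A rough sketch (the class name and argument handling
here are just placeholders, not something from your setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Prints the block size HDFS reports for the path given as the first argument.
public class BlockSizeCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FileStatus status = fs.getFileStatus(new Path(args[0]));
    System.out.println(status.getPath() + " block size: "
        + status.getBlockSize() + " bytes");
  }
}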

Be aware, though, that block sizes are supplied by clients, and there's
no way to enforce them other than by ensuring all your client configs
have the right value for dfs.block.size. MR programs should carry it as
well; you can verify that by checking the job.xml of a submitted job. If
it doesn't have the proper value, make sure the submitting user's
configs carry the block size you want them to use.
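
For example, an MR driver that should write 128 MB blocks could carry it
on its job Configuration before submission; a minimal sketch, assuming
the 1.x property name dfs.block.size and the new-API Job class (class and
job names here are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical driver fragment: a value set on the job's Configuration ends
// up in that job's job.xml and is used when its tasks create files on HDFS.
public class DriverSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setLong("dfs.block.size", 134217728L); // 128 MB
    Job job = new Job(conf, "writes-128mb-blocks");
    // ... set mapper/reducer/input/output as usual, then submit the job;
    // dfs.block.size=134217728 will show up in its job.xml.
  }
}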

However, folks can still override client configs if they use the
full-blown create API and specify their own block size:
http://hadoop.apache.org/common/docs/stable/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path,%20boolean,%20int,%20short,%20long)
(see the blockSize method parameter).
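
To make that concrete, a client bypassing the configured default would look
roughly like this (the buffer size, replication and block size values below
are made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical example: the caller passes its own blockSize to create(),
// so whatever dfs.block.size says in the configs is ignored for this file.
public class CreateWithOwnBlockSize {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    int bufferSize = 4096;
    short replication = 3;
    long blockSize = 64L * 1024 * 1024; // 64 MB, regardless of cluster/client config
    FSDataOutputStream out = fs.create(new Path(args[0]), true,
        bufferSize, replication, blockSize);
    out.writeUTF("written with a caller-chosen block size");
    out.close();
  }
}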

HTH!

On Tue, Jun 26, 2012 at 11:07 AM, Anurag Tangri <tangri.anu...@gmail.com> wrote:
> Hi,
> We have a situation where all the files we have use a 64 MB block size.
>
>
> I want to change these files (output of a map job mainly) to 128 MB blocks.
>
> What would be a good way to do this migration from 64 MB to 128 MB block
> files?
>
> Thanks,
> Anurag Tangri



-- 
Harsh J
