On Jun 6, 2011, at 12:09 PM, J. Ryan Earl wrote:

> Hello,
> 
> So I have a question about changing dfs.block.size in
> $HADOOP_HOME/conf/hdfs-site.xml.  I understand that when files are created,
> blocksizes can be modified from default.  What happens if you modify the
> blocksize of an existing HDFS site?  Do newly created files get the default
> blocksize and old files remain the same?

        Yes.  Newly created files pick up the new default block size; existing files keep the block size they were written with.
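
        For reference, a minimal sketch of the relevant property in $HADOOP_HOME/conf/hdfs-site.xml (the 128 MB value here is just an example, not a recommendation):

            <property>
              <name>dfs.block.size</name>
              <!-- 128 MB, in bytes; only affects files written after the change -->
              <value>134217728</value>
            </property>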


>  Is there a way to change the
> blocksize of existing files?  I'm assuming you could write a MapReduce job to
> do it, but are there any built-in facilities?

        You can use distcp to copy the files to a new location on the same 
filesystem.  The copies will be written with the new block size.  You can then 
move the new files into place where the old files used to live.
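
        A rough sketch of that approach (the paths and block-size value are placeholders; exact options vary by Hadoop version, and preserving block sizes with distcp's -pb flag would defeat the purpose):

            # rewrite the data with a 256 MB block size, then swap it into place
            hadoop distcp -Ddfs.block.size=268435456 /data/olddir /data/olddir.newblocks
            hadoop fs -mv /data/olddir /data/olddir.bak
            hadoop fs -mv /data/olddir.newblocks /data/olddir

        Keep the .bak copy around until you've verified the new files, then delete it to reclaim the space.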
