Do newly created files get the default blocksize while old files remain the 
same? Yes. The blocksize is fixed per file when it is written, so changing 
the default only affects files created afterward.
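
You can confirm this with fsck, which prints each block's length, so files 
written before the change will still show blocks of their original size 
(the path below is just an example):

    hadoop fsck /user/data/oldfile -files -blocks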

Is there a way to change the blocksize of existing files? I have done this 
with a copy-out-and-copy-back script. I couldn't find a shortcut analogous 
to setrep.
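
For the archives, here is a minimal sketch of that copy-based approach. It 
relies on FsShell accepting the generic -D option, so the copy is written 
with the new blocksize. The paths are hypothetical and the value is in 
bytes (256 MB below); on newer releases the property is spelled 
dfs.blocksize.

    # rewrite the file within HDFS, creating the copy with a 256 MB blocksize
    hadoop fs -D dfs.block.size=268435456 -cp /user/data/infile /user/data/infile.tmp
    # swap the copy into place
    hadoop fs -rm /user/data/infile
    hadoop fs -mv /user/data/infile.tmp /user/data/infile

Note the copy briefly doubles the space used by the file, so verify the 
copy (e.g. with fsck) before removing the original.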

 
-Ayon



________________________________
From: J. Ryan Earl <[email protected]>
To: [email protected]
Sent: Monday, June 6, 2011 12:09 PM
Subject: Changing dfs.block.size


Hello,

So I have a question about changing dfs.block.size in 
$HADOOP_HOME/conf/hdfs-site.xml.  I understand that blocksizes can be 
overridden per file at creation time.  What happens if you modify the 
default blocksize of an existing HDFS installation?  Do newly created files 
get the new default blocksize while old files remain the same?  And is 
there a way to change the blocksize of existing files?  I'm assuming you 
could write a MapReduce job to do it, but are there any built-in 
facilities?

Thanks,
-JR
