Another piece of advice: you can test the right block size in an environment similar to your production system before deploying the real system, so you can avoid these kinds of changes later.
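For example, on such a test cluster you could set a candidate value in $HADOOP_HOME/conf/hdfs-site.xml before loading representative data and running your jobs against it (the 128 MB value below is only an illustration, not a recommendation):

    <property>
      <name>dfs.block.size</name>
      <value>134217728</value>
    </property>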

On 6/6/2011 3:09 PM, J. Ryan Earl wrote:
Hello,

So I have a question about changing dfs.block.size in $HADOOP_HOME/conf/hdfs-site.xml. I understand that when files are created, the block size can be modified from the default. What happens if you modify the block size of an existing HDFS site? Do newly created files get the default block size while old files remain the same? Is there a way to change the block size of existing files? I'm assuming you could write a MapReduce job to do it, but are there any built-in facilities?

Thanks,
-JR
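On the question of changing the block size of existing files: one possible approach (a sketch, not a confirmed built-in facility) is to rewrite them with distcp, since the block size is fixed when each file is written; the paths and value below are only placeholders:

    hadoop distcp -D dfs.block.size=134217728 /user/data/input /user/data/input-newblocksize

The copied files are written with the new block size; you would then remove the originals and rename the copies.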



--
Marcos Luís Ortíz Valmaseda
 Software Engineer (UCI)
 http://marcosluis2186.posterous.com
 http://twitter.com/marcosluis2186
