Block size may not be the only answer here. Look into how the namenode
distributes the blocks across your datanodes, and check whether the datanode
on the machine you run the put from is becoming a bottleneck: HDFS writes the
first replica of every block to the local datanode when the writer is itself
a cluster node, so that node receives a copy of every block you write. There
is a sketch below for inspecting block placement, and another one after your
quoted message for checking which configuration values the client actually
picks up.
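
A quick way to check that is hadoop fsck <path> -files -blocks -locations,
which lists every block of a file together with the datanodes holding its
replicas. If you prefer to do it from code, here is a rough sketch against
the FileSystem API (class name and path argument are just placeholders, and
I am going from memory on 0.19):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.util.Arrays;

// Throwaway helper (not part of the Hadoop distribution): prints the block
// size of a file and the datanodes holding each block's replicas.
public class ShowBlockPlacement {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // loads hadoop-default.xml / hadoop-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path(args[0]);              // e.g. the 1GB file you just loaded

    FileStatus st = fs.getFileStatus(file);
    System.out.println(file + " was written with block size " + st.getBlockSize());

    BlockLocation[] blocks = fs.getFileBlockLocations(st, 0, st.getLen());
    for (int i = 0; i < blocks.length; i++) {
      System.out.println("block " + i + " -> " + Arrays.toString(blocks[i].getHosts()));
    }
  }
}

That shows directly whether the blocks and their replicas are being spread
across the six nodes or piling up on one or two of them.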



zeevik wrote:
> 
> 
> .. New member here, hello everyone! ..
> 
> I am changing the default dfs.block.size from 64MB to 256MB (or any other
> value) in the hadoop-site.xml file and restarting the cluster to make sure
> the changes are applied. The issue is that when I put a file onto HDFS
> (hadoop fs -put), the block size still appears to be 64MB (browsing the
> filesystem via the HTTP interface). Hadoop version is 0.19.1 on a 6-node
> cluster.
> 
> 1. Why is the new block size not reflected when I create/load a new file
> into HDFS?
> 2. How can I see the current parameters and their values on Hadoop, to make
> sure the change in the hadoop-site.xml file took effect after the restart?
> 
> I am trying to load a large file into HDFS and it seems slow (1.5 min for
> 1GB), which is why I am trying to increase the block size.
> 
> Thanks,
> Zeev
> 
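
On question 2: I am not aware of a command in 0.19 that dumps the live
configuration, but you can see what the client side actually resolves with a
small throwaway class like the one below (the name is made up; Configuration()
loads hadoop-default.xml and hadoop-site.xml from the classpath, so run it on
the same node you run hadoop fs -put from). Keep in mind that dfs.block.size
is given in bytes, e.g. 268435456 for 256MB.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Prints the block size the client-side configuration resolves to.
public class ShowBlockSize {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // dfs.block.size is in bytes: 67108864 = 64MB, 268435456 = 256MB
    System.out.println("dfs.block.size = " + conf.get("dfs.block.size"));

    FileSystem fs = FileSystem.get(conf);
    System.out.println("default block size seen by the client = " + fs.getDefaultBlockSize());
  }
}

If this still prints 67108864 on that node, the put command is not seeing
your updated hadoop-site.xml, which would explain the 64MB blocks you see in
the web interface.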

