Edit conf/hdfs-site.xml on the Namenode to set the block size. The clean 
way, though, is to copy this file across the whole cluster, so that the new 
value becomes the cluster-wide default. 
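
For example, in conf/hdfs-site.xml (a sketch; the value is in bytes, and 
134217728 here, i.e. 128MB, is just an illustration):

  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>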

-Bharath



From: Bharath Mundlapudi <bharathw...@yahoo.com>
To: Rita <rmorgan...@gmail.com>; "hdfs-user@hadoop.apache.org" 
<hdfs-user@hadoop.apache.org>
Cc: 
Sent: Sunday, February 6, 2011 4:45 PM
Subject: Re: changing the block size


The answer depends on what you are trying to achieve. Assuming you are 
trying to store a file in HDFS using put or copyFromLocal: you do not need 
to restart the entire cluster, restarting just the Namenode is sufficient. 
 

hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode
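
Also, since the block size for a new file is chosen by the writing client, 
it can be overridden per command without touching the cluster config, e.g. 
(a sketch; the file and destination path are placeholders):

hadoop fs -Ddfs.block.size=134217728 -put somefile /user/rita/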

-Bharath
    




From: Rita <rmorgan...@gmail.com>
To: hdfs-user@hadoop.apache.org; Bharath Mundlapudi <bharathw...@yahoo.com>
Cc: 
Sent: Sunday, February 6, 2011 2:24 PM
Subject: Re: changing the block size


Bharath,
So, I have to restart the entire cluster? That is, I need to stop the 
namenode and then run start-dfs.sh?

Ayon,
So, what I did was decommission a node, remove all of its data (rm -rf on 
the data.dir), and stop the HDFS process on it. Then I made the change to 
conf/hdfs-site.xml on that data node and restarted the datanode. I then ran 
the balancer for the change to take effect, but I am still getting 64MB 
blocks instead of 128MB. :-/






On Sun, Feb 6, 2011 at 2:25 PM, Bharath Mundlapudi <bharathw...@yahoo.com> 
wrote:

Can you tell us how you are verifying that it's not working?
>
>Edit conf/hdfs-site.xml and set dfs.block.size. 
>
>And restart the cluster. 
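>
>(To check what you actually got, something like 
>hadoop fsck /path/to/file -files -blocks 
>will print each file's blocks and their lengths.)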
>
>-Bharath
>
>
>
>
>From: Rita <rmorgan...@gmail.com>
>To: hdfs-user@hadoop.apache.org
>Cc: 
>Sent: Sunday, February 6, 2011 8:50 AM
>
>Subject: Re: changing the block size
>
>
>
>Neither one was working. 
>
>Is there anything I can do? I always have problems like this with HDFS. It 
>seems even the experts are guessing at the answers :-/
>
>
>
>On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <ayonsi...@yahoo.com> wrote:
>
>>Edit conf/hdfs-site.xml, then restart DFS. I believe it should be 
>>sufficient to restart the namenode only, but others can confirm.
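>>
>>(With the stock scripts that would be, e.g., stop-dfs.sh followed by 
>>start-dfs.sh for all of DFS, or hadoop-daemon.sh stop namenode and 
>>hadoop-daemon.sh start namenode for just the namenode.)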
>>
>>-Ayon
>>
>>
>>From: Rita <rmorgan...@gmail.com>
>>To: hdfs-user@hadoop.apache.org
>>Sent: Thu, February 3, 2011 4:35:09 AM
>>Subject: changing the block size
>>
>>
>>Currently I am using the default block size of 64MB. I would like to 
>>change it for my cluster to 256 megabytes, since I deal with large files 
>>(over 2GB). What is the best way to do this? 
>>
>>Which file do I have to make the change in? Does it have to be applied on 
>>the namenode or on each individual data node? What has to be restarted: 
>>the namenode, the datanodes, or both?
>>
>>
>>
>>-- 
>>--- Get your facts first, then you can distort them as you please.--
>>
>>
>
>
>-- 
>--- Get your facts first, then you can distort them as you please.--
>
>
>
>


-- 
--- Get your facts first, then you can distort them as you please.--