You can write a script to update this config. (Alternatively, you can manually
add this configuration to hdfs-site.xml on all the DataNodes and restart
them.)
--Brahma
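A minimal sketch of such a push script, assuming passwordless SSH from the master and a "workers" file listing one DataNode hostname per line (the helper name and paths here are hypothetical, not part of Hadoop itself):

```shell
# push_hdfs_conf: copy an updated hdfs-site.xml to every DataNode and
# restart the DataNode daemon there.
# Assumptions: passwordless SSH from the master, the same conf path on
# all nodes, and Hadoop 3.x daemon syntax (on 2.x, use
# "hadoop-daemon.sh stop|start datanode" instead).
push_hdfs_conf() {
  conf="$1"      # path to hdfs-site.xml, identical on all nodes
  workers="$2"   # file with one DataNode hostname per line
  while read -r host; do
    scp "$conf" "$host:$conf"
    ssh "$host" "hdfs --daemon stop datanode && hdfs --daemon start datanode"
  done < "$workers"
}
```

A rolling restart like this (one node at a time) avoids taking all replicas of a block offline at once.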
On Sun, Feb 12, 2017 at 12:15 PM, Alexis wrote:
Yes you did. Thanks in advance. Is there any way to push this config to all the
nodes from the master, or should I write a script to do this?
Regards
Sent from my iPhone
> On Feb 12, 2017, at 02:30, Brahma Reddy Battula
> wrote:
Hi Alexis Fidalgo
1) I did not see this query until recently.
2) You need to configure this property on the slaves (DataNodes).
*dfs.datanode.du.reserved*: the number of bytes that will be left free on the
volumes used by the DataNodes. By default, it is zero.
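A minimal sketch of what this looks like in hdfs-site.xml (the 100 GB value below is only an illustration; the property takes a value in bytes):

```xml
<!-- hdfs-site.xml on each DataNode: reserve 100 GB per volume for non-DFS use -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>107374182400</value> <!-- 100 * 1024^3 bytes -->
</property>
```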
For example, if the disk capacity is 1 TB and dfs.datanode.du.reserved is set
to 100 GB, HDFS will use at most 900 GB of that volume, leaving the remaining
100 GB for non-DFS use.
Hello, I’ve tried to search the archives (and Google) regarding this issue but
had no luck. After some changes in our MapReduce code, it takes all the
available disk space on the datanodes. Before this change we had no problem at
all, but since then, every few days, disks on the datanodes (we have 4, all
In our environment we have hdfs nodes that are also used as compute nodes.
Our disk environment is heterogeneous. We have a couple of machines with
much smaller disk capacity than others. Another minor issue is that our IT
staff sets up one filesystem backed by a hardware RAID of all of the
physical