Re: about dfs.datanode.du.reserved

2017-02-12 Thread Brahma Reddy Battula
You can write a script to push this config to all the nodes. (Or you can
manually add the configuration to hdfs-site.xml on every DataNode and
restart them.)
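
As a rough sketch of such a push script (assuming passwordless SSH from the
master; the hostnames, config path, and HADOOP_HOME below are placeholders,
and the restart uses the Hadoop 2.x hadoop-daemon.sh command, so adjust for
your distribution):

#!/usr/bin/env python
# Push the edited hdfs-site.xml to every DataNode and restart the daemon.
# Hostnames and paths are placeholders -- adjust for your cluster.
import subprocess

DATANODES = ["datanode1", "datanode2", "datanode3", "datanode4"]
CONF = "/etc/hadoop/conf/hdfs-site.xml"   # the locally edited copy
HADOOP_HOME = "/opt/hadoop"               # placeholder install dir

for host in DATANODES:
    # Copy the config over, then bounce the DataNode so it rereads it.
    subprocess.check_call(["scp", CONF, "%s:%s" % (host, CONF)])
    for action in ("stop", "start"):
        subprocess.check_call([
            "ssh", host,
            "%s/sbin/hadoop-daemon.sh %s datanode" % (HADOOP_HOME, action),
        ])

Going one DataNode at a time like this keeps the rest of the cluster
serving while each node restarts.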



--Brahma

On Sun, Feb 12, 2017 at 12:15 PM, Alexis wrote:

> Is there any way to push this config to all the nodes from the master, or
> should I write a script to do this?

--Brahma Reddy Battula


Re: about dfs.datanode.du.reserved

2017-02-11 Thread Alexis
Yes you did. Thanks in advance. Is there any way to push this config to all
the nodes from the master, or should I write a script to do this?

Regards 

Sent from my iPhone



Re: about dfs.datanode.du.reserved

2017-02-11 Thread Brahma Reddy Battula
Hi Alexis Fidalgo

1) I have not seen a recent thread on this query.

2) You need to configure this property on the slaves (the DataNodes).

*dfs.datanode.du.reserved*: the number of bytes that will be left free on
each volume used by the DataNode. By default it is zero.


For example, if the disk capacity is 1 TB and *dfs.datanode.du.reserved* is
configured as 100 GB, the DataNode will not use that 100 GB for block
allocation; the space stays free for things like NodeManager intermediate
files and log files.

Maybe you can plan your MR jobs with this in mind. Hope I cleared your
doubts.
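
One note on units, since the value in hdfs-site.xml is a plain byte count:
here is a tiny conversion sketch (plain Python, nothing Hadoop-specific;
the helper name is just for illustration):

def gib_to_bytes(gib):
    # dfs.datanode.du.reserved takes a raw byte count, so convert from GiB.
    return gib * 1024 ** 3

print(gib_to_bytes(30))   # 32212254720  -- the value in the snippet below
print(gib_to_bytes(100))  # 107374182400 -- the 100 GB example above

So the 32212254720 you quoted reserves 30 GiB per volume.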

On Sat, Feb 11, 2017 at 7:26 PM, Alexis Fidalgo wrote:

> Hello, I've tried to search the archives (and Google) regarding this issue
> but had no luck. After some changes in our MapReduce code, it takes all the
> available disk space on the datanodes. Before this change we had no problem
> at all, but since then, every few days, the disks on the datanodes (we have
> 4, all with the same configuration regarding disk, memory, processor, and
> OS) become full and no more MapReduce jobs complete, so I need to wipe the
> datanodes, format the namenode, and start all over again.
>
> Reading the documentation, I found this configuration for hdfs-site.xml:
>
> <property>
>   <name>dfs.datanode.du.reserved</name>
>   <value>32212254720</value>
> </property>
>
> Questions regarding this:
>
> 1. Is there already a thread on this issue that I can read, rather than
> asking again?
> 2. If not, do I need to set this property only on the master, or on every
> slave too?
> 3. Will this fix the problem, or just keep the disks from filling up while
> the MR jobs still fail the same way (no more space to work with, so we
> need to review our code)?
>
>
> Thanks in advance. Sorry if I'm asking about an already discussed issue; I
> just subscribed to the list.
>
> Regards
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
>
>


--Brahma Reddy Battula