On 9/22/09 5:47 PM, "Ravi Phulari" <rphul...@yahoo-inc.com> wrote:

> Hello Paul here is quick answer to your question -
> You can use dfs.datanode.du.pct  and dfs.datanode.du.reserved  property in
> hdfs-site.xml config file to  configure
> maximum  local disk space used by hdfs and mapreduce.

No, that's incorrect.

These values determine how much space HDFS is *not* allowed to use. There is
no limit at all on how much MapReduce can take. This is exactly the opposite
of what he and pretty much every other admin wants.  [Negative math is fun!
Or something.]
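To make the semantics concrete, here is a minimal sketch of the reserved-space setting in a 0.20-era config; the value is illustrative, not a recommendation:

```xml
<!-- hdfs-site.xml (illustrative) -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Bytes per volume that the datanode must leave free for non-DFS use.
       Note this only limits HDFS; it does NOT cap MapReduce's local
       intermediate output on the same volume. -->
  <value>10737418240</value>
</property>
```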

The only way to guarantee that HDFS and MR do not eat more space than you
actually want is to create a separate file system.  In the case of the
datanode, potentially run the datanode process under a file system quota
at the Unix level.
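One way to sketch that separation, assuming the HDFS block directory and the MR local directory sit on their own mounts (paths here are hypothetical, and the property names are the 0.20-era ones):

```xml
<!-- hdfs-site.xml: HDFS blocks on a dedicated file system -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/hdfs-data/dfs</value>
</property>

<!-- mapred-site.xml: MR spill/intermediate data on its own file system -->
<property>
  <name>mapred.local.dir</name>
  <value>/mnt/mr-local/mapred</value>
</property>
```

With each directory on its own mount, the file system size itself becomes the hard cap, regardless of what either daemon tries to write.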
