On 9/22/09 5:47 PM, "Ravi Phulari" <rphul...@yahoo-inc.com> wrote:

> Hello Paul, here is a quick answer to your question -
> You can use the dfs.datanode.du.pct and dfs.datanode.du.reserved properties
> in the hdfs-site.xml config file to configure the maximum local disk space
> used by HDFS and MapReduce.

No, that's incorrect. These values determine how much space HDFS is *not* allowed to use. There is no limit on how much MapReduce can take. This is exactly the opposite of what he and pretty much every other admin wants. [Negative math is fun! Or something.]

The only way to guarantee that HDFS and MapReduce do not eat more space than you actually want is to create a separate file system. In the case of the datanode, potentially run the datanode process with a file system quota at the Unix level.
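For reference, the reserved-space setting being discussed looks roughly like this in hdfs-site.xml. Note that, as pointed out above, it only caps what HDFS itself may consume per volume; it does nothing to bound MapReduce's intermediate output. The 10 GB value is purely illustrative:

```xml
<!-- hdfs-site.xml fragment (illustrative values).
     dfs.datanode.du.reserved: bytes per volume the datanode leaves free
     for NON-HDFS use. This limits HDFS, not MapReduce. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- reserve 10 GB per volume for non-HDFS data -->
</property>
```

A separate file system (or a Unix-level quota on the user running the datanode process, as suggested above) remains the only hard guarantee.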