On Tue, Sep 22, 2009 at 9:40 PM, Allen Wittenauer
<awittena...@linkedin.com> wrote:
>
> On 9/22/09 5:47 PM, "Ravi Phulari" <rphul...@yahoo-inc.com> wrote:
>
>> Hello Paul, here is a quick answer to your question -
>> You can use the dfs.datanode.du.pct and dfs.datanode.du.reserved properties
>> in the hdfs-site.xml config file to configure the maximum local disk space
>> used by HDFS and MapReduce.
>
> No, that's incorrect.
>
> These values determine how much HDFS is *not* allowed to use.  There is no
> limit on how much MR can take.  This is exactly the opposite of what he and
> pretty much every other admin wants.  [Negative math is fun! Or something.]
>
> The only way to guarantee that HDFS and MR do not eat more space than you
> actually want is to create a separate file system.  In the case of the
> datanode, you could potentially run the data node process under a file
> system quota at the Unix level.
>
>
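For anyone following along, the reserved-space semantics Allen describes
look roughly like this in hdfs-site.xml (a minimal sketch; the value is
purely illustrative):

  <!-- Reserved space in bytes per volume: HDFS leaves this much
       free for non-DFS use (OS, logs, MapReduce spill, etc.). -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>  <!-- 10 GB -->
  </property>

Note this only keeps HDFS out of the reserved space; nothing stops
MapReduce's mapred.local.dir from filling it up, which is exactly
Allen's point.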
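The Unix-level quota idea might look like this, assuming the datanode
runs as a dedicated "hdfs" user and the data file system is mounted with
the usrquota option (limits are in 1K blocks; the number is made up):

  # hard-limit the hdfs user to ~500 GB on /data
  setquota -u hdfs 0 524288000 0 0 /data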

A non-Hadoop option is to create dedicated partitions. If your setup
uses LVM, shrink your existing logical volumes and create dedicated
Hadoop ones. You could also carve a fixed-size file system out of an
existing partition by loop-mounting a file.
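A rough sketch of the loop-mount approach (paths and sizes are made up):

  # create a fixed-size backing file, put a file system on it, loop-mount it
  dd if=/dev/zero of=/srv/hadoop.img bs=1M count=51200   # ~50 GB
  mkfs.ext3 -F /srv/hadoop.img
  mkdir -p /hadoop
  mount -o loop /srv/hadoop.img /hadoop
  # then point dfs.data.dir and mapred.local.dir at directories under /hadoop

HDFS and MR together can then never use more than the size of the
backing file.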
