Hi,

This directory is used as part of the 'DistributedCache' feature
(http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html#DistributedCache).
There is a configuration key, "local.cache.size", which controls the amount
of data stored under the DistributedCache directory. The default limit is
10 GB. However, files under this directory cannot be deleted while they are
in use. Also, some frameworks on Hadoop may be using the DistributedCache
transparently on your behalf.

So you could check what is being stored there and, based on that, lower the
cache size limit if you feel that will help. The property needs to be set in
mapred-site.xml on the TaskTracker nodes.
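
To see what is actually taking up the space, something along these lines
should work (the path is the one from your mail, so adjust it to your setup):

    du -sh /tmp/hadoop-root/mapred/local/archive/*

And here is a rough sketch of the property in mapred-site.xml; the 2 GB value
(in bytes) is only an illustration, so pick whatever suits your disk:

    <property>
      <name>local.cache.size</name>
      <!-- upper limit for the local distributed cache, in bytes (~2 GB here) -->
      <value>2147483648</value>
    </property>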

Thanks
Hemanth


On Mon, Apr 8, 2013 at 11:09 PM, <xia_y...@dell.com> wrote:

> Hi,
>
> I am using hadoop which is packaged within hbase-0.94.1. It is hadoop
> 1.0.3. There are some mapreduce jobs running on my server. After some
> time, I found that my folder /tmp/hadoop-root/mapred/local/archive has
> grown to 14G.
>
> How can I configure this and limit the size? I do not want to waste my
> space on the archive.
>
> Thanks,
>
> Xia
>
