My settings:
....
<property>
  <name>mapred.local.dir</name>
  <value>/hadoop/mapred/local</value>
  <description>The local directory where MapReduce stores intermediate
  data files.  May be a comma-separated list of
  directories on different devices in order to spread disk i/o.
  </description>
</property>

<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system</value>
  <description>The shared directory where MapReduce stores control files.
  </description>
</property>
....

The device mounted on "/" has 115G of free space:

[EMAIL PROTECTED] /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             133G   13G  113G  11% /

Does anybody have any other ideas?
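For reference, here is a quick way to check the same directory on each of the three nodes, in case the tasktracker that failed is the one short on space (assuming passwordless ssh; node1/node2/node3 are placeholders for the real hostnames):

# node1/node2/node3 below are placeholder hostnames
for h in node1 node2 node3; do
  echo "== $h =="
  ssh "$h" df -h /hadoop/mapred/local
done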

-----Original Message-----
From: Sami Siren [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 02, 2006 6:01 PM
To: nutch-dev@lucene.apache.org
Subject: Re: nutch
Importance: High

Most probably you have run out of space in the tmp (local) filesystem.

use properties like

<property>
  <name>mapred.system.dir</name>
  <value><!-- path to a filesystem that has a lot of space --></value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value><!-- path to a filesystem that has a lot of space --></value>
</property>

in hadoop-site.xml to get over this problem.
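For example, a minimal sketch of the relevant hadoop-site.xml entries (the /data/hadoop/... paths below are placeholders only; point them at whichever mount really has the room):

<property>
  <name>mapred.local.dir</name>
  <!-- placeholder path: local scratch space for intermediate map/reduce data -->
  <value>/data/hadoop/mapred/local</value>
</property>

<property>
  <name>mapred.system.dir</name>
  <!-- placeholder path: shared directory where MapReduce keeps its control files -->
  <value>/data/hadoop/mapred/system</value>
</property>

The file has to be the same on every node, and the tasktrackers will most likely need a restart before a new mapred.local.dir takes effect.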


[EMAIL PROTECTED] wrote:

>I forgot.... ;-) One more question:
>Is this a problem with Nutch or Hadoop?
>
>-----Original Message-----
>From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
>Sent: Wednesday, August 02, 2006 11:38 AM
>To: nutch-dev@lucene.apache.org
>Subject: nutch
>Importance: High
>
>I use Nutch 0.8 (mapred). Nutch is started on 3 servers.
>When Nutch tries to index a segment, I get an error on a tasktracker:
><skipped>


