>> When I run the example wordcount, I get a problem like this:

Is wordcount the bundled Hadoop example, or your own code?
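
If it is the bundled example, it is normally submitted from the examples jar,
roughly like this (the jar name and the HDFS paths below are assumptions for a
0.17.1 install, not taken from your mail):

  # put some local text files into HDFS as job input
  bin/hadoop dfs -copyFromLocal /path/to/local/text input
  # run the wordcount driver from the examples jar
  bin/hadoop jar hadoop-0.17.1-examples.jar wordcount input output

If it is your own code, please post the driver class and how you submit the jar.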

On 8/16/08, tran thien <[EMAIL PROTECTED]> wrote:
> Hi everyone,
> I am using Hadoop 0.17.1.
> There are 2 nodes: one master (which is also a slave) and one slave.
> When I run the example wordcount, I get a problem like this:
>
> 08/08/16 11:59:39 INFO mapred.JobClient:  map 100% reduce 22%
> 08/08/16 11:59:48 INFO mapred.JobClient:  map 100% reduce 23%
> 08/08/16 12:02:03 INFO mapred.JobClient: Task Id :
> task_200808161130_0001_m_000007_0, Status : FAILED
> Too many fetch-failures
>
> I configured hadoop-site.xml like this:
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>
> <property>
>  <name>fs.default.name</name>
>  <value>hdfs://192.168.1.135:54310</value>
>  <description>The name of the default file system.  A URI whose
>  scheme and authority determine the FileSystem implementation.  The
>  uri's scheme determines the config property (fs.SCHEME.impl) naming
>  the FileSystem implementation class.  The uri's authority is used to
>  determine the host, port, etc. for a filesystem.</description>
> </property>
>
> <property>
>  <name>mapred.job.tracker</name>
>  <value>192.168.1.135:54311</value>
>  <description>The host and port that the MapReduce job tracker runs
>  at.  If "local", then jobs are run in-process as a single map
>  and reduce task.
>  </description>
> </property>
>
> <property>
>  <name>dfs.replication</name>
>  <value>2</value>
>  <description>Default block replication.
>  The actual number of replications can be specified when the file is created.
>  The default is used if replication is not specified in create time.
>  </description>
> </property>
>
> <property>
>  <name>mapred.map.tasks</name>
>  <value>11</value>
>  <description>The default number of map tasks per job.  Typically set
>  to a prime several times greater than number of available hosts.
>  Ignored when mapred.job.tracker is "local".
>  </description>
> </property>
>
> <property>
>  <name>mapred.reduce.tasks</name>
>  <value>7</value>
>  <description>The default number of reduce tasks per job.  Typically set
>  to a prime close to the number of available hosts.  Ignored when
>  mapred.job.tracker is "local".
>  </description>
> </property>
>
> <property>
>  <name>mapred.tasktracker.map.tasks.maximum</name>
>  <value>5</value>
>  <description>The maximum number of map tasks that will be run
>  simultaneously by a task tracker.
>  </description>
> </property>
>
> <property>
>  <name>mapred.tasktracker.reduce.tasks.maximum</name>
>  <value>5</value>
>  <description>The maximum number of reduce tasks that will be run
>  simultaneously by a task tracker.
>  </description>
> </property>
>
> </configuration>
>
>
> I don't know why this happens. Can you help me resolve this problem?
>
> Thanks in advance for your help,
>
> Regards,
> thientd
>

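One thing to check, independent of hadoop-site.xml: "Too many fetch-failures"
means the reduce tasks keep failing to copy map output over HTTP from the other
node's tasktracker, and on a small two-node cluster that is very often a
hostname resolution or firewall problem rather than a job configuration
problem. A minimal sketch of what to verify (the hostnames "master"/"slave1"
and the second IP are placeholders, not taken from your mail):

  /etc/hosts on both machines (each node must be able to resolve the other;
  avoid mapping a node's own hostname only to 127.0.0.1 or 127.0.1.1):

      192.168.1.135   master
      192.168.1.xxx   slave1

  conf/slaves on the master (list the master too, since it also runs a
  tasktracker):

      master
      slave1

Also make sure the tasktracker HTTP port (50060 by default) is not blocked
between the two nodes.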

-- 
Best regards, Edward J. Yoon
[EMAIL PROTECTED]
http://blog.udanax.org
