Too many fetch failures. Help!
Hi, This is my first post here. I'm new to Hadoop. I've already installed Hadoop on 2 Ubuntu boxes (one is both master and slave, the other is slave only). When I run the Wordcount example on 5 small txt files, the job never completes and I get a "Too many fetch failures" error on my terminal. If it helps, I can post my terminal's output and any log files needed. Many thanks. -- Abdelrahman Kamel
Re: Too many fetch failures. Help!
Hey, Try configuring your cluster with hostnames instead of IPs, add those entries to /etc/hosts, and sync that file across all the nodes in the cluster. You need to restart the cluster after making these changes. Hope this helps.

On Mon, Sep 26, 2011 at 8:46 PM, Abdelrahman Kamel abdouka...@gmail.com wrote: [...]

-- Regards, Bharath .V w: http://researchweb.iiit.ac.in/~bharath.v
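For illustration, assuming the master box is named master and the second box slave1 (hypothetical names and private IPs), the /etc/hosts file on every node would carry entries like:

```
# /etc/hosts -- identical on all nodes; names and IPs here are examples only
192.168.1.10   master
192.168.1.11   slave1

# On Ubuntu, the default "127.0.1.1 <hostname>" line often needs to be
# removed or commented out, or daemons may bind to the loopback address.
```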
Re: Too many fetch failures. Help!
Hello Abdelrahman, Are you able to ping from one machine to the other with the configured hostnames? Configure both hostnames properly in the /etc/hosts file on each machine and try again. Regards, Uma

----- Original Message -----
From: Abdelrahman Kamel abdouka...@gmail.com
Date: Monday, September 26, 2011 8:47 pm
Subject: Too many fetch failures. Help!
To: common-user@hadoop.apache.org
[...]
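Beyond ping, a quick way to confirm that name resolution itself works is a short Python check; the hostnames below are hypothetical and should be replaced with the entries from your own /etc/hosts:

```python
import socket

# Hypothetical cluster node names; replace with your actual /etc/hosts entries.
nodes = ["master", "slave1"]

for host in nodes:
    try:
        # gethostbyname consults /etc/hosts first, then DNS
        ip = socket.gethostbyname(host)
        print(f"{host} resolves to {ip}")
    except socket.gaierror:
        print(f"{host} does not resolve; fix /etc/hosts on this node")
```

Run it on every node: each hostname should resolve to the same IP everywhere, or the TaskTrackers will fail to fetch map output from each other.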
RE: Too many fetch failures. Help!
Hi Bharath, There are a few reasons that can cause this problem. I have listed some of them below with solutions; if you post the logs, the exact problem can be figured out.

Reason 1: The mapping in the /etc/hosts file is missing, the DNS server is down so hostnames cannot be resolved, or the DNS server is incorrectly configured.
Solution: Setting the slave.host.name property can be one solution. Make the appropriate changes based on which of these is the actual problem.

Reason 2: If the map outputs are large, the tasks may hit java.lang.OutOfMemoryError: Java heap space during the shuffle, and because of this there are too many fetch failures.
Solution: The error java.lang.OutOfMemoryError: Java heap space in the TaskTracker logs can be addressed by either of the following: decreasing the value configured for mapred.job.shuffle.input.buffer.percent, or increasing the heap memory of the child JVM via the mapred.child.java.opts property.

Thanks, Devaraj

From: bharath vissapragada [bharathvissapragada1...@gmail.com]
Sent: Monday, September 26, 2011 8:54 PM
To: common-user@hadoop.apache.org
Subject: Re: Too many fetch failures. Help!
[...]
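A sketch of the mapred-site.xml changes Devaraj describes; the values here are illustrative starting points, not recommendations, and should be tuned for your hardware:

```xml
<!-- mapred-site.xml (restart the TaskTrackers after changing) -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <!-- fraction of reducer heap used to buffer map outputs during shuffle;
       the default is 0.70 -- lowering it trades shuffle speed for headroom -->
  <value>0.50</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <!-- raise the child JVM heap if map outputs are large -->
  <value>-Xmx512m</value>
</property>
```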