Hi all,
I have enabled log aggregation and want to track task logs on HDFS. I
currently need to start the history server via "mr-jobhistory-daemon.sh
start historyserver" on all nodes. Is there any way to run the history
server automatically when YARN starts?
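I'm not aware of a built-in hook for this, but one minimal workaround is a small wrapper script that starts the history server right after start-yarn.sh. The paths below assume a standard Hadoop 2.x sbin layout and are only a sketch; also note the history server normally only needs to run on one node, the one named in mapreduce.jobhistory.address.

```shell
#!/bin/sh
# Sketch: bring up YARN and the MapReduce JobHistory Server together.
# Assumes a standard Hadoop 2.x layout; adjust HADOOP_HOME as needed.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}

if [ -x "$HADOOP_HOME/sbin/start-yarn.sh" ]; then
  "$HADOOP_HOME/sbin/start-yarn.sh"
  # The history server only needs to run on the host named in
  # mapreduce.jobhistory.address, not on every node.
  "$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh" start historyserver
else
  echo "Hadoop not found at $HADOOP_HOME" >&2
fi
```

Running this instead of plain start-yarn.sh means the history server comes and goes with the rest of the cluster.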
You can set the maximum map and reduce attempts, so that once a task has
failed that many times the job is marked failed and finishes.
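Concretely, the relevant properties in Hadoop 2.x are mapreduce.map.maxattempts and mapreduce.reduce.maxattempts (default 4); lowering them in mapred-site.xml makes a job fail fast. The values below are just illustrative:

```xml
<!-- mapred-site.xml: allow each task only 2 attempts instead of the
     default 4, so a persistently failing job terminates sooner -->
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>2</value>
</property>
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>2</value>
</property>
```

The same properties can be set per job on the command line with -D if the driver uses ToolRunner.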
On Sat, Jan 11, 2014 at 11:07 PM, John Lilley wrote:
> We have a YARN application that we want to automatically terminate if
> the YARN client disconnects or crashes. Is it possible to confi
There is no firewall.
On Wed, Jan 8, 2014 at 9:58 PM, Vinod Kumar Vavilapalli <
vino...@hortonworks.com> wrote:
> Checked the firewall rules?
>
> +Vinod
>
> On Jan 8, 2014, at 3:22 AM, Saeed Adel Mehraban
> wrote:
>
> Hi all.
> I have an installation on Hadoop
Hi all.
I have an installation of Hadoop on 3 nodes, namely master, slave1 and
slave2. When I try to run a job, with the AppMaster on slave1, every map
and reduce task that runs on slave2 fails due to a ConnectException.
I checked the port which slave2 wants to connect to. It differs r
connection
exception: java.net.ConnectException: Connection refused; For more details
see: http://wiki.apache.org/hadoop/ConnectionRefused
I don't know what process is meant to listen on port 58898 on slave1.
Any idea here?
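I can't say what binds 58898 either (AM and shuffle ports can vary per run), but a quick way to see what is actually listening on the node, and whether the hostnames resolve consistently, is something like the following; the port number is just the one from the stack trace:

```shell
# Run on slave1: see whether anything is listening on the refused port.
PORT=58898
ss -tln 2>/dev/null | grep ":$PORT" || echo "nothing listening on port $PORT"

# Check that the cluster hostnames resolve on this node; a stale or
# inconsistent /etc/hosts entry is a common cause of Connection refused.
getent hosts slave1 slave2 || echo "hostname lookup failed"
```

If nothing listens on the port, the next step is usually checking which address the daemons bind to (0.0.0.0 versus a specific interface) in the *-site.xml files.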
On Tue, Jan 7, 2014 at 1:28 AM, Saeed Adel Mehraban wrote:
> Before I
hadoop-yarn/apps which is explicitly configured in the
> yarn-site.xml.
>
>
> FYI:
> http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH4-Installation-Guide/cdh4ig_topic_11_4.html
>
> 2014/1/7 Saeed Adel Mehraban
>
>> When I click on indiv
logs for you to investigate what caused it to fail. Try visiting the
> task's logs on the JT UI and clicking through the individual failed
> attempts to find the reason for the failure.
>
> On Sun, Jan 5, 2014 at 11:03 PM, Saeed Adel Mehraban
> wrote:
> > Hi all,
> > My task
Hi all,
My jobs are failing due to many failed map tasks. I want to know what
makes a map fail. Is it exceptions, or something else?
I have a Hadoop 2.2.0 installation on 3 VMs, one master and 2 slaves.
When I try to run simple jobs like the provided wordcount sample on 1 or
a few files, it may or may not succeed (roughly a 50-50 chance of
failure), but with more files I get a failure most of the time.