Your MsRead.readFields() doesn't contain readInt().
Can you show us the lines around line 84 of MsRead.java?
On Wed, Sep 29, 2010 at 2:44 PM, Tali K wrote:
>
> HI All,
>
> I am getting this Exception on a cluster(10 nodes) when I am running
> simple hadoop map / reduce job.
> I don't have thi
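For the archives: the symptom above usually means write() and readFields() are out of sync. The Writable contract requires readFields() to read back exactly the fields that write() wrote, in the same order. A minimal sketch of the idea follows; MsRead's real fields are unknown, so the int/String pair here is hypothetical, and the Hadoop Writable interface is omitted so the snippet compiles with the JDK alone.

```java
import java.io.*;

// Hypothetical record illustrating the Writable contract: write() and
// readFields() must serialize/deserialize the same fields in the same order.
class MsReadLike {
    int id;          // written with writeInt, so readFields must call readInt
    String payload;

    void write(DataOutput out) throws IOException {
        out.writeInt(id);
        out.writeUTF(payload);
    }

    void readFields(DataInput in) throws IOException {
        id = in.readInt();      // omitting this call desynchronizes the
        payload = in.readUTF(); // stream and typically throws EOFException
    }
}

public class WritableRoundTrip {
    public static void main(String[] args) throws IOException {
        MsReadLike orig = new MsReadLike();
        orig.id = 84;
        orig.payload = "hello";

        // serialize, then deserialize into a fresh object
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        orig.write(new DataOutputStream(bytes));

        MsReadLike copy = new MsReadLike();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy.id + " " + copy.payload); // prints "84 hello"
    }
}
```

If the round trip fails with an EOFException or garbage values, diff write() against readFields() field by field.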
Thanks James.
The link is helpful too.
Regards,
Bhushan
-----Original Message-----
From: james warren [mailto:ja...@rockyou.com]
Sent: Wednesday, September 29, 2010 1:50 PM
To: common-user@hadoop.apache.org
Subject: Re: Multiple masters in hadoop
Actually the /hadoop/conf/masters file is for con
HI All,
I am getting this Exception on a cluster (10 nodes) when I am running a simple
hadoop map/reduce job.
I don't have this Exception when running it on my desktop in hadoop's
pseudo-distributed mode.
Can somebody help? I would really appreciate it.
10/09/29 14:28:34 INFO mapred.JobClie
Actually the /hadoop/conf/masters file is for configuring
secondarynamenode(s). Check
http://www.cloudera.com/blog/2009/02/multi-host-secondarynamenode-configuration/
for details.
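For reference, a masters file along the lines described above might look like this; the hostnames are hypothetical, and each line names a machine on which start-dfs.sh will launch a SecondaryNameNode:

```
# conf/masters -- one hostname per line; these hosts run the
# SecondaryNameNode (hypothetical hostnames shown)
snn1.example.com
snn2.example.com
```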
cheers,
-jw
On Wed, Sep 29, 2010 at 1:36 PM, Shi Yu wrote:
> The "Master" appeared in Masters and Slaves files is
The "Master" that appears in the masters and slaves files is the machine name
or IP address. If you have a single cluster, specifying multiple names in
those files will cause errors because of connection failures.
Shi
On 2010-9-29 15:28, Bhushan Mahale wrote:
Hi,
The master files name in
Hi,
The master file in hadoop/conf is called masters.
Wondering if I can configure multiple masters for a single cluster. If yes, how
can I use them?
Thanks,
Bhushan
For the benefit of the list archives: the log4j properties are being
set inside the hadoop daemon shell script (here is the relevant line,
as pointed out to me by Boris)
bin/hadoop-daemon.sh:export HADOOP_ROOT_LOGGER="INFO,DRFA"
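Two hedged examples of working with this variable follow; the command and script paths are the standard 0.20-era ones and may differ in your install:

```
# one-off: DEBUG logging to the console for a client-side command
HADOOP_ROOT_LOGGER="DEBUG,console" bin/hadoop fs -ls /

# daemons: the level is hard-coded in bin/hadoop-daemon.sh, so edit
# the exported line there, e.g.
# export HADOOP_ROOT_LOGGER="DEBUG,DRFA"
```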
On Tue, Sep 28, 2010 at 4:12 PM, Alex Kozlov wrote:
> Hi Leo,
>
> W
Hi,
While trying to run a MapReduce job, the reducers get stuck in the copy
phase indefinitely. Even though all the mappers have finished, the reducers
stay stuck at 15-20% completion.
The log available at the Reducers is as follows:
2010-09-29 11:33:24,535 INFO org.apache.hadoop.mapred.ReduceTask:
atte
On 29/09/10 00:12, Alex Kozlov wrote:
Hi Leo,
What distribution are you using? Sometimes log4j.properties is packed
inside a .jar file, which is picked up first, so you need to explicitly pass
the java option '-Dlog4j.configuration=' in the
corresponding daemon flags.
You find the JAR which has