Hi,
I have been trying to get Hadoop working with Ganglia, and am making
some progress.
I have upgraded to Hadoop 0.20.1, and that seems to make a big
difference; I no longer get any errors related to Ganglia.
But when I run gmetad --debug=5, I get the following:
[r...@monitor ganglia]#
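For anyone following along: Hadoop reports its metrics to Ganglia through
conf/hadoop-metrics.properties. Here is a minimal sketch of that file; the
gmond host is illustrative (8649 is the default port), so adjust it to your
cluster:

    # conf/hadoop-metrics.properties - send dfs/mapred/jvm metrics to Ganglia.
    # monitor.example.com:8649 is an illustrative gmond address; change it.
    dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    dfs.period=10
    dfs.servers=monitor.example.com:8649

    mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    mapred.period=10
    mapred.servers=monitor.example.com:8649

    jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    jvm.period=10
    jvm.servers=monitor.example.com:8649

One gotcha worth checking: the stock GangliaContext speaks the Ganglia 3.0
wire format, so if your gmond is Ganglia 3.1 or newer you may need the
GangliaContext31 patch (see HADOOP-4675).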
You'll need to be more specific about which version of Hadoop and Eclipse
you're using. There are known issues building the plugin on Hadoop 0.20.1.
I am in the process of providing a patch for this; in the interim, you
can try using the plugin jar attached at
John-
I would recommend that you drop into irc channel #ganglia on freenode or
join the ganglia-general mailing list (http://ganglia.info/) and send this
question there. This seems like a configuration/firewall issue.
-Matt
On Mon, Nov 23, 2009 at 12:41 AM, John Martyniak
Can you go into detail regarding your scenario? What exactly is failing,
and how is it failing?
Kind regards
Steve Watt
From: Raymond Jennings III raymondj...@yahoo.com
To: common-user@hadoop.apache.org
Date: 11/23/2009 11:17 AM
Subject: Re: Error trying to build hadoop eclipse plugin
Hi,
After porting my code from Hadoop 0.17 to 0.20, I am starting to have problems
setting my jar file. I used to be able to set the jar file with
JobConf.setJar(), but now I am using Job.setJarByClass(). It looks to me like
this method is not working; I keep getting ClassNotFoundException.
Hi Mike,
I haven't seen that problem. There is one patch in the Cloudera distribution
that does modify the behavior of that method, though. Would you mind trying
this on the stock Apache 0.20.1 release? I see no reason to believe this is
the issue, since hundreds of other people are using our
What are you putting as the argument in job.setJarByClass( ?? )
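For comparison, here is a minimal 0.20-style driver showing the usual
pattern; everything in it (class name, job name, paths) is illustrative
rather than Mike's actual code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MyDriver {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "my job");
            // Hand setJarByClass() a class that is packaged inside the job
            // jar; Hadoop locates the jar containing that class and ships it
            // to the cluster. A class that exists only on the client
            // classpath (e.g. an unpacked classes/ directory) commonly shows
            // up later as ClassNotFoundException in the tasks.
            job.setJarByClass(MyDriver.class);
            // Identity Mapper/Reducer defaults; set real classes as needed.
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

If the class you pass is in the jar you submit with "hadoop jar", this
works; if not, checking how the jar is built, as Todd asks below, is the
right instinct.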
On Mon, Nov 23, 2009 at 8:34 PM, Todd Lipcon t...@cloudera.com wrote:
> Hi Mike,
> I haven't seen that problem. There is one patch in the Cloudera
> distribution that does modify the behavior of that method, though.
> Would you mind
Hi Mike,
It sounds like you're doing something weird when you create your jar.
What platform are you submitting from, and how are you making the jar?
-Todd
On Mon, Nov 23, 2009 at 9:09 PM, Zhengguo 'Mike' SUN zhengguo...@yahoo.com wrote:
Hi Todd,
The hadoop-0.20.1+133-examples.jar worked
Hi all,
I am a newbie to Hadoop MapReduce and have to use the Pipes API. I would
like to know how an application can get more control over the jobconf
attributes through the Pipes API.
My problem is that the Pipes API exposes just one function, runTask(), to
the application code. Inside runTask() an object
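On the jobconf question: the usual pattern is to set attributes on the
submitting side rather than inside runTask(), and read them back in the C++
code through the context's JobConf. A minimal, illustrative Java submitter
sketch (the executable path and property key below are made up):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.pipes.Submitter;

    public class PipesDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            // Arbitrary jobconf attributes set here are visible to the C++
            // side via context.getJobConf()->get("my.custom.threshold").
            conf.set("my.custom.threshold", "42");                // made-up key
            Submitter.setExecutable(conf, "bin/my-pipes-binary"); // made-up path
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            Submitter.runJob(conf);
        }
    }

If I remember correctly, the same attributes can also be passed on the
"hadoop pipes" command line with the generic -D key=value options.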
Interesting. I don't have the 17 minutes issue, but the reducer with the
identical keys is taking about twice as long as the others.
Looking at the counters, most of the tasks have Reduce shuffle bytes 0,
whereas the slow one has Reduce shuffle bytes 1,100,006, as expected.
Logs on the slow one:
Hi,
Not sure about your Hadoop version, and I haven't done much on a
single-machine setup myself. However, there is an IPC improvement bug filed
at https://issues.apache.org/jira/browse/HADOOP-2864. Thanks!
On 11/24/09 11:22 AM, onur ascigil onurasci...@hotmail.com wrote:
I am running Hadoop on a single
Oops - I mistakenly assumed the test Reducer was just some kind of
wordcount-esque summer.
In fact, it has an O(n^2) operation, essentially:
sValue += values.next().toString() + '\t';
Appending to a String like this is very slow, because each += copies the
entire accumulated string, and that explains why the reducers that get a
large group of values for one key take so much longer.
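For the record, here is the O(n) version of that accumulation using
StringBuilder; the reducer skeleton around it is illustrative (old-style
API, to match the sValue += snippet):

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class ConcatReducer extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> values,
                           OutputCollector<Text, Text> output,
                           Reporter reporter) throws IOException {
            // StringBuilder appends in amortized O(1) per value, so the
            // whole group costs O(n) instead of O(n^2) String copies.
            StringBuilder sValue = new StringBuilder();
            while (values.hasNext()) {
                sValue.append(values.next().toString()).append('\t');
            }
            output.collect(key, new Text(sValue.toString()));
        }
    }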
I read the code and found that the call
DFSInputStream.read(buf, off, len)
causes the DataNode to read len bytes (or fewer if it encounters the end of
the block). Why doesn't HDFS read ahead to improve performance for
sequential reads?
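I don't know of a read-ahead knob in that code path, but one client-side
workaround is to put a large buffer in front of the stream, so each call
down into DFSInputStream asks for a big chunk instead of many small ones.
A minimal sketch (the path argument and buffer size are illustrative):

    import java.io.BufferedInputStream;
    import java.io.DataInputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BufferedScan {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path(args[0]); // e.g. /data/big.log (illustrative)
            // A 4 MB client-side buffer turns many small read(buf, off, len)
            // calls into fewer, larger reads against DFSInputStream.
            DataInputStream in = new DataInputStream(
                    new BufferedInputStream(fs.open(p), 4 * 1024 * 1024));
            byte[] buf = new byte[4096];
            long total = 0;
            int n;
            while ((n = in.read(buf)) > 0) {
                total += n;
            }
            in.close();
            System.out.println("read " + total + " bytes");
        }
    }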