Hadoop version: 0.20.2
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, July 16, 2010 12:34 PM
To: mapreduce-user@hadoop.apache.org
Subject: Re: Error: java.lang.NullPointerException at
java.util.concurrent.ConcurrentHashMap.get
What version of Hadoop are you using?
On Fri, Jul 16, 2010 at 8:56 AM, Chinni, Ravi wrote:
I am trying to run the terasort example with a small input on a 4-node
cluster. I just did the minimal configuration (fs.default.name, master,
slaves, etc.) but did not do anything specific to terasort. I am getting
the following java.lang.NullPointerException when running the terasort
example:
10/0
Oops. I should add that the example uses the old API.
James
On Fri, Jul 16, 2010 at 4:45 PM, James Hammerton <
james.hammer...@mendeley.com> wrote:
Essentially you specify the type of your inputs in the Mapper and in the job
setup. You need to upload data to HDFS from your local filesystem.
This tutorial should help you:
http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html
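As a rough illustration (just a sketch, not something from the tutorial), here is
roughly what that looks like with the old org.apache.hadoop.mapred API. The paths
(/tmp/data.txt, /user/khaled/input, /user/khaled/output) and the class names are
placeholders I made up:

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class InputExample {

  // The Mapper's generic parameters declare the input key/value types
  // (byte offset and line of text for TextInputFormat) and the output types.
  public static class LineMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      // Emit each input line with a count of 1 (placeholder logic).
      output.collect(value, ONE);
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(InputExample.class);
    conf.setJobName("input-example");

    // Copy a local file into HDFS before running the job (placeholder paths).
    FileSystem fs = FileSystem.get(conf);
    fs.copyFromLocalFile(new Path("/tmp/data.txt"),
                         new Path("/user/khaled/input/data.txt"));

    conf.setMapperClass(LineMapper.class);
    conf.setInputFormat(TextInputFormat.class);    // how the input is parsed
    conf.setOutputFormat(TextOutputFormat.class);

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    // Input and output paths live in HDFS; relative/absolute paths are
    // resolved against fs.default.name.
    FileInputFormat.setInputPaths(conf, new Path("/user/khaled/input"));
    FileOutputFormat.setOutputPath(conf, new Path("/user/khaled/output"));

    JobClient.runJob(conf);
  }
}

You can also copy the data into HDFS beforehand with "hadoop fs -put <local> <hdfs>"
instead of calling copyFromLocalFile from code.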
James
On Fri, Jul 16, 2010 at 4:38 PM, Khaled BEN BAH
Hello all,
I am wondering how to specify the inputs of a MapReduce job.
Do the inputs have to be in HDFS, or is it possible to run MapReduce on
inputs located in the local filesystem?
If that is possible, how can I do it?
Thanks in advance for your help.
Best regards,
Khaled
Hi Torsten,
Have you by any chance tried the release candidate I created? If
you've got any feedback from trying it out then please send it to
http://www.mail-archive.com/common-...@hadoop.apache.org/msg01793.html.
Thanks,
Tom
On Thu, Jul 15, 2010 at 9:24 AM, Torsten Curdt wrote:
> Hey folks,
>