Pseudo-distributed mode is good for developing and testing Hadoop
code. But instead of experimenting with Hadoop on your Mac, I would
run Hadoop on EC2. With StarCluster http://web.mit.edu/star/cluster/
it takes just a single command to start a Hadoop cluster, and you
also get a fixed, reproducible environment.
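
For example (a sketch, not verified against the current StarCluster
docs -- the section names, plugin class, key name, and AMI id below are
illustrative assumptions; fill in your own values):

```
# ~/.starcluster/config -- minimal sketch, assumes AWS credentials
# are already configured in the [aws info] section

[plugin hadoop]
# Hadoop setup plugin shipped with StarCluster (name per its docs)
setup_class = starcluster.plugins.hadoop.HadoopSetup

[cluster hadoopcluster]
keyname = mykey                  # your EC2 keypair (hypothetical name)
cluster_size = 2
node_instance_type = m1.small
node_image_id = ami-...          # a StarCluster AMI id goes here
plugins = hadoop
```

Then something like `starcluster start -c hadoopcluster mycluster`
brings up the cluster with Hadoop configured.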

-Håvard


On Mon, Aug 13, 2012 at 6:21 AM, Subho Banerjee <subs.z...@gmail.com> wrote:
> Hello,
>
> I am running Hadoop v1.0.3 on Mac OS X 10.8 with Java 1.6.0_33-b03-424.
>
>
> When running Hadoop in pseudo-distributed mode, the map phase completes,
> but the reduce phase never makes progress.
>
> 12/08/13 08:58:12 INFO mapred.JobClient: Running job: job_201208130857_0001
> 12/08/13 08:58:13 INFO mapred.JobClient: map 0% reduce 0%
> 12/08/13 08:58:27 INFO mapred.JobClient: map 20% reduce 0%
> 12/08/13 08:58:33 INFO mapred.JobClient: map 30% reduce 0%
> 12/08/13 08:58:36 INFO mapred.JobClient: map 40% reduce 0%
> 12/08/13 08:58:39 INFO mapred.JobClient: map 50% reduce 0%
> 12/08/13 08:58:42 INFO mapred.JobClient: map 60% reduce 0%
> 12/08/13 08:58:45 INFO mapred.JobClient: map 70% reduce 0%
> 12/08/13 08:58:48 INFO mapred.JobClient: map 80% reduce 0%
> 12/08/13 08:58:51 INFO mapred.JobClient: map 90% reduce 0%
> 12/08/13 08:58:54 INFO mapred.JobClient: map 100% reduce 0%
> 12/08/13 08:59:14 INFO mapred.JobClient: Task Id :
> attempt_201208130857_0001_m_000000_0, Status : FAILED
> Too many fetch-failures
> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer
> returned HTTP response code: 403 for URL:
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stdout
> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer
> returned HTTP response code: 403 for URL:
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stderr
> 12/08/13 08:59:18 INFO mapred.JobClient: map 89% reduce 0%
> 12/08/13 08:59:21 INFO mapred.JobClient: map 100% reduce 0%
> 12/08/13 09:00:14 INFO mapred.JobClient: Task Id :
> attempt_201208130857_0001_m_000001_0, Status : FAILED
> Too many fetch-failures
>
> Here is what I get when I try to see the tasklog using the links given in
> the output
>
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stderr
> --->
> 2012-08-13 08:58:39.189 java[74092:1203] Unable to load realm info from
> SCDynamicStore
>
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_000000_0&filter=stdout
> --->
>
> I have changed my hadoop-env.sh according to Matthew Buckett's comment in
> https://issues.apache.org/jira/browse/HADOOP-7489
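>
> The change (following that comment; my exact property values may
> differ -- the empty realm/kdc values below are the commonly cited
> variant) adds Kerberos system properties to HADOOP_OPTS in
> conf/hadoop-env.sh:
>
> ```shell
> # conf/hadoop-env.sh -- workaround for
> # "Unable to load realm info from SCDynamicStore" on OS X
> export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="
> ```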
>
> Also, this 'Unable to load realm info from SCDynamicStore' error does not
> show up when I run 'hadoop namenode -format' or 'start-all.sh'.
>
> I am also attaching a zipped copy of my logs
>
>
> Cheers,
>
> Subho.



-- 
Håvard Wahl Kongsgård
Faculty of Medicine &
Department of Mathematical Sciences
NTNU

http://havard.security-review.net/
