That's indeed a great piece of work Maneesh... Waiting for the MapReduce comic :)
Regards,
Ravi Teja
From: Dieter Plaetinck [dieter.plaeti...@intec.ugent.be]
Sent: 01 December 2011 15:11:36
To: common-user@hadoop.apache.org
Subject: Re: HDFS Explained as Comics
Hi Dieter
Very clear. The comic format indeed works quite well.
I never considered comics a serious (professional) way to get
something explained efficiently,
but this shows people should think twice before writing their
next documentation.
Thanks! :)
one question though: if a DN has a corrupted
Regarding the user logs of the tasktracker, there is nothing interesting
there. That is the thing: the tasktracker did not
pick up the task that was assigned to it.
Any idea why the mapper is not picking up the task?
Thanks
Nitika
On Mon, Nov 28, 2011 at 9:53 PM, Prashant Sharma
<prashant.ii...@gmail.com> wrote:
Hi everyone,
So I have this blade server with 4x500 GB hard disks.
I want to use all these hard disks for Hadoop HDFS.
How can I achieve this?
If I install Hadoop on one hard disk and use the other hard disks as normal
partitions, e.g.:
/dev/sda1 -- HDD 1 -- Primary partition -- Linux +
You need to supply the comma-separated directory lists only to dfs.data.dir
(HDFS) and mapred.local.dir (MapReduce). Make sure the subdirectories are
different for each, or you may accidentally wipe away your data when you
restart the MR services.
The hadoop.tmp.dir property does not accept multiple paths.
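For illustration, a minimal sketch of what that could look like, assuming the
four disks are mounted at /data/1 through /data/4 (hypothetical mount points;
adjust to your actual layout):

  In hdfs-site.xml:

    <!-- one DataNode storage directory per disk, comma-separated -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/1/hdfs/data,/data/2/hdfs/data,/data/3/hdfs/data,/data/4/hdfs/data</value>
    </property>

  In mapred-site.xml:

    <!-- one local scratch directory per disk, in a different subdirectory
         than the HDFS data, so restarting MR services cannot touch HDFS blocks -->
    <property>
      <name>mapred.local.dir</name>
      <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local,/data/4/mapred/local</value>
    </property>

hadoop.tmp.dir (in core-site.xml) would stay a single path, since it does not
take a list.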