Hi Agarwal,
    Hadoop puts the jobtoken, _partitionlst, and some other files that
need to be shared into a directory located at hdfs://namenode:port/tmp/XXXX/.

   All the TaskTrackers will then access these files from that shared tmp
directory, just the way they share the input files in HDFS.
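
   For reference, the location of these directories is controlled by
configuration. A rough sketch of the relevant properties (the values shown
are illustrative defaults, assuming Hadoop 1.x / MRv1 like in your setup):

```xml
<!-- core-site.xml: base directory for Hadoop's local and HDFS
     temporary data (illustrative default value) -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>

<!-- mapred-site.xml: HDFS directory under which per-job staging
     directories (jobtoken, job.xml, splits, etc.) are created
     (illustrative value) -->
<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>${hadoop.tmp.dir}/mapred/staging</value>
</property>
```

   Since the staging root lives in HDFS, every TaskTracker can simply read
the job files through the normal HDFS client path; no NFS mount is involved.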



yours,
Ling Kun


On Wed, May 22, 2013 at 4:29 PM, Agarwal, Nikhil
<nikhil.agar...@netapp.com> wrote:

>  Hi,
>
> Can anyone guide me to some pointers or explain how HDFS shares the
> information put in the temporary directories (hadoop.tmp.dir,
> mapred.tmp.dir, etc.) with all other nodes?
>
> I suppose that during execution of a MapReduce job, the JobTracker
> prepares a file called jobtoken and puts it in the temporary directories,
> which needs to be read by all TaskTrackers. So, how does HDFS share the
> contents? Does it use an NFS mount or ….?
>
> Thanks & Regards,
>
> Nikhil
>



-- 
http://www.lingcc.com