Re: How to get the HDFS I/O information

2012-04-25 Thread Rajashekhar M A
I.
> Thanks
> Devaraj
>
> From: George Datskos [george.dats...@jp.fujitsu.com]
> Sent: Wednesday, April 25, 2012 6:36 AM
> To: mapreduce-user@hadoop.apache.org
> Subject: Re: How to get the HDFS I/O information
>
> Qu,
>
> Every jo

RE: How to get the HDFS I/O information

2012-04-24 Thread Devaraj k
From: George Datskos [george.dats...@jp.fujitsu.com]
Sent: Wednesday, April 25, 2012 6:36 AM
To: mapreduce-user@hadoop.apache.org
Subject: Re: How to get the HDFS I/O information

Qu,

Every job has a history file that is, by default, stored under $HADOOP_LOG_DIR/history. These "

Re: How to get the HDFS I/O information

2012-04-24 Thread George Datskos
Qu,

Every job has a history file that is, by default, stored under $HADOOP_LOG_DIR/history. These "job history" files list the amount of hdfs read/write (and lots of other things) for every task.

On 2012/04/25 7:25, Qu Chen wrote:
> Let me add, I'd like to do this periodically to gather some
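If you want the same numbers programmatically while the JobTracker still remembers the job, something along these lines should work. This is a rough, untested sketch against the Hadoop 1.x "mapred" API; the counter group name "FileSystemCounters" and the HDFS_BYTES_READ / HDFS_BYTES_WRITTEN counter names are the 0.20/1.x ones and may differ in other releases, and the class name, jar name and job id in the comment are made up for the example.

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TaskReport;

// Prints per-task HDFS bytes read/written for one job, e.g.
//   hadoop jar taskio.jar TaskHdfsIO job_201204241530_0001
public class TaskHdfsIO {
  public static void main(String[] args) throws Exception {
    JobClient client = new JobClient(new JobConf());
    JobID jobId = JobID.forName(args[0]);

    // Map and reduce task reports both carry the task-level counters.
    TaskReport[][] all = {
        client.getMapTaskReports(jobId),
        client.getReduceTaskReports(jobId)
    };
    for (TaskReport[] reports : all) {
      for (TaskReport report : reports) {
        Counters counters = report.getCounters();
        long read = counters.findCounter("FileSystemCounters", "HDFS_BYTES_READ").getCounter();
        long written = counters.findCounter("FileSystemCounters", "HDFS_BYTES_WRITTEN").getCounter();
        System.out.println(report.getTaskID() + "  hdfs_read=" + read + "  hdfs_written=" + written);
      }
    }
  }
}

Once the job has been retired from the JobTracker, the history file George mentions is the place to get the same counters; if memory serves, "hadoop job -history all <job output dir>" will also pretty-print a job's history from the copy kept under the job's output directory, though I haven't checked how much of the per-task counter detail it shows.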

Re: How to get the HDFS I/O information

2012-04-24 Thread Qu Chen
Let me add, I'd like to do this periodically to gather some performance profile information.

On Tue, Apr 24, 2012 at 5:47 PM, Qu Chen wrote:
> I am trying to gather the info regarding the amount of HDFS read/write for
> each task in a given map-reduce job. How can I do that?
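For the periodic part, one rough approach is to poll the JobTracker and dump the counters as jobs finish, then drill into the per-task reports as in the sketch further up the thread. Again an untested sketch against the Hadoop 1.x mapred API; the class name, the one-minute interval and the choice to only report succeeded jobs are arbitrary choices for the example, and the "FileSystemCounters" group and HDFS_BYTES_* counter names are the 0.20/1.x ones.

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;

// Polls the JobTracker once a minute and prints job-level HDFS I/O for
// each newly completed job.
public class HdfsIoPoller {
  public static void main(String[] args) throws Exception {
    JobClient client = new JobClient(new JobConf());
    Set<JobID> seen = new HashSet<JobID>();
    while (true) {
      for (JobStatus status : client.getAllJobs()) {
        if (status.getRunState() != JobStatus.SUCCEEDED) continue;
        if (!seen.add(status.getJobID())) continue;   // already reported
        RunningJob job = client.getJob(status.getJobID());
        if (job == null) continue;                    // retired from the JobTracker
        Counters counters = job.getCounters();
        if (counters == null) continue;
        long read = counters.findCounter("FileSystemCounters", "HDFS_BYTES_READ").getCounter();
        long written = counters.findCounter("FileSystemCounters", "HDFS_BYTES_WRITTEN").getCounter();
        System.out.println(job.getID() + "  hdfs_read=" + read + "  hdfs_written=" + written);
      }
      Thread.sleep(60 * 1000);
    }
  }
}

The per-task breakdown can then be pulled with getMapTaskReports/getReduceTaskReports as above, or taken from the job history files once the job is no longer known to the JobTracker.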