A _SUCCESS file is created in the output directory after a Hadoop job has successfully finished. The setting that controls it, I think, is mapreduce.fileoutputcommitter.marksuccessfuljobs. You can check for this file's existence to kick off your second step.
Alternatively, you can capture the process ID or check the logs to verify that the first step has completed.
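
As a rough sketch of the pattern: wait for the _SUCCESS marker, then collect all the part-* files rather than assuming there is only a part-00000 (there is one part file per reducer). This example uses java.nio.file against a local directory just to illustrate the logic; on a real cluster you would do the equivalent checks through Hadoop's FileSystem API instead. The class and method names here are made up for illustration.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class OutputDirReader {

    // Returns the part-* files in outputDir, sorted by name, but only if
    // the _SUCCESS marker is present; returns null if the job has not
    // (successfully) finished yet.
    static List<Path> partFiles(Path outputDir) throws IOException {
        if (!Files.exists(outputDir.resolve("_SUCCESS"))) {
            return null; // first step not done yet -- don't start step two
        }
        List<Path> parts = new ArrayList<>();
        // "part-*" matches part-00000, part-00001, ... (one per reducer)
        try (DirectoryStream<Path> ds =
                 Files.newDirectoryStream(outputDir, "part-*")) {
            for (Path p : ds) {
                parts.add(p);
            }
        }
        parts.sort(null); // natural (lexicographic) order of the paths
        return parts;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("job-output");
        Files.createFile(dir.resolve("part-00000"));
        Files.createFile(dir.resolve("part-00001"));
        System.out.println(partFiles(dir));         // null: no _SUCCESS yet
        Files.createFile(dir.resolve("_SUCCESS"));
        System.out.println(partFiles(dir).size());  // 2
    }
}
```

The same idea in Hadoop terms would be FileSystem.exists() on the _SUCCESS path followed by FileSystem.listStatus() on the output directory, skipping the marker file itself.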

Cheers,
/R

On 1/18/10 6:41 AM, "Mark Kerzner" <markkerz...@gmail.com> wrote:

Hi,

I am writing a second step to run after my first Hadoop job step finished.
It is to pick up the results of the previous step and to do further
processing on it. Therefore, I have two questions please.

   1. Is the output file always called part-00000?
   2. Am I perhaps better off reading all files in the output directory, and
   how do I do that?

Thank you,
Mark

PS. Thank you guys for answering my questions - that's a tremendous help and
a great resource.