Hi,
In our case we launched Pig from a perl script and handled re-execution, clean-up,
etc. from there. If you need to implement a workflow or DAG-like model,
consider looking at Oozie or Cascading. If you are interested in diving a little
deeper, you can try embedded Pig.
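In case it helps, the retry-on-failure part of such a driver can be sketched in plain Java (the "pig" command and "myscript.pig" below are placeholders for whatever you actually launch, not something from this thread):

```java
import java.io.IOException;

// Hypothetical wrapper: launches an external command (e.g. a Pig script)
// and re-executes it on a non-zero exit code, much like our perl driver does.
public class RetryRunner {
    // Runs the given command up to maxAttempts times; returns true on success.
    public static boolean runWithRetries(int maxAttempts, String... command)
            throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Process p = new ProcessBuilder(command).inheritIO().start();
            if (p.waitFor() == 0) {
                return true;  // job succeeded on this attempt
            }
            // a real driver would do its clean-up here before re-executing
        }
        return false;  // all attempts failed
    }
}
```

E.g. runWithRetries(3, "pig", "myscript.pig") would re-run the script up to three times; embedded Pig gives you the same kind of control from inside the JVM instead of around it.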

Amogh


On 2/17/10 1:53 PM, "jiang licht" <licht_ji...@yahoo.com> wrote:

Thanks Amogh.

So, I think the following will do the job:

    public void setJobEndNotificationURI(String uri)

But what about hadoop jobs written in Pig scripts? Since Pig takes control, is
there some convenient way to do the same thing as well?
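For context, that end-notification hook substitutes $jobId and $jobStatus into the configured URI and issues an HTTP GET when the job completes. A minimal, self-contained listener for that callback might look like this (the port, path, and handler behavior are my assumptions, not anything prescribed by Hadoop):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch of an endpoint for something like
//   conf.setJobEndNotificationURI("http://host:8123/notify?jobid=$jobId&status=$jobStatus");
// Hadoop fills in $jobId and $jobStatus and issues an HTTP GET on completion.
public class JobEndListener {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/notify", exchange -> {
            // query looks like "jobid=job_201002170001_0001&status=SUCCEEDED"
            String query = exchange.getRequestURI().getQuery();
            System.out.println("job notification: " + query);
            byte[] ok = "ok".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, ok.length);
            exchange.getResponseBody().write(ok);
            exchange.close();
            // a real handler would kick off the pull-back / shutdown script here
        });
        server.start();
        return server;
    }
}
```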

Thanks!
--
Michael

--- On Wed, 2/17/10, Amogh Vasekar <am...@yahoo-inc.com> wrote:

From: Amogh Vasekar <am...@yahoo-inc.com>
Subject: Re: Hadoop automatic job status check and notification?
To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
Date: Wednesday, February 17, 2010, 12:44 AM

Hi,
When you submit a job to the cluster, you can control whether the call blocks
or returns immediately using JobClient's runJob and submitJob methods. Either
way you can find out whether the job succeeded or failed, so you can design
your follow-up scripts accordingly.
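A rough sketch of the two styles, using plain JDK futures as a stand-in since I can't compile against a cluster here (the class and method names below are mine; with the real API you would call JobClient.runJob(conf) to block, or JobClient.submitJob(conf) to get a RunningJob handle you can poll with isComplete()/isSuccessful()):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Analogy only (no Hadoop on the classpath): run() blocks until the "job"
// finishes, like JobClient.runJob; submit() returns a handle right away,
// like JobClient.submitJob returning a RunningJob.
public class SubmitVsRun {
    static final ExecutorService cluster = Executors.newSingleThreadExecutor();

    // ~ submitJob: returns immediately with a handle you can poll later
    static Future<Boolean> submit(Callable<Boolean> job) {
        return cluster.submit(job);
    }

    // ~ runJob: blocks until completion and reports success or failure
    static boolean run(Callable<Boolean> job) throws Exception {
        return cluster.submit(job).get();
    }

    static void shutdown() {
        cluster.shutdown();
    }
}
```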


Amogh


On 2/17/10 11:01 AM, "jiang licht" <licht_ji...@yahoo.com> wrote:

New to Hadoop (now using 0.20.1), I want to do the following:

Automatic status check and notification for hadoop jobs, such that, e.g., when a
job finishes, a script can be triggered so that job results are automatically
pulled back to local machines and the expensive hadoop cluster can be released
or shut down.

So, what is the best way to do this?

Thanks!
--
Michael








