To me it sounds like the asker should check out tools like Storm and S4
instead of Hadoop.

http://www.infoq.com/news/2011/09/twitter-storm-real-time-hadoop

-- 
Kind regards,
Niels Basjes
On 27 Sep 2011 22:38, "Mike Spreitzer" <mspre...@us.ibm.com> wrote the
following:
> It looks to me like Oozie will not do what was asked. In
>
http://yahoo.github.com/oozie/releases/3.0.0/WorkflowFunctionalSpec.html#a0_Definitions
> I see:
>
> 3.2.2 Map-Reduce Action
> ...
> The workflow job will wait until the Hadoop map/reduce job completes
> before continuing to the next action in the workflow execution path.
>
> That implies to me that the output of one job is held in some intermediate
> storage (likely HDFS) for a while before being read by the consuming
> job(s).
>
> Regards,
> Mike Spreitzer
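
For reference, the behavior quoted above (each map-reduce action runs to
completion, with its output resting in HDFS, before the workflow moves to the
next action) would look roughly like the following minimal workflow.xml. This
is only a sketch: the element names follow the Oozie 3.x workflow schema, but
the action names and HDFS paths here are illustrative, not from the thread.

```xml
<workflow-app name="two-step-wf" xmlns="uri:oozie:workflow:0.2">
    <start to="first-mr"/>
    <action name="first-mr">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.output.dir</name>
                    <!-- intermediate output is written to HDFS here -->
                    <value>/tmp/first-mr-out</value>
                </property>
            </configuration>
        </map-reduce>
        <!-- Oozie waits for the Hadoop job to finish before taking "ok" -->
        <ok to="second-mr"/>
        <error to="fail"/>
    </action>
    <action name="second-mr">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.input.dir</name>
                    <!-- the consuming job reads the intermediate data back
                         from HDFS, rather than streaming it directly -->
                    <value>/tmp/first-mr-out</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map-reduce job failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

The point being: the handoff between the two actions goes through HDFS, which
is exactly the batch-oriented behavior that makes Storm or S4 a better fit for
the original (streaming) question.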
