However, for continuous production data processing, Hadoop + Cascading sounds like a good option.

This will be especially true with stream assertions and traps (as mentioned previously, and available in trunk). <grin>
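For the curious, here is a minimal sketch of what an assertion plus a trap might look like, with placeholder paths and approximate class names and connect() overloads (the trunk javadoc is the real reference, so don't take this verbatim):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import cascading.flow.Flow;
import cascading.flow.FlowConnector;
import cascading.operation.AssertionLevel;
import cascading.operation.assertion.AssertNotNull;
import cascading.pipe.Each;
import cascading.pipe.Pipe;
import cascading.scheme.TextLine;
import cascading.tap.Hfs;
import cascading.tap.Tap;

public class AssertAndTrap
{
  public static void main(String[] args)
  {
    // placeholder paths
    Tap source = new Hfs(new TextLine(), "input/raw");
    Tap sink = new Hfs(new TextLine(), "output/clean");
    Tap trap = new Hfs(new TextLine(), "output/trapped");

    Pipe pipe = new Pipe("clean");

    // stream assertion: every tuple value must be non-null;
    // STRICT assertions can be planned out of a production run
    pipe = new Each(pipe, AssertionLevel.STRICT, new AssertNotNull());

    // taps are bound by branch name; tuples that fail during
    // processing get diverted to the trap instead of killing the job
    Map<String, Tap> sources = new HashMap<String, Tap>();
    sources.put("clean", source);
    Map<String, Tap> sinks = new HashMap<String, Tap>();
    sinks.put("clean", sink);
    Map<String, Tap> traps = new HashMap<String, Tap>();
    traps.put("clean", trap);

    Flow flow = new FlowConnector(new Properties()).connect(sources, sinks, traps, pipe);
    flow.complete();
  }
}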

I've written workloads for clients that render down to ~60 unique Hadoop map/reduce jobs, all inter-related, from ~10 unique units of work (internally lots of joins, sorts and math). I can't imagine having written them by hand.
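To give a feel for what a "unit of work" means above: each one is just a pipe assembly, sources in, joins/groupings/operations, sinks out, and the planner figures out how many map/reduce jobs that actually becomes. A toy sketch (field names, paths and data are invented for illustration, nothing like the real client work):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import cascading.flow.Flow;
import cascading.flow.FlowConnector;
import cascading.operation.aggregator.Sum;
import cascading.operation.regex.RegexSplitter;
import cascading.pipe.CoGroup;
import cascading.pipe.Each;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.scheme.TextLine;
import cascading.tap.Hfs;
import cascading.tap.Tap;
import cascading.tuple.Fields;

public class UnitOfWork
{
  public static void main(String[] args)
  {
    // parse tab-delimited text lines into named fields
    Pipe events = new Pipe("events");
    events = new Each(events, new Fields("line"),
        new RegexSplitter(new Fields("userid", "amount"), "\t"));

    Pipe users = new Pipe("users");
    users = new Each(users, new Fields("line"),
        new RegexSplitter(new Fields("userid2", "region"), "\t"));

    // join events to users on user id, then total amount per region;
    // the planner decides how many map/reduce jobs this renders into
    Pipe joined = new CoGroup(events, new Fields("userid"), users, new Fields("userid2"));
    Pipe totals = new GroupBy(joined, new Fields("region"));
    totals = new Every(totals, new Fields("amount"), new Sum(new Fields("total")));

    Map<String, Tap> sources = new HashMap<String, Tap>();
    sources.put("events", new Hfs(new TextLine(), "input/events"));
    sources.put("users", new Hfs(new TextLine(), "input/users"));
    Tap sink = new Hfs(new TextLine(), "output/region-totals");

    Flow flow = new FlowConnector(new Properties()).connect(sources, sink, totals);
    flow.complete();
  }
}

Now picture ten of those, chained together and sharing intermediate data, and you can see why hand-writing the equivalent map/reduce code isn't appealing.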

ckw

--
Chris K Wensel
[EMAIL PROTECTED]
http://chris.wensel.net/
http://www.cascading.org/
