On Fri, 17 Sep 2004 11:28:07 -0600, Kris Nuttycombe
<[EMAIL PROTECTED]> wrote:
> Hi, all,
> 
> I'm writing to get some advice and perhaps offer some code that may be
> useful to the commons-chain project or elsewhere.
> 

Kris,

Sorry it took so long (catching up after a two week trip was more time
consuming than I would have dreamed :-).  I've checked in your
proposed "pipeline" code as a separate package in
jakarta-commons-sandbox, and as of tonight you'll also see nightly
builds at:

  http://cvs.apache.org/builds/jakarta-commons/nightly/commons-pipeline/

I had to tweak the Maven dependencies to get commons-digester-1.6.jar
instead of commons-digester-1.6-dev.jar, and I generated an Ant
build.xml file (since my nightly builds use that); other than that,
the code should be exactly as you sent it to me.

I look forward to seeing how this kind of thing can be used, and how
it might interplay with [chain].

Craig

PS:  Until Kris is voted in as a committer, other sandbox committers
should feel free to help by applying any proposed patches if I'm not
keeping up in a timely manner.



> The group I work for does a large amount of data processing and we are
> working on solutions for pipelined data processing. Our current
> implementation uses a pipeline model where each stage in the pipeline
> has an integrated blocking queue and an abstract process(Object o)
> method that is sequentially applied to each element in the queue. When a
> stage is finished processing, it may pass the processed object (or
> products derived from it) onto the input queue of one or more subsequent
> stages in the pipeline. Branching pipelines are supported, and the whole
> mess is configured using Digester.
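
[For readers following along: the stage model described above might look
roughly like the following in plain java.util.concurrent terms. This is a
sketch only; the class and method names (Stage, enqueue, emit,
addDownstream) are illustrative and are not the API actually submitted.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Each stage owns a blocking input queue, applies process() to each
// element in a worker thread, and may hand products to the input queues
// of one or more downstream stages (supporting branching pipelines).
abstract class Stage implements Runnable {
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<Object>();
    private final List<Stage> downstream = new ArrayList<Stage>();

    // Wire this stage to a subsequent stage; may be called more than
    // once to create a branch.
    void addDownstream(Stage next) { downstream.add(next); }

    // Feed an element into this stage's input queue.
    void enqueue(Object o) throws InterruptedException { queue.put(o); }

    // Subclasses implement the per-element processing step.
    protected abstract Object process(Object o);

    // Pass the processed product to every downstream stage's input queue.
    protected void emit(Object o) throws InterruptedException {
        for (Stage s : downstream) s.enqueue(o);
    }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Block until input arrives, process it, pass it along.
                emit(process(queue.take()));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly on shutdown
        }
    }
}
```

[Each stage running its loop in its own thread is what distinguishes this
from a simple in-thread chain: stages process concurrently, with the
blocking queues providing back-pressure between them.]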
> 
> There's a lot of similarity here with the chain of responsibility
> pattern that commons-chain implements, but subtle differences as well.
> Each stage runs in one or more separate threads and we are working to
> allow the processing to be distributed across the network. The pipeline
> model assumes that each object placed in the pipe is going to be
> processed by every stage, whereas, to my understanding, the chain of
> responsibility is designed more for finding an appropriate command to
> process a given context. Also, the pipeline is designed to run as
> a service where data can be provided for processing by automated
> systems. For example, data being beamed down from a satellite can be
> aggregated into orbits that are then passed into the pipeline for
> generation of geolocated gridded products, statistical analysis, etc.
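
[To make the contrast concrete: in the chain-of-responsibility style, each
command inspects the context and the first one that claims it stops further
processing, whereas a pipeline sends every element through every stage. A
minimal plain-Java sketch of the former, with the interface modeled loosely
on commons-chain's Command (returning true means "handled") and all names
illustrative:]

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A command returns true when it has handled the context, which stops
// the chain; returning false lets the next command have a look.
interface Command {
    boolean execute(Map<String, Object> context);
}

// A chain is itself a command: it tries each member in order and stops
// at the first one that claims the context.
class Chain implements Command {
    private final List<Command> commands = new ArrayList<Command>();

    void addCommand(Command c) { commands.add(c); }

    public boolean execute(Map<String, Object> context) {
        for (Command c : commands) {
            if (c.execute(context)) {
                return true; // first applicable command wins
            }
        }
        return false; // no command claimed this context
    }
}
```

[The pipeline model has no such "first match wins" selection step, which is
one reason the fit with [chain] is not exact.]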
> 
> Our group would really like to be able to contribute some of this code
> back to the commons effort, since we use a ton of commons components.
> The amount of overlap with commons-chain is significant, but I'm not
> sure it's a perfect match because of the differing goals. Does anyone
> out there know of other similar efforts? Is there a place for this sort
> of code in commons? Are we just missing something fundamental about
> commons-chain where we should simply be using that instead?
> 
> Suggestions would be much appreciated. I'm happy to send code, examples,
> and documentation to anyone who's interested.
> 
> Thanks,
> Kris
> 
> --
> =====================================================
> Kris Nuttycombe
> Associate Scientist
> Enterprise Data Systems Group
> CIRES, National Geophysical Data Center/NOAA
> (303) 497-6337
> [EMAIL PROTECTED]
> =====================================================
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
>
