> > All Manipulators and Outputs will implement TransactionSink.  The
> > only difference is that a Manipulator will need to have a registered
> > Manipulator or Output to send its stuff along to.
> >
> > This requires registration to chain everything together, but I don't
> > know how that can be avoided.
> >
> > One big advantage over your third alternative is that Inputs don't
> > need to know if they have a Manipulator or an Output.  And
> > manipulators are free to return Responses as they see fit.
>
> Can you please supply a use case in code?
> Thanks :-)
>

This is the result of my first attempt at using the framework.  Please 
excuse any abuse of it.  I've glossed over a number of fundamental 
problems, but so far it seems to work.  I'll cover multi-threading 
extensions later.

interface sinkI {
  Result sink( Transaction t );
}

interface sourceI {
  // marker interface: sources drive the pipeline rather than being called
}

public class filereader implements sourceI, Configurable, Serviceable, 
Startable {

  public void start() {

    // Assume that through configure() and initialize() we have built
    // the file we'll be reading from.

    // Assume that through service() we have obtained a reference to a
    // sinkI.

    while( true ) {

      Transaction t = getTransaction( m_file );

      Result r = m_sink.sink( t );

    }
  }
}


Similarly, a manipulator would be something like:

public class manipulator implements sinkI, Configurable, Serviceable {

  public Result sink( Transaction t ) {

     // Assume m_sink is obtained in service().

     Transaction new_t = manipulate(t);

     return m_sink.sink( new_t );

  }

}

It's similar for output classes, except they wouldn't hold a downstream 
sinkI of their own.
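To make that concrete, here's a sketch of an output stage.  Transaction 
and Result are stand-ins, since the post doesn't define them, and I've 
dropped the Avalon lifecycle interfaces to keep it self-contained:

```java
import java.util.ArrayList;
import java.util.List;

class Transaction {
  final String payload;
  Transaction( String payload ) { this.payload = payload; }
}

class Result {
  final boolean ok;
  Result( boolean ok ) { this.ok = ok; }
}

interface sinkI {
  Result sink( Transaction t );
}

// An output is just a sinkI that terminates the chain: it consumes the
// Transaction itself instead of passing it to a downstream sinkI.
class output implements sinkI {
  final List<String> written = new ArrayList<>();  // stands in for a real file

  public Result sink( Transaction t ) {
    written.add( t.payload );    // a real output would write to disk here
    return new Result( true );   // the Result propagates back up the chain
  }
}

public class OutputDemo {
  public static void main( String[] args ) {
    output out = new output();
    Result r = out.sink( new Transaction( "hello" ) );
    System.out.println( r.ok + " " + out.written.size() );  // prints "true 1"
  }
}
```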

To allow each component in the pipeline to operate in a separate 
thread, the sink() methods could simply post to a thread-safe queue of 
work.
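As a sketch of that idea (QueueSink is a hypothetical name, not a 
framework class, and Transaction/Result are stand-ins again), sink() 
just enqueues and a worker thread drains the queue into the real 
downstream sinkI:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class Transaction {
  final String payload;
  Transaction( String payload ) { this.payload = payload; }
}

class Result {
  final boolean ok;
  Result( boolean ok ) { this.ok = ok; }
}

interface sinkI {
  Result sink( Transaction t );
}

// Wraps any downstream sinkI so that sink() only posts to a queue; a
// worker thread drains the queue, so the caller never blocks on
// downstream processing and FIFO order is preserved.
class QueueSink implements sinkI {
  private final BlockingQueue<Transaction> m_queue = new LinkedBlockingQueue<>();

  QueueSink( final sinkI downstream ) {
    Thread worker = new Thread( () -> {
      try {
        while ( true ) {
          downstream.sink( m_queue.take() );  // take() blocks until work arrives
        }
      } catch ( InterruptedException e ) {
        Thread.currentThread().interrupt();   // treat interrupt as shutdown
      }
    } );
    worker.setDaemon( true );
    worker.start();
  }

  public Result sink( Transaction t ) {
    m_queue.add( t );           // post the work and return immediately...
    return new Result( true );  // ...so this Result can't reflect the downstream outcome
  }
}
```

One catch: once sink() is asynchronous, the Result handed back to the 
caller can no longer carry the downstream outcome, which matters if 
manipulators are supposed to return Responses as they see fit.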

However, after reading Berin's other mail (I can see the future!), I 
understand the threading requirements a little better.  This setup does 
require a single-threaded initialization procedure, and all objects 
need to be created before servicing.  That might be a problem.
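To make that initialization constraint concrete, here's a sketch with 
plain constructors standing in for the configure()/service() phases 
(all names hypothetical): the chain has to be wired tail-first, because 
each stage needs its downstream sinkI to exist before it can be 
serviced, let alone started.

```java
import java.util.ArrayList;
import java.util.List;

class Transaction {
  final String payload;
  Transaction( String payload ) { this.payload = payload; }
}

class Result {
  final boolean ok;
  Result( boolean ok ) { this.ok = ok; }
}

interface sinkI {
  Result sink( Transaction t );
}

class output implements sinkI {
  final List<String> written = new ArrayList<>();
  public Result sink( Transaction t ) {
    written.add( t.payload );
    return new Result( true );
  }
}

class upperCaseManipulator implements sinkI {
  private final sinkI m_sink;
  upperCaseManipulator( sinkI downstream ) { m_sink = downstream; }
  public Result sink( Transaction t ) {
    // manipulate, then pass along; the Result flows back to the caller
    return m_sink.sink( new Transaction( t.payload.toUpperCase() ) );
  }
}

public class WiringDemo {
  public static void main( String[] args ) {
    // Tail-first, single-threaded wiring: the output must exist before
    // the manipulator, and the manipulator before any source starts.
    output out = new output();
    sinkI head = new upperCaseManipulator( out );
    head.sink( new Transaction( "hello" ) );
    System.out.println( out.written.get( 0 ) );  // prints "HELLO"
  }
}
```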

I do like the idea of having the job control the structure and 
execution of the pipeline.  But in that model, how do you encourage 
multi-threaded behavior, so that each step in the pipeline can do its 
own thing at its own pace while all the data still gets processed in 
the right order?  With a single job controller doing the main loop, I 
don't see the parallelism.

Kevin

(I'll grab commons-sandbox this weekend.)


--
To unsubscribe, e-mail:   <mailto:[EMAIL PROTECTED]>
For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>
