> From: Marc Schier [mailto:[EMAIL PROTECTED]]
>
> Let me know how you intend to achieve it and I could do it...
I'd rather get paid to do it, but since no-one jumped on the
opportunity.... ;P
The idea came about when I was spending time trying to clean up
Fortress. Currently, Fortress binds a lot of stuff in the
container's context, the idea being that any container can ask
for that information and expect to see it. Unfortunately, we
can't define really solid contracts to make that enforceable.
Another issue is the fact that we have *services* bound to the
Context--not the ServiceManager.
The idea is to move the services out of the Context and into
the ServiceManager, where they belong. That includes the
CommandManager (which is part of the Event package).
We still keep the Stage == Component metaphor, but the twist
is to expose the Sinks through the ServiceManager. We would ask
for a "Command Queue", and we would get the Sink that the
StageManager knows belongs to the CommandManager. That would
allow other components that know the Sink interface to put new
events on the pipeline, as well as let the Stages access the
traditional components through the same mechanism.
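As a rough sketch of that lookup, assuming the Avalon
ServiceManager contract and the Excalibur Event Sink interface
(the "CommandQueue" key and the QueueUser class are made-up names):

import org.apache.avalon.framework.service.ServiceException;
import org.apache.avalon.framework.service.ServiceManager;
import org.apache.avalon.framework.service.Serviceable;
import org.apache.excalibur.event.Sink;
import org.apache.excalibur.event.SinkException;

public class QueueUser implements Serviceable
{
    private Sink m_commandQueue;

    // Look up the CommandManager's Sink by role key instead of
    // pulling it out of the Context. The key name is illustrative,
    // not a settled contract.
    public void service( ServiceManager manager ) throws ServiceException
    {
        m_commandQueue =
            (Sink) manager.lookup( Sink.class.getName() + "/CommandQueue" );
    }

    // Any component that holds the Sink can put new events on the
    // pipeline.
    public void fire( Object event ) throws SinkException
    {
        m_commandQueue.enqueue( event );
    }
}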
The only trouble spot is the EventHandler for the Stage. Currently,
Stages are required to implement EventHandler. In reality, it can be
a helper class; the important thing is to specify the proper meta
info. By separating the EventHandler out into a helper class, we can
also turn traditional component interfaces into Stages.
If we make the "Stage" a conceptual thing instead of a component,
we can convert any component with atomic methods to work in a
SEDA-like environment.
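A minimal sketch of that separation, assuming the Excalibur Event
EventHandler signatures (handleEvent/handleEvents); the Mailer
interface and MailerHandler helper are made up for illustration:

import org.apache.excalibur.event.EventHandler;

// A hypothetical traditional component--no SEDA awareness at all.
interface Mailer
{
    void send( Object message );
}

// The helper implements EventHandler on the component's behalf, so
// the "Stage" becomes the pairing of the two plus the proper meta
// info, rather than a single class.
class MailerHandler implements EventHandler
{
    private final Mailer m_mailer;

    MailerHandler( Mailer mailer )
    {
        m_mailer = mailer;
    }

    public void handleEvent( Object element )
    {
        m_mailer.send( element );
    }

    public void handleEvents( Object[] elements )
    {
        for ( int i = 0; i < elements.length; i++ )
        {
            handleEvent( elements[i] );
        }
    }
}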
For example, my InfoMover project (in apps) has three processing
components and one notification dispatcher:
interface Input
{
    Transaction getTransaction();
}

interface Output
{
    Result process( Transaction trans );
}

interface Manipulator
{
    Transaction process( Transaction trans );
}

interface Notifier
{
    void notify( Result result );
}
In a pure SEDA setup, the Input would be a Source and the Output
would be a Sink, while the Manipulator is a full-fledged Stage.
An external Input EventHandler would receive notification events
from the JobManager or incoming messages from network
connections. The Input stage would then convert that information
into Transaction objects, which it sends to the next sink. Since
the StageManager is in control, that can be remapped at any time.
Anyway, the Transaction moves on to the Manipulator EventHandler,
which then calls the process() method. The Transaction returned
from that call is then sent on to the next stage in the pipeline.
The same type of thing happens with the Output, which gives back a
Result. The Result is then sent to the Notifier.
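Here is a sketch of the Manipulator leg of that pipeline, again
assuming the Excalibur Event interfaces; the ManipulatorHandler
helper and its constructor wiring are hypothetical, since the
StageManager would really be the one supplying (and remapping) the
next-stage Sink:

import org.apache.excalibur.event.EventHandler;
import org.apache.excalibur.event.Sink;
import org.apache.excalibur.event.SinkException;

class ManipulatorHandler implements EventHandler
{
    private final Manipulator m_manipulator;
    private final Sink m_nextStage; // remappable by the StageManager

    ManipulatorHandler( Manipulator manipulator, Sink nextStage )
    {
        m_manipulator = manipulator;
        m_nextStage = nextStage;
    }

    public void handleEvent( Object element )
    {
        // call the atomic method and pass the returned Transaction
        // on to the next stage in the pipeline
        Transaction result = m_manipulator.process( (Transaction) element );
        try
        {
            m_nextStage.enqueue( result );
        }
        catch ( SinkException se )
        {
            // a real handler would route this to an error channel;
            // swallowing it keeps the sketch short
        }
    }

    public void handleEvents( Object[] elements )
    {
        for ( int i = 0; i < elements.length; i++ )
        {
            handleEvent( elements[i] );
        }
    }
}

The Output handler would look the same, except it would call
Output.process() and enqueue the Result for the Notifier.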
There are more things to deal with, but that is the general gist
of it.
In the meantime, I am going to look at what you did with your silk.jar.