Hi,

MetaInfo
--------

Let's assume we can come up with a central container. The first thing we 
need to do is externalize metadata for all the components. This is painful 
to maintain by hand, so we will need to use XDoclet to annotate our code. 
These annotations would essentially state the resources that the component 
requires and the resources that the component is capable of providing. A 
fully specced-out Java file would look something like the following. Note 
that it is vastly more annotated than a real example, so most files won't 
be this complex.

/**
  * @avalon.component name="mail-server-impl" version="1.2.4" lifecycle="thread-safe"
  * @avalon.service type="org.apache.avalon.MailServer"
  */
public class MailServerImpl
   implements MailServer, LogEnabled, Contextualizable, Composable
{
   private Logger m_logger;
   private Logger m_authLogger;
   private BlockContext m_blockContext;
   private SomeServiceInterface m_someServiceInterface;
   private SomeInterface m_someInterface;

   /**
    * @avalon.logger description="Base logger passed in is used for ... logging stuff"
    * @avalon.logger name="auth"
    *                description="Auth sub-category is used to log authentication information"
    */
   public void enableLogging( final Logger logger )
   {
      m_logger = logger;
      m_authLogger = logger.getChildLogger( "auth" );
   }

   /**
    * @avalon.context type="org.apache.phoenix.api.BlockContext"
    * @avalon.entry key="mBeanServer" type="javax.management.MBeanServer" optional="false"
    */
   public void contextualize( final Context context )
      throws ContextException
   {
      m_blockContext = (BlockContext)context;
      final MBeanServer mbs = (MBeanServer)m_blockContext.get( "mBeanServer" );
      doMBeanRegister( mbs );
   }

   /**
    * @avalon.dependency type="org.apache.cornerstone.SomeServiceInterface"
    * @avalon.dependency role="quirky" type="org.apache.SomeInterface" optional="true"
    */
   public void compose( final ComponentManager cm )
      throws ComponentException
   {
      m_someServiceInterface = (SomeServiceInterface)cm.lookup( SomeServiceInterface.ROLE );
      if( cm.hasComponent( "quirky" ) )
      {
         m_someInterface = (SomeInterface)cm.lookup( "quirky" );
      }
   }
}

Of course the above is massively complex, but it demonstrates the possible 
annotations that could exist. The metainfo system that is currently under 
development also allows an extensible set of attributes to be associated 
with the various resources. So in theory, if you needed container-specific 
metadata, you could associate it with the different features to achieve 
that extension.

For example, if Cocoon had a transformer X that only transformed documents 
that conformed to "http://xml.apache.org/cocoon/announcement.dtd", then you 
could annotate the class to indicate this. If it also spat out another DTD 
you could add annotations for this via something like

/**
  * @avalon.component cocoon:input-dtd="http://xml.apache.org/cocoon/announcement.dtd"
  * @avalon.component cocoon:output-dtd="http://xml.apache.org/cocoon/other.dtd"
  */
class XTransformer implements Transformer { ... }

The Cocoon container could then be extended to specially deal with such 
attributes. Cocoon could verify that the input/output is chained correctly 
and that the whole sitemap, once assembled, is valid.

With enough annotations you could almost validate the entire sitemap prior 
to deploying it, which would probably save a lot of headaches over time.
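
To make that concrete, a Cocoon-side check might look something like the 
following sketch. This is purely illustrative (the ComponentInfo interface 
and getAttribute() method are stand-ins for whatever the metainfo API ends 
up exposing, not existing classes):

// Illustrative only: ComponentInfo and getAttribute() are hypothetical
// stand-ins for the metainfo accessors, not an existing Cocoon/Avalon API.
interface ComponentInfo
{
   String getName();

   /** Return the named attribute, or null if it is absent. */
   String getAttribute( String key );
}

class SitemapValidator
{
   /**
    * Check that each component's declared output DTD matches the
    * declared input DTD of the next component in the pipeline.
    */
   void validateChain( final ComponentInfo[] pipeline )
   {
      for( int i = 0; i < pipeline.length - 1; i++ )
      {
         final String produced = pipeline[ i ].getAttribute( "cocoon:output-dtd" );
         final String expected = pipeline[ i + 1 ].getAttribute( "cocoon:input-dtd" );
         if( produced != null && expected != null && !produced.equals( expected ) )
         {
            throw new IllegalStateException( "Component " + pipeline[ i ].getName() +
               " outputs " + produced + " but " + pipeline[ i + 1 ].getName() +
               " expects " + expected );
         }
      }
   }
}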

So the first part of our strategy for moving towards a single container is 
creating a generic MetaInfo infrastructure.

Component/Assembly Profile
--------------------------

Where MetaInfo gives information about the type of a component, the 
Profile describes information about a component in a particular 
application/assembly. So it says that Component A has a dependency on 
Component B and has configuration X. A set of Component Profiles makes up 
an Assembly Profile.

So usually a Profile is made up of something like

Component A of Type P, uses Component B and C, and has Configuration X
Component B of Type Q, uses Component C, and has Parameters Y
Component C of Type R, uses no Components, and has Configuration Z

The actual arrangement is partially container specific but the general form 
is common.
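
To give that general form a concrete shape, a minimal ComponentProfile 
could carry little more than a name, the implementation type, a 
role-to-provider mapping and a configuration. The class below is only a 
sketch of what the shared Profile classes could look like, not the actual 
containerkit code:

// Sketch only: the field set here is an assumption about what the shared
// Profile classes could hold, not the real containerkit classes.
import java.util.Map;

class ComponentProfile
{
   private final String m_name;           // "A", "B", "C" in the listing above
   private final String m_typeClassname;  // implementation class of Type P, Q or R
   private final Map m_dependencies;      // role name -> name of providing component
   private final Object m_configuration;  // Configuration or Parameters object

   public ComponentProfile( final String name,
                            final String typeClassname,
                            final Map dependencies,
                            final Object configuration )
   {
      m_name = name;
      m_typeClassname = typeClassname;
      m_dependencies = dependencies;
      m_configuration = configuration;
   }

   public String getName() { return m_name; }
   public String getTypeClassname() { return m_typeClassname; }
   public Map getDependencies() { return m_dependencies; }
   public Object getConfiguration() { return m_configuration; }
}

An Assembly Profile would then just be a named collection of these.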

It is the Profile that gives the container the ability to validate the 
application before instantiating it. ie You can make sure that all the 
dependencies are valid, configuration is valid according to a schema etc.

So after the Profile is validated, the application should start up in 99% 
of cases, except when runtime errors occur.
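
A first cut at that validation could be as simple as checking that every 
dependency names another component in the same assembly; schema validation 
of the configuration would layer on top of this. Again only a sketch, 
assuming the hypothetical ComponentProfile shape from the previous example:

// Sketch: assumes the hypothetical ComponentProfile class shown earlier.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

class AssemblyValidator
{
   /** Fail fast if any component names a provider that is not in the assembly. */
   void validate( final ComponentProfile[] profiles )
   {
      final Map byName = new HashMap();
      for( int i = 0; i < profiles.length; i++ )
      {
         byName.put( profiles[ i ].getName(), profiles[ i ] );
      }

      for( int i = 0; i < profiles.length; i++ )
      {
         final Iterator providers = profiles[ i ].getDependencies().values().iterator();
         while( providers.hasNext() )
         {
            final String provider = (String)providers.next();
            if( !byName.containsKey( provider ) )
            {
               throw new IllegalStateException( "Component " + profiles[ i ].getName() +
                  " depends on missing component " + provider );
            }
         }
      }
   }
}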

Component Entries
-----------------

When a container is managing a component it creates an Entry per component. 
The Entry manages all the runtime information associated with a component. 
This runtime information is container specific. If the component is pooled 
then the Entry will contain a pool of instances. If the container tracks 
resources, the Entry will contain a list of the resources the component 
uses. If the component is accessed via proxies, the Entry will list the 
proxies.
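
For example, an Entry in a pooling container might look roughly like the 
sketch below. The class and its trivial pool are made up for illustration 
(and reuse the hypothetical ComponentProfile from earlier), since this 
part is deliberately container specific:

// Sketch of a container-specific Entry for a pooling container.
// PooledComponentEntry is hypothetical; a real container would also run
// the component through the normal lifecycle rather than a bare newInstance().
import java.util.ArrayList;
import java.util.List;

class PooledComponentEntry
{
   private final ComponentProfile m_profile;
   private final List m_idleInstances = new ArrayList();

   PooledComponentEntry( final ComponentProfile profile )
   {
      m_profile = profile;
   }

   ComponentProfile getProfile()
   {
      return m_profile;
   }

   synchronized Object acquire()
      throws Exception
   {
      if( !m_idleInstances.isEmpty() )
      {
         // reuse an idle instance from the pool
         return m_idleInstances.remove( m_idleInstances.size() - 1 );
      }
      // otherwise create a new instance (lifecycle processing omitted)
      return Class.forName( m_profile.getTypeClassname() ).newInstance();
   }

   synchronized void release( final Object instance )
   {
      m_idleInstances.add( instance );
   }
}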


The Process of Assembly
-----------------------

The process of assembly is creating an application Profile by wiring 
together components.

In the past some containers, such as Phoenix, went the path of requiring 
the assembler to explicitly specify all the components and resolve all the 
dependencies. ie If a component has a dependency then the assembler must 
specify another component which will satisfy that dependency. So you have 
to manually assemble the application. This can be a bit error prone, 
especially when you need to take into consideration such aspects as 
thread-safety, the inability to have recursive dependencies and 
potentially many, many components.

Other containers may auto-assemble the application. For example, I believe 
Merlin does something like the following (though I may not be 100% 
accurate, it is close enough). When you start an "application" you declare 
a component that is not fully resolved. Merlin then kicks in and tries to 
assemble a full Profile. For every unsatisfied dependency it scans the 
Profiles of all the Components and sees if there are any candidates that 
can satisfy the dependency. If there is one candidate that satisfies the 
dependency then it is used. If there are multiple candidates that satisfy 
the dependency then a heuristic is employed to select one of the 
candidates.

The heuristic is currently governed by a combination of things, I believe. 
It has a policy attribute in the MetaInfo of the DependencyDescriptor that 
it can use, it also makes sure that no circular dependencies are created, 
and in reality the evaluation process could include oodles more variables. 
So let's generalize it to the following interface:

public interface DependencyEvaluator
{
   int evaluate( ComponentMetaData consumer,
                 DependencyDescriptor dependency,
                 ComponentMetaData candidate );
}

Each candidate is passed through the evaluator and a score collected. The 
candidate with the highest score "wins" and is selected as the provider 
for the dependency. Anyway, after walking the components, Merlin 
eventually builds up a full application Profile. Using this mechanism the 
assembly requires far less work by the assembler, as the runtime will make 
educated guesses on the best possible dependency for anything that is not 
fully specified.
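
In other words, the selection step amounts to something like the loop 
below. ComponentMetaData and DependencyDescriptor are the types named in 
the interface above; the selector class itself and the "negative score 
means unacceptable" convention are assumptions for the sake of the sketch:

// Sketch of candidate selection built on the DependencyEvaluator above.
// The negative-score convention is an assumption, not part of the interface.
class CandidateSelector
{
   ComponentMetaData select( final DependencyEvaluator evaluator,
                             final ComponentMetaData consumer,
                             final DependencyDescriptor dependency,
                             final ComponentMetaData[] candidates )
   {
      ComponentMetaData best = null;
      int bestScore = Integer.MIN_VALUE;
      for( int i = 0; i < candidates.length; i++ )
      {
         final int score = evaluator.evaluate( consumer, dependency, candidates[ i ] );
         if( score >= 0 && score > bestScore )
         {
            bestScore = score;
            best = candidates[ i ];
         }
      }
      // null means no candidate satisfied the dependency
      return best;
   }
}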

In reality Fortress is in a similar situation except that its mapping is 
more structured and does not currently follow metadata (as no such metadata 
exists).

Handlers
--------

Each component may have what we call a different "lifestyle". ie 
Components may be single-client, single-threaded, pooled, "singleton" etc. 
For each of these different lifestyles we are going to need a slightly 
different architecture via which to acquire the component.

The component may still be passed through the standard lifecycle process 
and described by a standard Profile and standard MetaInfo, but it will 
have a different handler. The handler will enforce the different lifestyle.

ie If the component is sharable it will hand out the same instance to 
multiple consumers. If the component is not sharable between multiple 
consumers then it will hand out different instances to different consumers.
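
A minimal handler contract, plus one shared and one per-lookup 
implementation, might look like the following. The interface and both 
classes are sketches of the idea, not the actual Fortress or Phoenix 
handler code:

// Sketch only: ComponentHandler and both implementations are illustrative.
interface ComponentHandler
{
   /** Hand out an instance according to the component's lifestyle. */
   Object get()
      throws Exception;

   /** Return an instance the consumer has finished with. */
   void put( Object component );
}

/** Shared lifestyle: every consumer sees the same instance. */
class ThreadSafeComponentHandler
   implements ComponentHandler
{
   private final Object m_instance;

   ThreadSafeComponentHandler( final Object instance )
   {
      m_instance = instance;
   }

   public Object get()
   {
      return m_instance;
   }

   public void put( final Object component )
   {
      // nothing to do, the single instance is shared
   }
}

/** Non-shared lifestyle: every lookup gets a fresh instance. */
class FactoryComponentHandler
   implements ComponentHandler
{
   private final Class m_componentClass;

   FactoryComponentHandler( final Class componentClass )
   {
      m_componentClass = componentClass;
   }

   public Object get()
      throws Exception
   {
      return m_componentClass.newInstance();
   }

   public void put( final Object component )
   {
      // discard, instances are not reused
   }
}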

The handlers are one of the main places where the containers will differ. 
Some will offer pooling and advanced resource management. Others will 
proxy access to components, maybe offering interceptors etc.

Implementation Overview
-----------------------

So how do we go about implementing this?

First of all, the basic MetaInfo structure can be shared between 
containers with ease. No container should extend these classes (in fact 
they are final so they can't). Container-specific attributes can be stored 
for each different feature. There is currently no standard for these extra 
attributes. Eventually it may be prudent to adopt some "standard" 
attributes, but for now it is mostly free form that we can use to 
experiment with stuff.

The Component/Application Profile classes provide a basis for all the 
Profile information. It is possible that some containers will extend these 
classes to provide specific information relevant only to the particular 
container. However, for many containers (ie Phoenix and Fortress) the base 
Profile classes should be sufficient.

Almost every container will have different implementations for 
ComponentEntry and the different ComponentHandlers. The implementation of 
these features effectively define how the container works.

Shared Container Parts
----------------------

There is significant overlap in the code for writing the container. So how 
do we go about sharing it all?

* All containers can share the metainfo code (from containerkit)
* All containers can share the lifecycle processing code (from containerkit)
* Dependency traversal can be shared by all containers (from containerkit)
* Merlin and Fortress should definitely share the "auto-assembly" utility 
classes.
* Phoenix and Merlin can share the Handler/ComponentEntry part of the 
container.

There's the possibility that we may be able to share some of the other 
bits, but that's something we can think about later.

Benefits of all this?
---------------------

The biggest benefit of all this is that we will finally have the ability 
to write components and transparently deploy them into other containers 
with very little effort. It is likely that there will still be some 
container-specific jazz in some components, but we can get at least 90% 
cross-container compatibility.

So that means Myrmidon will be able to use Cocoon services (yea!), Phoenix 
will be able to use Merlin services and all the other combinations.

Containers will then be differentiated by their:
* features (pooling, auto-assembly, isolation, multi-app support, 
hierarchical support)
* resource cost (how long to start up, memory usage etc)
* deployment format (ie Phoenix has .sars while other containers use the 
ClassLoader they are already in)
* assembly descriptors (ie how you specify that components are wired 
together. Compare the sitemap vs the assembly.xml, config.xml, 
environment.xml format of Phoenix vs the single file for Myrmidon)

Conclusion
----------

It is not perfect, and we have not got the grail, but it is as close as we 
are going to get right now. We specify just enough that we can achieve 90% 
component portability between containers, but we still leave room for 
different containers being customized for different things and 
specializing in different areas.

Thoughts? 

