Darren,

I read through this thread and your docs on the wiki, but I'd appreciate it if 
you could answer a couple questions for me:

-When creating an extension, such as a DataStoreProvider, the extension is
currently added to the list of providers on the appropriate bean, such as:
<bean id="dataStoreProviderManager"
        
class="org.apache.cloudstack.storage.datastore.provider.DataStoreProviderManagerImpl">
    <property name="providers">
      <list merge="true">
        <ref bean="cloudStackPrimaryDataStoreProviderImpl"/>
        <ref local="cloudStackImageStoreProviderImpl"/>
        <ref local="s3ImageStoreProviderImpl"/>
        <ref local="swiftImageStoreProviderImpl"/>
        <ref local="solidFireDataStoreProvider"/>
      </list>
    </property>
  </bean>

So, how do we add our bean to that list?

-There are a number of extensions that are not currently listed, such as 
DataMotionStrategy, SnapshotStrategy, etc. Is it a problem that those are 
omitted from https://cwiki.apache.org/confluence/display/CLOUDSTACK/Extensions?

-I know somewhere in this thread you talked about the order of beans, but
can you document on the wiki how the ordering or precedence works? For
example, if I create a DataMotionStrategy, how do I ensure that my
strategy's canHandle() method is invoked before the
AncientDataMotionStrategy's?

-Is there any progress on modularizing commands.properties and the log4j 
configuration?

Thanks,
Chris
--
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms - Cloud Solutions
Citrix, Cisco & Red Hat

On Sep 24, 2013, at 2:35 AM, Daan Hoogland
<daan.hoogl...@gmail.com> wrote:

Touching on thread hijack, but how does this work relate to the css
modularization going on at the moment as well? It is proposed there to
do the merging at build time. Try to beat me if I am too much off topic,
Daan

On Tue, Sep 24, 2013 at 12:43 AM, Darren Shepherd
<darren.s.sheph...@gmail.com> wrote:
Ah right, okay.  So you're talking about the order of the adapters.
Currently that is maintained as the order in the AdapterList in the
componentContext.xml.  So what I've done is that extensions get added
to "Registries."  Registries that need to be ordered can specify an
ordering configuration variable so that when the extensions are found
they are added to the list in a specific order.  So the registry
definition for the auth stuff looks something like

<bean id="userAuthenticatorsRegistry"
       
class="org.apache.cloudstack.spring.lifecycle.registry.ExtensionRegistry">
       <property name="orderConfigKey" value="user.authenticators.order" />
       <property name="excludeKey" value="user.authenticators.exclude" />
       <property name="orderConfigDefault"
value="SHA256SALT,MD5,LDAP,PLAINTEXT" />
   </bean>

So one can use user.authenticators.order to change the order and
user.authenticators.exclude to exclude certain extensions from being
used.  The default value is also specified in that example.
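
A plugin then just declares its authenticator bean in its own module
context; when that context initializes, the registry discovers it and
slots it into the list by its name.  Roughly like this (the class and
module names below are made up for illustration, and the exact wiring
may still shift on my branch):

<!-- e.g. resources/META-INF/cloudstack/my-authenticator/spring-context.xml in the plugin jar -->
<bean id="myUserAuthenticator" class="org.example.auth.MyUserAuthenticator" />

If that authenticator calls itself MYAUTH, then setting
user.authenticators.order=MYAUTH,SHA256SALT,MD5,LDAP,PLAINTEXT moves it
to the front, and listing it in user.authenticators.exclude drops it
entirely, without editing any packaged XML.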

Darren

On Mon, Sep 23, 2013 at 3:28 PM, Kelven Yang
<kelven.y...@citrix.com> wrote:
I understand that loading order can be completely solved by Spring with
dependency injection, either within a flat context or a hierarchical
context structure. But some sibling plugins in CloudStack do require a
flexible way to declare ordering at runtime, for example allocator
plugins or authenticator plugins. The order itself is currently
designated in their parent manager, which references an ordered adapter
list.

This ordering semantic has nothing to do with dependency injection, but
unfortunately previous versions of CloudStack mixed the requirement into
the injection framework, and there is business logic relying on it.
Having the parent manager compose the ordering assumes a compile/load
time binding to the subject plugins, which we don't want to see in the
future since we want drop-in jars for plugins. What is our answer for
this?

Kelven


On 9/23/13 1:40 PM, "Darren Shepherd" <darren.s.sheph...@gmail.com>
wrote:

Siblings have no relationship to each other really.  The load order
doesn't matter as one sibling has no visibility to another.  Child
contexts are pretty much for plug-ins.  Core components of ACS will
live in the core context as they are all interdependent.

I will explain the whole Spring lifecycle though and how it works in the
model.

Contexts are initialized from parent to child.  So the topmost parent
context is initialized first, then its children, then its grandchildren.
When a context is initialized, the following happens, in this order:

1) All beans are instantiated
2) Dependencies are wired up (@Inject)
3) @PostConstruct is called for all beans in dependency-graph order
4) Extensions are discovered and registered (so a NetworkElement, for
example, will be discovered and registered as a NetworkElement)
5) configure() on all ComponentLifecycle beans is called in
getRunLevel() order

Once all modules have been initialized in this fashion, start() is
called on all ComponentLifecycle beans in parent-first, child-last
order, following the getRunLevel() order for the beans in each context.

Darren


On Mon, Sep 23, 2013 at 11:34 AM, Kelven Yang <kelven.y...@citrix.com>
wrote:
Darren,

Due to internal release work, I haven't had a chance to review it, but
I'm planning to do so later today and tomorrow. Before that, I have one
question about the hierarchy-organized context structure: could you give
the ML an example of how two sibling plugins declare their runtime load
order? I'd like to get a feeling for how hard or easy it is for
developers to do things that involve structural change under the new
hierarchy mode.

Kelven


On 9/23/13 12:19 AM, "Darren Shepherd" <darren.s.sheph...@gmail.com>
wrote:

So how do I proceed forward on this?  I basically already have this all
working.  I'd really like to get it all committed as soon as possible if
there are no objections to the approach.  The sooner the better.

I already have a bunch of patches pending on review board that change a
bunch of random but related things.  I need all of those patches
committed before I can submit the next round of patches.  I have about 4
or 5 more.  Everything will get committed, and then there will be one
final small patch that flips some config files to enable all of this.
All code changes will work in both a modular and a monolithic spring
context, so it will be really easy to turn this off if suddenly
something goes terribly wrong.

So I need people to agree this is good and then start
reviewing/committing my patches.  I really want to get this wrapped up
this week if I can.

Darren

On Sep 18, 2013, at 7:06 PM, Darren Shepherd
<darren.s.sheph...@gmail.com> wrote:

Yes, this is one of the many things this is trying to address.  Adding a
plugin should be plopping your jar in a directory and restarting.  You
pointed out two things I didn't think about though, commands.properties
and the log4j xml.  Let me think about those two as they should be
addressed also.  Basically you should never have to edit a file that is
packaged as part of ACS.  You should only add your artifacts to some
directory, ideally just a jar.

Darren


On Wed, Sep 18, 2013 at 5:46 PM, SuichII, Christopher
<chris.su...@netapp.com> wrote:
I've been following this conversation somewhat and would like to
throw
in some background as a plugin writer.

One thing that concerns me in the current plugin model is the number
of XML/text files that need to be edited to deploy my plugin.
-applicationContext must be edited to add our PluginManagerImpl.
-commands.properties must be edited to include the permissions for the
APIs we contributed.
-componentContext.xml & nonossComponentContext.xml must be edited to
add our Storage Subsystem Provider API.
-log4j-cloud.xml must be edited to ensure that our logger is enabled
and logging at our necessary default level.

I know our situation is a bit different from the current plug-in model,
but I think it is something we, as a community, are going to begin
seeing more of. For a variety of reasons that I won't get into right
now, our plugin will be closed source and kept separate from the ACS
source tree. We want our users to be able to simply drop our jar file
into the CS directory, or run an installer, and have it picked up by the
MS upon a restart.

It sounds like what you are proposing here would be very beneficial
to
plugins that are targeting a deployment model like this.

Is this something you're looking/hoping/would like to solve, Darren?

-Chris
--
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms - Cloud Solutions
Citrix, Cisco & Red Hat

On Sep 18, 2013, at 6:44 PM, Darren Shepherd
<darren.s.sheph...@gmail.com> wrote:

I'm not a committer


On Wed, Sep 18, 2013 at 3:24 PM, Frank Zhang
<frank.zh...@citrix.com> wrote:

Well, the code explains more than words.
It seems the only extra work is adding a property file that specifies
the parent context and the current context name; it's not very complex.
BTW: any reason for working on a repo outside ACS?

-----Original Message-----
From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
Sent: Wednesday, September 18, 2013 2:43 PM
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] Modularize Spring

If you want to see this all working you can just fetch the
"no-at-db4"
branch at https://github.com/ibuildthecloud/cloudstack.git

A plugin composes multiple modules.  If modules are siblings they can't
inject from each other.  But a plugin can augment another module if it
chooses to.  The reality is that the core CloudStack code is a tangled
mess of dependencies, such that most of the core code can't be
modularized as it stands.  So there exists a context towards the top of
the hierarchy called "core" that a lot of jars contribute to.  Here is
the full hierarchy right now.  I'll probably rename a bunch of things,
but this gives you an idea.

bootstrap
system
  core
    allocator
      allocator-server
      planner
        api-planner
        baremetal-planner
        explicit-dedication
        host-anti-affinity
        implicit-dedication-planner
        server-planner
        user-concentrated-pod-planner
      random-allocator
    api
      acl-static-role-based
      rate-limit
      server-api
      user-authenticator-ldap
      user-authenticator-md5
      user-authenticator-plaintext
      user-authenticator-sha256salted
    backend
      alert-adapter-server-backend
      compute
        alert-adapter-server-compute
        baremetal-compute
        fencer-server
        investigator-server
        kvm-compute
        ovm-compute
        server-compute
        xenserver-compute
      network
        baremetal-network
        elb
        midonet
        nvp
        ovs
        server-network
        ssp
      storage
        alert-adapter-server-storage
        allocator-storage
        baremetal-storage
        secondary-storage
        server-storage
        storage-image-default
        storage-image-s3
        storage-image-swift
        storage-volume-default
        storage-volume-solidfire
        template-adapter-server
    discoverer
      baremetal-discoverer
      discoverer-server
      ovm-discoverer
      xenserver-discoverer



If you look at the baremetal hypervisor plugin, it is pretty
cross-cutting to most of ACS.  So the modules it contributes to are as
follows:


resources/META-INF/cloudstack/baremetal-storage/spring-context.xml
resources/META-INF/cloudstack/baremetal-compute/spring-context.xml
resources/META-INF/cloudstack/baremetal-discoverer/spring-context.xml
resources/META-INF/cloudstack/core/spring-baremetal-core-context.xml
resources/META-INF/cloudstack/baremetal-planner/spring-context.xml
resources/META-INF/cloudstack/baremetal-network/spring-context.xml

So it creates child contexts of storage, compute, network, planner, and
discoverer to add its extensions where it needs to be.  Additionally,
you'll notice it adds some beans to the core context from the file
resources/META-INF/cloudstack/core/spring-baremetal-core-context.xml.
This is because it has some manager class that is used by multiple
contexts.
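
Each of those files is just an ordinary Spring XML fragment with
whatever beans that module contributes, and next to it sits a small
properties file naming the module and its parent so the bootstrap knows
where to hang it in the hierarchy.  A rough sketch only (the bean class
below is made up, and the property file name and keys are what I have on
my branch right now, so they may still change):

<!-- resources/META-INF/cloudstack/baremetal-discoverer/spring-context.xml -->
<bean id="bareMetalDiscoverer" class="org.example.baremetal.BareMetalDiscoverer" />

<!-- plus, alongside it, a module.properties along the lines of:
       name=baremetal-discoverer
       parent=discoverer -->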

Frank, I understand the fear that we are getting too complex, but do you
have some other suggestion?  I don't like the idea of one gigantic
spring context.  So I feel I am making it as simple as I can while
maintaining some order.  People just need to create one or more spring
xml files and a properties file that says where to put it in the
hierarchy.

Additionally, by forcing people to put beans in certain modules, it
forces them to think about what the role of the code is.  For example,
today in ACS the *ManagerImpl classes are a huge mess.  They implement
too many interfaces and the code crosses too many architectural
boundaries.  It's about time we start splitting things up to be more
maintainable.

If you have some time, please check out what I have on github.

Darren


On Wed, Sep 18, 2013 at 1:56 PM, Frank Zhang
<frank.zh...@citrix.com>
wrote:

I am not against boundaries, I am just against making things
unnecessarily complex to enable boundaries.
If we are going this way, I hope we can make it as transparent as
possible to developers. That means, as a developer, all I need to do for
a plugin is 1) provide my separate spring xml, 2) inject the beans I
want (valid beans) into my bean and code the business logic, 3) compile
to a jar and put it in some place designated by CloudStack. That's it.

I raise this topic because I have seen some projects create boundaries
that make things horribly complex. And sometimes developers are hurt by
wrong boundaries; as a result, to overcome these limitations people
write lots of ugly code, which makes things even worse.

However, I still worry about whether we can make things that simple.
For example, we may have an orchestration context that contains major
beans needed by almost every plugin; this context can easily be set as
the parent context for all plugin contexts at bootstrap. However, if
plugin A needs to access some bean defined in plugin B, given they are
siblings, how does the plugin framework resolve the dependency?

-----Original Message-----
From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
Sent: Wednesday, September 18, 2013 11:53 AM
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] Modularize Spring

I'm not for OSGi either, but contexts are quite useful and will lead to
better things.  First off, we don't want one gigantic spring XML config
file like we have today.  I think we can all agree on that.  So each
plugin will have to supply its own XML.  So the obstacles you mention
will largely be just that for people.

With Spring it is really simple to just inject dependencies and cross
architectural boundaries.  It's currently everywhere in ACS.  You can't
just say we should review code and make sure nobody does bad things.  A
little bit of framework to enforce separation is a good thing.  But I'm
guessing you will disagree with me there.

Here are some random points on why contexts are good.  Say I want to
use Spring AOP or Spring TX in my plugin.  With your own context you can
ensure that you won't screw with anybody else's code by accidentally
having your pointcut match their bean.  You don't have to worry about
bean name conflicts.  If two config files specify bean X, Spring will
gladly just use the last one.  I've already found multiply defined beans
in ACS, but things still just happen to work.
Having multiple contexts also defines initialization order.  We can
ensure that the framework is loaded and ready before child contexts are
loaded and started.  (We kind of do this today with ComponentLifeCycle,
but it's a hack in my mind.)  Additionally, when things start you will
know we are loading the context for "crappy plugin X".  If spring fails
to initialize, the issue is right there.  Today, if spring fails to
start, it could be one of over 500 beans causing the weird problem.  The
list goes on and on.
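
To make the bean-name point concrete, here is a contrived sketch (the
class names are made up): defined in one flat context, the second
"helper" bean would silently replace the first; defined in sibling
module contexts, each module keeps its own.

<!-- META-INF/cloudstack/storage-volume-default/spring-context.xml -->
<bean id="helper" class="org.example.storage.DefaultVolumeHelper" />

<!-- META-INF/cloudstack/storage-volume-solidfire/spring-context.xml -->
<bean id="helper" class="org.example.storage.SolidFireVolumeHelper" />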

Finally, this is the big one and why I really want contexts.  I have
some notes on the wiki [1] that you might want to read through.
Basically I want to get to a more flexible deployment model that allows
both a single monolithic JVM as today and also a fully distributed
system.  Having contexts in a hierarchy will enable me to accomplish
this.  Selecting which contexts are loaded at runtime will determine
what role the JVM takes on.  The contexts also help people better
understand how the distributed architecture will work, when we get
there.

Frank, trust me, I hate complex things.  I don't want OSGi, classloader
magic, etc.  But I do like organization and a little bit of framework so
that people don't accidentally shoot themselves in the foot.  I
personally like knowing that I couldn't have screwed something up,
because the framework won't even allow it.  If we separate everything as
I want today, and then tomorrow we say this is way too much overhead,
moving to a flat context is simple.  Don't think we are on some slippery
slope to classloaders and dependency hell.

Darren

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Nothing+to+see+here...#Nothingtoseehere...-DeploymentModels



On Wed, Sep 18, 2013 at 11:22 AM, Frank Zhang
<frank.zh...@citrix.com> wrote:

What's the point in using a separate spring context per plugin?
A separate class loader is the thing I hate most in OSGi, and I am
afraid we are heading the same way.
Frankly speaking, I have never seen the benefits of this *separate*
model. Our project (or most projects) is not like Chrome, which has to
create a sandbox for extensions in order to keep a bad plugin from
screwing up the whole browser (however, I still see bad plugins screw up
my Chrome anyway).
Projects like CloudStack that use plugins to decouple the architecture
should not impose much isolation on plugin writers; the point about
preventing wrong use of some components does not make much sense to me.
If a plugin does not follow the guide (if we have one) we should kick it
out, instead of making obstacles for the 99% of good people.



-----Original Message-----
From: Darren Shepherd [mailto:darren.s.sheph...@gmail.com]
Sent: Wednesday, September 18, 2013 10:33 AM
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] Modularize Spring

Right, component isn't a thing.  I probably shouldn't use that term.  I
want to standardize on the naming convention of plugin, module, and
extension.  It is explained a bit on the wiki [1] but I'll try to do a
little better job here.  So a plugin is basically a jar.  A jar contains
multiple modules.  A module ends up being a spring application context
that composes multiple configuration files.  Modules are assembled into
a hierarchy at runtime.  Extensions are implementations of interfaces
that exist in a module.  A maven project produces a jar, so a plugin
ends up being a maven project also.
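
Concretely, a plugin jar ends up looking something like this on disk
(the plugin and module names here are made up for illustration; the
properties file is the bit that tells the runtime where the module sits
in the hierarchy):

my-plugin.jar
  META-INF/cloudstack/my-module/module.properties
  META-INF/cloudstack/my-module/spring-context.xml
  org/example/myplugin/... (the plugin's classes and extensions)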

So currently we don't have a strong definition of a plugin, and I hope
to address that.

Darren

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Plug-ins%2C+Modules%2C+and+Extensions


On Wed, Sep 18, 2013 at 4:25 AM, Daan Hoogland
<daan.hoogl...@gmail.com> wrote:

Sounds great Darren,

By component, do you mean a maven project or some larger chunk like a
distribution package? (Did I miss this definition somewhere, or do we
define the components now?)

regards,
Daan

On Wed, Sep 18, 2013 at 12:10 AM, Darren Shepherd
<darren.s.sheph...@gmail.com> wrote:
Currently ACS code is fairly modular in that you can add plug-ins to
ACS to extend most functionality.  Unfortunately ACS is not packaged in
a modular way.  It is still delivered essentially as one large unit.
There are many reasons for this, but one large barrier to modularizing
ACS is that the Spring configuration is managed as one large unit.

I propose that we break apart the Spring XML configuration such that
each component contributes its own configuration.  Additionally, each
component will be loaded into its own Spring ApplicationContext such
that its beans will not conflict with the wiring of other beans in ACS.
This change will lay the foundation for a richer plugin model and,
additionally, a more distributed architecture.

The technical details for this proposal can be found on the wiki [1].

Darren

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Modularize+Spring







