Re: [PROPOSAL] Using new Workspace in samples/calculator-distributed Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-18 Thread Simon Laws
I got the code done last week but I'm only just now finishing up the
build.xml file. So, as promised, here's what I did (a bit of a long post,
but I think I got it all).

Firstly, to get more familiar with the workspace, I followed Sebastien's
instructions from the Domain/Contribution repository thread [1] and ran up
the workspace to have a play.

You can use the latest tutorial modules to see the end-to-end integration
with the following steps:

1. Start tutorial/domain/.../LaunchTutorialAdmin.

2. Open http://localhost:9990/ui/composite in your Web browser. You should
see all the tutorial contributions and deployables that I've added to that
domain.

3. Click the feeds in the composite install image to see the resolved
composites.

4. Start all the launch programs in tutorial/nodes; you can start them in
any order you want.

5. Open tutorial/assets/tutorial.html in your Web browser and follow the
links to the various store implementations.

The workspace allows you to organize the relationships between
contributions/composites, the domain composite that describes the whole
application, and the nodes that will run the composites. It processes all of
the contributions that have been provided, the composites they contain, and
the association of composites with the domain and with nodes, and produces
fully resolved composites in terms of the contributions required to run
them and the service and reference URIs that they will use.

This resolved composite information is available from the workspace through
composite-specific feeds. From these feeds you can get URLs to the required
contributions and the composite. In fact, what happens each time you do a GET
on the composite URL is that all of the composites assigned to the domain
are read and the domain composite is built in full using the composite
builder. The individual composite that was requested is then extracted and
returned. In this way policy matching, cross-domain wiring, autowiring etc.
are managed at the domain level using the same code used by the nodes to
build individual composites.
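
For illustration, here is a minimal Java sketch of fetching one of these
composite feeds over HTTP. The host and port match step 2 above, but the
exact feed path is an assumption, not the admin's real URL scheme:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch: GET a resolved composite from a workspace feed and print it.
public class FetchComposite {
    public static void main(String[] args) throws Exception {
        // Hypothetical feed path; the real URLs come from the admin UI.
        URL feed = new URL("http://localhost:9990/composite");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(feed.openStream(), "UTF-8"));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line); // the fully built, serialized composite
        }
        in.close();
    }
}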

This is very similar in layout to what happens with our current
domain/node implementation, where you add contributions to the domain and
nodes run the resulting composites. However, there is a big difference here:
the implication is now that the domain is fully configured
before you start the nodes, as the workspace is responsible for configuring
service/reference URIs based on prior knowledge of node configurations.
Previously you could start nodes and have them register with the domain
without having to provide this knowledge manually to the domain. I guess
automatic node registration could be rolled into this if we want.

In making the calculator-distributed sample work I wanted to be able to test
the sample in our Maven build, so having a set of HTTP forms (which the
workspace does provide) to fill in is interesting but not that useful. So
I immediately went looking for the files that the workspace writes, to see if
I could create those and install them pre-configured, ready for the test to
run. I used the tutorial files as templates and made the following files to
match the calculator-distributed scenario.

Firstly there is a file (workspace.xml) [2] that describes each
contribution's location and URI:

<workspace xmlns="http://tuscany.apache.org/xmlns/sca/1.0"
           xmlns:ns1="http://tuscany.apache.org/xmlns/sca/1.0">
  <contribution location="file:./target/classes/nodeA" uri="nodeA"/>
  <contribution location="file:./target/classes/nodeB" uri="nodeB"/>
  <contribution location="file:./target/classes/nodeC" uri="nodeC"/>
  <contribution location="file:./target/classes/cloud"
                uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"/>
</workspace>
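
As a quick sanity check, a test could list the entries in this file with a
few lines of JDK-only DOM code; a minimal sketch, assuming workspace.xml is
in the working directory:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch: print each contribution's uri and location from workspace.xml.
public class ListContributions {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("workspace.xml"));
        NodeList entries = doc.getElementsByTagNameNS(
                "http://tuscany.apache.org/xmlns/sca/1.0", "contribution");
        for (int i = 0; i < entries.getLength(); i++) {
            Element c = (Element) entries.item(i);
            System.out.println(c.getAttribute("uri") + " -> "
                    + c.getAttribute("location"));
        }
    }
}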

Then there is a file (domain.composite) [3] that is a serialized version of
the domain composite, i.e. what you would get from the spec's
getDomainLevelComposite() method. This shows which composites are deployed
at the domain level.

<composite name="domain.composite"
    targetNamespace="http://tuscany.apache.org/xmlns/sca/1.0"
    xmlns="http://www.osoa.org/xmlns/sca/1.0"
    xmlns:ns1="http://www.osoa.org/xmlns/sca/1.0">
  <include name="ns2:CalculatorA" uri="nodeA" xmlns:ns2="http://sample"/>
  <include name="ns2:CalculatorB" uri="nodeB" xmlns:ns2="http://sample"/>
  <include name="ns2:CalculatorC" uri="nodeC" xmlns:ns2="http://sample"/>
</composite>

Lastly there is a file (cloud.composite) [4] that is another SCA composite
that describes the nodes that are going to run composites.

<composite name="cloud.composite"
    targetNamespace="http://tuscany.apache.org/xmlns/sca/1.0"
    xmlns="http://www.osoa.org/xmlns/sca/1.0"
    xmlns:ns1="http://www.osoa.org/xmlns/sca/1.0">
  <include name="ns2:NodeA" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"
           xmlns:ns2="http://sample/cloud"/>
  <include name="ns2:NodeB" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"
           xmlns:ns2="http://sample/cloud"/>
  <include name="ns2:NodeC" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"
           xmlns:ns2="http://sample/cloud"/>
</composite>

[PROPOSAL] Using new Workspace in samples/calculator-distributed Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-12 Thread Simon Laws
I like the look of the workspace code Sebastien has been writing and I
propose to try it out on samples/calculator-distributed.

In particular I'd like to help Felix, who is hitting the common filesystem
restriction of the current domain implementation.

Let me know if anyone has any concerns.

I'll report back with what I learn. There are other modules that rely on
distributed support:

itest/callable-references
itest/domain
itest/osgi-tuscany/tuscany-3rdparty
itest/osgi-tuscany/tuscany-runtime
samples/calculator-distributed
tools/eclipse/plugins/runtime

I'm happy to think about those if samples/calculator-distributed goes OK.


Regards

Simon

[1] http://www.mail-archive.com/tuscany-user%40ws.apache.org/msg02610.html

Re: [PROPOSAL] Using new Workspace in samples/calculator-distributed Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-12 Thread Raymond Feng

+1.

Raymond
--
From: Simon Laws [EMAIL PROTECTED]
Sent: Wednesday, March 12, 2008 4:28 AM
To: tuscany-dev@ws.apache.org
Subject: [PROPOSAL] Using new Workspace in samples/calculator-distributed 
Re: Domain/Contribution Repository was: Re: SCA contribution packaging 
schemes: was: SCA runtimes




Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-10 Thread Jean-Sebastien Delfino


Simon,

After a few more changes, the domain/node allocation, default URI
calculation, and resolution of references across nodes now work OK.


I was able to remove all the hardcoded URIs in the tutorial composites 
as they now get determined from the configuration of the nodes that the 

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-07 Thread Jean-Sebastien Delfino


Simon, a quick update: I've done an initial bring-up of node2-impl. It's 
still a little rough but you can give it a try if you want.


The steps to run the store app for example with node2 are as follows:

1) use workspace-admin to add the store and assets contributions to the 
domain;


2) add the store composite to the domain composite using the admin as well;

3) start the StoreLauncher2 class that I just added to the store module;

4) that will start an instance of node2 with all the node config served 
from the admin app.


So the next step is to integrate your node allocation code with 
workspace-admin and that will complete the story. Then we'll be able to 
remove all the currently hardcoded endpoint URIs from the composites.


I'll send a more detailed description and steps to run more scenarios 
later on Friday.


--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-07 Thread Simon Laws

Ok, sounds good. I've done the URI integration although there are some
issues we need to discuss. First I'll update with your code, commit my
changes and then post here about the issues.

Regards

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-07 Thread Simon Laws

I've now checked in my changes (last commit was 634762) to integrate the URI
calculation code with the workspace. I've run the new store launcher
following Sebastien's instructions from a previous post to this thread. I
don't seem to have broken it too much although I'm not seeing any prices for
the catalog items.

Issues with the URI generation code

I had to turn model resolution back on by uncommenting a line in
ContributionContentProcessor.resolve. Otherwise the JavaImplementation types
are not read and
compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
composite, null); can't generate default services. I then had to turn it back
off to make the store sample work. I need some help on this one.

If you hand craft services it seems to be OK although I have noticed,
looking at the generated SCDL, that it seems to be assuming that all
generated service names will be based on the implementation classname
regardless of whether the interface is marked as @Remotable or not. Feels
like a bug somewhere so am going to look at that next.

To get Java implementation resolution to work I needed to hack in the Java
factories setup in the DeployableCompositeCollectionImpl.initialize()
method. This is not very good and raises a bigger question about the setup
in here. It's creating a set of extension points in parallel to those
created by the runtime running this component. Can we either use the
registry created by the underlying runtime or do a similar generic setup?
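
For what it's worth, the generic setup could follow the usual class-keyed
registry pattern, as in this self-contained sketch; this is the pattern
only, not the real Tuscany extension point registry API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a class-keyed extension point registry. The idea is that
// initialize() would look factories up in the registry the hosting runtime
// already populated, instead of constructing a parallel set of its own.
final class SimpleRegistry {
    private final Map<Class<?>, Object> points =
            new ConcurrentHashMap<Class<?>, Object>();

    <T> void addExtensionPoint(Class<T> type, T impl) {
        points.put(type, impl);
    }

    <T> T getExtensionPoint(Class<T> type) {
        return type.cast(points.get(type)); // one shared instance per type
    }
}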

The code doesn't currently distinguish between those services that are
@Remotable and those that aren't.

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-07 Thread Jean-Sebastien Delfino

Simon Laws wrote:

I've now checked in my changes (last commit was 634762) to integrate the URI
calculation code with the workspace. I've run the new store launcher
following Sebastien's instructions from a previous post to this thread. I
don't seem to have broken it too much although I'm not seeing any prices for
the catalog items.


I was seeing that issue too before, it's a minor bug in the property 
writing code, which is not writing property values correctly.



Issues with the URI generation code

I had to turn model resolution back on by uncommenting a line in
ContributionContentProcessor.resolve. Otherwise the JavaImplementation types
are not read and
compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
composite, null); can't generate default services. I then had to turn it back
off to make the store sample work. I need some help on this one.


I'm investigating now.



If you hand craft services it seems to be OK although I have noticed,
looking at the generated SCDL, that it seems to be assuming that all
generated service names will be based on the implementation classname
regardless of whether the interface is marked as @Remotable or not. Feels
like a bug somewhere so am going to look at that next.


OK



To get Java implementation resolution to work I needed to hack in the Java
factories setup in the DeployableCompositeCollectionImpl.initialize()
method. This is not very good and raises a bigger question about the setup
in here. It's creating a set of extension points in parallel to those
created by the runtime running this component. Can we either use the
registry created by the underlying runtime or do a similar generic setup?


Yes, I'd like to keep the infrastructure used by the admin decoupled 
from the infrastructure of the runtime hosting the admin, but I'll try 
to simplify the setup by creating an instance of runtime for the admin 
and getting the necessary objects out of it, instead of assembling it 
from scratch as it is now.



The code doesn't currently distinguish between those services that are
@Remotable and those that aren't.

Simon




--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Simon Laws
On Fri, Feb 29, 2008 at 5:37 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Comments inline.

  A) Contribution workspace (containing installed contributions):
  - Contribution model representing a contribution
  - Reader for the contribution model
  - Workspace model representing a collection of contributions
  - Reader/writer for the workspace model
  - HTTP based service for accessing the workspace
  - Web browser client for the workspace service
  - Command line client for the workspace service
  - Validator for contributions in a workspace
 
I started looking at step D). Having a rest from URLs :-) In the context of
this thread the node can lose its connection to the domain, and hence the
factory, and the node interface slims down. So a runtime that loads a set of
contributions and a composite becomes:
 
  create a node
add some contributions (addContribution) and mark a composite for
starting (currently called addToDomainLevelComposite).
  start the node
  stop the node
 
  You could then recycle (destroy) the node and repeat if required.
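
As a sketch, that slimmed-down contract could read as follows; the
addContribution and addToDomainLevelComposite names come from the text
above, everything else is assumed:

// Hypothetical slimmed-down node interface, not the actual node2-impl API.
interface SCANode {
    void addContribution(String contributionURI, String contributionLocation);
    void addToDomainLevelComposite(String compositeName); // mark for starting
    void start();
    void stop();
    void destroy(); // recycle the node and repeat if required
}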
 
This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
have started to check in an alternative implementation of the node
(node2-impl). I haven't changed any interfaces yet so I don't break any
existing tests (and the code doesn't run yet!).
 
Anyhow. I've been looking at the workspace code for parts A and B that has
recently been committed. It would seem to be fairly representative of the
motivating scenario [1]. I don't have detailed questions yet but
interestingly it looks like contributions, composites etc. are exposed as
HTTP resources. Sebastien, it would be useful to have a summary of your
thoughts on how it is intended to hang together and how these will be used.

 I've basically created three services:

 workspace - Provides access to a collection of links to contributions,
 their URI and location. Also provides functions to get the list of
 contribution dependencies and validate a contribution.

composites - Provides access to a collection of links to the composites
present in the domain composite. Also provides a function returning a
particular composite once it has been 'built' (by CompositeBuilder),
i.e. its references, properties etc. have been resolved.

 nodes - Provides access to a collection of links to composites
 describing the implementation.node components which represent SCA nodes.

 There's another file upload service that I'm using to upload
 contribution files and other files to some storage area but it's just
 temporary.

 I'm using binding.atom to expose the above collections as editable
 ATOM-Pub collections (and ATOM feeds of contributions, composites, nodes).
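
So adding a contribution link could be, for example, a plain HTTP POST of an
ATOM entry to the workspace collection. A rough sketch; the collection URL
and the entry format are assumptions, not the exact shape the admin expects:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: POST an ATOM entry linking a contribution into the workspace.
public class AddContribution {
    public static void main(String[] args) throws Exception {
        String entry =
                "<entry xmlns=\"http://www.w3.org/2005/Atom\">"
                + "<title>nodeA</title>"
                + "<link href=\"file:./target/classes/nodeA\"/>"
                + "</entry>";
        URL collection = new URL("http://localhost:9990/workspace"); // assumed
        HttpURLConnection c = (HttpURLConnection) collection.openConnection();
        c.setRequestMethod("POST");
        c.setRequestProperty("Content-Type", "application/atom+xml");
        c.setDoOutput(true);
        OutputStream out = c.getOutputStream();
        out.write(entry.getBytes("UTF-8"));
        out.close();
        System.out.println("HTTP " + c.getResponseCode()); // expect 201 Created
    }
}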

 Here's how I'm using these services as an SCA domain administrator:

 1. Add one or more links to contributions to the workspace. They can be
 anywhere accessible on the network through a URL, or local on disk. The
 workspace just keeps track of the list.

 2. Add one or more composites to the composites collection. They become
 part of the domain composite.

 3. Add one or more composites declaring SCA nodes to the nodes
 collection. The nodes are described as SCA components of type
 implementation.node. A node component names the application composite
 that is assigned to run on it (see implementation-node-xml for an
 example).

 4. Point my Web browser to the various ATOM collections to get:
 - lists of contributions, composites and nodes
 - list of contributions that are required by a given contribution
 - the source of a particular composite
 - the output of a composite built by CompositeBuilder

 Here, I'm hoping that the work you've started to assign endpoint info
 to domain model [2] will help CompositeBuilder produce the correct
 fully resolved composite.

 5. Pick a node, point my Web browser to its composite description and
 write down:
 - $node = URL of the composite describing the node
 - $composite = URL of the application composite that's assigned to it
 - $contrib = URL of the list of contribution dependencies.

 6. When you have node2-impl ready :) from the command line do:
 sca-node $node $composite $contrib
 this should start the SCA node, which can get its description, composite
 and contributions from these URLs.

 or for (6) start the node directly from my Web browser as described in
 [1], but one step at a time... that can come later when we have the
 basic building blocks working OK :)
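
A minimal sketch of what such an sca-node launcher might do with the three
URLs, leaving the actual node bring-up aside since node2-impl is still in
progress at this point:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch: sca-node $node $composite $contrib - fetch what the admin serves.
// Installing the contributions and starting the composite is omitted.
public class ScaNode {
    public static void main(String[] args) throws Exception {
        if (args.length != 3) {
            System.err.println("usage: sca-node <node> <composite> <contrib>");
            return;
        }
        String[] labels = {"node description", "application composite",
                "contribution dependencies"};
        for (int i = 0; i < 3; i++) {
            System.out.println("--- " + labels[i] + " from " + args[i]);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(args[i]).openStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }
}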


 
I guess these HTTP resources bring a deployment dimension.
 
Local - Give the node contribution URLs that point to the local file system
from where the node reads the contribution (this is how it has worked to
date)
Remote - Give it contribution URLs that point out to HTTP resources so the
node can read the contributions from where they are stored on the network
 
  Was that the intention?

 Yes. I don't always want to have to upload contributions to some server
 or even have to copy them 

Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Simon Laws
On Wed, Mar 5, 2008 at 12:52 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Wed, Mar 5, 2008 at 6:01 AM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

  Simon Laws wrote:
Thanks Sebastien. Hopefully some insight on the puzzle inline...
  
   Simon
  
   On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino 
  [EMAIL PROTECTED]
   wrote:
  
   I apologize in advance for the inline comment puzzle, but you had
   started with a long email in the first place :)
  
  
no problem at all. Thanks for your detailed response.
  
   snip...
  
  
I'm happy with workspace.configuration.impl. However applying default
binding configuration to bindings in a composition doesn't have much to
do with the workspace so I'd suggest to push it down to assembly,
possibly if you use a signature like the one I suggested above.
  
  
   Ok I can do that.
  
  
   B) The algorithm (A) that calculates service endpoints based on node
   default
   binding configurations depends on knowing the protocol that a
  particular
   binding is configured to use.
   That part I don't get :) We could toy with the idea that SCA bindings
   are not the right level of abstraction and that we need a transport
   concept (or scheme or protocol, e.g. http) and the ability for
  multiple
   bindings (e.g. ws, atom, json) to share the same transport... But
  that's
   a whole different discussion IMO.
  
Can we keep this simply on a binding basis, and have a node declare this:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>
  
Then the <binding.ws uri="..."> declaration can provide the default config
for all binding.ws on that node, binding.jsonrpc for all binding.jsonrpc,
binding.atom for all binding.atom, etc. As you can see in this example,
different bindings could use different ports... so, trying to share a
common transport will probably be less functional if it forces the
bindings sharing that transport to share a single port.
  
  
This is OK until you bring policy into the picture. A policy might affect
the scheme a binding relies on, so you may more realistically end up with:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.ws uri="https://localhost:443/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>
  
And any particular binding.ws, for example, might be required to be
defaulted with http://..., https://..., or even not defaulted at all if
it's going to use jms:... The issue with policies of course is that they
are not, currently, applied until later on when the bindings are actually
activated. So just looking at the model you can tell it has associated
intents/policy but not what the implications are for the endpoint.

We can ignore this in the first instance I guess and run with the
restriction that you can't apply policy that affects the scheme to bindings
inside the domain. But I'd be interested in your thoughts on the future
solution nonetheless. You will notice from the code that I haven't
actually done anything inside the bindings but just proposed that we will
have to ask binding-specific questions at some point during URL creation.
  
 
  Well, I think you're raising an interesting issue, but it seems to be
  independent of any of this node business, more like a general issue with
  the impact of policies on specified binding URIs.


I agree that if the binding URI were completed based on the processing of
the build phase then this conversation is independent of the default
values provided by nodes. This is not currently the case AFAIUI. The policy
model is built and matched at the build phase but the policy sets are not
applied until the binding runtime is created. For example, the
Axis2ServiceProvider constructor is involved in setting the binding URI at
the moment. So in having an extension I was proposing a new place where
binding-specific operations related to generating the URI could be housed
independently of the processing that happens when the providers are created.
In this way we would kick off this URL processing earlier on.
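
The extension being proposed might take a shape like the following sketch;
every name here is invented for illustration, none of it exists in the code
yet:

// Hypothetical extension point for answering binding-specific questions
// during URI calculation, before any binding providers are created.
interface BindingURICalculator {

    // The scheme this binding will actually use once its intents/policy
    // sets are taken into account, e.g. "http", "https" or "jms"; null if
    // no default URI should be generated for it.
    String effectiveScheme(Object binding);

    // Compute the endpoint URI from the node's default base URI for this
    // binding type and the service name.
    String calculateURI(String nodeDefaultBaseURI, String serviceName);
}

Each binding extension would register an implementation, so the domain-level
build could, say, ask the WS binding whether policy pushes it from http to
https without having to instantiate Axis2ServiceProvider.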


 
If I understand correctly, and I'm taking the store tutorial Catalog
component as an example to illustrate the issue:

<component name="CatalogServiceComponent">
  <service name="Catalog" intents="ns:myEncryptionIntent">
    <binding.ws uri="http://somehost:8080/catalog"/>
  </service>
</component>

would in fact translate to:

<component name="CatalogComponent">
  <service name="Catalog" intents="myEncryptionIntent">
    <binding.ws uri="https://localhost:443/catalog"/>
  </service>

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Jean-Sebastien Delfino

Simon Laws wrote:


I've been running the workspace code today with a view to integrating the
new code in assembly which calculates service endpoints, i.e. point 4 above.

I think we need to amend point 4 to make this work properly:

4. Point my Web browser to the various ATOM collections to get:
- lists of contributions, composites and nodes
- list of contributions that are required by a given contribution
- the source of a particular composite
- the output of a composite after the domain composite has been built by
CompositeBuilder

Looking at the code in DeployableCompositeCollectionImpl I see that on
doGet() it builds the requested composite. What the last point needs to do is:

- read the whole domain
- set up all of the service URIs for each of the included composites taking
into account the node to which each composite is assigned
- build the whole domain using CompositeBuilder
- extract the required composite from the domain and serialize it out.


Yes, exactly!
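
In other words, the doGet() flow agreed on here is roughly the following
sketch; the model and builder types are stand-ins, not the real Tuscany
classes:

import java.util.ArrayList;
import java.util.List;

// Stand-ins for the assembly model and CompositeBuilder.
class Composite {
    String name;
    Composite(String name) { this.name = name; }
    String serialize() { return "<composite name=\"" + name + "\"/>"; }
}

class DeployableCompositeSketch {
    List<Composite> readDomainComposites() { return new ArrayList<Composite>(); }
    void assignServiceUris(List<Composite> domain) { /* per-node defaults */ }
    void buildDomain(List<Composite> domain) { /* CompositeBuilder pass */ }

    String doGet(String requestedName) {
        List<Composite> domain = readDomainComposites(); // read the whole domain
        assignServiceUris(domain);  // URIs per the node each composite runs on
        buildDomain(domain);        // build the whole domain composite
        for (Composite c : domain) {
            if (c.name.equals(requestedName)) {
                return c.serialize(); // extract and serialize the one requested
            }
        }
        return null;
    }
}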



Are you changing this code or can I put this in?


Just go ahead, I'll update and merge if I have any other changes in the 
same classes.


--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Jean-Sebastien Delfino

Simon Laws wrote:

On Wed, Mar 5, 2008 at 6:01 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:


Simon Laws wrote:

Thanks Sebastien, Hopefully some insight on the puzzle in line...

Simon

On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino 

[EMAIL PROTECTED]

wrote:


I apologize in advance for the inline comment puzzle, but you had
started with a long email in the first place :)


no problem at all. Thanks for you detailed response.

snip...



I'm happy with workspace.configuration.impl. However applying default
binding configuration to bindings in a composition doesn't have much to
do with the workspace so I'd suggest to push it down to assembly,
  possibly using a signature like the one I suggested above.


Ok I can do that.



 B) The algorithm (A) that calculates service endpoints based on node default
 binding configurations depends on knowing the protocol that a particular
 binding is configured to use.

That part I don't get :) We could toy with the idea that SCA bindings
are not the right level of abstraction and that we need a transport
concept (or scheme or protocol, e.g. http) and the ability for multiple
 bindings (e.g. ws, atom, json) to share the same transport... But that's
 a whole different discussion IMO.

 Can we keep this simply on a binding basis, and have a node declare this:

 <component ...>
   <implementation.node .../>
   <service ...>
     <binding.ws uri="http://localhost:1234/services"/>
     <binding.jsonrpc uri="http://localhost:1234/services"/>
     <binding.atom uri="http://localhost:/services"/>
   </service>
 </component>

 Then the <binding.ws uri="..."/> declaration can provide the default config
 for all binding.ws on that node, binding.jsonrpc for all binding.jsonrpc,
 binding.atom for all binding.atom etc. As you can see in this example,
 different bindings could use different ports... so, trying to share a
 common transport will probably be less functional if it forces the
 bindings sharing that transport to share a single port.


 This is OK until you bring policy into the picture. A policy might affect
 the scheme a binding relies on so you may more realistically end up with:

 <component ...>
   <implementation.node .../>
   <service ...>
     <binding.ws uri="http://localhost:1234/services"/>
     <binding.ws uri="https://localhost:443/services"/>
     <binding.jsonrpc uri="http://localhost:1234/services"/>
     <binding.atom uri="http://localhost:/services"/>
   </service>
 </component>

 And any particular binding.ws, for example, might need to be defaulted
 with "http://...", "https://..." or even not defaulted at all if it's going
 to use "jms:...". The issue with policies of course is that they are not,
 currently, applied until later on when the bindings are actually activated.
 So just looking at the model you can tell it has associated intents/policy
 but not what the implications are for the endpoint.

 We can ignore this in the first instance I guess and run with the
 restriction that you can't apply policy that affects the scheme to bindings
 inside the domain. But I'd be interested in your thoughts on the future
 solution nonetheless. You will notice from the code that I haven't
 actually done anything inside the bindings but just proposed that we will
 have to ask binding specific questions at some point during URL creation.

Well, I think you're raising an interesting issue, but it seems to be
independent of any of this node business, more like a general issue with
the impact of policies on specified binding URIs.



I agree that if the binding URI were completed during the build phase then
this conversation would be independent of the default values provided by
nodes. This is not currently the case AFAIUI. The policy model is built and
matched at build phase but the policy sets are not applied until the binding
runtime is created. For example, the Axis2ServiceProvider constructor is
involved in setting the binding URI at the moment. So in proposing an
extension I was suggesting a new place where binding-specific operations
related to generating the URI could be housed, independently of the
processing that happens when the providers are created. In this way we
would kick off this URL processing earlier.



OK (taking the policy processing aside), you're right that the 
determination of the binding URI should not be done at all in the 
Axis2ServiceProvider.


I would suggest the following:

- Recognize that this is a manifestation of a bigger issue (and the code 
in Axis2ServiceProvider is a hack to work around it). There is no extension 
point for build-time processing of models at the moment. We have plug 
points for read(), resolve() but nothing for build().


- Add a build(T model) method to ArtifactProcessor<T>.

- Invoke that method from CompositeBuilder or one of the related classes 
in the builder package.


- Move the code responsible for the determination of the URI of a 
binding to ArtifactProcessor.read(), resolve(), build() or a combination 
of these as most convenient.
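
To make the plug point concrete, here is a minimal sketch, using simplified
stand-ins for the real ArtifactProcessor and ModelResolver types (the
signatures are assumptions):

    interface ModelResolver { /* resolves references across contribution models */ }

    interface ArtifactProcessor<T> {
        T read(java.io.InputStream artifact) throws Exception;          // existing phase
        void resolve(T model, ModelResolver resolver) throws Exception; // existing phase

        // Proposed new phase, invoked from CompositeBuilder: finish
        // configuring the model (e.g. compute the binding URI) at build
        // time, before any runtime providers are created.
        void build(T model) throws Exception;
    }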



Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Jean-Sebastien Delfino

Simon Laws wrote:

On Wed, Mar 5, 2008 at 12:52 PM, Simon Laws [EMAIL PROTECTED]
wrote:



On Wed, Mar 5, 2008 at 6:01 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:


Simon Laws wrote:

Thanks Sebastien, hopefully some insight on the puzzle inline...

Simon

On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino 

[EMAIL PROTECTED]

wrote:


I apologize in advance for the inline comment puzzle, but you had
started with a long email in the first place :)


no problem at all. Thanks for your detailed response.

snip...



I'm happy with workspace.configuration.impl. However applying default
binding configuration to bindings in a composition doesn't have much to
do with the workspace so I'd suggest to push it down to assembly,
possibly using a signature like the one I suggested above.


Ok I can do that.



B) The algorithm (A) that calculates service endpoints based on node default
binding configurations depends on knowing the protocol that a particular
binding is configured to use.

That part I don't get :) We could toy with the idea that SCA bindings
are not the right level of abstraction and that we need a transport
concept (or scheme or protocol, e.g. http) and the ability for multiple
bindings (e.g. ws, atom, json) to share the same transport... But that's
a whole different discussion IMO.

Can we keep this simply on a binding basis, and have a node declare this:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>

Then the <binding.ws uri="..."/> declaration can provide the default config
for all binding.ws on that node, binding.jsonrpc for all binding.jsonrpc,
binding.atom for all binding.atom etc. As you can see in this example,
different bindings could use different ports... so, trying to share a
common transport will probably be less functional if it forces the
bindings sharing that transport to share a single port.


This is OK until you bring policy into the picture. A policy might affect
the scheme a binding relies on so you may more realistically end up with:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.ws uri="https://localhost:443/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>

And any particular binding.ws, for example, might need to be defaulted
with "http://...", "https://..." or even not defaulted at all if it's going
to use "jms:...". The issue with policies of course is that they are not,
currently, applied until later on when the bindings are actually activated.
So just looking at the model you can tell it has associated intents/policy
but not what the implications are for the endpoint.

We can ignore this in the first instance I guess and run with the
restriction that you can't apply policy that affects the scheme to bindings
inside the domain. But I'd be interested in your thoughts on the future
solution nonetheless. You will notice from the code that I haven't
actually done anything inside the bindings but just proposed that we will
have to ask binding specific questions at some point during URL creation.

Well, I think you're raising an interesting issue, but it seems to be
independent of any of this node business, more like a general issue with
the impact of policies on specified binding URIs.


I agree that if the binding URI were completed during the build phase then
this conversation would be independent of the default values provided by
nodes. This is not currently the case AFAIUI. The policy model is built and
matched at build phase but the policy sets are not applied until the binding
runtime is created. For example, the Axis2ServiceProvider constructor is
involved in setting the binding URI at the moment. So in proposing an
extension I was suggesting a new place where binding-specific operations
related to generating the URI could be housed, independently of the
processing that happens when the providers are created. In this way we
would kick off this URL processing earlier.



If I understand correctly, and I'm taking the store tutorial Catalog
component as an example to illustrate the issue:

<component name="CatalogServiceComponent">
  <service name="Catalog" intents="ns:myEncryptionIntent">
    <binding.ws uri="http://somehost:8080/catalog"/>
  </service>
</component>

would in fact translate to:

<component name="CatalogComponent">
  <service name="Catalog" intents="myEncryptionIntent">
    <binding.ws uri="https://localhost:443/catalog"/>
  </service>
</component>

assuming in this example that myEncryptionIntent is realized using
HTTPS on port 443.

Is that the issue you're talking about?


Yes, that's the issue, i.e. the binding specific code that makes this so
is not running during the build phase. 

Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-05 Thread Simon Laws
On Wed, Mar 5, 2008 at 6:01 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Simon Laws wrote:
  Thanks Sebastien, hopefully some insight on the puzzle inline...
 
  Simon
 
  On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED]
  wrote:
 
  I apologize in advance for the inline comment puzzle, but you had
  started with a long email in the first place :)
 
 
  no problem at all. Thanks for your detailed response.
 
  snip...
 
 
  I'm happy with workspace.configuration.impl. However applying default
  binding configuration to bindings in a composition doesn't have much to
  do with the workspace so I'd suggest to push it down to assembly,
  possibly using a signature like the one I suggested above.
 
 
  Ok I can do that.
 
 
  B) The algorithm (A) that calculates service endpoints based on node default
  binding configurations depends on knowing the protocol that a particular
  binding is configured to use.
  That part I don't get :) We could toy with the idea that SCA bindings
  are not the right level of abstraction and that we need a transport
  concept (or scheme or protocol, e.g. http) and the ability for multiple
  bindings (e.g. ws, atom, json) to share the same transport... But that's
  a whole different discussion IMO.
 
  Can we keep this simply on a binding basis, and have a node declare this:
 
  <component ...>
    <implementation.node .../>
    <service ...>
      <binding.ws uri="http://localhost:1234/services"/>
      <binding.jsonrpc uri="http://localhost:1234/services"/>
      <binding.atom uri="http://localhost:/services"/>
    </service>
  </component>
 
  Then the <binding.ws uri="..."/> declaration can provide the default config
  for all binding.ws on that node, binding.jsonrpc for all binding.jsonrpc,
  binding.atom for all binding.atom etc. As you can see in this example,
  different bindings could use different ports... so, trying to share a
  common transport will probably be less functional if it forces the
  bindings sharing that transport to share a single port.
 
 
  This is OK until you bring policy into the picture. A policy might affect
  the scheme a binding relies on so you may more realistically end up with:
 
  <component ...>
    <implementation.node .../>
    <service ...>
      <binding.ws uri="http://localhost:1234/services"/>
      <binding.ws uri="https://localhost:443/services"/>
      <binding.jsonrpc uri="http://localhost:1234/services"/>
      <binding.atom uri="http://localhost:/services"/>
    </service>
  </component>
 
  And any particular binding.ws, for example, might need to be defaulted
  with "http://...", "https://..." or even not defaulted at all if it's going
  to use "jms:...". The issue with policies of course is that they are not,
  currently, applied until later on when the bindings are actually activated.
  So just looking at the model you can tell it has associated intents/policy
  but not what the implications are for the endpoint.
 
  We can ignore this in the first instance I guess and run with the
  restriction that you can't apply policy that affects the scheme to bindings
  inside the domain. But I'd be interested in your thoughts on the future
  solution nonetheless. You will notice from the code that I haven't
  actually done anything inside the bindings but just proposed that we will
  have to ask binding specific questions at some point during URL creation.
 

 Well, I think you're raising an interesting issue, but it seems to be
 independent of any of this node business, more like a general issue with
 the impact of policies on specified binding URIs.


I agree that if the binding URI were completed during the build phase then
this conversation would be independent of the default values provided by
nodes. This is not currently the case AFAIUI. The policy model is built and
matched at build phase but the policy sets are not applied until the binding
runtime is created. For example, the Axis2ServiceProvider constructor is
involved in setting the binding URI at the moment. So in proposing an
extension I was suggesting a new place where binding-specific operations
related to generating the URI could be housed, independently of the
processing that happens when the providers are created. In this way we
would kick off this URL processing earlier.



 If I understand correctly, and I'm taking the store tutorial Catalog
 component as an example to illustrate the issue:

 <component name="CatalogServiceComponent">
   <service name="Catalog" intents="ns:myEncryptionIntent">
     <binding.ws uri="http://somehost:8080/catalog"/>
   </service>
 </component>

 would in fact translate to:

 <component name="CatalogComponent">
   <service name="Catalog" intents="myEncryptionIntent">
     <binding.ws uri="https://localhost:443/catalog"/>
   </service>
 </component>

 assuming in this example that myEncryptionIntent is realized using
 HTTPS on port 443.

 Is that the issue you're talking about?


Yes, that's the issue, i.e. the binding specific code that makes this so is
not running during the build phase.

Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-04 Thread Jean-Sebastien Delfino

Simon Laws wrote:

Thanks Sebastien, hopefully some insight on the puzzle inline...

Simon

On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:


I apologize in advance for the inline comment puzzle, but you had
started with a long email in the first place :)



no problem at all. Thanks for your detailed response.

snip...



I'm happy with workspace.configuration.impl. However applying default
binding configuration to bindings in a composition doesn't have much to
do with the workspace so I'd suggest to push it down to assembly,
possibly using a signature like the one I suggested above.



Ok I can do that.



B) The algorithm (A) that calculates service endpoints based on node default
binding configurations depends on knowing the protocol that a particular
binding is configured to use.

That part I don't get :) We could toy with the idea that SCA bindings
are not the right level of abstraction and that we need a transport
concept (or scheme or protocol, e.g. http) and the ability for multiple
bindings (e.g. ws, atom, json) to share the same transport... But that's
a whole different discussion IMO.

Can we keep this simply on a binding basis, and have a node declare this:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>

Then the <binding.ws uri="..."/> declaration can provide the default config
for all binding.ws on that node, binding.jsonrpc for all binding.jsonrpc,
binding.atom for all binding.atom etc. As you can see in this example,
different bindings could use different ports... so, trying to share a
common transport will probably be less functional if it forces the
bindings sharing that transport to share a single port.



This is OK until you bring policy into the picture. A policy might affect
the scheme a binding relies on so you may more realistically end up with:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.ws uri="https://localhost:443/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>

And any particular binding.ws, for example, might need to be defaulted
with "http://...", "https://..." or even not defaulted at all if it's going
to use "jms:...". The issue with policies of course is that they are not,
currently, applied until later on when the bindings are actually activated.
So just looking at the model you can tell it has associated intents/policy
but not what the implications are for the endpoint.

We can ignore this in the first instance I guess and run with the
restriction that you can't apply policy that affects the scheme to bindings
inside the domain. But I'd be interested in your thoughts on the future
solution nonetheless. You will notice from the code that I haven't
actually done anything inside the bindings but just proposed that we will
have to ask binding specific questions at some point during URL creation.



Well, I think you're raising an interesting issue, but it seems to be 
independent of any of this node business, more like a general issue with 
the impact of policies on specified binding URIs.


If I understand correctly, and I'm taking the store tutorial Catalog 
component as an example to illustrate the issue:


<component name="CatalogServiceComponent">
  <service name="Catalog" intents="ns:myEncryptionIntent">
    <binding.ws uri="http://somehost:8080/catalog"/>
  </service>
</component>

would in fact translate to:

<component name="CatalogComponent">
  <service name="Catalog" intents="myEncryptionIntent">
    <binding.ws uri="https://localhost:443/catalog"/>
  </service>
</component>

assuming in this example that myEncryptionIntent is realized using 
HTTPS on port 443.


Is that the issue you're talking about?
--
Jean-Sebastien




Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-03 Thread Jean-Sebastien Delfino
I apologize in advance for the inline comment puzzle, but you had 
started with a long email in the first place :)


Simon Laws wrote:

And here's the separate thread following on from [1]... I'm looking at
what we can do with any endpoint information we have prior to the point at
which a composite is deployed to a node. This is an alternative to
(replacement for?) having the Tuscany runtime go and query for endpoint
information after it has been started. I have been summarizing info here
[2][3].  Looking at this I need to do something like...

- associate composites with nodes/apply physical binding
defaults/propagate physical addresses based on domain level wiring

   1. Read in node model - which provides
  1. Mapping of composite to node
  2. Default configuration of bindings at that node, e.g. the
  root URL required for binding.ws


+1


   2. For each composite in the domain (I'm assuming I have access to
   the domain level composite model)
  1. Find, from the node model, the node which will host the
  composite
  2. for each service in the composite
 1. If there are no bindings for the service
1. Create a default binding configured with the
default URI from the node model


Create a binding.sca configured with the URI found on the
binding.sca from the node configuration. Same as your else branch:
take the default binding configuration and apply it to the binding.



2. Maybe we should only configure the URI if we
know there is a remote reference.


Maybe later :) I think that always configuring the URI for now is better 
than starting to couple binding configuration with reference resolution.



2. else
1. find each binding in the service
   1. Take the default binding configuration
   and apply it to the binding


+1


   2. What to do about URLs as they may be
   either
  1. Unset
 1. Apply algorithm from Assembly
 Spec 1.7.2


+1


  2. Set relatively
     1. Apply algorithm from Assembly Spec 1.7.2

+1


  3. Set absolutely
 1. Assume it is set correctly?


Yes


  4. Set implicitly (from WSDL
  information)
 1. Assume it is set correctly?
3. The above is similar to what goes on
 during compositeConfiguration in the build phase


+1



  3. For each reference in the composite
 1. Look for any targets that cannot be satisfied within
 the current node (need an interface to call through which scans the 
domain)
 2. Find the service model for this target
 3. Do policy and binding matching
 4. For matching bindings ensure that the binding URL is
 unset and set with information from the target service
 5. The above is also similar to what happens during the
 build phase
 4. Domain Level Autowiring also needs to be taken into
  account
  5. Wire by impl that uses domain wide references also needs to
  be considered


IMO (3), (4), (5) should be taken completely separately. Resolution of 
references inside a node, across nodes, or when nodes provide default 
binding configuration or not can just always work the same way:
a) find the target service inside the set of composites included in the 
domain

b) configure the reference from the resolved service configuration.
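
A rough sketch of a) and b), assuming simplified forms of the assembly
model accessors (imports and model types elided):

    // Sketch only: resolve a reference target by scanning the composites
    // included in the domain composite, then copy the service's resolved
    // binding configuration onto the reference.
    static void resolveReference(Composite domain, ComponentReference reference, String target) {
        for (Composite included : domain.getIncludes()) {
            for (Component component : included.getComponents()) {
                for (ComponentService service : component.getServices()) {
                    if (component.getName().equals(target)
                            || (component.getName() + "/" + service.getName()).equals(target)) {
                        // b) configure the reference from the resolved service
                        reference.getBindings().addAll(service.getBindings());
                        return;
                    }
                }
            }
        }
    }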



Referring to the builder code now it feels like 2.2 above is a new model
enhancement step that could reuse (some of) the function in
CompositeConfigurationBuilderImpl.configureComponents but with extra
binding-specific features to ensure that URLs are set correctly.


Yes exactly.



2.3 looks very like CompositeWireBuilder.


Which should continue to work as-is :)



My quandary at the moment is that the process has a dependency on the node
description so it doesn't fit in the builders where they are at the moment.
It feels like we need a separate module. So comments about whether any of
this makes sense and, if so, where I should put it are welcome.



If that helps: The dependency on NodeImplementation is not really 
necessary, as the only info you need is the list of Binding objects from 
the node description.


Something like that should work:

configure(Composite composite, List<Binding> defaultBindings);
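
For illustration, a minimal sketch of what that configure() could do,
assuming the assembly model accessors shown here and a simple URI
concatenation rule (both are assumptions; imports of java.util.* and the
model types elided):

    // Sketch only: apply a node's default binding configuration to every
    // matching binding in the composite, keyed by binding type.
    static void configure(Composite composite, List<Binding> defaultBindings) {
        Map<Class<?>, Binding> defaults = new HashMap<Class<?>, Binding>();
        for (Binding binding : defaultBindings) {
            defaults.put(binding.getClass(), binding);
        }
        for (Component component : composite.getComponents()) {
            for (ComponentService service : component.getServices()) {
                for (Binding binding : service.getBindings()) {
                    Binding def = defaults.get(binding.getClass());
                    if (def != null && binding.getURI() == null) {
                        // default the URI from the node configuration
                        binding.setURI(def.getURI() + "/" + service.getName());
                    }
                }
            }
        }
    }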



[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg28299.html
[2]
http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Contribution+Processing
[3] http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Runtime+Phases



I have some code [1] for steps 1, 2.1 and 2.2 above and I want to move it
out of my sandbox to see how it fits. Two questions:

A) I have a relatively stand-alone algorithm [2] that populates service
binding URIs (similar to the 

Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-03 Thread Simon Laws
Thanks Sebastien, hopefully some insight on the puzzle inline...

Simon

On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 I apologize in advance for the inline comment puzzle, but you had
 started with a long email in the first place :)


no problem at all. Thanks for your detailed response.

snip...


 I'm happy with workspace.configuration.impl. However applying default
 binding configuration to bindings in a composition doesn't have much to
 do with the workspace so I'd suggest to push it down to assembly,
 possibly using a signature like the one I suggested above.


Ok I can do that.



 
  B) The algorithm (A) that calculates service endpoints based on node default
  binding configurations depends on knowing the protocol that a particular
  binding is configured to use.

 That part I don't get :) We could toy with the idea that SCA bindings
 are not the right level of abstraction and that we need a transport
 concept (or scheme or protocol, e.g. http) and the ability for multiple
 bindings (e.g. ws, atom, json) to share the same transport... But that's
 a whole different discussion IMO.

 Can we keep this simply on a binding basis, and have a node declare this:

 <component ...>
   <implementation.node .../>
   <service ...>
     <binding.ws uri="http://localhost:1234/services"/>
     <binding.jsonrpc uri="http://localhost:1234/services"/>
     <binding.atom uri="http://localhost:/services"/>
   </service>
 </component>

 Then the <binding.ws uri="..."/> declaration can provide the default config
 for all binding.ws on that node, binding.jsonrpc for all binding.jsonrpc,
 binding.atom for all binding.atom etc. As you can see in this example,
 different bindings could use different ports... so, trying to share a
 common transport will probably be less functional if it forces the
 bindings sharing that transport to share a single port.


This is OK until you bring policy into the picture. A policy might affect
the scheme a binding relies on so you may more realistically end up with:

<component ...>
  <implementation.node .../>
  <service ...>
    <binding.ws uri="http://localhost:1234/services"/>
    <binding.ws uri="https://localhost:443/services"/>
    <binding.jsonrpc uri="http://localhost:1234/services"/>
    <binding.atom uri="http://localhost:/services"/>
  </service>
</component>

And any particular binding.ws, for example, might need to be defaulted
with "http://...", "https://..." or even not defaulted at all if it's going
to use "jms:...". The issue with policies of course is that they are not,
currently, applied until later on when the bindings are actually activated.
So just looking at the model you can tell it has associated intents/policy
but not what the implications are for the endpoint.

We can ignore this in the first instance I guess and run with the
restriction that you can't apply policy that affects the scheme to bindings
inside the domain. But I'd be interested in your thoughts on the future
solution nonetheless. You will notice from the code that I haven't
actually done anything inside the bindings but just proposed that we will
have to ask binding specific questions at some point during URL creation.


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-29 Thread Jean-Sebastien Delfino

Comments inline.


A) Contribution workspace (containing installed contributions):
- Contribution model representing a contribution
- Reader for the contribution model
- Workspace model representing a collection of contributions
- Reader/writer for the workspace model
- HTTP based service for accessing the workspace
- Web browser client for the workspace service
- Command line client for the workspace service
- Validator for contributions in a workspace


I started looking at step D). Having a rest from URLs :-) In the context of
this thread the node can lose its connection to the domain and hence the
factory and the node interface slims down. So 'Runtime that loads a set of
contributions and a composite' becomes:

create a node
add some contributions (addContribution) and mark a composite for
starting (currently called addToDomainLevelComposite).
start the node
stop the node

You could then recycle (destroy) the node and repeat if required.
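
A minimal sketch of that lifecycle, with SCANodeFactory/SCANode as assumed
names (the real node2-impl interfaces may differ):

    // Sketch only: names, signatures and the contribution location are
    // illustrative.
    SCANode node = SCANodeFactory.newInstance().createNode();
    node.addContribution("store", "file:./target/store-contribution"); // URI -> location
    node.addToDomainLevelComposite("store.composite"); // mark a composite for starting
    node.start();
    // ... serve requests ...
    node.stop();
    node.destroy(); // recycle the node and repeat if required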

This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
have started to check in an alternative implementation of the node
(node2-impl). I haven't changed any interfaces yet so I don't break any
existing tests (and the code doesn't run yet!).

Anyhow. I've been looking at the workspace code for parts A and B that has
recently been committed. It would seem to be fairly representative of the
motivating scenario [1].  I don't have detailed questions yet but
interestingly it looks like contributions, composites etc are exposed as
HTTP resources. Sebastien, it would be useful to have a summary of your
thoughts on how it is intended to hang together and how these will be used.


I've basically created three services:

workspace - Provides access to a collection of links to contributions, 
their URI and location. Also provides functions to get the list of 
contribution dependencies and validate a contribution.


composites - Provides access to a collection of links to the composites 
present in the domain composite. Also provides a function returning a 
particular composite once it has been 'built' (by CompositeBuilder), 
i.e. its references, properties etc have been resolved.


nodes - Provides access to a collection of links to composites 
describing the implementation.node components which represent SCA nodes.


There's another file upload service that I'm using to upload 
contribution files and other files to some storage area but it's just 
temporary.


I'm using binding.atom to expose the above collections as editable 
ATOM-Pub collections (and ATOM feeds of contributions, composites, nodes).


Here's how I'm using these services as an SCA domain administrator:

1. Add one or more links to contributions to the workspace. They can be 
anywhere accessible on the network through a URL, or local on disk. The 
workspace just keeps track of the list.


2. Add one or more composites to the composites collection. They become 
part of the domain composite.


3. Add one or more composites declaring SCA nodes to the nodes 
collection. The nodes are described as SCA components of type 
implementation.node. A node component names the application composite 
that is assigned to run on it (see implementation-node-xml for an example).


4. Point my Web browser to the various ATOM collections to get:
- lists of contributions, composites and nodes
- list of contributions that are required by a given contribution
- the source of a particular composite
- the output of a composite built by CompositeBuilder

Here, I'm hoping that the work you've started to assign endpoint info 
to the domain model [2] will help CompositeBuilder produce the correct 
fully resolved composite.


5. Pick a node, point my Web browser to its composite description and 
write down:

- $node = URL of the composite describing the node
- $composite = URL of the application composite that's assigned to it
- $contrib = URL of the list of contribution dependencies.

6. When you have node2-impl ready :) from the command line do:
sca-node $node $composite $contrib
this should start the SCA node, which can get its description, composite 
and contributions from these URLs.


or for (6) start the node directly from my Web browser as described in 
[1], but one step at a time... that can come later when we have the 
basic building blocks working OK :)





I guess these HTTP resources bring a deployment dimension.

Local - Give the node contribution URLs that point to the local file system
from where the node reads the contribution (this is how it has worked to
date)
Remote - Give it contribution URLs that point out to HTTP resources so the
node can read the contributions from where they are stored in the network

Was that the intention?


Yes. I don't always want to have to upload contributions to some server 
or even have to copy them around. The collection of contributions should 
be able to point to contributions directly in my IDE workspace for 
example (and it supports that today).



[1] 

Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-28 Thread Simon Laws
On Tue, Feb 26, 2008 at 5:49 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Tue, Feb 5, 2008 at 8:34 AM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

  Venkata Krishnan wrote:
   It would also be good to have some sort of 'ping' function that could
  be
   used to check if a service is receptive to requests.  Infact I wonder
  if the
   Workspace Admin should also be able to test this sort of a ping per
   binding.  Is this something that can go into the section (B) .. or is
  this
   out of place ?
  
 
  Good idea, I'd put it in section (D). A node runtime needs to provide a way
  to monitor its status.
 
  --
  Jean-Sebastien
 
 
  Hi Sebastien

 I see you have started to check in code related to steps A and B. I have
 time this week to start helping on this and thought I would start looking at
 the back end of B and moving into C but don't want to tread on your toes.

 I made some code to experiment with before I went on holiday so it's not
 integrated with your code (it just uses the Workspace interface). What I was
 starting to look at was resolving a domain level composite which includes
 unresolved composites. I.e. I built a composite which includes the
 deployable composites for a series of contributions and am learning about
 resolution and re-resolution.

 I'm not doing anything about composite selection for deployment just yet.
 That will come from the node model/gui/command line. I just want to work out
 how we get the domain resolution going in this context.

 If you are not already doing this I'll carry on experimenting in my
 sandbox for a little while longer and spawn off a separate thread to discuss.

 Simon


 And here's the separate thread following on from [1]... I'm looking at
what we can do with any endpoint information we have prior to the point at
which a composite is deployed to a node. This is an alternative to
(replacement for?) having the Tuscany runtime go and query for endpoint
information after it has been started. I have been summarizing info here
[2][3].  Looking at this I need to do something like...

- associate composites with nodes/apply physical binding defaults/propagate
physical addresses based on domain level wiring

   1. Read in node model - which provides
  1. Mapping of composite to node
  2. Default configuration of bindings at that node, e.g. the root
  URL required for binding.ws
   2. For each composite in the domain (I'm assuming I have access to the
   domain level composite model)
  1. Find, from the node model, the node which will host the
  composite
  2. for each service in the composite
 1. If there are no bindings for the service
1. Create a default binding configured with the
default URI from the node model
2. Maybe we should only configure the URI if we know
there is a remote reference.
2. else
1. find each binding in the service
   1. Take the default binding configuration and
   apply it to the binding
   2. What to do about URLs as they may be either

  1. Unset
 1. Apply algorithm from Assembly
 Spec 1.7.2
  2. Set relatively
 1. Apply algorithm from Assembly
 Spec 1.7.2
  3. Set absolutely
 1. Assume it is set correctly?
  4. Set implicitly (from WSDL
  information)
 1. Assume it is set correctly?
3. The above is similar to what goes on
 during compositeConfiguration in the build phase
  3. For each reference in the composite
 1. Look for any targets that cannot be satisfied within
 the current node (need an interface to call through which
scans the domain)
 2. Find the service model for this target
 3. Do policy and binding matching
 4. For matching bindings ensure that the binding URL is
 unset and set with information from the target service
 5. The above is also similar to what happens during the
 build phase
 4. Domain Level Autowiring also needs to be taken into
  account
  5. Wire by impl that uses domain wide references also needs to be
  considered
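
A rough sketch of the URL handling in 2.2 above (the unset / relative /
absolute cases), assuming a node-level base URI; the helper name is
hypothetical:

    import java.net.URI;

    class BindingURIDefaulter {
        // Assembly spec 1.7.2 style defaulting: derive unset URIs from the
        // node default, resolve relative URIs against it, and leave absolute
        // (or WSDL-supplied) URIs alone. nodeBase is expected to end with
        // '/', e.g. http://localhost:1234/services/
        static String defaultURI(String declared, URI nodeBase, String serviceName) {
            if (declared == null) {
                return nodeBase.resolve(serviceName).toString(); // unset
            }
            URI uri = URI.create(declared);
            if (!uri.isAbsolute()) {
                return nodeBase.resolve(uri).toString();         // set relatively
            }
            return declared;                                     // set absolutely: assume correct
        }
    }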

Referring to the builder code now it feels like 2.2 above is a new model
enhancement step that could reuse (some of) the function in
CompositeConfigurationBuilderImpl.configureComponents but with extra
binding-specific features to ensure that URLs are set correctly.

2.3 looks very like CompositeWireBuilder.

My quandary at the moment is that the process has a dependency on the node
description so it doesn't fit in the builders where they are at the moment.
It feels like we need 

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-28 Thread Simon Laws
On Tue, Feb 26, 2008 at 5:57 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Mon, Feb 25, 2008 at 4:17 PM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

Jean-Sebastien Delfino wrote:
   Looks good to me, building on your initial list I added a few more
  items
   and tried to organize them in three categories:
  
   A) Contribution workspace (containing installed contributions):
   - Contribution model representing a contribution
   - Reader for the contribution model
   - Workspace model representing a collection of contributions
   - Reader/writer for the workspace model
   - HTTP based service for accessing the workspace
   - Web browser client for the workspace service
   - Command line client for the workspace service
   - Validator for contributions in a workspace
  
  
   ant elder wrote:
   Do you have your heart set on calling this a workspace or are you open
  to
   calling it something else like a repository?
  
 
  I think that they are two different concepts, here are two analogies:
 
  - We in Tuscany assemble our distro out of artifacts from multiple Maven
  repositories.
 
  - An application developer (for example using Eclipse) can connect
  Eclipse workspace to multiple SVN repositories.
 
  What I'm looking after here is similar to the above 'distro' or 'Eclipse
  workspace', basically an assembly of contributions, artifacts of various
  kinds, that I can load in a 'workspace', resolve, validate and run,
  different from the repository or repositories that I get the artifacts
  from.
  --
  Jean-Sebastien
 
 

 To me repository (in my mind somewhere to store things) describes a much
 less active entity compared to the workspace, which has to do a lot of work
 to load and assimilate information from multiple contributions. I'm not sure
 about workspace either, but to me it's better than repository and it's not
 'domain', which has caused us all kinds of problems.

 My 2c

 Simon


I started looking at step D). Having a rest from URLs :-) In the context of
this thread the node can lose its connection to the domain and hence the
factory and the node interface slims down. So 'Runtime that loads a set of
contributions and a composite' becomes:

create a node
add some contributions (addContribution) and mark a composite for
starting (currently called addToDomainLevelComposite).
start the node
stop the node

You could then recycle (destroy) the node and repeat if required.

This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
have started to check in an alternative implementation of the node
(node2-impl). I haven't changed any interfaces yet so I don't break any
existing tests (and the code doesn't run yet!).

Anyhow. I've been looking at the workspace code for parts A and B that has
recently been committed. It would seem to be fairly representative of the
motivating scenario [1].  I don't have detailed questions yet but
interestingly it looks like contributions, composites etc are exposed as
HTTP resources. Sebastien, it would be useful to have a summary of your
thoughts on how it is intended to hang together and how these will be used.

I guess these HTTP resources bring a deployment dimension.

Local - Give the node contribution URLs that point to the local file system
from where the node reads the contribution (this is how it has worked to
date)
Remote - Give it contribution URLs that point out to HTTP resources so the
node can read the contributions from where they are stored in the network

Was that the intention?

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27362.html


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-26 Thread Simon Laws
On Tue, Feb 5, 2008 at 8:34 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Venkata Krishnan wrote:
  It would also be good to have some sort of 'ping' function that could be
  used to check if a service is receptive to requests. In fact I wonder if the
  Workspace Admin should also be able to test this sort of a ping per
  binding. Is this something that can go into section (B)... or is this
  out of place?
 

 Good idea, I'd put it in section (D). A node runtime needs to provide a way
 to monitor its status.

 --
 Jean-Sebastien


 Hi Sebastien

I see you have started to check in code related to steps A and B. I have
time this week to start helping on this and thought I would start looking at
the back end of B and moving into C but don't want to tread on your toes.

I made some code to experiment with before I went on holiday so it's not
integrated with your code (it just uses the Workspace interface). What I was
starting to look at was resolving a domain level composite which includes
unresolved composites. I.e. I built a composite which includes the
deployable composites for a series of contributions and am learning about
resolution and re-resolution.

I'm not doing anything about composite selection for deployment just yet.
That will come from the node model/gui/command line. I just want to work out
how we get the domain resolution going in this context.

If you are not already doing this I'll carry on experimenting in my sandbox
for a little while longer and spawn off a separate thread to discuss.

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-26 Thread Simon Laws
On Mon, Feb 25, 2008 at 4:17 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

   Jean-Sebastien Delfino wrote:
  Looks good to me, building on your initial list I added a few more
 items
  and tried to organize them in three categories:
 
  A) Contribution workspace (containing installed contributions):
  - Contribution model representing a contribution
  - Reader for the contribution model
  - Workspace model representing a collection of contributions
  - Reader/writer for the workspace model
  - HTTP based service for accessing the workspace
  - Web browser client for the workspace service
  - Command line client for the workspace service
  - Validator for contributions in a workspace
 
 
  ant elder wrote:
  Do you have your heart set on calling this a workspace or are you open to
  calling it something else like a repository?
 

 I think that they are two different concepts, here are two analogies:

 - We in Tuscany assemble our distro out of artifacts from multiple Maven
 repositories.

 - An application developer (for example using Eclipse) can connect
 Eclipse workspace to multiple SVN repositories.

 What I'm looking after here is similar to the above 'distro' or 'Eclipse
 workspace', basically an assembly of contributions, artifacts of various
 kinds, that I can load in a 'workspace', resolve, validate and run,
 different from the repository or repositories that I get the artifacts
 from.
 --
 Jean-Sebastien



To me repository (in my mind somewhere to store things) describes a much
less active entity compared to the workspace, which has to do a lot of work
to load and assimilate information from multiple contributions. I'm not sure
about workspace either, but to me it's better than repository and it's not
'domain', which has caused us all kinds of problems.

My 2c

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-25 Thread Jean-Sebastien Delfino

 Jean-Sebastien Delfino wrote:

Looks good to me, building on your initial list I added a few more items
and tried to organize them in three categories:

A) Contribution workspace (containing installed contributions):
- Contribution model representing a contribution
- Reader for the contribution model
- Workspace model representing a collection of contributions
- Reader/writer for the workspace model
- HTTP based service for accessing the workspace
- Web browser client for the workspace service
- Command line client for the workspace service
- Validator for contributions in a workspace



ant elder wrote:
Do you have your heart set on calling this a workspace or are you open to
calling it something else like a repository?



I think that they are two different concepts, here are two analogies:

- We in Tuscany assemble our distro out of artifacts from multiple Maven 
repositories.


- An application developer (for example using Eclipse) can connect 
Eclipse workspace to multiple SVN repositories.


What I'm looking after here is similar to the above 'distro' or 'Eclipse 
workspace', basically an assembly of contributions, artifacts of various 
kinds, that I can load in a 'workspace', resolve, validate and run, 
different from the repository or repositories that I get the artifacts from.

--
Jean-Sebastien




Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-05 Thread Jean-Sebastien Delfino

Venkata Krishnan wrote:

It would also be good to have some sort of 'ping' function that could be
used to check if a service is receptive to requests. In fact I wonder if the
Workspace Admin should also be able to test this sort of a ping per
binding. Is this something that can go into section (B)... or is this
out of place?



Good idea, I'd put it in section (D). A node runtime needs to provide a way 
to monitor its status.


--
Jean-Sebastien




Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-02 Thread Jean-Sebastien Delfino

Simon Laws wrote:
[snip]

From what you are saying a short term shopping list of functions seems to be
emerging.

Contribution uploader/manager(via browser)
Contribution addition/management from command line (adding as Luciano has
started this and useful for testing)
Workspace to register added contributions
Parser to turn workspace contributions into a model that can be inspected
(doesn't need the machinery of a runtime)
Validator for validating contributions in a workspace
Domain/Node model reader/writer (implementation.node)
Function for assigning composites to nodes
Function for processing assigned composites in the context of the domain
(reference resolution, autowire) (again can be more lightweight than a
runtime but does need access to binding specific processing)
Deployer for writing out contributions for nodes

What else is there?

Simon



Looks good to me, building on your initial list I added a few more items 
and tried to organize them in three categories:


A) Contribution workspace (containing installed contributions):
- Contribution model representing a contribution
- Reader for the contribution model
- Workspace model representing a collection of contributions
- Reader/writer for the workspace model
- HTTP based service for accessing the workspace
- Web browser client for the workspace service
- Command line client for the workspace service
- Validator for contributions in a workspace

B) Domain composite (containing deployed composites):
- We can just reuse the existing composite model
- HTTP based service for accessing the domain composite
- Web browser client for the domain composite service
- Command line client for the domain composite service
- Validator for composites deployed in the domain composite
- Function for processing wiring in the domain

C) Node configuration
- Implementation.node model
- Reader/writer for the implementation.node model
- Function for configuring composites assigned to nodes
- Function for pushing contributions and composites to nodes

D) Node runtime
- Runtime that loads a set of contributions and a composite
- HTTP based service for starting/stopping a node

--
Jean-Sebastien




Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-30 Thread Simon Laws
On Jan 29, 2008 4:22 PM, Luciano Resende [EMAIL PROTECTED] wrote:

 Comments inline. Note that I also have a prototype of an install
 program in my sandbox.

 On Jan 29, 2008 7:14 AM, Simon Laws [EMAIL PROTECTED] wrote:
  On Jan 28, 2008 5:38 PM, Simon Laws [EMAIL PROTECTED] wrote:
 
   snip...
  
    I'm not too keen on scanning a disk directory as it doesn't apply to a
    distributed environment, I'd prefer to:
- define a model representing a contribution repository
- persist it in some XML form
   
  
  
   I've started on some model code in my sandbox [1]. Feel free to use
 and
   abuse.
  
   Regards
  
   Simon
  
   [1]
  
 http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
  
 
  Looking at svn I find there is already a ContributionRepository
  implementation [1]. There may be a little bit too much function in there at
  the moment but it's useful to see it nonetheless. So, to work out what it
  does: the first question concerns the store() method.
 
  public URL store(String contribution, URL sourceURL, InputStream
  contributionStream).
 
  Can someone explain what the sourceURL is for?

 contribution is the URI for the contribution being stored

 SourceURL is the URL pointing to the contribution you want to store in
 the repository.

 InputStream is the content of the contribution (optional)
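
For illustration, a hedged usage sketch of store() (the URI and location
values here are made up; 'repository' is a ContributionRepository as in
contribution-impl):

    String uri = "http://example/contributions/store";     // contribution URI
    URL source = new URL("file:/contributions/store.jar");  // where the contribution lives
    InputStream content = null;                             // optional content stream
    URL stored = repository.store(uri, source, content);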

 
  The model in my sandbox [2], which is very similar to the XML that the
  current contribution repository uses, now holds node and contribution name
  information [3]. These could be two separate models to decouple the
  management of contributions from the process of associating them together.
  I'd keep the info in one place but I expect others' views will vary.
 
  [1]
 
 http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionRepositoryImpl.java
  [2]
 http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
  [3]
 
 http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/domain-model-xml/src/test/resources/org/apache/tuscany/sca/domain/model/xml/test.domain
 



 --
 Luciano Resende
 Apache Tuscany Committer
 http://people.apache.org/~lresende
 http://lresende.blogspot.com/


 Luciano

Thanks for the heads up on the installer stuff. It actually makes the intention
much clearer when you see the code being used. I'll add some more thoughts
to this thread shortly.

Thanks

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-30 Thread Simon Laws
On Jan 30, 2008 12:24 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Simon Laws wrote:
 [snip]
  The model in my sandbox [2], which is very similar to the XML that the
  current contribution repository uses, now holds node and contribution name
  information [3]. These could be two separate models to decouple the
  management of contributions from the process of associating them together.

 I like the decoupling part:

 - A workspace containing contributions (basically just a contribution
 URI - URL association). I've started to add that Workspace interface to
 the contribution package.

 - A description of the network containing nodes, we don't need a new
 model for that, as we already have implementation-node and can use
 something like:

 <composite name="bobsNetWork">

   <component name="bobsNode1">
     <implementation.node .../>
   </component>

   <component name="bobsNode2">
     <implementation.node .../>
   </component>

 </composite>
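
Going back to the first point, a minimal sketch of such a Workspace (method
names are assumptions, not the actual interface in the contribution package):

    import java.net.URL;
    import java.util.Map;

    // Little more than a contribution URI -> location association.
    interface Workspace {
        void addContribution(String uri, URL location);
        void removeContribution(String uri);
        Map<String, URL> getContributions(); // URI -> URL
    }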

 --
 Jean-Sebastien



From what you are saying a short term shopping list of functions seems to be
emerging.

Contribution uploader/manager(via browser)
Contribution addition/management from command line (adding as Luciano has
started this and useful for testing)
Workspace to register added contributions
Parser to turn workspace contributions into a model that can be inspected
(doesn't need the machinery of a runtime)
Validator for validating contributions in a workspace
Domain/Node model reader/writer (implementation.node)
Function for assigning composites to nodes
Function for processing assigned composites in the context of the domain
(reference resolution, autowire) (again can be more lightweight than a
runtime but does need access to binding specific processing)
Deployer for writing out contributions for nodes

What else is there?

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-29 Thread Luciano Resende
Comments inline. Note that I also have a prototype of an install
program in my sandbox.

On Jan 29, 2008 7:14 AM, Simon Laws [EMAIL PROTECTED] wrote:
 On Jan 28, 2008 5:38 PM, Simon Laws [EMAIL PROTECTED] wrote:

  snip...
 
   I'm not too keen on scanning a disk directory as it doesn't apply to a
   distributed environment, I'd prefer to:
   - define a model representing a contribution repository
   - persist it in some XML form
  
 
 
  I've started on some model code in my sandbox [1]. Feel free to use and
  abuse.
 
  Regards
 
  Simon
 
  [1]
  http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
 

 Looking at svn I find there is already a ContributionRepository
 implementation [1]. There may be a little bit too much function in there at
 the moment but it's useful to see it nonetheless. So, to work out what it
 does: the first question concerns the store() method.

 public URL store(String contribution, URL sourceURL, InputStream
 contributionStream).

 Can someone explain what the sourceURL is for?

contribution is the URI for the contribution being stored

SourceURL is the URL pointing to the contribution you want to store in
the repository.

InputStream is the content of the contribution (optional)


 The model in my sandbox [2], which is very similar to the XML that the
 current contribution repository uses, now holds node and contribution name
 information [3]. These could be two separate models to decouple the
 management of contributions from the process of associating them together.
 I'd keep the info in one place but I expect others' views will vary.

 [1]
 http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionRepositoryImpl.java
 [2] http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
 [3]
 http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/domain-model-xml/src/test/resources/org/apache/tuscany/sca/domain/model/xml/test.domain




-- 
Luciano Resende
Apache Tuscany Committer
http://people.apache.org/~lresende
http://lresende.blogspot.com/




Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-29 Thread Simon Laws
On Jan 28, 2008 5:38 PM, Simon Laws [EMAIL PROTECTED] wrote:

 snip...

  I'm not too keen on scanning a disk directory as it doesn't apply to a
  distributed environment, I'd prefer to:
  - define a model representing a contribution repository
  - persist it in some XML form
 


 I've started on some model code in my sandbox [1]. Feel free to use and
 abuse.

 Regards

 Simon

 [1]
 http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/


Looking at svn I find there is already a ContributionRepository
implementation [1]. There may be a little too much function in there at
the moment but it's useful to see it nonetheless. So, to work out what it
does, my first question concerns the store() method.

public URL store(String contribution, URL sourceURL, InputStream
contributionStream).

Can someone explain what the sourceURL is for?

The model in my sandbox [2], which is very similar to the XML that the
current contribution repository uses, now holds node and contribution name
information [3]. These could be two separate models to decouple the
management of contributions from the process of associating them together.
I'd keep the info in one place but I expect others' views will vary.

[1]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionRepositoryImpl.java
[2] http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/
[3]
http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/domain-model-xml/src/test/resources/org/apache/tuscany/sca/domain/model/xml/test.domain
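
For illustration, the kind of node/contribution association [3] captures
might look roughly like this (a hypothetical shape, not necessarily the
actual test.domain format):

<domain uri="http://domain">
  <node uri="http://node1">
    <contribution uri="http://store" location="file:/repo/store.jar"/>
  </node>
</domain>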


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-29 Thread ant elder
On Jan 28, 2008 5:34 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

snip

I don't think that a Webapp is the right architecture but I may be wrong
 or missing something, so you should probably just try and see for
 yourself if this is what you want to do.


Can you explain more about what you mean by not the right architecture?
There has been confusion and disagreement around what Tuscany should be
doing with webapps for a long time, years even, so maybe it's time we tried
to get some consensus on this.

   ...ant


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-29 Thread Jean-Sebastien Delfino

Simon Laws wrote:
[snip]

The model in my sandbox [2], which is very simlar to the XML that the
current contribution repository uses, now holds node and contribution name
information [3]. These could be two separate models to decouple the
management of contributions from the process of associating them together.


I like the decoupling part:

- A workspace containing contributions (basically just a contribution 
URI - URL association). I've started to add that Workspace interface to 
the contribution package.


- A description of the network containing nodes; we don't need a new 
model for that, as we already have implementation.node and can use 
something like:


<composite name="bobsNetWork">

  <component name="bobsNode1">
    <implementation.node .../>
  </component>

  <component name="bobsNode2">
    <implementation.node .../>
  </component>

</composite>

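As a rough illustration, reading node definitions out of a composite like the
one above could be as simple as the following StAX loop (a sketch only;
Tuscany's real implementation.node reader is of course richer):

import java.io.InputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class NodeCompositeReader {
    // Print the node (component) names found in a composite like bobsNetWork.
    public static void read(InputStream in) throws XMLStreamException {
        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(in);
        String currentComponent = null;
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) {
                if ("component".equals(r.getLocalName())) {
                    currentComponent = r.getAttributeValue(null, "name");
                } else if ("implementation.node".equals(r.getLocalName())) {
                    System.out.println("node: " + currentComponent);
                }
            }
        }
    }
}
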
--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-28 Thread Simon Laws


 Add the
  code that loads all contributions that are available from the file
 system.
  Ant already has this code in various forms

 We can do simpler than load all contributions that are available from
 the file system as the list of contributions to be loaded in a node is
 determined from the composite allocated to it.


I was thinking specifically here about how the node is told which composite
to load and subsequently how it physically locates the artifacts that are
required. As we have disconnected nodes from the domain there is no service
interface to call to pass this information. So my suggestion was to wrap the
node with the code that can load information deployed via the file system in
lieu of a service interface that tells the node what to do.

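For example, a thin wrapper could pick the node's instructions up from a file
dropped next to it (a sketch only; the file name and property keys are
hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class NodeLauncher {
    public static void main(String[] args) throws IOException {
        // Read the node's instructions from disk in lieu of a domain call.
        Properties p = new Properties();
        try (InputStream in = new FileInputStream("node.properties")) {
            p.load(in);
        }
        String composite = p.getProperty("composite");  // e.g. http://store#store
        String[] contributions = p.getProperty("contributions", "").split(",");
        System.out.println("start " + composite + " using "
                + contributions.length + " contribution(s)");
    }
}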

  3. As an experiment make a Domain that takes as input
a - Contributions from disc (again can re-use Ant's contribution
 loading
  code)

 I'm not too keen on scanning a disk directory as it doesn't apply to a
 distributed environment, I'd prefer to:
 - define a model representing a contribution repository
 - persist it in some XML form
 - provide a service to add/remove/update contributions

 Once we have that basic service in place, it'll be easy to develop a
 program that watches a directory and drives the add/remove/update calls.


This was just a stepping-stone suggestion to get the repository up and
running quickly. We need to be able to read the model and the contributions
themselves. In the spirit of keeping functions separate, the mechanism by
which the model and the contributions get to the repository is not connected
to the way that they are read and processed.

Regards

Simon


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-28 Thread Raymond Feng

Hi,

Starting with the J2SE based runtime sounds like a balanced approach. I 
think the embedded HTTP support (Tomcat or Jetty) will provide us with the 
web-based UI capability to install/uninstall/list contributions and 
deploy/undeploy composites. I also see great similarity with the 
Tuscany/Geronimo deep integration (Tuscany as a Geronimo plugin) where the 
GUI can be an extension to the Geronimo admin console.


Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Sunday, January 27, 2008 9:34 PM
Subject: Re: SCA contribution packaging schemes: was: SCA runtimes



ant elder wrote:

On Jan 24, 2008 7:47 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:


ant elder wrote:
[snip]

The (F), (G) and (H) would use the packaging in your (B). For your (B)
how/where were you expecting those sca contribution jars to get used?

Ah I'm happy to see that there are not so many packaging schemes after
all :)

We've already started to discuss contribution usage scenarios in [1].

Here's a longer scenario, showing how I want to use contributions and
composites in a domain for the store tutorial I've been working on.

There are three contributions in the tutorial:
- assets.jar containing most implementation artifacts
- store.jar containing the main store components
- cloud.jar containing utility components in the service cloud

Both store.jar and cloud.jar import artifacts from assets.jar.

1. Create assets.jar and store.jar (using scheme B).

2. Open my tutorial domain in my Web browser, upload store.jar to the
domain.

3. List the contributions in the domain, store.jar shows a red-x error
as some of its imports are not resolvable.

4. Upload assets.jar. Both assets.jar and store.jar show in the list
with no red-x.

5. List the deployable composites, find http://store#store under
store.jar. Open it in my browser to check it's what I want.

6. Mark http://store#store as deployed. Store has a reference to a
CurrencyConverter service (from composite http://cloud#cloud which is
not in my domain yet) so it shows a red-x and appears disabled.

7. Upload cloud.jar, find deployable composite http://cloud#cloud in it,
mark it deployed. The red-x on deployed composite http://store#store is
now gone.

8. Assuming I have 2 machines for running SCA in my network and have
already declared these 2 machines to my domain, allocate composites to
them. Select http://store#store and associate it with machine1.
Store.jar and assets.jar are downloaded to machine1 and machine1
configured with http://store#store.

9. Select http://cloud#cloud and associate it with machine2. Cloud.jar
and assets.jar are downloaded to machine2 and machine2 is configured
with http://cloud#cloud.

10. Display the list of deployed composites, select http://store#store,
click the start button, select http://cloud#cloud, click start.

Hope this helps.

[1] http://marc.info/?l=tuscany-devm=119952302226006

--
Jean-Sebastien


That all sounds wonderful, will be really good when we get there. There's
a lot to do for all that to work


There's not a lot to do. Most of the necessary work is to decouple all the 
code that's implementing too much runtime magic getting in the way of the 
simple scenario I've described here.


though so as a stepping stone how about
getting this to work on a single node first without the gui and individual
deployment steps and then add those things once we have something basic
working?


Sorry to disagree, I'm approaching this the other way around:

1. Get the user experience and the UI right first.

2. Work through the individual steps and make sure they make sense.

3. Clean up all the magic code currently tying all the steps together, and 
make the individual functions (add/remove a contribution, validate a 
contribution, get a contribution closure) usable.


4. Lastly, implement minimal code to bootstrap a runtime node from a 
deployed composite (for the last step in the scenario).


The basic idea is to drive the development of the underlying plumbing from 
the scenario and user experience and not the other way around.




Where do we want this to run? - I'd quite like at least one of the options
to be as a regular webapp in Tomcat.



I don't think that a Webapp is the right architecture but I may be wrong 
or missing something, so you should probably just try and see for yourself 
if this is what you want to do.


I'm more interested in getting the above scenario working well with one 
option for now: the J2SE based runtime. That's what I've started to work 
on.


--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-28 Thread Simon Laws
snip...

 I'm not too keen on scanning a disk directory as it doesn't apply to a
 distributed environment, I'd prefer to:
 - define a model representing a contribution repository
 - persist it in some XML form



I've started on some model code in my sandbox [1]. Feel free to use and
abuse.

Regards

Simon

[1] http://svn.apache.org/repos/asf/incubator/tuscany/sandbox/slaws/modules/


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-27 Thread Jean-Sebastien Delfino

ant elder wrote:

On Jan 24, 2008 7:47 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:


ant elder wrote:
[snip]

The (F), (G) and (H) would use the packaging in your (B). For your (B)
how/where were you expecting those sca contribution jars to get used?

Ah I'm happy to see that there are not so many packaging schemes after
all :)

We've already started to discuss contribution usage scenarios in [1].

Here's a longer scenario, showing how I want to use contributions and
composites in a domain for the store tutorial I've been working on.

There are three contributions in the tutorial:
- assets.jar containing most implementation artifacts
- store.jar containing the main store components
- cloud.jar containing utility components in the service cloud

Both store.jar and cloud.jar import artifacts from assets.jar.

1. Create assets.jar and store.jar (using scheme B).

2. Open my tutorial domain in my Web browser, upload store.jar to the
domain.

3. List the contributions in the domain, store.jar shows a red-x error
as some of its imports are not resolvable.

4. Upload assets.jar. Both assets.jar and store.jar show in the list
with no red-x.

5. List the deployable composites, find http://store#store under
store.jar. Open it in my browser to check it's what I want.

6. Mark http://store#store as deployed. Store has a reference to a
CurrencyConverter service (from composite http://cloud#cloud which is
not in my domain yet) so it shows a red-x and appears disabled.

7. Upload cloud.jar, find deployable composite http://cloud#cloud in it,
mark it deployed. The red-x on deployed composite http://store#store is
now gone.

8. Assuming I have 2 machines for running SCA in my network and have
already declared these 2 machines to my domain, allocate composites to
them. Select http://store#store and associate it with machine1.
Store.jar and assets.jar are downloaded to machine1 and machine1
configured with http://store#store.

9. Select http://cloud#cloud and associate it with machine2. Cloud.jar
and assets.jar are downloaded to machine2 and machine2 is configured
with http://cloud#cloud.

10. Display the list of deployed composites, select http://store#store,
click the start button, select http://cloud#cloud, click start.

Hope this helps.

[1] http://marc.info/?l=tuscany-devm=119952302226006

--
Jean-Sebastien



That all sounds wonderful, will be really good when we get there. There's
a lot to do for all that to work


There's not a lot to do. Most of the necessary work is to decouple all 
the code that's implementing too much runtime magic getting in the way 
of the simple scenario I've described here.


though so as a stepping stone how about

getting this to work on a single node first without the gui and individual
deployment steps and then add those things once we have something basic
working?


Sorry to disagree, I'm approaching this the other way around:

1. Get the user experience and the UI right first.

2. Work through the individual steps and make sure they make sense.

3. Clean up all the magic code currently tying all the steps together, 
and make the individual functions (add/remove a contribution, validate a 
contribution, get a contribution closure) usable.


4. Lastly, implement minimal code to bootstrap a runtime node from a 
deployed composite (for the last step in the scenario).


The basic idea is to drive the development of the underlying plumbing 
from the scenario and user experience and not the other way around.




Where do we want this to run? - I'd quite like at least one of the options
to be as a regular webapp in Tomcat.



I don't think that a Webapp is the right architecture but I may be wrong 
or missing something, so you should probably just try and see for 
yourself if this is what you want to do.


I'm more interested in getting the above scenario working well with one 
option for now: the J2SE based runtime. That's what I've started to work on.


--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-27 Thread Jean-Sebastien Delfino

Simon Laws wrote:
[snip]

How about the following as some first steps.

1. Disengage the Node from the domain in the way that it is connected at the
moment  leaving the Node able to load Contributions and start composites as
it does currently  in stand alone mode. Doing this we remove the sca
application that is used to connect the node to the domain and the need to
pull in WS, JSON etc that came up on another thread.


+1 to disengage all the magic domain/node connections.



2. Wrap the Node in the first run options we want to test with (standalone
and tomcat webapp would give us enough to try most of the samples).


I'd like to focus on the J2SE standalone option as trying to tackle many 
options before having the simple one really figured out will create a mess.


Add the

code that loads all contributions that are available from the file system.
Ant already has this code in various forms


We can do simpler than load all contributions that are available from 
the file system as the list of contributions to be loaded in a node is 
determined from the composite allocated to it.




3. As an experiment make a Domain that takes as input
  a - Contributions from disc (again can re-use Ant's contribution loading
code)


I'm not too keen on scanning a disk directory as it doesn't apply to a 
distributed environment, I'd prefer to:

- define a model representing a contribution repository
- persist it in some XML form
- provide a service to add/remove/update contributions

Once we have that basic service in place, it'll be easy to develop a 
program that watches a directory and drives the add/remove/update calls.



  b - a topology file something like [1]

  and produces as output

  c - a list of which contributions need to be copied to which node and
appropriate warnings about missing dependencies
  d - updated contributions/composites so that references that refer to
services in remote nodes have absolute URLs written in to appropriate
bindings.


+1 to the general ideas but I really want to decouple b, c, d as they 
are independent steps.



  We also have most of the code already to do a-d in various places. d is
the trickiest bit but provides the ideal opportunity to tidy up the binding
URL calculation story.


The less code the better... We may be able to reuse a little bit but we 
can cover the scenario I've tried to describe with much less code than 
we currently have :)




[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg26561.html



--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-26 Thread ant elder
On Jan 25, 2008 1:16 PM, Simon Laws [EMAIL PROTECTED] wrote:

 Sebastien/Ant

 
   Here's a longer scenario, showing how I want to use contributions and
   composites in a domain for the store tutorial I've been working on.
 
  A real eye opener. Thank you for this.

 snip...

 
  getting this to work on a single node first without the gui and
 individual
 
  deployment steps and then add those things once we have something basic
  working?

 How about the following as some first steps.

 1. Disengage the Node from the domain in the way that it is connected at
 the
 moment  leaving the Node able to load Contributions and start composites
 as
 it does currently  in stand alone mode. Doing this we remove the sca
 application that is used to connect the node to the domain and the need to
 pull in WS, JSON etc that came up on another thread.

 2. Wrap the Node in the first run options we want to test with (standalone
 and tomcat webapp would give us enough to try most of the samples). Add
 the
 code that loads all contributions that are available from the file system.
 Ant already has this code in various forms

 3. As an experiment make a Domain that takes as input
  a - Contributions from disc (again can re-use Ant's contribution loading
 code)
  b - a topology file something like [1]

  and produces as output

  c - a list of which contributions need to be copied to which node and
 appropriate warnings about missing dependencies
  d - updated contributions/composites so that references that refer to
 services in remote nodes have absolute URLs written in to appropriate
 bindings.

  We also have most of the code already to do a-d in various places. d is
 the trickiest bit but provides the ideal opportunity to tidy up the
 binding
 URL calculation story.

 Thoughts?

 Simon

 [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg26561.html


Sounds fine to me, and to try and progress things along, unless there are
alternative suggestions I'd like to make a start on this. Any help is
welcome; the first thing for me will be to get the store tutorial
contributions running in a single node.

   ...ant


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-25 Thread ant elder
On Jan 24, 2008 7:47 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 ant elder wrote:
 [snip]
 
  The (F), (G) and (H) would use the packaging in your (B). For your (B)
  how/where were you expecting those sca contribution jars to get used?

 Ah I'm happy to see that there are not so many packaging schemes after
 all :)

 We've already started to discuss contribution usage scenarios in [1].

 Here's a longer scenario, showing how I want to use contributions and
 composites in a domain for the store tutorial I've been working on.

 There are three contributions in the tutorial:
 - assets.jar containing most implementation artifacts
 - store.jar containing the main store components
 - cloud.jar containing utility components in the service cloud

 Both store.jar and cloud.jar import artifacts from assets.jar.

 1. Create assets.jar and store.jar (using scheme B).

 2. Open my tutorial domain in my Web browser, upload store.jar to the
 domain.

 3. List the contributions in the domain, store.jar shows a red-x error
 as some of its imports are not resolvable.

 4. Upload assets.jar. Both assets.jar and store.jar show in the list
 with no red-x.

 5. List the deployable composites, find http://store#store under
 store.jar. Open it in my browser to check it's what I want.

 6. Mark http://store#store as deployed. Store has a reference to a
 CurrencyConverter service (from composite http://cloud#cloud which is
 not in my domain yet) so it shows a red-x and appears disabled.

 7. Upload cloud.jar, find deployable composite http://cloud#cloud in it,
 mark it deployed. The red-x on deployed composite http://store#store is
 now gone.

 8. Assuming I have 2 machines for running SCA in my network and have
 already declared these 2 machines to my domain, allocate composites to
 them. Select http://store#store and associate it with machine1.
 Store.jar and assets.jar are downloaded to machine1 and machine1
 configured with http://store#store.

 9. Select http://cloud#cloud and associate it with machine2. Cloud.jar
 and assets.jar are downloaded to machine2 and machine2 is configured
 with http://cloud#cloud.

 10. Display the list of deployed composites, select http://store#store,
 click the start button, select http://cloud#cloud, click start.

 Hope this helps.

 [1] http://marc.info/?l=tuscany-devm=119952302226006

 --
 Jean-Sebastien


That all sounds wonderful, will be really good when we get there. There's
a lot to do for all that to work though so as a stepping stone how about
getting this to work on a single node first without the gui and individual
deployment steps and then add those things once we have something basic
working?

Where do we want this to run? - I'd quite like at least one of the options
to be as a regular webapp in Tomcat.

   ...ant


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-25 Thread Simon Laws
Sebastien/Ant


  Here's a longer scenario, showing how I want to use contributions and
  composites in a domain for the store tutorial I've been working on.

 A real eye opener. Thank you for this.

snip...


 getting this to work on a single node first without the gui and individual

 deployment steps and then add those things once we have something basic
 working?

How about the following as some first steps.

1. Disengage the Node from the domain in the way that it is connected at the
moment  leaving the Node able to load Contributions and start composites as
it does currently  in stand alone mode. Doing this we remove the sca
application that is used to connect the node to the domain and the need to
pull in WS, JSON etc that came up on another thread.

2. Wrap the Node in the first run options we want to test with (standalone
and tomcat webapp would give us enough to try most of the samples). Add the
code that loads all contributions that are available from the file system.
Ant already has this code in various forms

3. As an experiment make a Domain that takes as input
  a - Contributions from disc (again can re-use Ant's contribution loading
code)
  b - a topology file something like [1]

  and produces as output

  c - a list of which contributions need to be copied to which node and
appropriate warnings about missing dependencies
  d - updated contributions/composites so that references that refer to
services in remote nodes have absolute URLs written in to appropriate
bindings.

  We also have most of the code already to do a-d in various places. d is
the trickiest bit but provides the ideal opportunity to tidy up the binding
URL calculation story.

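As a concrete illustration of d, a rewritten reference might end up carrying
an absolute URL in its binding, something like (hypothetical names and
addresses):

<reference name="currencyConverter">
  <binding.ws uri="http://machine2:8080/CurrencyConverter"/>
</reference>
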
Thoughts?

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg26561.html


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-24 Thread Jean-Sebastien Delfino

ant elder wrote:
[snip]


The (F), (G) and (H) would use the packaging in your (B). For your (B)
how/where were you expecting those sca contribution jars to get used?


Ah I'm happy to see that there are not so many packaging schemes after 
all :)


We've already started to discuss contribution usage scenarios in [1].

Here's a longer scenario, showing how I want to use contributions and 
composites in a domain for the store tutorial I've been working on.


There are three contributions in the tutorial:
- assets.jar containing most implementation artifacts
- store.jar containing the main store components
- cloud.jar containing utility components in the service cloud

Both store.jar and cloud.jar import artifacts from assets.jar.

1. Create assets.jar and store.jar (using scheme B).

2. Open my tutorial domain in my Web browser, upload store.jar to the 
domain.


3. List the contributions in the domain, store.jar shows a red-x error 
as some of its imports are not resolvable.


4. Upload assets.jar. Both assets.jar and store.jar show in the list 
with no red-x.


5. List the deployable composites, find http://store#store under 
store.jar. Open it in my browser to check it's what I want.


6. Mark http://store#store as deployed. Store has a reference to a 
CurrencyConverter service (from composite http://cloud#cloud which is 
not in my domain yet) so it shows a red-x and appears disabled.


7. Upload cloud.jar, find deployable composite http://cloud#cloud in it, 
mark it deployed. The red-x on deployed composite http://store#store is 
now gone.


8. Assuming I have 2 machines for running SCA in my network and have 
already declared these 2 machines to my domain, allocate composites to 
them. Select http://store#store and associate it with machine1. 
Store.jar and assets.jar are downloaded to machine1 and machine1 
configured with http://store#store.


9. Select http://cloud#cloud and associate it with machine2. Cloud.jar 
and assets.jar are downloaded to machine2 and machine2 is configured 
with http://cloud#cloud.


10. Display the list of deployed composites, select http://store#store, 
click the start button, select http://cloud#cloud, click start.


Hope this helps.

[1] http://marc.info/?l=tuscany-devm=119952302226006

--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-24 Thread Raymond Feng

Hi,

Thank you for describing the scenario. It's really helpful to understand how 
all the pieces work together from a user perspective.


I have a few comments inline.

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Thursday, January 24, 2008 11:47 AM
Subject: Re: SCA contribution packaging schemes: was: SCA runtimes



ant elder wrote:
[snip]


The (F), (G) and (H) would use the packaging in your (B). For your (B)
how/where were you expecting those sca contribution jars to get used?


Ah I'm happy to see that there are not so many packaging schemes after all 
:)


We've already started to discuss contribution usage scenarios in [1].

Here's a longer scenario, showing how I want to use contributions and 
composites in a domain for the store tutorial I've been working on.


There are three contributions in the tutorial:
- assets.jar containing most implementation artifacts
- store.jar containing the main store components
- cloud.jar containing utility components in the service cloud

Both store.jar and cloud.jar import artifacts from assets.jar.

1. Create assets.jar and store.jar (using scheme B).

2. Open my tutorial domain in my Web browser, upload store.jar to the 
domain.


A degenerate case is to copy store.jar to a folder containing the 
contributions for a given SCA domain. But the web-based management is better 
suited to a distributed env.




3. List the contributions in the domain, store.jar shows a red-x error as 
some of its imports are not resolvable.




We could have something similar to the OSGi console to show the status of 
each contribution (such as INSTALLED, RESOLVED). For those that cannot be 
fully resolved, we should be able to tell which import is not satisfied.


4. Upload assets.jar. Both assets.jar and store.jar show in the list with 
no red-x.


5. List the deployable composites, find http://store#store under 
store.jar. Open it in my browser to check it's what I want.


6. Mark http://store#store as deployed. Store has a reference to a 
CurrencyConverter service (from composite http://cloud#cloud which is not 
in my domain yet) so it shows a red-x and appears disabled.


7. Upload cloud.jar, find deployable composite http://cloud#cloud in it, 
mark it deployed. The red-x on deployed composite http://store#store is 
now gone.


We should be able to deploy multiple composites in one shot as they might 
have cross-references. Or the GUI could select the required 
contributions/composites as we mark one deployable composite.




8. Assuming I have 2 machines for running SCA in my network and have 
already declared these 2 machines to my domain, allocate composites to 
them. Select http://store#store and associate it with machine1. Store.jar 
and assets.jar are downloaded to machine1 and machine1 configured with 
http://store#store.


I assume the closure (a set of contributions required by the deployable 
composite) should be downloaded.




9. Select http://cloud#cloud and associate it with machine2. Cloud.jar and 
assets.jar are downloaded to machine2 and machine2 is configured with 
http://cloud#cloud.


Who initiates the download? Is it a pull or push model?



10. Display the list of deployed composites, select http://store#store, 
click the start button, select http://cloud#cloud, click start.


Hope this helps.

[1] http://marc.info/?l=tuscany-devm=119952302226006

--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-24 Thread Jean-Sebastien Delfino

Some more input on steps 7 and 9.

I agree with your other comments (snipped out to keep this short).

Raymond Feng wrote:
[snip]
7. Upload cloud.jar, find deployable composite http://cloud#cloud in 
it, mark it deployed. The red-x on deployed composite 
http://store#store is now gone.


We should be able to deploy multiple composites in one shot as they 
might have cross-references.


Yes, but the simple one composite at a time deployment scheme that I 
described still supports your cross-reference case, or am I missing 
something?


Or the GUI could select the required

contributions/composites as we mark one deployable composite.



We could do that, except that the required composites might not be 
available in the domain yet.


[snip]
9. Select http://cloud#cloud and associate it with machine2. Cloud.jar 
and assets.jar are downloaded to machine2 and machine2 is configured 
with http://cloud#cloud.


Who initiates the download? Is it a pull or push model?


I was thinking about push: the domain triggers the download.

To avoid over-engineering this too quickly, how about starting simple 
and just generating a zip of the artifacts and let the administrator FTP 
and unzip it on the target machine?


In other words I think we need to be comfortable with executing the 
install / resolve / deploy / configure / distribute steps manually 
before trying to automate them.


--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-24 Thread Raymond Feng

Comments inline.

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Thursday, January 24, 2008 2:11 PM
Subject: Re: SCA contribution packaging schemes: was: SCA runtimes



Some more input on steps 7 and 9.

I agree with your other comments (snipped out to keep this short).

Raymond Feng wrote:
[snip]
7. Upload cloud.jar, find deployable composite http://cloud#cloud in it, 
mark it deployed. The red-x on deployed composite http://store#store is 
now gone.


We should be able to deploy multiple composites in one shot as they might 
have cross-references.


Yes, but the simple one composite at a time deployment scheme that I 
described still supports your cross-reference case, or am I missing 
something?




I'm a bit confused by your statement in:

6. Mark http://store#store as deployed. Store has a reference to a
CurrencyConverter service (from composite http://cloud#cloud which is
not in my domain yet) so it shows a red-x and appears disabled.

In this case, the target service is not in my domain yet, but it shouldn't 
prevent http://store#store from being assigned to a node to start the 
composite. The relationship between the reference and service is 
loosely-coupled, right? It could be a warning though.


I assume when we select a deployable composite from a contribution, we just 
have to create a collection of contributions required to deploy the 
composite. Let's say we have two deployable composites http://store#store 
and http://cloud#cloud. It could end up with two downloadable zips: 
store.jar & assets.jar for the store composite and cloud.jar & assets.jar for 
the cloud composite. One zip can be downloaded to machine 1 and the other goes 
to machine 2. There is no need to check if a reference in the store composite 
can be fulfilled by a service in the cloud composite.




Or the GUI could select the required

contributions/composites as we mark one deployable composite.



We could do that, except that the required composites might not be 
available in the domain yet.


I should say required contributions.



[snip]
9. Select http://cloud#cloud and associate it with machine2. Cloud.jar 
and assets.jar are downloaded to machine2 and machine2 is configured 
with http://cloud#cloud.


Who initiates the download? Is it a pull or push model?


I was thinking about push: the domain triggers the download.

To avoid over-engineering this too quickly, how about starting simple and 
just generating a zip of the artifacts and let the administrator FTP and 
unzip it on the target machine?


In other words I think we need to be comfortable with executing the 
install / resolve / deploy / configure / distribute steps manually before 
trying to automate them.


It's important to keep all these steps separate and manually operable. For 
example, we could expose management services over JMX, and the control could 
be invoked from any JMX-enabled console/command line.

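A sketch of what that could look like (hypothetical names; just the standard
JMX pattern, not an actual Tuscany API):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard JMX naming: the management interface is ClassName + "MBean".
interface DomainAdminMBean {
    void installContribution(String uri, String location);
    void deployComposite(String qname);
    void startComposite(String qname);
}

public class DomainAdmin implements DomainAdminMBean {
    public void installContribution(String uri, String location) { /* ... */ }
    public void deployComposite(String qname) { /* ... */ }
    public void startComposite(String qname) { /* ... */ }

    public static void main(String[] args) throws Exception {
        // Register so any JMX console/command line can drive the steps.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new DomainAdmin(),
                new ObjectName("tuscany:type=DomainAdmin"));
    }
}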



--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-24 Thread Jean-Sebastien Delfino

More comments inline :)

Raymond Feng wrote:

I'm a bit confused by your statement in:

6. Mark http://store#store as deployed. Store has a reference to a
CurrencyConverter service (from composite http://cloud#cloud which is
not in my domain yet) so it shows a red-x and appears disabled.

In this case, the target service is not in my domain yet, but it 
shouldn't prevent http://store#store from being assigned to a node to 
start the composite.


My view is that this case should prevent assigning http://store#store to 
a machine, in the next few months at least.


Having a reference is the expression of a requirement (i.e. I need the 
function behind the reference to be available to perform my job). 
Assigning a component with a dangling reference to a processor and 
starting it is just violating that requirement, like trying to run a 
Java class with compile errors.


I'm not saying that we'll never have to look into that kind of dream 
dynamic scenario, but I'd like to start with reality before attacking 
that dream.


The relationship between the reference and service

is loosely-coupled, right?


Can you help me understand your definition of loosely coupled?

It could be a warning though.


I assume when we select a deployable composite from a contribution, we 
just have to create a collection of contributions required to deploy the 
composite. Let's say we have two deployable composites 
http://store#store and http://cloud#cloud. It could end up with two 
downloadable zips: store.jar & assets.jar for the store composite and 
cloud.jar & assets.jar for the cloud composite. One zip can be downloaded to 
machine 1 and the other goes to machine 2.


Yes. We could also put some of the pieces of runtime required to run the 
composite in that zip.


There is no need to check if
a reference in store composite can be fulfilled by a service in cloud 
composite.




If somebody has declared a reference, that was for a reason: to express the 
requirement to have a service wired to that reference, so I'd prefer to 
check for now.


Also until the reference is satisfied you may not know which binding to 
use, thereby preventing you from selecting the correct machine equipped with 
the necessary software to support that binding.





Or the GUI could select the required

contributions/composites as we mark one deployable composite.



We could do that, except that the required composites might not be 
available in the domain yet.


I should say required contributions.


Yes, when you select a composite, the UI could highlight the 
contributions required by the contribution containing that composite.






[snip]
9. Select http://cloud#cloud and associate it with machine2. 
Cloud.jar and assets.jar are downloaded to machine2 and machine2 is 
configured with http://cloud#cloud.


Who initiates the download? Is it a pull or push model?


I was thinking about push: the domain triggers the download.

To avoid over-engineering this too quickly, how about starting simple 
and just generating a zip of the artifacts and let the administrator 
FTP and unzip it on the target machine?


In other words I think we need to be comfortable with executing the 
install / resolve / deploy / configure / distribute steps manually 
before trying to automate them.


It's important to keep all these steps separate and manually operable. 


Exactly.

For example, we could expose management services over JMX, and the 
control could be invoked from any JMX-enabled console/command line.


Not sure about JMX yet, I'd like to understand what the individual steps 
are before picking a specific technology to implement them.


--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-23 Thread Jean-Sebastien Delfino

ant elder wrote:

On Jan 21, 2008 9:31 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:


Simon Nash wrote:
  Jean-Sebastien Delfino wrote:

- Under which circumstances does the app packager want to package the
Tuscany and dependency JARs with the application artifacts.

[snip]

With a big topic like this, dividing it into separate threads makes it
easier for people to follow and participate in the discussions.  The
split you are suggesting looks good to me.

[snip]

Trying to address Under which circumstances does the app packager want
to package the Tuscany and dependency JARs with the application
artifacts?

My (maybe simplistic) view is:

A) We can package in a WAR:
- several SCA contributions JARs
- any SCA deployment composites
- the required API JARs
- the required Tuscany JARs and runtime dependency JARs

This allows deployment of an SCA/Tuscany based solution to JEE Web
containers without requiring any system configuration or software
installation besides the Webapp.

There are some basic architectural limitations to that scheme:
- no good support for other bindings than HTTP based bindings
- footprint issue with every Webapp packaging the whole runtime

Also we're not quite there yet as I don't think we support:
- several SCA contributions in the packaged solution
- SCA deployment composites

B) Package SCA contributions as simple JARs, containing only the
application artifacts (no API JARs, no runtime dependency JARs).

Packaging SCA contributions as OSGi bundles is a variation of the same
scheme.

Any thoughts?
What other packaging schemes do people want to support and when?
--
Jean-Sebastien



Here's all the  options I can think of:

A) - app dependencies and tuscany and its dependencies in web-inf/lib
B) - app dependencies in web-inf/lib, tuscany  and its dependencies in
container shared library (geronimo/websphere/..)
C) - app dependencies and tuscany bootstrap jar in web-inf/lib, tuscany and
its dependencies in web-inf/tuscany (to isolate tuscany from app CL)
D) - app dependencies and tuscany bootstrap jar in web-inf/lib, tuscany and
its dependencies in folder outside of webapp ie c:/Tuscany/lib
E) - app dependencies in web-inf/lib, tuscany using deep integration in
container (tomcat/geronimo/...)
F) - all tuscany and its dependencies in web-inf/lib, app (sca
contributions) in web-inf/sca-contributions
G) - all tuscany and its dependencies in web-inf/lib, app (sca
contributions) outside of webapp ie c:/MySCAContributions
H) - tuscany using deep integration in container (tomcat/geronimo/...),
app's (sca contributions) in folder in container, ie
c:/apache-tomcat-6.0.10/SCAContributions

Are there any other configurations anyone can think of?

Most of our webapp samples today use (A) but we've got code scattered about SVN
and SVN history that does most of the others.
(C) and (D) is what I think was being suggested by Simon Nash in [1].
The app can see the Tuscany classes and dependencies with (A) and (B) which
we were trying to avoid at one point.
(B) (D) (E) and (H) reduce the size of the application as Tuscany is outside
of the webapp but that requires an extra install step
(G) (and F) is what I think users were interested in doing in TUSCANY-1884
and [2]

So it's just a matter of deciding which we want to support and distribute :)
As everyone seems to have different ideas about what's important I'm tempted
to say let's try to support all of these for now so we can play around and see
which we think are really useful. How to distribute each option could be
left to another thread.

   ...ant

[1]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200801.mbox/[EMAIL 
PROTECTED]
[2]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200710.mbox/[EMAIL 
PROTECTED]



I don't think that support all of these is such a good idea as it will 
create complexity, but people are welcome to work on them if they want 
to spend the time.


I'm interested in working on providing simple and usable support for:

- (A) as it's a simple scheme that'll work with all Web containers

- (B from your list) as it's a lighter variation of (A) that'll work 
with Web containers that support shared libraries.


- (B from my list) as it's in line with the SCA spec and keeps runtime 
specifics out of the application package.


I'm not quite sure how to map my option (B) to your options (F), (G), 
(H). What will the packaging of an SCA contribution look like in your 
options (F), (G), (H)?


--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-23 Thread Raymond Feng

Hi,

I view the WAR (a single web module) as a simplified deployment of an EAR 
(which might contain multiple modules). I meant to say that we can embed SCA 
assembly in a JEE application, for example, a bunch of java components wired 
together to provide some services to the EJB, JSP or Servlet. It's probably 
related to the Use Recursive SCA Assembly in Enterprise Applications 
scenario in [1]. I guess it's the degenerate case of the 
META-INF/application.composite without the capability of 
implementation.ejb or implementation.web.


[1] http://www.osoa.org:80/pages/viewpage.action?pageId=3980

Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Wednesday, January 23, 2008 9:57 AM
Subject: Re: SCA contribution packaging schemes: was: SCA runtimes



Raymond Feng wrote:
A & B seem to be the two primary schemes. A variation of option A is that 
we package all the jars (as utility jars) into an EAR so that JEE 
applications can use Tuscany/SCA.




Can you help me understand what you meant by: so that JEE applications 
can use Tuscany/SCA?


Are you talking about the SCA assembly of JEE applications described in 
http://www.osoa.org/pages/viewpage.action?pageId=3980 using something like 
META-INF/application.composite?

--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-23 Thread ant elder
On Jan 23, 2008 6:24 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 ant elder wrote:
  On Jan 21, 2008 9:31 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
  wrote:
 
  Simon Nash wrote:
Jean-Sebastien Delfino wrote:
  - Under which circumstances does the app packager want to package the
  Tuscany and dependency JARs with the application artifacts.
  [snip]
  With a big topic like this, dividing it into separate threads makes it
  easier for people to follow and participate in the discussions.  The
  split you are suggesting looks good to me.
  [snip]
 
  Trying to address Under which circumstances does the app packager want
  to package the Tuscany and dependency JARs with the application
  artifacts?
 
  My (maybe simplistic) view is:
 
  A) We can package in a WAR:
  - several SCA contributions JARs
  - any SCA deployment composites
  - the required API JARs
  - the required Tuscany JARs and runtime dependency JARs
 
  This allows deployment of an SCA/Tuscany based solution to JEE Web
  containers without requiring any system configuration or software
  installation besides the Webapp.
 
  There are some basic architectural limitations to that scheme:
  - no good support for other bindings than HTTP based bindings
  - footprint issue with every Webapp packaging the whole runtime
 
  Also we're not quite there yet as I don't think we support:
  - several SCA contributions in the packaged solution
  - SCA deployment composites
 
  B) Package SCA contributions as simple JARs, containing only the
  application artifacts (no API JARs, no runtime dependency JARs).
 
  Packaging SCA contributions as OSGi bundles is a variation of the same
  scheme.
 
  Any thoughts?
  What other packaging schemes do people want to support and when?
  --
  Jean-Sebastien
 
 
  Here's all the  options I can think of:
 
  A) - app dependencies and tuscany and its dependencies in web-inf/lib
  B) - app dependencies in web-inf/lib, tuscany  and its dependencies in
  container shared library (geronimo/websphere/..)
  C) - app dependencies and tuscany bootstrap jar in web-inf/lib, tuscany and
  its dependencies in web-inf/tuscany (to isolate tuscany from app CL)
  D) - app dependencies and tuscany bootstrap jar in web-inf/lib, tuscany and
  its dependencies in folder outside of webapp ie c:/Tuscany/lib
  E) - app dependencies in web-inf/lib, tuscany using deep integration in
  container (tomcat/geronimo/...)
  F) - all tuscany and its dependencies in web-inf/lib, app (sca
  contributions) in web-inf/sca-contributions
  G) - all tuscany and its dependencies in web-inf/lib, app (sca
  contributions) outside of webapp ie c:/MySCAContributions
  H) - tuscany using deep integration in container (tomcat/geronimo/...),
  app's (sca contributions) in folder in container, ie
  c:/apache-tomcat-6.0.10/SCAContributions
 
  Are there any other configurations anyone can think of?
 
  Most of our webapp samples today use (A) but we've got code scattered about
  SVN and SVN history that does most of the others.
  (C) and (D) is what I think was being suggested by Simon Nash in [1].
  The app can see the Tuscany classes and dependencies with (A) and (B) which
  we were trying to avoid at one point.
  (B) (D) (E) and (H) reduce the size of the application as Tuscany is outside
  of the webapp but that requires an extra install step
  (G) (and F) is what I think users were interested in doing in TUSCANY-1884
  and [2]
 
  So it's just a matter of deciding which we want to support and distribute :)
  As everyone seems to have different ideas about what's important I'm tempted
  to say let's try to support all of these for now so we can play around and
  see which we think are really useful. How to distribute each option could be
  left to another thread.
 
 ...ant
 
  [1]
 
 http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200801.mbox/[EMAIL 
 PROTECTED]
  [2]
 
 http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200710.mbox/[EMAIL 
 PROTECTED]
 

 I don't think that support all of these is such a good idea as it will
 create complexity, but people are welcome to work on them if they want
 to spend the time.

 I'm interested in working on providing simple and usable support for:

 - (A) as it's a simple scheme that'll work with all Web containers

 - (B from your list) as it's a lighter variation of (A) that'll work
 with Web containers that support shared libraries.

 - (B from my list) as it's in line with the SCA spec and keeps runtime
 specifics out of the application package.

 I'm not quite sure how to map my option (B) to your options (F), (G),
 (H). What will the packaging of an SCA contribution look like in your
 options (F), (G), (H)?

 --
 Jean-Sebastien


The (F), (G) and (H) would use the packaging in your (B). For your (B)
how/where were you expecting those sca contribution jars to get used?

   ...ant


Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-23 Thread Raymond Feng


- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Wednesday, January 23, 2008 10:24 AM
Subject: Re: SCA contribution packaging schemes: was: SCA runtimes



ant elder wrote:

On Jan 21, 2008 9:31 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:


Simon Nash wrote:
  Jean-Sebastien Delfino wrote:

- Under which circumstances does the app packager want to package the
Tuscany and dependency JARs with the application artifacts.

[snip]

With a big topic like this, dividing it into separate threads makes it
easier for people to follow and participate in the discussions.  The
split you are suggesting looks good to me.

[snip]

Trying to address Under which circumstances does the app packager want
to package the Tuscany and dependency JARs with the application
artifacts?

My (maybe simplistic) view is:

A) We can package in a WAR:
- several SCA contributions JARs
- any SCA deployment composites
- the required API JARs
- the required Tuscany JARs and runtime dependency JARs

This allows deployment of an SCA/Tuscany based solution to JEE Web
containers without requiring any system configuration or software
installation besides the Webapp.

There are some basic architectural limitations to that scheme:
- no good support for other bindings than HTTP based bindings
- footprint issue with every Webapp packaging the whole runtime

Also we're not quite there yet as I don't think we support:
- several SCA contributions in the packaged solution
- SCA deployment composites

B) Package SCA contributions as simple JARs, containing only the
application artifacts (no API JARs, no runtime dependency JARs).

Packaging SCA contributions as OSGi bundles is a variation of the same
scheme.

Any thoughts?
What other packaging schemes do people want to support and when?
--
Jean-Sebastien



Here's all the  options I can think of:

A) - app dependencies and tuscany and its dependencies in web-inf/lib
B) - app dependencies in web-inf/lib, tuscany  and its dependencies in
container shared library (geronimo/websphere/..)
C) - app dependencies and tuscany bootstrap jar in web-inf/lib, tuscany and
its dependencies in web-inf/tuscany (to isolate tuscany from app CL)
D) - app dependencies and tuscany bootstrap jar in web-inf/lib, tuscany and
its dependencies in folder outside of webapp ie c:/Tuscany/lib
E) - app dependencies in web-inf/lib, tuscany using deep integration in
container (tomcat/geronimo/...)
F) - all tuscany and its dependencies in web-inf/lib, app (sca
contributions) in web-inf/sca-contributions
G) - all tuscany and its dependencies in web-inf/lib, app (sca
contributions) outside of webapp ie c:/MySCAContributions
H) - tuscany using deep integration in container (tomcat/geronimo/...),
app's (sca contributions) in folder in container, ie
c:/apache-tomcat-6.0.10/SCAContributions

Are there any other configurations anyone can think of?

Most of our webapp samples today use (A) but we've got code scattered about
SVN and SVN history that does most of the others.
(C) and (D) is what I think was being suggested by Simon Nash in [1].
The app can see the Tuscany classes and dependencies with (A) and (B) which
we were trying to avoid at one point.
(B) (D) (E) and (H) reduce the size of the application as Tuscany is outside
of the webapp but that requires an extra install step
(G) (and F) is what I think users were interested in doing in TUSCANY-1884
and [2]

So it's just a matter of deciding which we want to support and distribute :)
As everyone seems to have different ideas about what's important I'm tempted
to say let's try to support all of these for now so we can play around and see
which we think are really useful. How to distribute each option could be
left to another thread.

   ...ant

[1]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200801.mbox/[EMAIL 
PROTECTED]
[2]
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200710.mbox/[EMAIL 
PROTECTED]



I don't think that support all of these is such a good idea as it will 
create complexity, but people are welcome to work on them if they want to 
spend the time.


+1 to start with only a few options. I have to admit that I would be 
overwhelmed by so many choices.




I'm interested in working on providing simple and usable support for:

- (A) as it's a simple scheme that'll work with all Web containers


+1.



- (B from your list) as it's a lighter variation of (A) that'll work with 
Web containers that support shared libraries.


+1. I already experimented with it on Geronimo and it works fine.



- (B from my list) as it's in line with the SCA spec and keeps runtime 
specifics out of the application package.


I'm not quite sure how to map my option (B) to your options (F), (G), (H). 
What will the packaging of an SCA contribution look like in your options 
(F), (G), (H)?


My understanding is that F/G/H have the same packaging scheme as Sebastien's 
B, i.e., a place to get a list of installed

Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-22 Thread ant elder
On Jan 21, 2008 9:31 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Simon Nash wrote:
   Jean-Sebastien Delfino wrote:
  - Under which circumstances does the app packager want to package the
  Tuscany and dependency JARs with the application artifacts.
 [snip]
  With a big topic like this, dividing it into separate threads makes it
  easier for people to follow and participate in the discussions.  The
  split you are suggesting looks good to me.
 [snip]

 Trying to address Under which circumstances does the app packager want
 to package the Tuscany and dependency JARs with the application
 artifacts?

 My (maybe simplistic) view is:

 A) We can package in a WAR:
 - several SCA contributions JARs
 - any SCA deployment composites
 - the required API JARs
 - the required Tuscany JARs and runtime dependency JARs

 This allows deployment of an SCA/Tuscany based solution to JEE Web
 containers without requiring any system configuration or software
 installation besides the Webapp.

 There are some basic architectural limitations to that scheme:
 - no good support for other bindings than HTTP based bindings
 - footprint issue with every Webapp packaging the whole runtime

 Also we're not quite there yet as I don't think we support:
 - several SCA contributions in the packaged solution
 - SCA deployment composites

 B) Package SCA contributions as simple JARs, containing only the
 application artifacts (no API JARs, no runtime dependency JARs).

 Packaging SCA contributions as OSGi bundles is a variation of the same
 scheme.

 Any thoughts?
 What other packaging schemes do people want to support and when?
 --
 Jean-Sebastien


Here are all the options I can think of:

A) - app dependencies and tuscany and its dependencies in web-inf/lib
B) - app dependencies in web-inf/lib, tuscany and its dependencies in a
container shared library (geronimo/websphere/..)
C) - app dependencies and a tuscany bootstrap jar in web-inf/lib, tuscany and
its dependencies in web-inf/tuscany (to isolate tuscany from the app
classloader; see the sketch after this list)
D) - app dependencies and a tuscany bootstrap jar in web-inf/lib, tuscany and
its dependencies in a folder outside of the webapp, i.e. c:/Tuscany/lib
E) - app dependencies in web-inf/lib, tuscany using deep integration in the
container (tomcat/geronimo/...)
F) - all tuscany and its dependencies in web-inf/lib, app (sca
contributions) in web-inf/sca-contributions
G) - all tuscany and its dependencies in web-inf/lib, app (sca
contributions) outside of the webapp, i.e. c:/MySCAContributions
H) - tuscany using deep integration in the container (tomcat/geronimo/...),
apps (sca contributions) in a folder in the container, i.e.
c:/apache-tomcat-6.0.10/SCAContributions
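To make (C) and (D) concrete, a minimal sketch of the bootstrap idea, in 
plain JDK code: the bootstrap jar in web-inf/lib builds a URLClassLoader over 
the Tuscany folder and starts the runtime reflectively, so the webapp's own 
classloader never sees the Tuscany classes. The TuscanyBootstrap and 
RuntimeLauncher names are illustrative assumptions, not actual Tuscany classes.

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Sketch of the (C)/(D) bootstrap: load Tuscany and its dependencies from a
// private folder through a separate classloader. All names are illustrative;
// the real bootstrap jar would live in web-inf/lib.
public class TuscanyBootstrap {

    public static ClassLoader createRuntimeClassLoader(File tuscanyLib) throws Exception {
        List<URL> urls = new ArrayList<URL>();
        File[] jars = tuscanyLib.listFiles();
        if (jars != null) {
            for (File jar : jars) {
                if (jar.getName().endsWith(".jar")) {
                    urls.add(jar.toURI().toURL());
                }
            }
        }
        // Parent is the webapp classloader, so the app's classes stay visible
        // to the runtime, but the webapp never sees the Tuscany classes.
        return new URLClassLoader(urls.toArray(new URL[urls.size()]),
                                  Thread.currentThread().getContextClassLoader());
    }

    public static void main(String[] args) throws Exception {
        // (C) would point at web-inf/tuscany, (D) at e.g. c:/Tuscany/lib.
        ClassLoader runtimeCL = createRuntimeClassLoader(new File("c:/Tuscany/lib"));
        // Common pattern: make the isolated loader the context classloader
        // before starting the runtime reflectively. RuntimeLauncher is
        // a hypothetical entry point.
        Thread.currentThread().setContextClassLoader(runtimeCL);
        Class<?> launcher = runtimeCL.loadClass("org.example.RuntimeLauncher");
        launcher.getMethod("start").invoke(launcher.newInstance());
    }
}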

Are there any other configurations anyone can think of?

Most of our webapp samples today use (A) but we've got code scattered about SVN
and SVN history that does most of the others.
(C) and (D) are what I think was being suggested by Simon Nash in [1].
The app can see the Tuscany classes and dependencies with (A) and (B), which
we were trying to avoid at one point.
(B), (D), (E) and (H) reduce the size of the application as Tuscany is outside
of the webapp, but that requires an extra install step.
(G) (and F) is what I think users were interested in doing in TUSCANY-1884
and [2].
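As a quick way to see that visibility difference in practice, a webapp can 
probe its own classloader reflectively. The probe below is plain JDK code; 
the SCADomain class name is from the Tuscany 1.x embedded API, and the 
VisibilityProbe class is illustrative:

// Quick probe: can this webapp's classloader see a Tuscany class?
// True under (A)/(B), ideally false under (C)/(D).
public class VisibilityProbe {

    public static boolean tuscanyVisible() {
        try {
            Class.forName("org.apache.tuscany.sca.host.embedded.SCADomain",
                          false, Thread.currentThread().getContextClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Tuscany visible to this classloader: " + tuscanyVisible());
    }
}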

So it's just a matter of deciding which we want to support and distribute :)
As everyone seems to have different ideas about what's important I'm tempted
to say let's try to support all of these for now so we can play around and see
which we think are really useful. How to distribute each option could be
left to another thread.

   ...ant

[1] http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200801.mbox/[EMAIL PROTECTED]
[2] http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200710.mbox/[EMAIL PROTECTED]


SCA contribution packaging schemes: was: SCA runtimes

2008-01-21 Thread Jean-Sebastien Delfino

Simon Nash wrote:
 Jean-Sebastien Delfino wrote:
- Under which circumstances does the app packager want to package the 
Tuscany and dependency JARs with the application artifacts.

[snip]

With a big topic like this, dividing it into separate threads makes it
easier for people to follow and participate in the discussions.  The
split you are suggesting looks good to me.

[snip]

Trying to address "Under which circumstances does the app packager want 
to package the Tuscany and dependency JARs with the application artifacts?"


My (maybe simplistic) view is:

A) We can package in a WAR:
- several SCA contributions JARs
- any SCA deployment composites
- the required API JARs
- the required Tuscany JARs and runtime dependency JARs

This allows deployment of an SCA/Tuscany based solution to JEE Web 
containers without requiring any system configuration or software 
installation besides the Webapp.


There are some basic architectural limitations to that scheme:
- no good support for other bindings than HTTP based bindings
- footprint issue with every Webapp packaging the whole runtime

Also we're not quite there yet as I don't think we support:
- several SCA contributions in the packaged solution
- SCA deployment composites

B) Package SCA contributions as simple JARs, containing only the 
application artifacts (no API JARs, no runtime dependency JARs).


Packaging SCA contributions as OSGi bundles is a variation of the same 
scheme.
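
As an illustration of (B), such a contribution JAR can be run by any host 
that supplies the runtime, for example with Tuscany 1.x's embedded API along 
the lines of the calculator sample. This is a sketch, not the only hosting 
option; Calculator.composite, CalculatorService and the component name are 
assumed to come from that sample's contribution:

import org.apache.tuscany.sca.host.embedded.SCADomain;

// Minimal host for a plain contribution, using the Tuscany 1.x embedded API.
// The contribution holds only Calculator.composite and the application
// classes; all Tuscany and dependency JARs come from the host's classpath.
public class CalculatorHost {

    public static void main(String[] args) {
        SCADomain domain = SCADomain.newInstance("Calculator.composite");
        // CalculatorService is the sample's remotable interface (add/subtract/...)
        CalculatorService calculator =
            domain.getService(CalculatorService.class, "CalculatorServiceComponent");
        System.out.println("3 + 2 = " + calculator.add(3, 2));
        domain.close();
    }
}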


Any thoughts?
What other packaging schemes do people want to support and when?
--
Jean-Sebastien




Re: SCA contribution packaging schemes: was: SCA runtimes

2008-01-21 Thread Raymond Feng
A and B seem to be the two primary schemes. A variation of option A is that we 
package all the jars (as utility jars) into an EAR so that JEE applications 
can use Tuscany/SCA.


Thanks,
Raymond

- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, January 21, 2008 1:31 PM
Subject: SCA contribution packaging schemes: was: SCA runtimes



Simon Nash wrote:
 Jean-Sebastien Delfino wrote:
- Under which circumstances does the app packager want to package the 
Tuscany and dependency JARs with the application artifacts.

[snip]

With a big topic like this, dividing it into separate threads makes it
easier for people to follow and participate in the discussions.  The
split you are suggesting looks good to me.

[snip]

Trying to address "Under which circumstances does the app packager want to 
package the Tuscany and dependency JARs with the application artifacts?"


My (maybe simplistic) view is:

A) We can package in a WAR:
- several SCA contributions JARs
- any SCA deployment composites
- the required API JARs
- the required Tuscany JARs and runtime dependency JARs

This allows deployment of an SCA/Tuscany based solution to JEE Web 
containers without requiring any system configuration or software 
installation besides the Webapp.


There are some basic architectural limitations to that scheme:
- no good support for other bindings than HTTP based bindings
- footprint issue with every Webapp packaging the whole runtime

Also we're not quite there yet as I don't think we support:
- several SCA contributions in the packaged solution
- SCA deployment composites

B) Package SCA contributions as simple JARs, containing only the 
application artifacts (no API JARs, no runtime dependency JARs).


Packaging SCA contributions as OSGi bundles is a variation of the same 
scheme.


Any thoughts?
What other packaging schemes do people want to support and when?
--
Jean-Sebastien




