Re: Request to propagate the value of a reference's target= attribute on its associated binding's model object

2008-02-01 Thread Simon Laws
Some more comments inline 



 Let's see if I can articulate this a little better. My thinking is that
 target= represents a binding-independent way to resolve an endpoint. It
 doesn't necessarily specify the contents of the effective URI that is
 used to address an endpoint.


+1


 In the context of a WS binding, for instance, it
 doesn't specify the value, or any part of the value, of the actual URL used
 to invoke the service. The binding URI attribute does represent all of, or a
 part of, the URI used to invoke a service.


The binding URI should (according to lines 2305-2307 of the assembly spec)
define the target URI of the reference (either the component/service for a
wire to an endpoint within the SCA domain or the accessible address of some
endpoint outside the SCA domain)

I think we are bending this a bit when we try to wire a remote binding,
e.g. binding.ws, inside the domain and add the full URL information to the
reference binding URI. Maybe we need another look.


 The use case I have in mind is the ability to use target= to specify a
 logical representation of a URI that can be used by all binding types as a
 key to look up / resolve the binding-specific physical endpoint to be used
 to invoke the service. In the case of binding.ws, for example, I envision a
 mapping as follows:

 target = C1/S1 binding.ws URI = http://someServer/someService


We have a mapping very similar to this in the domain code currently
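To make the shape of that mapping concrete, here is a rough sketch (the
registry and its method names are made up for illustration, not our actual
domain code). The outer key is the binding-independent target, e.g. C1/S1,
and the inner key is the binding type:

    // hypothetical registry: logical target -> (binding type -> physical URI)
    // (uses java.util.Map / java.util.HashMap)
    Map<String, Map<String, String>> registry = new HashMap<String, Map<String, String>>();

    // service side: register one physical endpoint per binding type
    Map<String, String> s1Endpoints = new HashMap<String, String>();
    s1Endpoints.put("binding.ws", "http://someServer/someService");
    s1Endpoints.put("binding.jms", "jms:someDestination");   // illustrative value only
    registry.put("C1/S1", s1Endpoints);

    // reference side: the chosen binding resolves the logical target to its own URI
    String wsEndpoint = registry.get("C1/S1").get("binding.ws");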


 binding.jms URI = some binding appropriate URI

 binding.sca URI = some binding appropriate URI

 In this instance only a logical value of C1/S1 needs to be specified on
 the reference target=. Each service can then register itself and all
 appropriate binding-specific URIs.

 The reference can then simply specify which binding type to use and the
 logical target name of C1/S1 and the binding can then resolve the target
 to the binding specific URI.

 The binding however needs to know the value of target= to know 1) when this
 logical to physical name resolution needs to occur and 2) what key to use to
 perform the lookup.


There is no doubt that this mapping should take place. So it comes down to
whether the binding implementation should be responsible for initiating this
mapping or whether the mapping should take place earlier so that the bindings
are fully configured when they are created. There are two reasons I might use
to justify initiating this mapping in the binding.

1. To handle changes in the SCA model during runtime, e.g. someone adds or
modifies a wire and you need a component reference to take account of this.

I think there is debate in OASIS at the moment about what the shape of this
capability should be. While I can't predict the result of the OASIS
discussion, I can say that Tuscany set off looking at this and there are many
complications; not least, references need to be matched with services taking
into account available bindings, interface descriptions and policy intents,
so a much more capable domain-level query is required to effect this mapping
properly.

2. To handle service resilience issues, e.g. you know which service you want
to talk to and have chosen a binding at the domain level but you want to
take account of a service being moved deliberately or failing and being
restarted.

The question here is whether it is the responsibility of Tuscany to manage
this, e.g. the responsibility could be devolved to the system administrator,
who could be expected to deploy services into some kind of cluster if there
is a requirement to provide a level of abstraction between the URL that a
reference targets and the location of a given service.

Are there scenarios that I'm missing here?

I'm really trying to understand what, on the face of it, is a very simple
request (to provide more information in the binding model) as it has
implications for where Tuscany users see the boundary of responsibility for
an SCA application running in Tuscany.

Regards

Simon


Re: WSDLLess Deployment Implementation Question

2008-02-01 Thread Simon Laws
On Jan 31, 2008 5:04 PM, Lou Amodeo [EMAIL PROTECTED] wrote:

 Hi, I have a question about the implementation of the wsdlless
 deployment
 function.
 The issues I see are occurring in a couple of places within the
 life-cycle.
 Namely, deployment, binding start, and service definition. It occurs to me
 that if the generation of the wsdl occurred early in the process, rather
 than during the startup of the component, Tuscany could maintain a common
 code path for contributions with WSDL and contributions without WSDL. Today,
 for instance, WSDL is processed in the WSBindingProcessor when it's present
 and in the Axis2ServiceBinding when it's not. I am curious why this is so.
 Why wouldn't the WSBindingProcessor detect the WSDL-less case, generate a
 WSDL definition at that point, and have the subsequent flow from there on be
 the same?
 Alternatively, when a contribution is deployed, why not detect that a WSDL
 is missing, generate one, and place it in the contribution? At this point a
 wsdlElement could also be added and the subsequent startup process would not
 know the difference. I am not sure if other bindings will adopt a WSDL-less
 approach but I would think a common, non-binding-specific process would be
 beneficial.


 Thanks for your help.

Hi Lou,

Anything we can do to simplify our code path is a good thing IMO.

More specifically I'm attracted by your idea of having the WSDL processing
happen in one place and I think this could work well for the model generated
in memory when a composite is processed. The WSDL interface contract could
be built in the earlier phases of composite processing, so that the code at
the top of Axis2ServiceBindingProvider:

// if no WSDL was supplied, derive the binding contract from the service's Java interface
InterfaceContract contract = wsBinding.getBindingInterfaceContract();
if (contract == null) {
    contract = service.getInterfaceContract().makeUnidirectional(false);
    if (contract instanceof JavaInterfaceContract) {
        contract = Java2WSDLHelper.createWSDLInterfaceContract(
                       (JavaInterfaceContract) contract, requiresSOAP12(wsBinding));
    }
    wsBinding.setBindingInterfaceContract(contract);
}

could be reworked or removed.

Could you say a little more about the alternative you propose: "Alternatively,
when a contribution is deployed, why not detect a wsdl is missing, generate
one, and place it in the contribution."? What do you mean by "deployed" here?
I'm asking in relation to two other threads that are ongoing.

In the discussion [1] about reference targets I've been floating the idea of
having a contribution processing phase in the domain that fixes up
contributions before they are deployed to nodes. I had been thinking about
URIs but this WSDL point could be another one of the things that could be
done ahead of time. In that way the Node would always be presented with a
WSDL file in the deployed contribution.
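Roughly, the fix-up step I have in mind would look something like this. All
of the helper names are hypothetical; Contribution, Composite, Component and
ComponentService are the contribution/assembly model interfaces and
Java2WSDLHelper is the existing conversion utility, but none of this is the
real processing code:

    for (Composite composite : contribution.getDeployables()) {
        for (Component component : composite.getComponents()) {
            for (ComponentService service : component.getServices()) {
                // hypothetical lookup: does the contribution already carry WSDL for this service?
                if (findWsdlDocument(contribution, service) == null) {
                    // generate it once, up front (e.g. via Java2WSDLHelper) and
                    // store it back into the contribution (hypothetical helpers)
                    Definition generated = generateWsdl(service);
                    addWsdlToContribution(contribution, service, generated);
                }
            }
        }
    }

That way the WSDL exists before the Node ever sees the contribution.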

In the discussion [2] about the runtime Java2WSDL processing you suggest
that you want to write the generated WSDL out as a file as an alternative to
providing access to it via ?wsdl. Moving this processing forward would make
this related fix easier. My question here is where should the WSDL file be
written to? I'll ask that back over on the other thread.

Regards

Simon


[1] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg27294.html
[2] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg27532.html


Distribution structure

2008-02-01 Thread Simon Laws
I'm looking at copying the 1.1 release artifacts up onto the new
distribution infrastructure at

www.apache.org/dist/incubator/

We need to make a decision about how this will be structured. As a default I
assume we stick pretty much with what we have already (see
http://archive.apache.org/dist/incubator/tuscany/)

tuscany
  native (was cpp)
  java/
 das/
 sca/
 sdo/

If people are happy I'll go in and create the bits necessary for the new
java release, i.e.

tuscany/java/sca/1.1-incubating

Let me know.

Regards

Simon


Tuscany Java SCA Release 1.1-incubating and new Incubator distribution policy

2008-02-01 Thread Simon Laws
Hi

The Tuscany project is now ready to distribute its Java SCA Release
1.1-incubating. Following up from Robert's recent post to the Tuscany dev
list [1] I am posting here before proceeding to copy the artifacts up to
www.apache.org/dist/incubator.

The proposal is to use a similar release structure to the one we already use
[2]:

tuscany/
  native/
  java/
 das/
 sca/
1.1-incubating/
   release artifacts here
 sdo/

Please advise

Thanks

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg26970.html
[2] http://archive.apache.org/dist/incubator/tuscany/


Re: Tuscany Java SCA Release 1.1-incubating and new Incubator distribution policy

2008-02-03 Thread Simon Laws
On Feb 3, 2008 12:31 PM, Robert Burrell Donkin 
[EMAIL PROTECTED] wrote:

 On Feb 1, 2008 4:00 PM, Simon Laws [EMAIL PROTECTED] wrote:
  Hi
 
  The Tuscany project is now ready to distribute its Java SCA Release
  1.1-incubating. Following up from Robert's recent post to the Tuscany
 dev
  list [1] I am posting here before proceeding to copy the artifacts up to
  www.apache.org/dist/incubator.

 i volunteered to walk podlings through their first release under the
 new policy (at least until the documentation is completed)

 please take a look at

 http://incubator.apache.org/guides/releasemanagement.html#release-distribution

 are there any questions you have that it doesn't answer...?

 - robert


 Hi Robert

The document looks good to me. I don't have specific questions outstanding.

We know how we want our distribution directories structured (see start of
thread).

There is a draft release page here (
http://cwiki.apache.org/confluence/display/TUSCANY/SCA+Java+1.1-incubating).
I'd appreciate it if you could give the URLs a quick once over to see if
they are as you would expect re. mirroring.

When we get the go-ahead it's left to me to upload the artifacts to
www.apache.org/dist/incubator/tuscany/, set the correct group and
permissions as documented and then integrate the new release page into the
Tuscany web site.

Thanks for helping us through this.

Simon


Re: Tuscany Java SCA Release 1.1-incubating and new Incubator distribution policy

2008-02-03 Thread Simon Laws
Thanks for the speedy feedback.

On Feb 3, 2008 4:55 PM, Robert Burrell Donkin [EMAIL PROTECTED]
wrote:

 On Feb 3, 2008 3:53 PM, Simon Laws [EMAIL PROTECTED] wrote:

  We know how we want our distribution directories structured (see start
 of
  thread).

 great

  There is a draft release page here (
 
 http://cwiki.apache.org/confluence/display/TUSCANY/SCA+Java+1.1-incubating
 ).
  I'd appreciate it if you could give the URLs a quick once over to see if
  they are as you would expect re. mirroring.

 a few minor, non-normative recommendations

 It can take a day or so for releases to propagate to all mirrors, so
 if you are downloading something near the release date, please be
 patient and retry the links from this page in the event that a
 selected mirror does not yet have the download.

 i've found that it works better to hold off the announcement until the
 mirrors have sync'd and i can test for myself that everything has
 worked. (saves embarrassments.) so, i tend to initially put a notice
 saying that the release is not yet available and then remove it once
 i've tested the downloads.


Ok, good idea.



 The PGP signatures can be verified using PGP or GPG. First download
 the KEYS as well as the asc signature file for the relevant
 distribution. Make sure you get these files from our main distribution
 directory, rather than from a mirror (the asc and md5 links below take
 you to the main distribution directory). Then verify the signatures
 using, for example

 i've found that users find sums easier than signatures. most users
 will not be strongly connected to the apache web of trust. so they are
 going to need to understand how to interpret the results. if you
 recommend checking signatures, consider copying or linking to some
 of the content on http://www.apache.org/dev/release-signing.html#faq.


Ok, I can do that.


  When we get the go ahead  it's left from me to upload the artifacts to
  www.apache.org/dist/incubator/tuscany/, set the correct group and
  permissions as documented and then integrate the new release page into
 the
  Tuscany web site.

 yep (it's not really a go-ahead, more a
 make-sure-you-know-what-you're-doing)

 you're planning to use the dynamic script? (rather than a custom
 download script)


Yes. The release artifacts themselves are linked via

http://www.apache.org/dyn/closer.cgi/incubator/tuscany/java/sca/1.1-incubating/apache-tuscany-sca-1.1-incubating.zip

Etc.

Maybe a custom script when we've got used to the new approach. But not
yet :-)



 - robert





Re: [jira] Commented: (TUSCANY-1999) ConversationAttributes and expiry doesn't work with Stateless Conversational components

2008-02-04 Thread Simon Laws
On Feb 4, 2008 12:25 PM, Thomas Greenwood (JIRA) tuscany-dev@ws.apache.org
wrote:


[
 https://issues.apache.org/jira/browse/TUSCANY-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12565331#action_12565331]

 Thomas Greenwood commented on TUSCANY-1999:
 ---

 BTW I'd like the patch committed if possible since that's what we're
 currently working with.

  ConversationAttributes and expiry doesn't work with Stateless
 Conversational components
 
 ---
 
  Key: TUSCANY-1999
  URL: https://issues.apache.org/jira/browse/TUSCANY-1999
  Project: Tuscany
   Issue Type: Bug
   Components: Java SCA Core Runtime
 Affects Versions: Java-SCA-1.1
 Reporter: Ben Smith
 Assignee: Simon Laws
  Fix For: Java-SCA-Next
 
  Attachments: ConversationExpiry.patch
 
 
  In services that are marked as @Conversational yet have scope of
 STATELESS the following problems occur
  Caused by:
 org.apache.tuscany.sca.implementation.java.introspect.impl.InvalidConversationalImplementation:
 Service is marked with @ConversationAttributes but the scope is not
 @Scope(CONVERSATION)
at
 org.apache.tuscany.sca.implementation.java.introspect.impl.ConversationProcessor.visitClass
 (ConversationProcessor.java:57)
  Also looking at the code it looks as if that expiring of conversations
 only occurs with services that are of scope CONVERSATION. I believe that the
 above should work with all services marked as @Conversational.
  To fix this I'm thinking that the job of expiring conversations should
 be moved from the ConversationalScopeContainer into the ConversationManager
 and the check in the ConversationProcessor changed to check for the
 @Conversational tag not @Scope(CONVERSATION)
  Ben

 --
 This message is automatically generated by JIRA.
 -
 You can reply to this email to add a comment to the issue online.



 Hi Thomas

I took a preliminary look at the patch and from what I've seen it looks good
so far. I had hoped to commit this straight away but I've been unwell for the
last couple of days so it didn't get done. I'm back on the case now so I'll go
through it again and ask questions as required.
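For anyone following along, the combination the JIRA describes looks roughly
like this (the names are made up; the annotations are the standard
org.osoa.sca.annotations ones):

    import org.osoa.sca.annotations.ConversationAttributes;
    import org.osoa.sca.annotations.Conversational;
    import org.osoa.sca.annotations.Scope;
    import org.osoa.sca.annotations.Service;

    @Conversational
    interface Counter {
        void increment();
        int getCount();
    }

    @Service(Counter.class)
    @Scope("STATELESS")                                  // not @Scope("CONVERSATION")
    @ConversationAttributes(maxIdleTime = "10 minutes")  // currently rejected for this scope
    class CounterImpl implements Counter {
        private int count;
        public void increment() { count++; }
        public int getCount() { return count; }
    }

Today the check in ConversationProcessor rejects this with
InvalidConversationalImplementation; the patch proposes keying the check (and
conversation expiry) off @Conversational rather than @Scope(CONVERSATION).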

Regards

Simon


[ANNOUNCE] Apache Tuscany SCA Java 1.1 released

2008-02-05 Thread Simon Laws
The Apache Tuscany team are delighted to announce the 1.1 release of the
Java SCA project.

Apache Tuscany provides a runtime environment based on the Service Component
Architecture (SCA). SCA is a set of specifications aimed at simplifying SOA
application development. These specifications are being standardized by
OASIS as part of the Open Composite Services Architecture (Open CSA).

The Tuscany SCA Java 1.1 release adds a number of features including a JMS
binding, improved policy support and an implementation extension for
representing client-side JavaScript applications as SCA components.

For full details about the release and to download the distributions please
go to:

http://incubator.apache.org/tuscany/sca-java-releases.html

To find out more about OASIS Open CSA go to:

http://www.oasis-opencsa.org

Apache Tuscany welcomes your help. Any contribution, including code,
testing, contributions to the documentation, or bug reporting is always
appreciated. For more information on how to get involved in Apache Tuscany
visit the website at:

http://incubator.apache.org/tuscany

Thank you for your interest in Apache Tuscany!

The Apache Tuscany Team.

---

Tuscany is an effort undergoing incubation at the Apache Software
Foundation (ASF), sponsored by the Apache Web services PMC. Incubation
is required of all newly accepted projects until a further review
indicates that the infrastructure, communications, and decision making
process have stabilized in a manner consistent with other successful
ASF projects. While incubation status is not necessarily a reflection
of the completeness or stability of the code, it does indicate that
the project has yet to be fully endorsed by the ASF.


Re: Tuscany 1.1 in maven repo

2008-02-08 Thread Simon Laws
On Feb 8, 2008 4:27 PM, Dave Sowerby [EMAIL PROTECTED] wrote:

 Hi All,

 I can't seem to spot the 1.1 artifacts over at
 http://people.apache.org/repo/m2-incubating-repository

 Is this intentional - has the repo location changed?  Or has it just
 slipped through the net?

 Cheers,

 Dave.

 --
 Dave Sowerby MEng MBCS


 Oops, slipped through the net. Let me go sort that.

Thanks

Simon


Re: Tuscany 1.1 in maven repo

2008-02-10 Thread Simon Laws
On Feb 8, 2008 4:31 PM, Simon Laws [EMAIL PROTECTED] wrote:



 On Feb 8, 2008 4:27 PM, Dave Sowerby [EMAIL PROTECTED] wrote:

  Hi All,
 
  I can't seem to spot the 1.1 artifacts over at
  http://people.apache.org/repo/m2-incubating-repository
 
  Is this intentional - has the repo location changed?  Or has it just
  slipped through the net?
 
  Cheers,
 
  Dave.
 
  --
  Dave Sowerby MEng MBCS
 
 
  Oops, slipped through the net. Let me go sort that.

 Thanks

 Simon

OK Dave, can you give it another spin to make sure I have everything in the
right place?

Thanks

Simon


Re: JIRA backlog

2008-02-11 Thread Simon Laws
On Feb 6, 2008 1:36 PM, ant elder [EMAIL PROTECTED] wrote:

 We've about 170 open JIRAs for SCA, (currently split over 3 versions but
 i'll go move all the SCA ones to SCA-next), what shall we do about them?

 There's various suggestions for how to improve JIRA handling listed at:

 http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Graduation+Next+Steps
 ,
 which of those should we do? How about as a start we just all focus on
 JIRAs
 for a bit trying to resolve which ever ones we can to see if that brings
 the
 number down much? Thats what i'll go do for now ...

   ...ant


Buried deep in [1] there was a suggestion that the next release will be
called 1.2. I've created a 1.2 release label so we can use it to associate
JIRAs with the next release, whether they are already completed or ones we
would like to see completed.

Simon

[1] http://apache.markmail.org/message/jwt6vnb3tc4xgfe5


Re: Performance results.

2008-02-11 Thread Simon Laws
On Feb 10, 2008 12:50 PM, Giorgio Zoppi [EMAIL PROTECTED] wrote:

 I tried my app a while ago on a 16-node cluster and I'm publishing
 my results at http://www.cli.di.unipi.it/~zoppi/out/ch06.html
 They are in Italian but you can find many images. I cache the component URI,
 as you can see in the code:
 http://www.cli.di.unipi.it/~zoppi/out/apas06.html#d4e1968
 in order to speed up applications.
 At the end of this work there's all the code that I crafted in order to
 have a behavioural-skeleton task farm: an SCA distributed task executor.
 Next I have to port it to the new Tuscany 1.1.
 Cheers,
 Giorgio




Hi Giorgio, thanks for posting the results. Looking at the graphs you can
see the measured completion time diverging from the ideal completion time as
the task size shrinks and the number of nodes increases. This is to be
expected as the infrastructure becomes more noticeable in these situations.
The question is: is it too intrusive, and do you have a feeling for where we
get the best payback in reducing the overhead? You mention caching the URL.
In other discussions over the last few weeks [1] [2] we are looking at a
slightly simplified approach to deploying the nodes where the topology is
calculated ahead of time and hence endpoint information can be provided with
deployed composites, rather than the nodes having to make calls across the
infrastructure to find this information. This is initially less dynamic than
the situation we have now but you could argue that it is more predictable.
Do you see problems for your application in taking this slightly more static
approach?
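On the caching point, I imagine something as simple as this on the client
side (a sketch only; the remote lookup call is hypothetical):

    // cache the resolved endpoint per component/service so we only go back to
    // the domain when we don't already know the answer
    private final ConcurrentMap<String, String> endpointCache =
            new ConcurrentHashMap<String, String>();

    String resolveEndpoint(String componentServiceName) {
        String uri = endpointCache.get(componentServiceName);
        if (uri == null) {
            uri = lookUpUriInDomain(componentServiceName);   // hypothetical remote call
            endpointCache.putIfAbsent(componentServiceName, uri);
            uri = endpointCache.get(componentServiceName);
        }
        return uri;
    }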

I still have the previous code you attached to the open JIRA [3] sitting on
my machine. My intention was to check it in as a demonstration but I haven't
done so yet. Should I wait until you have ported it to 1.1?

Regards

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27362.html
[2] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27609.html
[3] https://issues.apache.org/jira/browse/TUSCANY-1863


Re: Adding 'applicablePolicySets' to PolicySetAttachPoints

2008-02-11 Thread Simon Laws
Hi Venkat

A question.

snip...


 - In the contribution read phase, we postpone the reading of composite
 files
 so that all definitions.xml file contents can all be aggregated


Do you mean all the definitions.xml files in the contribution or all the
definitions.xml files in the domain?

Simon


Re: Adding 'applicablePolicySets' to PolicySetAttachPoints

2008-02-12 Thread Simon Laws
On Feb 12, 2008 8:18 AM, Venkata Krishnan [EMAIL PROTECTED] wrote:

 On Feb 12, 2008 1:09 PM, Simon Laws [EMAIL PROTECTED] wrote:

  Hi Venkat
 
  A question.
 
  snip...
 
  
   - In the contribution read phase, we postpone the reading of composite
   files
   so that all definitions.xml file contents can all be aggregated
  
 
  Do you mean all the definitions.xml files in the contribution or all the
  definitions.xml files in the domain?
 
  Simon
 

 Hi Simon,

 Ok, I should probably say that all definitions.xml in a runtime and that
 includes the ones coming from our extensions / runtime and the ones coming
 in from the contributions.

 - Venkat


So does that mean that all definitions.xml processing has to complete for
all contributions before any composites are parsed?

Simon


Re: Adding 'applicablePolicySets' to PolicySetAttachPoints

2008-02-12 Thread Simon Laws
snip..

On Feb 12, 2008 8:40 AM, Venkata Krishnan [EMAIL PROTECTED] wrote:

 Yes.   Because we are now computing the 'applicablePolicySets' for various
 SCA artifacts and that needs the list of 'all' PolicySets that might be
 applicable ever.


So, in the code today, how do you know you have reached the point that all
contributions have been added and you can start associating policy sets with
composites? Is the composite processing now in a separate phase, independent
of the contribution processing?

To try and get this clearer in my mind I've written out a part of the
various phases on the wiki [1]. Is there a new phase? Looking at the code I
don't see it.

Simon

[1] http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Runtime+Phases


Re: Adding 'applicablePolicySets' to PolicySetAttachPoints

2008-02-13 Thread Simon Laws
On Feb 12, 2008 5:18 PM, Venkata Krishnan [EMAIL PROTECTED] wrote:

 Hi,

 No there isn't a separate phase.  Just that in the current read phase I
 look
 for *.composite files and set those aside in a list without processing
 them
 further.  After all artifacts in the contribution have been read I then
 read
 the list of composite URIs, read them and modify them with the additional
 attribute 'applicablePolicySets' and then push it further for the usual
 processing.

 I see that this is what you have also summarized on the wiki.  I have
 assumed that in the section titled New Policy Processing Phase should go
 the description of what we do now with the composite reading and
 augmenting.  I have added that information.  Let me know if your thoughts
 for it were otherwise.

 I think I might have to change this a bit in the context of multiple
 contributions.  Isn't it ?

 - Venkat

 On Feb 12, 2008 2:41 PM, Simon Laws [EMAIL PROTECTED] wrote:

  snip..
 
  On Feb 12, 2008 8:40 AM, Venkata Krishnan [EMAIL PROTECTED] wrote:
 
   Yes.   Because we are now computing the 'applicablePolicySets' for
  various
   SCA artifacts and that needs the list of 'all' PolicySets that might
 be
   applicable ever.
  
  
  So, in the code today, how do you know you have reached the point that
 all
  contributions have been added and you can start associating policy sets
  with
  composites? Is the composite processing now in a separate phase
  independent
  of the the contribution processing.
 
  To try and get this clearer in my mind I've written out a part of the
  various phases on the wiki [1]. Is there a new phase? Looking at the
 code
  I
  don't see it.
 
  Simon
 
  [1]
 http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Runtime+Phases
 


Hi Venkat

Thanks for the updates. The multiple contribution case was the case that I
was thinking about :-) So you have these new steps that have to sit between
the point where all the definitions.xml files are read from ALL the
contributions and the point at which a composite is parsed and turned into
an assembly model. So it sounds like the process would be something like:

1. Add contribution
2. Process contribution to extract definitions.xml

repeat 1 & 2 until all contributions are added

3. Find composites in contributions
4. Process appliesTo with reference to each of the composites
5. Process the composites into an assembly model for further domain
processing (the domain composite)

I'm not necessarily advocating enforcing the approach that all contributions
must be added before any further processing commences. You could imagine an
approach where the process is just repeated as new contributions are added
for example. But you get my point.
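In pseudo-code, just to show the ordering (none of these method names are
the real Tuscany APIs):

    // 1 & 2: add each contribution and collect its definitions.xml content
    List<Contribution> contributions = new ArrayList<Contribution>();
    List<SCADefinitions> allDefinitions = new ArrayList<SCADefinitions>();
    for (URL location : contributionLocations) {
        Contribution c = contributionService.contribute(location);   // hypothetical signature
        allDefinitions.addAll(readDefinitions(c));                    // hypothetical helper
        contributions.add(c);
    }

    // 3, 4 & 5: only now process the composites
    for (Contribution c : contributions) {
        for (Composite composite : findComposites(c)) {               // hypothetical helper
            addApplicablePolicySets(composite, allDefinitions);       // the appliesTo step
            buildAssemblyModel(composite);                            // hypothetical builder call
        }
    }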

Simon


Re: Adding 'applicablePolicySets' to PolicySetAttachPoints

2008-02-14 Thread Simon Laws
snip...

against the aggregated union of all definitions.  Do you see something
 missing ?


 The point I'm interested in is what happens to the composites that belong
to contributions that have previously been added when you add a new
contribution, for example,

ContributionA
  definitions.xml(A)
  A.composite
ContributionB
  definitions.xml(B)
  B.composite

When ContributionA is processed, A.composite will be processed in the context
of any appliesTo statements that appear in definitions.xml(A). When
ContributionB is added, should B.composite be processed in the context of
appliesTo statements that appear in both definitions.xml(A) and
definitions.xml(B)? Should A.composite be re-processed in the context of
appliesTo statements that appear in both definitions.xml(A) and
definitions.xml(B)?

Simon


Re: Adding 'applicablePolicySets' to PolicySetAttachPoints

2008-02-14 Thread Simon Laws
snip...


 in definitions.xml(A).  But is this sort of re-processing / rebuild a
 requirement only in this context?  If there are other contexts as well, such
 as re-wiring, and there is going to be a separate phase for this, then I'd
 like to do this as well in that phase.


Absolutely. There are a number of things that need to be calculated across
the domain. We have started discussing [1] some phases of processing at a
higher level than are currently contained in the depths of the
ContributionService, assembly builders and composite activators. I've
started a technical note [2] on this subject to net out the conversation
from that thread (there hasn't been much recently but it is on my mind, and
Sebastien has been doing things on the Contribution Workspace judging by the
SVN log). So from the point of view of this thread we should ensure that the
blocks of processing you have extended the existing processing steps with,
i.e. the bits you added to [3], are suitably accessible so that they can
easily be called from the higher-level steps [2].

I guess we need to get a bit more concrete on the higher level steps.

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27609.html
[2]
http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Contribution+Processing
[3] http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Runtime+Phases


Processing multiple WSDLs in the same namespace

2008-02-14 Thread Simon Laws
re. https://issues.apache.org/jira/browse/TUSCANY-2043

Part of the fix for this involves removing a FIXME in
org.apache.tuscany.sca.interfacedef.wsdl.xml.XSDModelResolver. Specifically,
at the bottom of the aggregate method I need to comment out two lines...


// FIXME: [rfeng] This is hacky
//definitions.clear();
//definitions.add(aggregated);
return aggregated;

Raymond, do you recall why these were put in?

Because the non-aggregated definition is replaced with the aggregated
definition, the effect of including them is to build a hierarchy of XSD that
could end up looking like:

  facade
    facade
      facade
        xsd

depending on how many WSDLs/schemas there are in the same namespace.

The other, possibly more dangerous, effect is that some schemas are omitted,
because the aggregated schema may not include all of the schemas that are
in the original definitions list. For example, if many schemas are in the
same namespace then only the inline schemas for WSDLs that have been resolved
so far will be present, as opposed to all of the XSD that could be available.
So the original list is cleared and replaced with a shortened list, and the
XSD that have now been removed will not be resolved.

I get a clean build with these two lines removed, and with the other
required changes in place, so I will check my fix in unless someone shouts.

Simon


What should be in Tuscany SCA Java release 1.2?

2008-02-14 Thread Simon Laws
Hi

It's probably about time we started talking about what's going to be in
Tuscany SCA Java release 1.2. From the past timeline I would expect us to be
trying for a release mid to late March which is not very far away.

Some of the things I'd like to see are:

- More progress on our domain-level composite and generally the processing
  that has to go there.
- There have been a lot of policy changes going on and it would be good to
  get them in. Also, linked to the item above, we should look at how policy
  affects domain-level processing.
- Don't know if it's achievable, but some elements of the runtime story we
  have been talking about on the mail list for a while now.

Feel free to add topics on this thread. I've also opened up the
Java-SCA-1.2 category in JIRA, so start associating JIRAs with it, for
example, if

1 - you've already marked a JIRA as fixed and it's sitting at Java-SCA-Next
2 - you are working or are going to work on the JIRA for 1.2
3 - you would like to see the JIRA fixed for 1.2

Of course everyone is invited to contribute and submit patches for JIRAs,
whether they be for bugs or new features. Inevitably not all wish-list
features will get done, so you improve your chances of getting your favorite
feature in by submitting a patch for it.

Regards

Simon


Re: Processing multiple WSDLs in the same namespace

2008-02-14 Thread Simon Laws
Hi Raymond

Comments inline...

Simon

On Thu, Feb 14, 2008 at 4:57 PM, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 Let me explain what I thought in this example: a contribution with two
 xsds
 under the same tns (http://ns1): a.xsd and b.xsd.

 When the XSDs are initially processed, we only create a XSDefinition
 object
 to hold the tns and URL of the xsd. The XSD is not fully loaded at this
 point. The XSDefinition object is added to the definitions list. So we end
 up with two XSDefinition objects in the list, one for a.xsd and one for
 b.xsd.



 When the first request comes to resolve the XSDefinition for http//ns1,
 we create an aggregated XSDefinition which use xsd:include to reference
 a.xsd and b.xsd. Both a.xsd and b.xsd are fully loaded. (I'm not sure if
 the
 XSD can include WSDLs for the inline schemas and we might have problems
 here)


I observe that in the following case:

WSDL-A - namespace X
  Inline XSD-A - namespace Y

WSDL-B - namespace X
  Inline XSD-B - namespace Y

XSD-A and XSD-B are both registered with the XSDModelResolver in their
unresolved state. When WSDL-A is resolved it causes a request for XSD-A to
be resolved. This in turn causes the code to try and aggregate XSD-A and
XSD-B together. However, the resolution process for XSD-B has not started yet
and hence it is omitted from the aggregation and, with the code as it
currently is, lost when the definitions list is replaced with the
aggregated XSD.



 The reason I replaced the list with the facade is to avoid reaggregation.
 This way, we'll return the aggregated XSDefinition next time directly from
 the list.


Ah I see. Is there any functional harm in re-doing the aggregation or is
this a performance thing?




 I'm not sure why you see the hierarchy.  Can you tell me what you found
 out?


I'm not sure either, looking back. I don't see it now. It's possible I was
just confusing this with the effect of the disappearing XSD, i.e. I saw the
resulting nested facade only having one object inside it and assumed,
incorrectly, that it was another facade.




 Thanks,
 Raymond

 - Original Message -
 From: Simon Laws [EMAIL PROTECTED]
 To: tuscany-dev tuscany-dev@ws.apache.org
 Sent: Thursday, February 14, 2008 6:15 AM
 Subject: Processing multiple WSDLs in the same namespace


  re. https://issues.apache.org/jira/browse/TUSCANY-2043
 
  Part of the the fix for this involves removing a FIXME in
  org.apache.tuscany.sca.interfacedef.wsdl.xml.XSDModelResolver.
  Specifically
  at the bottom of the aggregate method I need to comment out two lines...
 
 
 // FIXME: [rfeng] This is hacky
 //definitions.clear();
 //definitions.add(aggregated);
 return aggregated;
 
  Raymond, do you recall why these were put in?
 
  Because the non-aggregated definition is replaces with the aggregated
  definition the effect of including them is to build a hierarchy of XSD
  that
  could end up looking like:
 
   facade
 facade
   facade
 xsd
 
  depending on how may WSDL/Schema there are in the same namespace.
 
  The other, possibly more dangerous, effect is that some schema are
 omitted
  because the aggregated schema may not include all of the schema that
 are
  in the original definitions list, for example, if many schema are in the
  same namespace then only the inline schema for WSDL that have been
  resolved
  so far will be present as opposed to all of the XSD that could be
  available.
  So the original list is cleared and replaced with a shortened list. The
  XSD
  that have now been remove will not be resolved.
 
  I get a clean build with these two lines removed, and with the other
  required changes in place, so will check my fix in unless someone
 shouts.
 
  Simon
 






Re: Processing multiple WSDLs in the same namespace

2008-02-14 Thread Simon Laws
Raymond,

A couple of comments below..

My preference would be to check in the changes I've made so that the itest
(wsdl-multiple) I've just checked in at least works. I can then close
TUSCANY-2043, which relates to a very specific problem, and raise a new one
that discusses the general situation. We then have some extra testing in the
build to keep us honest when it comes to fixing the more general case.

Sound OK?

Regards

Simon

On Thu, Feb 14, 2008 at 5:46 PM, Raymond Feng [EMAIL PROTECTED] wrote:

 Maybe the best approach is keep the physical artifacts in the list as-is
 without aggregation.


+1


 And we change the artifact resolving so that it only
 happens at lower levels such as WSDL portType, binding, service or XSD
 element, type. This way we can find the accurate artifact.


So you mean that the aggregation is no longer required? Sounds good to me.
I.e.:

Currently we aggregate the WSDL/XSD at resolution time and then have extra
logic at run time to unpick this aggregation.

It would seem better to:

Pick the precise artifact that is required at resolution time and do away
with the runtime requirement to analyze aggregations.


 The current Tuscany code tries to resolve WSDLDefinition/XSDefinition
 first.
 It seems to be causing ambiguity when there are multiple WSDLs/XSDs under
 the same namespace.

 Here are some examples of references to WSDL/XSD elements from SCA.

 [EMAIL PROTECTED] -- WSDL portType
 [EMAIL PROTECTED] -- WSDL portType, service, binding, etc.
 property -- XSD element or type

 In the above, we do have the information about what type of artifacts
 we're
 trying to resolve. Do any of you see a case that we can only resolve at
 WSDL
 Definition or XML Schema level?



 Thanks,
 Raymond



Re: [continuum] BUILD ERROR: Apache Tuscany SCA Implementation Project

2008-02-15 Thread Simon Laws
On Feb 15, 2008 4:09 PM, Continuum VMBuild Server [EMAIL PROTECTED]
wrote:

 Online report :
 http://vmbuild.apache.org/continuum/buildResult.action?buildId=51091projectId=277

 Build statistics:
  State: Error
  Previous State: Building
  Started at: Fri 15 Feb 2008 07:06:27 -0800
  Finished at: Fri 15 Feb 2008 08:04:19 -0800
  Total time: 57m 51s
  Build Trigger: Schedule
  Build Number: 45
  Exit code: 0
  Building machine hostname: vmbuild.apache.org
  Operating system : Linux(unknown)
  Java Home version :
  java version 1.5.0_12
  Java(TM) 2 Runtime Environment, Standard Edition (build
 1.5.0_12-b04)
  Java HotSpot(TM) Client VM (build 1.5.0_12-b04, mixed mode,
 sharing)

  Builder version :
  Maven version: 2.0.7
  Java version: 1.5.0_12
  OS name: linux version: 2.6.20-16-server arch: i386


 
 SCM Changes:

 
 Changed: slaws @ Fri 15 Feb 2008 05:07:00 -0800
 Comment: TUSCANY-2046
 Make itest use port 8085 rather than 8080
 Files changed:
  /incubator/tuscany/java/sca/itest/wsdl-multiple/src/main/resources/auto-
 wsdl.composite ( 628050 )

  /incubator/tuscany/java/sca/itest/wsdl-multiple/src/main/resources/manual-
 wsdl.composite ( 628050 )
  
 /incubator/tuscany/java/sca/itest/wsdl-multiple/src/main/resources/wsdl/helloworld.HelloWorldCallback.wsdl
 ( 628050 )
  
 /incubator/tuscany/java/sca/itest/wsdl-multiple/src/main/resources/wsdl/helloworld.HelloWorldService.wsdl
 ( 628050 )

 Changed: slaws @ Fri 15 Feb 2008 05:14:14 -0800
 Comment: TUSCANY-2047
 Address a FIXME in the SCABindingProcessor to allow the binding processor
 to be loaded through the normal extension loading process rather than being
 created explicitly in the ReallySmallRuntime. In theory this would make it
 easier for people to provide their own binding.sca implementation.
 Files changed:
  
 /incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/SCABindingFactoryImpl.java
 ( 628054 )
  
 /incubator/tuscany/java/sca/modules/binding-sca-xml/src/main/java/org/apache/tuscany/sca/binding/sca/xml/SCABindingProcessor.java
 ( 628054 )
  
 /incubator/tuscany/java/sca/modules/binding-sca-xml/src/test/java/org/apace/tuscany/sca/binding/sca/xml/ReadTestCase.java
 ( 628054 )
  
 /incubator/tuscany/java/sca/modules/binding-sca-xml/src/test/java/org/apace/tuscany/sca/binding/sca/xml/WriteTestCase.java
 ( 628054 )
  
 /incubator/tuscany/java/sca/modules/host-embedded/src/main/java/org/apache/tuscany/sca/host/embedded/impl/ReallySmallRuntime.java
 ( 628054 )
  
 /incubator/tuscany/java/sca/modules/host-embedded/src/main/java/org/apache/tuscany/sca/host/embedded/impl/ReallySmallRuntimeBuilder.java
 ( 628054 )

 Changed: slaws @ Fri 15 Feb 2008 05:15:58 -0800
 Comment: Make the satic Domain instance protected so that I can get at it
 from a class that extends the domain.
 Files changed:
  
 /incubator/tuscany/java/sca/modules/host-embedded/src/main/java/org/apache/tuscany/sca/host/embedded/SCADomain.java
 ( 628057 )

 Changed: slaws @ Fri 15 Feb 2008 05:42:24 -0800
 Comment: Add some traps around the domain cleanup to remove some null
 pointer exceptions from the continuum output to make the real error a little
 clearer
 Files changed:
  
 /incubator/tuscany/java/sca/itest/wsdl-multiple/src/test/java/org/apache/tuscany/sca/itest/AutoWSDLTestCase.java
 ( 628063 )
  
 /incubator/tuscany/java/sca/itest/wsdl-multiple/src/test/java/org/apache/tuscany/sca/itest/ManualWSDLTestCase.java
 ( 628063 )


 
 Dependencies Changes:

 
 No dependencies changed



 
 Build Defintion:

 
 POM filename: pom.xml
 Goals: -Pdistribution clean install
 Arguments: --batch-mode
 Build Fresh: false
 Always Build: false
 Default Build Definition: true
 Schedule: DEFAULT_SCHEDULE
 Profile Name: Java 5, Large Memory
 Description:



 
 Test Summary:

 
 Tests: 1023
 Failures: 0
 Total time: 932533


 
 Build Error:

 
 org.apache.maven.continuum.execution.ContinuumBuildCancelledException: The
 build was cancelled
at
 org.apache.maven.continuum.execution.AbstractBuildExecutor.executeShellCommand
 (AbstractBuildExecutor.java:216)
at
 org.apache.maven.continuum.execution.maven.m2.MavenTwoBuildExecutor.build(
 

Re: Classloading code in core contribution processing

2008-02-25 Thread Simon Laws
Hi Rajini

Just back in from vacation and catching up. I've put some comments inline,
but the text seems to be circling around a few hot issues:

- How closely class loading should be related to model resolution, i.e.
options 1 and 2 from previously in this thread
- Support for split namespaces/shared packages
- Recursive searching of contributions
- Handling non-existent resources, e.g. by spotting cycles in
imports/exports.

These are of course related but it may be easier if we address them
independently.

Simon




  Tuscany node and domain code are split into three modules each for API,
 SPI
 and Implementation eg. tuscany-node-api, tuscany-node and
 tuscany-node-impl.
 The API module defines a set of classes in org.apache.tuscany.sca.node and
 the SPI module extends this package with more classes. So the package
 org.apache.tuscany.sca.node is split across tuscany-node-api and
 tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
 entries for Tuscany modules, we would get three OSGi bundles corresponding
 to the node modules. And the API and SPI bundles have to specify that they
 use split-packages. It would obviously have been better if API and SPI
 used
 different packages, but the point I am trying to make is that splitting
 packages across modules is not as crazy as it sounds, and split packages
 do
 appear in code written by experienced programmers.


The split packages across the various node/domain modules were not by design.
The code moved around and that was the result. We could go ahead and fix
this. Are there any other explicit examples of split packages that you
happen to know about?


 IMO, supporting overlapping package import/exports is more important with
 SCA contributions than with OSGi bundles since SCA contributions can
 specify
 wildcards in import.java/export.java. eg. If you look at packaging
 tuscany-contribution and tuscany-contribution-impl where
 tuscany-contribution-impl depends on tuscany-contribution, there is no
 clear
 naming convention to separate the two modules using a single import/export
 statement pair. So if I could use wildcards, the simplest option that
 would
 avoid separate import/export statements for each subpackage (as required
 in
 OSGi) would be to export org.apache.tuscany.sca.contribution* from
 tuscany-contribution and import org.apache.tuscany.sca.contribution* in
 tuscany-contribution-impl. The sub-packages themselves are not shared but
 the import/export namespaces are. We need to avoid cycles in these cases.
 Again, there is a way to avoid sharing package spaces, but it is simpler
 to
 share, and there is nothing in the SCA spec which stops you sharing
 packages
 across contributions.


I'm not sure if you are suggesting that we implement a wildcard mechanism or
that we impose some restrictions, for example, to mandate that import.java
should use fully qualified package names (as it says in line 2929 of the
assembly spec). Are wildcards already supported?

The assembly spec seems to recognize that artifacts from the same namespace
may appear in several places (line 2946) but it is suggesting that these
multiple namespace references are included explicitly as distinct import
declarations.



 I dont think the current model resolver code which recursively searches
 exporting contributions for artifacts is correct anyway - even for
 artifacts
 other than classes. IMO, when an exporting contribution is searched for an
 artifact, it should only search the exporting contribution itself, not its
 imports. And that would avoid cycles in classloading. I would still prefer
 not to intertwine classloading and model resolution because that would
 unnecessarily make classloading stack traces which are complex anyway,
 even
 more complex that it needs to be. But at least if it works all the time,
 rather than run into stack overflows, I might not have to look at those
 stack traces


Looking at the assembly spec there is not much discussion of recursive
inclusion. I did find line 3022, which describes the behaviour w.r.t.
indirect dependent contributions and which to me implies that contributions
providing exports should be recursively searched.




 and this will convince me to help fix it now :) Thanks.


 It is not broken now - you have to break it first and then fix it :-).


  --
  Jean-Sebastien
 
 
 


 --
 Thank you...

 Regards,

 Rajini



Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Simon Laws
On Mon, Feb 25, 2008 at 1:12 AM, Venkata Krishnan [EMAIL PROTECTED]
wrote:

 Hi,

 I have been working on modifying the existing bigbank demo to include
 security (things that have been tried and are working in the secure-bigbank
 demo).

 All seemed fine, until I tried the modified bigbank demo from a
 distribution.  One of the things we do now is aggregating the various
 definitions.xml in META-INF/services since we now allow various modules
 and
 contributions to have their own definitions.xml if needs be.

  In a distro all of these definitions.xml are aggregated into a single
 file
 using the shade transformer.  I end up with a definitions.xml that has
 multiple sca:definitions elements but no root.  Also there seem to be
 multiple XML declarations - <?xml version="1.0" encoding="ASCII"?>.
 All
 this creates trouble for the XMLStreamReader.  At the present moment I am
 thinking of the following :

 1) In the Definitions Document Processor prepend and append the xml with
 dummy elements so that there is a root element

 2) Either strip all the duplicate xml declarations when doing step (1) or
 go
 and manually delete them in all the definitions.xml files in our modules

 Though most of it has been tried and works, I feel it's like some 'trick
 code' and could give us trouble with maintainability.  Does anybody have a
 better idea to deal with this ?

 Thanks.

 - Venkat


Hi Venkat

Can I just clarify that you are saying that you are having problems because
of the way that the shade plugin is aggregating the definitions.xml files
that now appear in various extension modules, e.g. binding-ws-axis2,
policy-logging etc., and that this is not specifically related to the bigbank
demo or to the way that Tuscany subsequently aggregates the contents it
finds in definitions.xml files?

Does definitions.xml have to appear in META-INF/services? Could we, for
example, further qualify the definitions.xml file by putting it in a
directory that represents the name of the extension module to which it
refers? Or does that make it difficult to pick them up generically?

Simon


Re: [TEST] Conversation Lifetime

2008-02-25 Thread Simon Laws
On Mon, Feb 25, 2008 at 1:08 PM, Kevin Williams [EMAIL PROTECTED]
wrote:

 I would like to add a few iTests for Conversation Lifetime items that
 don't seem to have explicit tests. In particular, I am looking at:

  1) The ability to continue a conversation by loading a reference
 that had been written to persistent storage
  2) Implicit end of a conversation by a non-business exception
  3) Verify that a client's call to Conversation.end truly ends the
 conversation

 Does this sound like a good idea?

 Thanks,

 --Kevin


Kevin, sounds like a great idea! Let me know if you need any help.
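For item 3, the shape of the test might be something like this (CounterService
is a hypothetical conversational service used for illustration, and I'm
assuming the embedded SCADomain's getServiceReference and the OSOA
ServiceReference/Conversation client API; JUnit asserts as in the other
itests):

    ServiceReference<CounterService> ref =
            scaDomain.getServiceReference(CounterService.class, "CounterComponent");
    CounterService counter = ref.getService();

    counter.increment();
    counter.increment();
    assertEquals(2, counter.getCount());   // state held for the life of the conversation

    ref.getConversation().end();           // client explicitly ends the conversation

    counter.increment();                   // next call should start a brand new conversation
    assertEquals(1, counter.getCount());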

Simon


Re: Trouble with aggregating definitions.xml in distro

2008-02-25 Thread Simon Laws
So, just to be clear again...



 
 
  Hi Venkat
 
  Can I just clarify that you are saying that you are having problems
  because
  of the way that the shader plugin is aggregating the definitions.xmlfiles
  that now appear in various extension modules, e.g. binding-ws-axis2,
  poilcy-logging et. and that this is not specifically related to the
  bingbank
  demo or to the way that Tuscany subsequently aggregates the contents is
  finds in definitions.xml files.
 

 Yes I am talking about aggregating the definitions.xml files from the
 various modules.  The shade plugin is working alright.


Inasmuch as the shade plugin is identifying that there are multiple
files with the same name, definitions.xml in this case, and is blindly
concatenating them?


  This is not specific
 to the bigbank demo - more a general problem.  I think I have been caught
 on
 wrong foot trying to use this META-INF/services aggregation for the
 definitions.xml file as well. :(


I agree that having all of the files called definitions.xml located in the
same logical place on the classpath is causing problems, and also that the
choice of META-INF/services doesn't seem to be right. I don't think these two
things are related though.
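Just to spell out why the same logical location works for separate jars but
not for a single shaded archive (the path here is the current
META-INF/services location; this is only an illustration):

    // one URL per jar that carries the resource; after shading there can only
    // be a single entry with this name, hence the concatenation problem
    // (getResources throws IOException; java.net.URL, java.util.Enumeration)
    Enumeration<URL> urls = Thread.currentThread().getContextClassLoader()
            .getResources("META-INF/services/definitions.xml");
    while (urls.hasMoreElements()) {
        URL definitionsFile = urls.nextElement();
        // read and process one definitions.xml per extension module
    }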




 
  Does definitions.xml have to appear in META-INF/services. Could we, for
  example, further qualify the definitions.xml file by putting it in a
  directory that represents the name of the extension module to which it
  refers? Or does that make it difficult to pick them up generically?
 

 I did think of including the extension module where it is defined, but then
 we must enlist all extension modules, or in other words enlist the
 locations of these definitions.xml files somewhere.  I am not sure if we can
 search for resources using regular expressions - something like
 /*/definitions.xml.


For example, could you use something like

policy-logging\src\main\resources\org\apache\tuscany\policy\logging\definitions.xml




 Thanks.


 
  Simon
 



Re: Classloading code in core contribution processing

2008-02-25 Thread Simon Laws
Hi Rajini

I'm covering old ground here but trying to make sure I'm looking at this in
the right way.

A - How closely class loading should be related to model resolution, i.e.
options 1 and 2 from previously in this thread.
   A1 (classloader uses model resolver) - standardizes the artifact
resolution process but makes classloading more complex.
   A2 (classloader doesn't use model resolver) - simplifies the classloading
process but leads to multiple mechanisms for artifact resolution.
B - Support for split namespaces/shared packages.
   Supporting this helps when consuming Java artifacts where there is legacy
code, and for some Java patterns such as localization. I expect this could
apply to other types of artifacts also, for example, XML schema that use
library schema for common types.
C - Recursive searching of contributions.
   It's not clear that we have established that this is a requirement.
D - Handling non-existent resources, e.g. by spotting cycles in
imports/exports.
   It would seem to me to be sensible to guard against this generally. It is
a specific requirement if we have C.

It seems to me that we are talking about two orthogonal pieces of work.
Firstly, B, C & D describe the behaviour of artifact resolution in general.
Then, given the artifact resolution framework, how does Java classloading
fit in, i.e. A1 or A2?

Can we agree the general behaviour first and then agree Java classloading
as a special case of this?
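On D, whatever we decide about C, the general resolution walk probably wants
a guard of this shape (a sketch only; the model types and the local lookup
are stand-ins, not the real ModelResolver API):

    // cycle-safe search: a contribution is only ever visited once per resolution
    <T> T resolve(Contribution contribution, String artifactURI, Set<Contribution> visited) {
        if (!visited.add(contribution)) {
            return null;                                    // already searched: break the cycle
        }
        T artifact = resolveLocally(contribution, artifactURI);   // hypothetical local lookup
        if (artifact != null) {
            return artifact;
        }
        // only follow exports if we decide recursive searching (C) is required
        for (Contribution exporter : matchingExporters(contribution, artifactURI)) {   // hypothetical
            T exported = resolve(exporter, artifactURI, visited);
            if (exported != null) {
                return exported;
            }
        }
        return null;
    }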

Regards

Simon


Re: Problems with maven-antrun-plugin

2008-02-26 Thread Simon Laws
On Tue, Feb 26, 2008 at 10:15 AM, Matthew Peters [EMAIL PROTECTED]
wrote:

 Simon, that's a good idea. I wish I had captured the output from when it
 failed. I removed the plugin and ran mvn validate and this time it
 worked fine. I don't know whether to :-) or :-(

 This time:

 [INFO] artifact org.apache.maven.plugins:maven-antrun-plugin: checking for
 updates from apache.incubator
 [INFO] artifact org.apache.maven.plugins:maven-antrun-plugin: checking for
 updates from codehaus-snapshot
 [INFO] artifact org.apache.maven.plugins:maven-antrun-plugin: checking for
 updates from central
 Downloading:

 http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-antrun-plugin/1.1/maven-antrun-plugin-1.1.pom
 1K downloaded

 Matthew










 On Mon, Feb 25, 2008 at 9:16 AM, Matthew Peters
 [EMAIL PROTECTED]
 wrote:

  I had a problem to do with maven-antrun-plugin while doing a Tuscany
  build, and I mention it in case anyone else ever has the same
 difficulty.
 
  I was starting with a fresh install of maven, and thus a fresh maven
  repository. During the build it wanted to download and install
  maven-antrun-plugin, but could not seem to find it automatically, even
  though it had found and installed hundreds of others with no difficulty.
 I
  went and downloaded three different versions from the maven plugin site
  http://maven.apache.org/plugins/ and installed them successfully ...I
  think with maven install:install-file etc. etc. but the build was
  still never satisfied. Also, when installed, there was only ever just
 the
  jar file inside the repository - other plugins I have installed always
  have a .pom file too. The only way I fixed it in the end was to ask
  someone else to zip up and send me the plugin from their repository -
 this
  did include both jar and pom.
 
  So, I don't know why this plugin was different from others, and I assume
  it is a problem that only people starting with a fresh maven will run
  into.
 
  Matthew
 
 
 
 
 
 
 
 
 
 
 
  Hi Matthew

 Can you say where maven was trying, and failing, to download the antrun
 plugin from in your case. Maybe there is a problem with the repo that
 holds
 this plugin.

 Regards

 Simon












Hmmm, one of the problems with network-based repositories is that
sometimes they don't work and then sometimes they do. It's not so bad when
you are expecting it and can take appropriate action (like you did by
phoning a friend until it sorts itself out). Very much a mystery if it's your
first time though.

Simon


Re: Exposing composite file as a web service

2008-02-26 Thread Simon Laws
On Tue, Feb 26, 2008 at 12:59 PM, Sandeep Raman [EMAIL PROTECTED]
wrote:

 Hi,

 Can the composite file be exposed as a web service to external world.
 how can i go about it. can you please guide me.

 regards
 Sandeep Raman.


 Hi Sandeep

I'm not clear what you mean by "Can the composite file be exposed as a web
service" but you can certainly expose, as web services, component services
that a composite file describes. For example, take a look at the
helloworld-ws-service sample [1]. In the composite file
(helloworldws.composite) [2] that this sample uses you will see that it
defines a single component with a single service exposed as a web service.

<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
    targetNamespace="http://helloworld"
    xmlns:hw="http://helloworld"
    name="helloworldws">

    <component name="HelloWorldServiceComponent">
        <implementation.java class="helloworld.HelloWorldImpl" />
        <service name="HelloWorldService">
            <interface.wsdl interface="http://helloworld#wsdl.interface(HelloWorld)" />
            <binding.ws uri="http://localhost:8085/HelloWorldService"/>
        </service>
    </component>

</composite>


Note that the <binding.ws .../> part makes the service a web service. So if I
run this sample I can reasonably expect to be able to point my browser at

http://localhost:8085/HelloWorldService?wsdl

to see the WSDL description of the web service that is created and to prove
that it is running. For this sample there is also a client SCA application
[3] that can be used to make a call to this web service.
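For completeness, the client side follows roughly the pattern below using the
embedded SCADomain API. Treat it as a minimal sketch: the composite file name,
component name and greeting operation are illustrative stand-ins rather than
the exact contents of the client sample [3].

import org.apache.tuscany.sca.host.embedded.SCADomain;
import org.osoa.sca.annotations.Remotable;

// Illustrative service interface matching the HelloWorld service above.
@Remotable
interface HelloWorldService {
    String getGreetings(String name);
}

public class HelloWorldClient {
    public static void main(String[] args) {
        // Start a small embedded SCA domain from a (hypothetical) client
        // composite that wires a reference with <binding.ws> to the service
        // shown above.
        SCADomain scaDomain = SCADomain.newInstance("helloworldwsclient.composite");

        // Look up a service by interface and component name, then call it.
        HelloWorldService helloWorld =
            scaDomain.getService(HelloWorldService.class, "HelloWorldClientComponent");
        System.out.println(helloWorld.getGreetings("World"));

        scaDomain.close();
    }
}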

Hope that helps. Let me know if I'm interpreting the question incorrectly
here or if you need more information.

Regards

Simon

[1]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/helloworld-ws-service/
[2]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/helloworld-ws-service/src/main/resources/META-INF/sca-deployables/helloworldws.composite
[3]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/helloworld-ws-reference/


Re: svn commit: r631266 - in /incubator/tuscany/java/sca/modules/contribution/src/main: java/org/apache/tuscany/sca/contribution/processor/ resources/META-INF/services/

2008-02-26 Thread Simon Laws
Sebastien

This looks interesting. Can you say a little about how this will be used?
I'm interested as I'd like to see us open up the contribution service a bit
and provide some interfaces that allow us to operate on contributions (find
artifacts, read artifacts, resolve artifacts, etc.) rather than having the
contribution service as a black box. Is this where you are going?

Simon

On Tue, Feb 26, 2008 at 3:57 PM, [EMAIL PROTECTED] wrote:

 Author: jsdelfino
 Date: Tue Feb 26 07:57:32 2008
 New Revision: 631266

 URL: http://svn.apache.org/viewvc?rev=631266view=rev
 Log:
 Adding an interface for scanners used to scan contributions and find
 artifacts in them.

 Added:

  
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java
   (with props)

  
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScannerExtensionPoint.java
   (with props)

  
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/DefaultContributionScannerExtensionPoint.java
   (with props)

  
 incubator/tuscany/java/sca/modules/contribution/src/main/resources/META-INF/services/org.apache.tuscany.sca.contribution.processor.ContributionScannerExtensionPoint

 Added:
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java?rev=631266view=auto

 ==
 ---
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java
 (added)
 +++
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java
 Tue Feb 26 07:57:32 2008
 @@ -0,0 +1,67 @@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *   http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing,
 + * software distributed under the License is distributed on an
 + * AS IS BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 + * KIND, either express or implied.  See the License for the
 + * specific language governing permissions and limitations
 + * under the License.
 + */
 +package org.apache.tuscany.sca.contribution.processor;
 +
 +import java.io.IOException;
 +import java.net.MalformedURLException;
 +import java.net.URL;
 +import java.util.List;
 +
 +import org.apache.tuscany.sca.contribution.service.ContributionException;
 +
 +/**
 + * Interface for contribution package scanners
 + *
 + * Contribution scanners understand the format of the contribution and
 how to get the
 + * artifacts in the contribution.
 + *
 + * @version $Rev$ $Date$
 + */
 +public interface ContributionScanner {
 +
 +/**
 + * Returns the type of package supported by this package scanner.
 + *
 + * @return the package type
 + */
 +String getContributionType();
 +
 +/**
 + * Returns a list of artifacts in the contribution.
 + *
 + * @param contributionURL Contribution URL
 + * @return List of artifact URIs
 + * @throws ContributionException
 + * @throws IOException
 + */
 +List<String> getArtifacts(URL contributionURL) throws
 ContributionException, IOException;
 +
 +/**
 + * Return the URL for an artifact in the contribution.
 + *
 + * This is needed for archives such as jar files that have specific
 URL schemes
 + * for the artifacts they contain.
 + *
 + * @param contributionURL Contribution URL
 + * @param artifact The relative URI for the artifact
 + * @return The artifact URL
 + */
 +URL getArtifactURL(URL packageSourceURL, String artifact) throws
 MalformedURLException;
 +
 +}

 Propchange:
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java

 --
svn:eol-style = native

 Propchange:
 incubator/tuscany/java/sca/modules/contribution/src/main/java/org/apache/tuscany/sca/contribution/processor/ContributionScanner.java

 --
svn:keywords = Rev Date

 Added:
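As an aside, to make the new SPI concrete: a minimal scanner for exploded
directory contributions might look roughly like the sketch below. The class
name, the content type string and the directory walking are illustrative
assumptions on top of the interface above, not part of the commit.

import java.io.File;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import org.apache.tuscany.sca.contribution.processor.ContributionScanner;
import org.apache.tuscany.sca.contribution.service.ContributionException;

// Sketch only: a ContributionScanner for contributions that are plain directories.
public class DirectoryContributionScanner implements ContributionScanner {

    public String getContributionType() {
        // Hypothetical content type string for exploded directory contributions
        return "application/vnd.tuscany.folder";
    }

    public List<String> getArtifacts(URL contributionURL) throws ContributionException, IOException {
        File root = new File(contributionURL.getPath());
        List<String> artifacts = new ArrayList<String>();
        collect(root, root, artifacts);
        return artifacts;
    }

    public URL getArtifactURL(URL contributionURL, String artifact) throws MalformedURLException {
        // Directory artifacts are addressable by simple relative URLs
        return new URL(contributionURL, artifact);
    }

    // Recursively gather artifact URIs relative to the contribution root
    private void collect(File root, File dir, List<String> artifacts) {
        File[] files = dir.listFiles();
        if (files == null) {
            return;
        }
        for (File file : files) {
            if (file.isDirectory()) {
                collect(root, file, artifacts);
            } else {
                String uri = file.getPath().substring(root.getPath().length() + 1);
                artifacts.add(uri.replace(File.separatorChar, '/'));
            }
        }
    }
}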
 

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-26 Thread Simon Laws
On Tue, Feb 5, 2008 at 8:34 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Venkata Krishnan wrote:
  It would also be good to have some sort of 'ping' function that could be
  used to check if a service is receptive to requests.  Infact I wonder if
 the
  Workspace Admin should also be able to test this sort of a ping per
  binding.  Is this something that can go into the section (B) .. or is
 this
  out of place ?
 

 Good idea, I'd put it section (D). A node runtime needs to provide a way
 to monitor its status.

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Hi Sebastien

I see you have started to check in code related to steps A and B. I have
time this week to start helping on this and thought I would start looking at
the back end of B and moving into C but don't want to tread on your toes.

I made some code to experiment with before I went on holiday so it's not
integrated with your code (it just uses the Workspace interface). What I was
starting to look at was resolving a domain level composite which includes
unresolved composites. I.e. I built a composite which includes the
deployable composites for a series of contributions and am learning about
resolution and re-resolution.

I'm not doing anything about composite selection for deployment just yet.
That will come from the node model/gui/command line. I just want to work out
how we get the domain resolution going in this context.

If you are not already doing this I'll carry on experimenting in my sandbox
for a little while longer and spawn off a separate thread to discuss.

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-26 Thread Simon Laws
On Mon, Feb 25, 2008 at 4:17 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

   Jean-Sebastien Delfino wrote:
  Looks good to me, building on your initial list I added a few more
 items
  and tried to organize them in three categories:
 
  A) Contribution workspace (containing installed contributions):
  - Contribution model representing a contribution
  - Reader for the contribution model
  - Workspace model representing a collection of contributions
  - Reader/writer for the workspace model
  - HTTP based service for accessing the workspace
  - Web browser client for the workspace service
  - Command line client for the workspace service
  - Validator for contributions in a workspace
 
 
  ant elder wrote:
  Do you have your heart set on calling this a workspace or are you open to
  calling it something else like a repository?
 

 I think that they are two different concepts, here are two analogies:

 - We in Tuscany assemble our distro out of artifacts from multiple Maven
 repositories.

 - An application developer (for example using Eclipse) can connect
 Eclipse workspace to multiple SVN repositories.

 What I'm looking after here is similar to the above 'distro' or 'Eclipse
 workspace', basically an assembly of contributions, artifacts of various
 kinds, that I can load in a 'workspace', resolve, validate and run,
 different from the repository or repositories that I get the artifacts
 from.
 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


To me "repository" (in my mind, somewhere to store things) describes a much
less active entity compared to the workspace, which has to do a lot of work
to load and assimilate information from multiple contributions. I'm not sure
about "workspace" either but to me it's better than "repository", and it's not
"domain", which has caused us all kinds of problems.

My 2c

Simon


Re: svn commit: r631266 - in /incubator/tuscany/java/sca/modules/contribution/src/main: java/org/apache/tuscany/sca/contribution/processor/ resources/META-INF/services/

2008-02-27 Thread Simon Laws
On Tue, Feb 26, 2008 at 7:02 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  Sebastien
 
  This looks interesting. Can you say a little about how this will be
 used.
  I'm interested as I'd like to see us open up the contribution service a
 bit
  and provide some interfaces that allow us to operate on contributions
 (find
  artifacts, read artifacts, resolve artifacts etc.) rather than having
 the
  contribution service as a black box. Is this where you are going?
 
  Simon
 

 Yes that's where I'm going :), I'm trying to implement discrete
 functions to:
 - add/list/remove contributions/composites/nodes
 - analyze/resolve/validate contributions and their dependencies
 - read/compile-build/write composites without requiring a runtime

 ContributionScanner is similar to PackagingProcessor but without URIs
 (which bloat memory with a big workspace) and without an InputStream
 (which doesn't apply to contribution directories).

 For everything else I'm trying to reuse as much of the underlying model,
 for example use artifact processors for turning contributions into
 contribution models, use model resolvers to resolve contribution
 dependencies (see the other changes to contribution-impl in SVN r631293).

 I'm planning to commit the analyze/validate/find-dependencies functions
 to module workspace-impl.

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Ok, sounds great.

I would add another line to the functions list. Something like...

- associate composites with nodes/apply physical binding defaults/propagate
physical addresses based on domain level wiring

I'm collecting these various bits of info here (
http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Contribution+Processing)
as I come across them in the hope that we can build some documentation.

Simon


Re: Composite and PolicySet processing in ContributionServiceImpl?

2008-02-28 Thread Simon Laws
On Thu, Feb 28, 2008 at 9:04 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 There is a fair amount of special handling code for composites and
 policySets in ContributionServiceImpl.

 I guess these are hacked workarounds for some issues? any idea what the
 issues were?
 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


This was the pseudo code that Venkat used to describe the appliesTo
processing here [1] (a rough code sketch follows the list below).

Enhance composite with policy sets based on appliesTo information

   1. For each composite file read the xml content first...
      1. For each policyset in the domain...
         1. Extract the value of the 'appliesTo' attribute, which is an
            xpath expression
         2. Evaluate this expression against the composite xml
         3. For each node that results out of the above evaluation
            1. if the node contains an attribute named 'applicablePolicySets'
               1. concatenate to its value the name of the PolicySet
            2. else
               1. create an attribute named 'applicablePolicySets' and set
                  its value to the name of the PolicySet
      2. Wherever applicable the composite's elements will then have the
         additional attribute named 'applicablePolicySets'.
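For illustration, the core of that algorithm might look roughly like the Java
sketch below. It assumes the composite is available as a DOM Document and that
the policy set's name and appliesTo XPath expression are passed in; namespace
context and error handling are ignored.

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch: mark composite elements with the policy sets that apply to them.
public class ApplicablePolicySetMarker {

    public void mark(Document composite, String policySetName, String appliesToXPath) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Evaluate the policy set's appliesTo expression against the composite XML
        NodeList matches = (NodeList) xpath.evaluate(appliesToXPath, composite, XPathConstants.NODESET);

        for (int i = 0; i < matches.getLength(); i++) {
            Element element = (Element) matches.item(i);
            String existing = element.getAttribute("applicablePolicySets");
            if (existing.length() > 0) {
                // concatenate the policy set name onto the existing value
                element.setAttribute("applicablePolicySets", existing + " " + policySetName);
            } else {
                // create the attribute and set it to the policy set name
                element.setAttribute("applicablePolicySets", policySetName);
            }
        }
    }
}

A real implementation would iterate this over every composite and every policy
set in the domain, as the pseudo code above describes.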

Don't know if that covers all of the policy processing in
ContributionServiceImpl

Regards

Simon

[1] http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Runtime+Phases


Re: Composite and PolicySet processing in ContributionServiceImpl?

2008-02-28 Thread Simon Laws
On Thu, Feb 28, 2008 at 3:58 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  On Thu, Feb 28, 2008 at 9:04 AM, Jean-Sebastien Delfino 
  [EMAIL PROTECTED] wrote:
 
  There is a fair amount of special handling code for composites and
  policySets in ContributionServiceImpl.
 
  I guess these are hacked workarounds for some issues? any idea what the
  issues were?
  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
  This was the pseudo code that Venkat used to describe the appliesTo
  processing here [1].
 
  Enhance composite with policy sets based on appliesTo information
 
 1. For each composite file read the xml content first...
1. For each policyset in the domain...
   1. Extract the value of 'appliesTo' attribute with is an
   xpath expression
   2. Evaluate this expression against the composite xml
   3. For each node that results out of the above evaluation
  1. if the node contains an attribute named
  'applicablePolicySets'
 1. concatenate to its value, the name of the
 PolicySet
  2. else
 1. create an attribute named
 'applicablePolicySet' and set its value to the name of
  the PolicySet
2. Wherever applicable the composite's elements
 will have the additional attribute name 'applicablePolicySets'.
 
  Don't know if that covers all of the policy processing in
  ContributionServiceImpl
 
  Regards
 
  Simon
 
  [1]
 http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Runtime+Phases
 

 OK, the algorithm makes sense to me but I'm surprised that it's done in
 contribution-impl. IMO contribution-impl should not have dependencies on
 the specifics of composites and policies and all this work should be
 pushed one level up in the dependency stack.

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Well I think this is one of the things that could be usefully extracted
from the contribution service and into a separate "enhance composite with
policy" function. As to where it should go, I'm in a bit of a quandary. It
feels like we need something between what is currently in the contribution
service and some of the things that are currently in policy and in the
assembly builders. Could we have a new module
(assembly-processor/assembly-builder/assembly-configurator?) where we can put
functions that manipulate an assembly model that has already been read and
where function from other modules needs to be brought together.

I have an interest in this as I'm looking into the way that we can configure
a domain's composites with any endpoint information we know ahead of
deployment time and will likely want a home for an "enhance composite with
endpoint info" function.

Simon


Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-28 Thread Simon Laws
On Tue, Feb 26, 2008 at 5:49 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Tue, Feb 5, 2008 at 8:34 AM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

  Venkata Krishnan wrote:
   It would also be good to have some sort of 'ping' function that could
  be
   used to check if a service is receptive to requests.  Infact I wonder
  if the
   Workspace Admin should also be able to test this sort of a ping per
   binding.  Is this something that can go into the section (B) .. or is
  this
   out of place ?
  
 
  Good idea, I'd put it section (D). A node runtime needs to provide a way
  to monitor its status.
 
  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
  Hi Sebastien

 I see you have started to check in code related to steps A and B. I have
 time this week to start helping on this and thought I would start looking at
 the back end of B and moving into C but don't want to tread on you toes.

 I made some code to experiment with before I went on holiday so it's not
 integrated with your code (it just uses the Workspace interface). What I was
 starting to look at was resolving a domain level composite which includes
 unresolved composites. I.e. I built a composite which includes the
 deployable composites for a series of contributions and am learning about
 resolution and re-resolution.

 I'm not doing anything about composite selection for deployment just yet.
 That will come from the node model/gui/command line. I just want to work out
 how we get the domain resolution going in this context.

 If you are not already doing this I'll carry on experimenting in my
 sandbox for a little while longer and spawn of a separate thread to discuss.

 Simon


 And here's the separate thread following on from [1]... I'm looking at
what we can do with any endpoint information we have prior to the point at
which a composite is deployed to a node. This is an alternative to
(replacement for?) having the Tuscany runtime go and query for endpoint
information after it has been started. I have been summarizing info here
[2][3]. Looking at this I need to do something like the following (a rough
code sketch follows the list)...

- associate composites with nodes/apply physical binding defaults/propagate
physical addresses based on domain level wiring

   1. Read in node model - which provides
      1. Mapping of composite to node
      2. Default configuration of bindings at that node, e.g. the root
         URL required for binding.ws
   2. For each composite in the domain (I'm assuming I have access to the
      domain level composite model)
      1. Find, from the node model, the node which will host the composite
      2. For each service in the composite
         1. If there are no bindings for the service
            1. Create a default binding configured with the default URI
               from the node model
            2. We maybe should only configure the URI if we know there is
               a remote reference.
         2. else
            1. find each binding in the service
               1. Take the default binding configuration and apply it to
                  the binding
               2. What to do about URLs as they may be either
                  1. Unset
                     1. Apply algorithm from Assembly Spec 1.7.2
                  2. Set relatively
                     1. Apply algorithm from Assembly Spec 1.7.2
                  3. Set absolutely
                     1. Assume it is set correctly?
                  4. Set implicitly (from WSDL information)
                     1. Assume it is set correctly?
               3. The above is similar to what goes on during
                  compositeConfiguration in the build phase
      3. For each reference in the composite
         1. Look for any targets that cannot be satisfied within the
            current node (need an interface to call through which scans
            the domain)
         2. Find the service model for this target
         3. Do policy and binding matching
         4. For matching bindings ensure that the binding URL is unset and
            set with information from the target service
         5. The above is also similar to what happens during the build
            phase
      4. Domain Level Autowiring also needs to be taken into account
      5. Wire by impl that uses domain wide references also needs to be
         considered
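A rough sketch of the binding URI defaulting in 2.2, in the spirit of the
Assembly spec 1.7.2 composition rules, is shown below. The types here are
simplified placeholders invented for the sketch, not the real Tuscany assembly
model.

import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Placeholder for the per-node default binding configuration.
class NodeDefaults {
    // e.g. "binding.ws" -> "http://myhost:8080/services"
    private final Map<String, String> baseUris = new HashMap<String, String>();
    void setBaseUri(String bindingType, String baseUri) { baseUris.put(bindingType, baseUri); }
    String getBaseUri(String bindingType) { return baseUris.get(bindingType); }
}

// Placeholder for a service binding in the model.
class BindingInfo {
    String type; // e.g. "binding.ws"
    String uri;  // may be null (unset), relative, or absolute
}

public class BindingUriDefaulter {

    // Step 2.2: apply the node's default base URI to a service binding.
    public void defaultUri(BindingInfo binding, String serviceName, NodeDefaults node) {
        String base = node.getBaseUri(binding.type);
        if (base == null) {
            return; // no default known for this binding type on this node
        }
        if (binding.uri == null) {
            // Unset: compose base URI + service name
            binding.uri = base + "/" + serviceName;
        } else if (!URI.create(binding.uri).isAbsolute()) {
            // Set relatively: resolve against the node's base URI
            binding.uri = base + "/" + binding.uri;
        }
        // Set absolutely (or implicitly from WSDL): assume it is correct and leave it
    }
}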

Referring to the builder code now, it feels like 2.2 above is a new model
enhancement step that could reuse (some of) the function in
CompositeConfigurationBuilderImpl.configureComponents but with extra
binding-specific features to ensure that URLs are set correctly.

2.3 looks very much like CompositeWireBuilder.

My quandary at the moment is that the process has a dependency on the node
description so it doesn't fit in the builders where they are at the moment.
It feels like we need

Are comments on SVN commit messages archived anywhere?

2008-02-28 Thread Simon Laws
Are comments posted against SVN commit messages in tuscany-dev archived
anywhere?

Simon


Re: Are comments on SVN commit messages archived anywhere?

2008-02-28 Thread Simon Laws
On Thu, Feb 28, 2008 at 7:01 PM, Luciano Resende [EMAIL PROTECTED]
wrote:

 Not sure I understood what you are looking for, but tuscany-commits is
 archived in [1]

 [1]
 http://www.mail-archive.com/tuscany-commits%40ws.apache.org/maillist.html

 On Thu, Feb 28, 2008 at 10:58 AM, Simon Laws [EMAIL PROTECTED]
 wrote:
  Are comments posted against SVN commit messages in tuscany-dev archived
   anywhere?
 
   Simon
 



 --
 Luciano Resende
 Apache Tuscany Committer
 http://people.apache.org/~lresende
 http://lresende.blogspot.com/

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Thanks Luciano

I'm having a bit of a senior moment:-( I looked at the commit list but of
course when someone comments back it goes back to the dev list and I didn't
look there. Doh.

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-02-28 Thread Simon Laws
On Tue, Feb 26, 2008 at 5:57 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Mon, Feb 25, 2008 at 4:17 PM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

Jean-Sebastien Delfino wrote:
   Looks good to me, building on your initial list I added a few more
  items
   and tried to organize them in three categories:
  
   A) Contribution workspace (containing installed contributions):
   - Contribution model representing a contribution
   - Reader for the contribution model
   - Workspace model representing a collection of contributions
   - Reader/writer for the workspace model
   - HTTP based service for accessing the workspace
   - Web browser client for the workspace service
   - Command line client for the workspace service
   - Validator for contributions in a workspace
  
  
   ant elder wrote:
   Do you have you heart set on calling this a workspace or are you open
  to
   calling it something else like a repository?
  
 
  I think that they are two different concepts, here are two analogies:
 
  - We in Tuscany assemble our distro out of artifacts from multiple Maven
  repositories.
 
  - An application developer (for example using Eclipse) can connect
  Eclipse workspace to multiple SVN repositories.
 
  What I'm looking after here is similar to the above 'distro' or 'Eclipse
  workspace', basically an assembly of contributions, artifacts of various
  kinds, that I can load in a 'workspace', resolve, validate and run,
  different from the repository or repositories that I get the artifacts
  from.
  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 

 To me repository (in my mind somewhere to store things) describes a much
 less active entity compared to the workspace which has to do a lot of work
 to load and assimilate information from multiple contributions. I'm not sure
 about workspace either but to me it's better than repository and it's not
 domain which has caused us all kinds of problems.

 My 2c

 Simon


I started looking at step D), having a rest from URLs :-) In the context of
this thread the node can lose its connection to the domain and hence the
factory, and the node interface slims down. So a runtime that loads a set of
contributions and a composite becomes:

create a node
add some contributions (addContribution) and mark a composite for
starting (currently called addToDomainLevelComposite)
start the node
stop the node

You could then recycle (destroy) the node and repeat if required.
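In code, the slimmed-down node interface described above might look roughly
like this; only addContribution and addToDomainLevelComposite are taken from
the description, the rest of the names and signatures are assumptions for the
sketch.

import java.net.URL;

// Illustrative shape only, not the actual Tuscany node SPI.
public interface SlimNode {

    /** Add a contribution; the URL may be a local file: or a remote http: location. */
    void addContribution(String contributionUri, URL location);

    /** Mark a deployable composite from one of the contributions for starting. */
    void addToDomainLevelComposite(String compositeUri);

    void start();

    void stop();

    /** Recycle the node so the add/start/stop cycle can be repeated. */
    void destroy();
}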

This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
have started to check in an alternative implementation of the node
(node2-impl). I haven't changed any interfaces yet so I don't break any
existing tests (and the code doesn't run yet!).

Anyhow, I've been looking at the workspace code for parts A and B that has
recently been committed. It would seem to be fairly representative of the
motivating scenario [1]. I don't have detailed questions yet but
interestingly it looks like contributions, composites etc. are exposed as
HTTP resources. Sebastien, it would be useful to have a summary of your
thoughts on how it is intended to hang together and how these will be used.

I guess these HTTP resources bring a deployment dimension.

Local - Give the node contribution URLs that point to the local file system
from where the node reads the contributions (this is how it has worked to
date)
Remote - Give it contribution URLs that point out to HTTP resources so the
node can read the contributions from where they are stored on the network

Was that the intention?

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27362.html


Re: Trouble with aggregating definitions.xml in distro

2008-02-29 Thread Simon Laws
snip...

 Could we just add our own Shade transformer that knows how to aggregate
 the
 definitions files? Eg TO SUPPORT something like this in the shade plugin
 config:

  <transformer implementation="org.apache.tuscany.sca.tools.ShadeDefinitionsTransformer">
    <resource>META-INF/services/definitions.xml</resource>
  </transformer>


I hadn't noticed the list of shade transformer configurations before in the
distribution/bundle pom. How does the appending transformer get applied to
definitions.xml as it stands? Is this just default behaviour?

Simon


Re: Moving ServiceDiscovery and related classes to tuscany-util

2008-02-29 Thread Simon Laws
On Fri, Feb 29, 2008 at 7:34 AM, Venkata Krishnan [EMAIL PROTECTED]
wrote:

 Hi,

 I find that ServiceDiscovery is getting to be used widely and want to move
 it out of Contribution module to a separate module like Utils.  The
 immediate benefit I see from this is some relief from cyclic dependencies.
 For example, I am trying to use the ServiceDiscovery in the 'definitions'
 module and to do that I'd need the 'contribution' module.  But the
 'contribution' already has dependency on 'definitions'.

 I agree that 'contibutions' could be cleaned up a bit so as to not depend
 on
 'definitions' but I wish to deal with that separately and not as an
 alternative.

 Thoughts ?

 - Venkat

+1, It's used from lots of places, contribution, core, databinding etc. and
doesn't seem to be intrinsically related to the process of contribution.

How about tuscany-extensibility as an alternative to tuscany-util though as
util could end up being a bucket for all sorts of things.

Simon


Re: Trouble with aggregating definitions.xml in distro

2008-02-29 Thread Simon Laws
On Fri, Feb 29, 2008 at 11:58 AM, Venkata Krishnan [EMAIL PROTECTED]
wrote:

 Hi,

 Yes the shade transformer that we use there just about aggregates the
 contents of all files found with the path that we specify there.  So it
 also
 ends up aggregating the definitions.xml just as a text file.  So this ends
 up with multiple sca:definitions elements and then no root element in
 the
 aggregated definitions.xml.  This is where the problem started.

 I am looking at a XMLAppender that Ant pointed out.  Let me see how it
 goes.  Otherwise I want to try our own shade transformer.

 Thanks

 - Venkat

 On Fri, Feb 29, 2008 at 2:38 PM, Simon Laws [EMAIL PROTECTED]
 wrote:

  snip...
 
   Could we just add our own Shade transformer that knows how to
 aggregate
   the
   definitions files? Eg TO SUPPORT something like this in the shade
 plugin
   config:
  
   transformer implementation=
   org.apache.tuscany.sca.tools.ShadeDefinitionsTransformer
  resourceMETA-INF/services/definitions.xml/resource
   /transformer
  
 
  I hadn't noticed the list of shader transformer configurations before in
  the
  distribution/bundle pom. How does the appending transformer get applied
 to
  definitions.xml as it stands. Is this just default behaviour?
 
  Simon
 


So why do we specify transformers for some things and not for others? All
the transformers specified are AppendingTransformer, which I assume is what
is appending the definitions.xml files together by default.

Simon


Re: Updating Store Tutorial to share a common ui from store-assets

2008-03-01 Thread Simon Laws
On Sat, Mar 1, 2008 at 1:40 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Luciano Resende wrote:
  Now that I finished support for Resource import/export, I was thinking
  on updating our Store Tutorial to share a common store.html in the
  tutorial-assets.
 
  Thoughts ?
 
  [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg28457.html
 

 +1 from me

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


+1 sounds good to me

Simon


Re: [VOTE] Community is more important than code

2008-03-03 Thread Simon Laws
On Mon, Mar 3, 2008 at 2:51 PM, Matthew Peters [EMAIL PROTECTED]
wrote:

 I too am puzzled by the question. They are both important. What would be a
 scenario in which one would have to choose?

 Matthew



 Simon Nash [EMAIL PROTECTED]
 03/03/2008 13:13
 Please respond to
 tuscany-dev@ws.apache.org


 To
 tuscany-dev@ws.apache.org
 cc

 Subject
 Re: [VOTE] Community is more important than code






 ant elder wrote:
  After all this time in incubation we've all learnt to understand this
 but I
  think it may be useful reaffirm it now with a vote. So please all vote
 to
  show you understand and agree with the long standing Apache motto that
  community is more important than code.
 
  +1 from me.
 
 ...ant
 
 This seems a strange vote as it's not clear what action is implied by
 voting Yes, or what a No vote would mean.  Is it possible to
 clarify the purpose of this vote or provide a little more background?

   Simon


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]













 I think Ant is trying to make sure that it stays at the top of our minds.
It seems a bit strange to hold a VOTE on this (although I am happy to +1 it)
but I guess there isn't really another way in this email-based world of ours
to have a show of hands.

Simon


Re: [VOTE] Pass-by-value related SPI change

2008-03-03 Thread Simon Laws
On Mon, Mar 3, 2008 at 1:34 PM, ant elder [EMAIL PROTECTED] wrote:

 On Fri, Feb 29, 2008 at 11:44 PM, Raymond Feng [EMAIL PROTECTED]
 wrote:

  Hi,
 
  Please vote on one of the following five options to define
  allowsPassByReference property for Invokers. You can vote with multiple
  choices ordered by your preference.
 
  [1] Add boolean allowsPassByReference() to the Invoker interface
  directly
 
  [2] Add boolean allowsPassByReference() to an optional SPI (either a
  separate interface or a sub-interface of Invoker)
 
  [3] Define an InvokerProperties interface to encapsulate known
  properties
  including allowsPassByReference, change the Provider.createInvoker()
 to
  take InvokerProperties. Add getInvokerProperties() to the Invoker
  interface.
 
  [4] Define an InvokerProperties class to encapsulate known properties
  including allowsPassByReference, add getInvokerProperties() to the
  Invoker interface.
 
  [5] Define an InvokerProperties interface to encapsulate known
  properties
  including allowsPassByReference, define an
 InvocationPropertiesFactory
  interface to create InvokerProperties, add getInvokerProperties() to
  the
  Invoker interface.
 
  My vote is [1], [2].
 
  Thanks,
  Raymond
 
 
 Not breaking existing extensions is the most important to me so I'm less
 keen on [1]. The current state of the code is [2] which I originally found
 confusing as the method and interface names didn't seem to match -
 PassByValueAware/allowsPassByReference - so i might have preferred
 something
 like  PassByReferenceAware/allowsPassByReference but there has been so
 much
 discussion around it i guess i know what its all about now. We do seem to
 regularly need to add properties like this so i can understand the
 motivation for the InvokerProperties solutions and I'd be fine with doing
 that if thats what everyone wants.

 I know this isn't an explicit vote, but there already isn't consensus on
 one
 option so i hope it will be clearer and easier to find consensus by
 stating
 my preferences like this.

   ...ant

[2] now [3] later. I like [3] but would like to see us review our SPI more
holistically rather than just applying this pattern in one place.
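For reference, option [3] above might end up looking roughly like the sketch
below; the names and packaging are illustrative only, not an agreed SPI.

// Sketch of option [3]: a properties object describing known invoker traits.
public interface InvokerProperties {

    /** True if the invoker can safely be called with pass-by-reference semantics. */
    boolean allowsPassByReference();
}

// The Invoker SPI would then expose getInvokerProperties(), and
// Provider.createInvoker(...) would accept an InvokerProperties argument.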

Simon


Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-03 Thread Simon Laws
Thanks Sebastien. Hopefully some insight on the puzzle inline...

Simon

On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 I apologize in advance for the inline comment puzzle, but you had
 started with a long email in the first place :)


No problem at all. Thanks for your detailed response.

snip...


 I'm happy with workspace.configuration.impl. However applying default
 binding configuration to bindings in a composition doesn't have much to
 do with the workspace so I'd suggest to push it down to assembly,
 possible if you use a signature like I suggested above.


Ok I can do that.



 
  B) The algorithm (A) that calculates service endpoints based on node
 default
  binding configurations depends on knowing the protocol that a particular
  binding is configured to use.

 That part I don't get :) We could toy with the idea that SCA bindings
 are not the right level of abstraction and that we need a transport
 concept (or scheme or protocol, e.g. http) and the ability for multiple
 bindings (e.g. ws, atom, json) to share the same transport... But that's
 a whole different discussion IMO.

 Can we keep this simply on a binding basis? and have a node declare this:

  <component ...>
    <implementation.node .../>
    <service ...>
  <binding.ws uri="http://localhost:1234/services"/>
  <binding.jsonrpc uri="http://localhost:1234/services"/>
  <binding.atom uri="http://localhost:/services"/>
  </component>

 Then the binding.ws uri=... declaration can provide the default config
 for all binding.ws on that node, binding.jsonrpc for all binding.json,
 binding.atom for all binding.atom etc. As you can see in this example,
 different bindings could use different ports... so, trying to share a
 common transport will probably be less functional if it forces the
 bindings sharing that transport to share a single port.


This is OK until you bring policy into the picture. A policy might affect
the scheme a binding relies on so you may more realistically end up with...

<component ...>
  <implementation.node .../>
  <service ...>
<binding.ws uri="http://localhost:1234/services"/>
<binding.ws uri="https://localhost:443/services"/>
<binding.jsonrpc uri="http://localhost:1234/services"/>
<binding.atom uri="http://localhost:/services"/>
</component>

And any particular binding.ws, for example, might be required to be defaulted
with "http://...", "https://..." or even not defaulted at all if it's going
to use "jms:...". The issue with policies of course is that they are not,
currently, applied until later on when the bindings are actually activated.
So just looking at the model you can tell it has associated intents/policy
but not what the implications are for the endpoint.

We can ignore this in the first instance I guess and run with the
restriction that you can't apply policy that affects the scheme to bindings
inside the domain. But I'd be interested in your thoughts on the future
solution nonetheless. You will notice from the code that I haven't
actually done anything inside the bindings but just proposed that we will
have to ask binding-specific questions at some point during URL creation.
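To make that concrete, the kind of binding-specific question I have in mind
might look roughly like the sketch below; the types and the intent-to-scheme
mapping are invented for illustration and are not how Tuscany applies policy
sets today.

import java.util.List;
import java.util.Map;

// Illustrative only: deciding which per-node default base URI can be applied
// to a binding, given the intents attached to it in the model.
public class DefaultUriChooser {

    public String chooseBaseUri(String bindingType, List<String> requiredIntents,
                                Map<String, String> nodeDefaults) {
        for (String intent : requiredIntents) {
            if ("confidentiality".equals(intent)) {
                // A confidentiality intent may force https (or even a non-HTTP
                // transport such as JMS), so the plain http default cannot be used.
                return nodeDefaults.get(bindingType + "+https");
            }
        }
        // No scheme-affecting intents known at this point: use the node's default.
        return nodeDefaults.get(bindingType);
    }
}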


Re: svn commit: r632646 - in /incubator/tuscany/java/sca/tools/maven/maven-definitions:

2008-03-03 Thread Simon Laws
On Mon, Mar 3, 2008 at 10:08 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Raymond Feng [EMAIL PROTECTED] wrote:
  I wonder if it's too heavy to develop a maven plugin to merge the
  definitions.xml file. BTW, I don't like the all-in-one jar too much as
 it
  breaks the modularity and extensibility story.

 +1 to that.

 Simon Laws wrote:
  If we are going to stick using the shader to produce an all jar then
 we
  need something to aggregate definitions files together correctly. People
 may
  put definitions.xml files in the same place in different modules by
 accident
  even if we were to come up with a naming scheme.
 
  Having said that I agree with you that I don't know why we have the all
 jar.
  The thing that confuses me is that we also build a manifest jar which
  references the all jar and all of the independent modules that we also
 ship?
  We copy the modules when we create a war. We use the all jar to make
 the
  build.xml script simpler for those samples that don't build a war but it
  might be more instructive to list out all the modules that samples use.
  Alternatively use the manifest jar.
 
  Simon
 

 +1 from me. Listing the required JARs is also what I've been trying to
 promote with the maven-ant-generator plugin, which generates a build.xml
 file containing the JARs you need from your pom.xml.

 I think it wouldn't be hard to go one step further and generate the list
 of JARs from the capabilities required by a composite (implementation
 types, bindings, policy types). That would allow application developers
 to (1) not have to write these build files (2) see in the generated
 build files exactly what they're using instead of an opaque
 tuscany-all.jar.

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


Off topic a little but maybe we could add some more targets to the ant file
to build our various hosting options (build-standalone, build-webapp,...).
The pain we have at the moment is that the all jar can naturally only have
one hosting option but then it really confuses people because we ship with
Tomcat and hence they will get failures if they try, inadvertently, to build
a webapp using it, i.e. host-webapp can't be found.

Simon


Re: tuscany-maven-dependency-lister maven plugin

2008-03-05 Thread Simon Laws
On Tue, Mar 4, 2008 at 12:02 AM, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 I'm wondering what our tuscany-maven-dependency-lister plugin provides
 over
 the maven-dependency-plugin. The following command can give us a nice
 dependency tree of a project:

 mvn dependency:tree

 Thanks,
 Raymond


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Hi Raymond

Basically I didn't want a tree in the sense that the dependency plugin
prints it out. The question I was trying to answer was, how many different
versions of each dependency do we have across the whole Tuscany Java SCA
project? To do this I, for example, do the following:

cd sca
mvn -o -Pdependencies -Dmaven.test.skip=true
find . -name dependency.txt -exec cat '{}' >> deptotal.txt \;

then load deptotal.txt into your favourite spreadsheet program and sort the
page.

Simon


Re: Why there are two different ways for tuscany generating WSDL from java (java2wsdl)

2008-03-05 Thread Simon Laws
On Tue, Mar 4, 2008 at 9:03 AM, Alex [EMAIL PROTECTED] wrote:

 Hi All,
 In tuscany-sca (1.1 above) , there are two modules related with java2wsdl:
 1.) modules\interface-wsdl-java2wsdl
 2.) tools\java2wsdl
 The java2wsdl interface(1) provides a runtime interface to handle java
 object to wsdl object
 the  java2wsdl tool (2) provides a command-line tool for converting java
 classes into wsdl files.
 the (1) use JAVA2WSDLBuilder (from Axis2 1.3 code) and AxisService2WSDL11,
 AxisService2WSDL20  to generate WSDL
 the (2) use TuscanyJAVA2WSDLBuilder, TuscanyWSDLTypeGenerator ... to
 generate WSDL
 Why there are two different ways? Why not just use axis code only or
 tuscany
 code only for the two modules?
 Or there are already a plan to merge the code? so which one will be if
 there
 is a choice?

 Thanks
 - Alex

Hi Alex

I don't think there is a good reason for the two approaches to WSDL
generation. It's probably just historical. I agree that it would be much
cleaner and more maintainable to have one set of code for doing this. I saw
a comment on the list the other day from someone getting different results
depending on which approach they used. This is obviously not a good thing.
Are you interested in getting involved in trying to fix this?

Regards

Simon


Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-05 Thread Simon Laws
On Wed, Mar 5, 2008 at 6:01 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

 Simon Laws wrote:
  Thanks Sebastien, Hopefully some insight on the puzzle in line...
 
  Simon
 
  On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED]
  wrote:
 
  I apologize in advance for the inline comment puzzle, but you had
  started with a long email in the first place :)
 
 
  no problem at all. Thanks for you detailed response.
 
  snip...
 
 
  I'm happy with workspace.configuration.impl. However applying default
  binding configuration to bindings in a composition doesn't have much to
  do with the workspace so I'd suggest to push it down to assembly,
  possible if you use a signature like I suggested above.
 
 
  Ok I can do that.
 
 
  B) The algorithm (A) that calculates service endpoints based on node
  default
  binding configurations depends on knowing the protocol that a
 particular
  binding is configured to use.
  That part I don't get :) We could toy with the idea that SCA bindings
  are not the right level of abstraction and that we need a transport
  concept (or scheme or protocol, e.g. http) and the ability for multiple
  bindings (e.g. ws, atom, json) to share the same transport... But
 that's
  a whole different discussion IMO.
 
  Can we keep this simply on a binding basis? and have a node declare
 this:
 
  component ...
implementation.node ...
service...
  binding.ws uri=http://localhost:1234/services/
  binding.jsonrpc uri=http://localhost:1234/services/
  binding.atom uri=http://localhost:/services/
  /component
 
  Then the binding.ws uri=... declaration can provide the default
 config
  for all binding.ws on that node, binding.jsonrpc for all binding.json
 ,
  binding.atom for all binding.atom etc. As you can see in this
 example,
  different bindings could use different ports... so, trying to share a
  common transport will probably be less functional if it forces the
  bindings sharing that transport to share a single port.
 
 
  This is OK until you bring policy into the picture. A policy might
 affect
  the scheme a binding relies on so you may more realistically end up
 with..
 
  component ...
implementation.node ...
service...
  binding.ws uri=http://localhost:1234/services/
  binding.ws
  uri=https://localhost:443/serviceshttp://localhost:1234/services/
 
  binding.jsonrpc uri=http://localhost:1234/services/
  binding.atom uri=http://localhost:/services/
  /component
 
  And any particular, for example,  binding.ws might required to be
 defaulted
  with http://...;, https://..; or even not defaulted at all if it's
 going
  to use jms:  The issue with policies of course is that they are
 not,
  currently, applied until later on when the bindings are actually
 activated.
  So just looking at the model you can tell it has associated
 intents/policy
  but not what the implications are for the endpoint.
 
  We can ignore this in the first instance I guess and run with the
  restriction that you can't apply policy that affects the scheme to
 bindings
  inside the domain. But I'd be interested on you thoughts on the future
  solution none the less. You will notice from the code that I haven't
  actually done anything inside the bindings but just proposed that we
 will
  have to ask binding specific questions at some point during URL
 creation.
 

 Well, I think you're raising an interesting issue, but it seems to be
 independent of any of this node business, more like a general issue with
 the impact of policies on specified binding URIs.


I agree that if the binding URI were completed based on the processing of
the build phase then this conversation is independent of the default
values provided by nodes. This is not currently the case AFAIUI. The policy
model is built and matched at build phase but the policy sets are not
applied until the binding runtime is created. For example, the
Axis2ServiceProvider constructor is involved in setting the binding URI at
the moment.  So in having an extension I was proposing a new place where
binding specific operations related to generating the URI could be housed
independently of the processing that happens when the providers are created.
In this way we would kick off this URL processing earlier on.



 If I understand correctly, and I'm taking the store tutorial Catalog
 component as an example to illustrate the issue:

  <component name="CatalogServiceComponent">
    <service name="Catalog" intents="ns:myEncryptionIntent">
      <binding.ws uri="http://somehost:8080/catalog"/>
    </service>
  </component>

 would in fact translate to:

  <component name="CatalogComponent">
    <service name="Catalog" intents="myEncryptionIntent">
      <binding.ws uri="https://localhost:443/catalog"/>
    </service>
  </component>

 assuming in this example that myEncryptionIntent is realized using
 HTTPS on port 443.

 Is that the issue you're talking about?


Yes, that's the issue, i.e. the binding specific code that makes

Re: Why there are two different ways for tuscany generating WSDL from java (java2wsdl)

2008-03-05 Thread Simon Laws
On Wed, Mar 5, 2008 at 2:30 PM, Alex [EMAIL PROTECTED] wrote:

 Hi Scott,

 the Axis2 Java2WSDL can add -sg option with the value 
 org.apache.axis2.jaxbri.JaxbSchemaGenerator.
 then it can deal with the JXAB annotations.
 since interface-wsdl-java2wsdl relies on Axis'2 java2wsdl directly, It's
 easy to do JXAB.
 But for tools\java2wsdl, it NOT easy since it use different approache.
 -Alex
 On Wed, Mar 5, 2008 at 10:09 PM, Scott Kurz [EMAIL PROTECTED] wrote:

  One important difference if I understand correctly is the tool handles
  SDOs
  whereas the runtime
  interface-wsdl-java2wsdl module only handles POJO types.
 
  I think the runtime code basically relies on Axis2's Java-XSD mapping,
  which I don't think would
  fully honor JAXB annotations in the Java as it ideally would (though it
  looks like we do an extra
  step allowing us to recognize if a NS-pkg mapping other than the
 default
  was used to gen the Java).
 
  (With some configuration, I believe it's possible to use Axis2's J2W
  function in a way such that it would
  recognize these JAXB annotations, or another alternative I believe Simon
  Nash mentioned was to look into
  CXF.)
 
  I didn't follow all of the discussion about removing SDO from the
 Tuscany
  charter... but if SDO is no
  longer a special part of the Tuscany project then what would happen to
 the
  W2J/J2W tools built around
  SDO support?
 
  Scott
 
 
 
  On Wed, Mar 5, 2008 at 7:26 AM, Simon Laws [EMAIL PROTECTED]
  wrote:
 
   On Tue, Mar 4, 2008 at 9:03 AM, Alex [EMAIL PROTECTED] wrote:
  
Hi All,
In tuscany-sca (1.1 above) , there are two modules related with
   java2wsdl:
1.) modules\interface-wsdl-java2wsdl
2.) tools\java2wsdl
The java2wsdl interface(1) provides a runtime interface to handle
 java
object to wsdl object
the  java2wsdl tool (2) provides a command-line tool for converting
  java
classes into wsdl files.
the (1) use JAVA2WSDLBuilder (from Axis2 1.3 code) and
   AxisService2WSDL11,
AxisService2WSDL20  to generate WSDL
the (2) use TuscanyJAVA2WSDLBuilder, TuscanyWSDLTypeGenerator ... to
generate WSDL
Why there are two different ways? Why not just use axis code only or
tuscany
code only for the two modules?
Or there are already a plan to merge the code? so which one will be
 if
there
is a choice?
   
Thanks
- Alex
   
   Hi Alex
  
   I don't think there is a good reason for the two approaches to WSDL
   generation. It's probably just historical. I agree that it would be
 much
   cleaner and more maintainable to have one set of code for doing this.
 I
   saw
   a comment on the list the other from someone getting different results
   depending on which approach they used. This is obviously not a good
  thing.
   Are you interested in getting involved in trying to fix this?
  
   Regards
  
   Simon
  
 



 --
 http://jroller.com/page/dindin


The question then is, what do we want these tools to do? Some thoughts from
my point of view. This is just my view and others may disagree...

- The runtime J2WSDL should be able to generate WSDL for Java interfaces for
the Java interface styles that Tuscany SCA supports. Specifically I mean
that our runtime tooling should be able to handle the various databindings
that we support for java interfaces, e.g. SDO, JAXB, etc. It's not necessary
that they are all supported straight away but we should have an approach
that means we see how they could be supported.

- We should try and adopt existing technology for doing this generation
where possible (Axis2, CXF, etc.) rather than writing our own.

- We may need to make changes over and above the basic generation provided
by tools, such as Axis, to fix faults and add extra function, e.g. [1] and I
know Raymond has been working on making sure we adopt a JAXWS mapping for
java2wsdl generation. Also there are extra annotations that we may want to
introduce in the WSDL based on service configuration.

- We shouldn't have two sets of tooling implemented in different ways to do
this stuff

There has been previous discussion of this in [2] but I don't know where
Simon got to. It sounds like we need a tool into which we can plug WSDL and
XSD generators and also plug in post-generation processors. As Scott points
out, the runtime Java2WSDL currently has different capabilities compared with
the developer tooling. I'm with Alex on this: if we can further develop the
runtime tool then does anyone have any good reason why we can't use the same
code for the developer tool?

Simon

[1] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg28531.html
[2] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg27855.html


JIRA fest for week beginning 10th March? Re: What should be in Tuscany SCA Java release 1.2?

2008-03-06 Thread Simon Laws
On Wed, Mar 5, 2008 at 6:22 PM, ant elder [EMAIL PROTECTED] wrote:

 The main thing I'd like to do for 1.2 is to try to finish off the JMS
 binding so that it as much as possible fully implements the spec but that
 wont be done by middle of next week. Guess i don't mind too much just
 committing things twice in trunk and 1.2 brn but could the branch be taken
 a
 little later say after Friday 14th?

   ...ant

 On Wed, Mar 5, 2008 at 4:30 PM, Luciano Resende [EMAIL PROTECTED]
 wrote:

  Thanks Guys, my idea is to wrap up any in progress work and create a
  branch middle of next week. It would be great In the mean time, it
  would be good if we all could take a quick look at JIRAs and start
  fixing them or marking them for 1.2 release.
 
  Thoughts ?
 
  On Wed, Mar 5, 2008 at 4:31 AM, ant elder [EMAIL PROTECTED] wrote:
   On Wed, Mar 5, 2008 at 11:57 AM, Simon Laws [EMAIL PROTECTED]
 
  
  
   wrote:
  
 On Wed, Mar 5, 2008 at 11:45 AM, Venkata Krishnan 
  [EMAIL PROTECTED]
 wrote:

  +1 for Luciano as RM.  I'd be happy to help wherever required.
   Thanks
 for
  volunteering, Luciano.
 
  - Venkat
 
  On Tue, Mar 4, 2008 at 11:06 PM, Luciano Resende 
  [EMAIL PROTECTED]
  wrote:
 
   Time flies and is already March. I'd like to restart discussion
  on
   this thread and start building a list of things we want to do
 for
  SCA
   1.2 and I'd also like to volunteer for Release Manager for SCA
  1.2
   release.
  
   On Thu, Feb 14, 2008 at 8:52 AM, Simon Laws 
  [EMAIL PROTECTED]
 
   wrote:
Hi
   
 It's probably about time we started talking about what's
 going
  to
 be
  in
 Tuscany SCA Java release 1.2. From the past timeline I would
  expect
  us
   to be
 trying for a release mid to late March which is not very far
  away.
   
 Some of the things I'd like to see are;
   
 More progress on our domain level composite and generally
 the
   processing
 that has to go there
 There have been a lot of policy changes going on and it
 would
  be
 good
   to get
 them in. Also linked to the item above we should look at how
  policy
   affects
 domain level processing.
 Don't know if it's achievable but some elements of the
 runtime
 story
  we
   have
 been talking about on the mail list for a while now
   
 Feel free to add topics on this thread. I've also opened up
  the
 Java-SCA-1.2category in JIRA so start associating JIRA with
  it, for
 example, if
   
 1 - you've already marked a JIRA as fixed and its sitting at
   Java-SCA-Next
 2 - you are working or are going to work on the JIRA for 1.2
 3 - you would like to see the JIRA fixed for 1.2
   
 Of course everyone is invited to contribute and submit
 patches
  for
  JIRA
 whether they be for bugs or new features. Inevitably not all
  wish
   list
 features will get done so you improve your chances of
 getting
  you
   favorite
 feature in by submitting a patch for it.
   
 Regards
   
 Simon
   
  
  
  
   --
   Luciano Resende
   Apache Tuscany Committer
   
    http://people.apache.org/~lresende
   http://lresende.blogspot.com/
  
  
  -
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, e-mail:
 [EMAIL PROTECTED]
  
  
 
 +1 Luciano as RM. Thanks Luciano for volunteering! So what are we
  actually
 looking at as a timeline? Do we think we can get an RC to vote on
  before
 the
 end of this month ready for a  release as early as possible in
 April.

 Simon

  
Aiming for that sounds good to me.
  
And slightly off topic but in way of helping accept the less and
  sooner
approach for 1.2 how about thinking in the back of our minds of a
 v2.0for
May/June that includes all the things like runtime and distribution
restructuring, SPI cleanup, and hopefully being able to include a
post-graduation dropping of the -incubating suffix.
  
  ...ant
  
 
 
 
  --
  Luciano Resende
  Apache Tuscany Committer
  http://people.apache.org/~lresende
  http://lresende.blogspot.com/
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 


I've noticed Raymond (and others :-) taking on and resolving lots of JIRAs
recently. You

Re: Tuscany Runtime Error.

2008-03-06 Thread Simon Laws
Hi Sandeep

Are you able to provide the test case which is giving the error? If so the
best thing to do is open a JIRA and attach it there so someone can run it
and track down the problem.

Just taking a wild stab in the dark, it would appear that the runtime is not
able to find the appropriate start() method on the class that implements the
Compose interface. I don't know why this would be without running the
actual sample. Assuming that your component is implemented in Java, you could
try explicitly telling the component implementation that it is exposing SCA
services by using the SCA @Service annotation. Something like...

@Service(Compose.class)
public class MyComponentImplementation implements Compose {

    // must match the operation name declared on the Compose interface/WSDL
    public void start() {
        ...
    }

    // etc...
}


Regards

Simon

On Thu, Mar 6, 2008 at 6:18 AM, Sandeep Raman [EMAIL PROTECTED] wrote:

 Hi,

 I have a component service (wsdl) created with the operation name
 start. Once I run the Tuscany runtime, I get an error saying

 org.osoa.sca.ServiceRuntimeException: No matching operation for start is
 found in service TwoWSService#Compose

 What may be the possible reason for this error? The stack trace is as
 follows:

 SEVERE: No matching operation for start is found in service
 TwoWSService#Compose
 org.osoa.sca.ServiceRuntimeException: No matching operation for start is
 found in service TwoWSService#Compose
at
 org.apache.tuscany.sca.core.assembly.RuntimeWireImpl.initInvocationChains(
 RuntimeWireImpl.java:165)
at
 org.apache.tuscany.sca.core.assembly.RuntimeWireImpl.getInvocationChains(
 RuntimeWireImpl.java:97)
at
 org.apache.tuscany.sca.core.assembly.RuntimeWireImpl.getInvocationChain(
 RuntimeWireImpl.java:103)
at
 org.apache.tuscany.sca.core.invocation.RuntimeWireInvoker.invoke(
 RuntimeWireInvoker.java:87)
at
 org.apache.tuscany.sca.core.invocation.RuntimeWireInvoker.invoke(
 RuntimeWireInvoker.java:82)
at
 org.apache.tuscany.sca.core.assembly.RuntimeWireImpl.invoke(
 RuntimeWireImpl.java:126)
at
 org.apache.tuscany.sca.binding.ws.axis2.Axis2ServiceProvider.invokeTarget(
 Axis2ServiceProvider.java:589)
at

 org.apache.tuscany.sca.binding.ws.axis2.Axis2ServiceInOutSyncMessageReceiver.invokeBusinessLogic
 (Axis2ServiceInOutSyncMessageReceiver.java:59)
at

 org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.invokeBusinessLogic
 (AbstractInOutSyncMessageReceiver.java:42)
at
 org.apache.axis2.receivers.AbstractMessageReceiver.receive(
 AbstractMessageReceiver.java:96)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:145)
at
 org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(
 HTTPTransportUtils.java:275)
at
 org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:120)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
at
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:367)
at
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
at
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
at org.mortbay.jetty.Server.handle(Server.java:285)
at
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
at
 org.mortbay.jetty.HttpConnection$RequestHandler.content(
 HttpConnection.java:835)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:641)
at
 org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
at
 org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
at
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java
 :368)
at
 org.apache.tuscany.sca.core.work.Jsr237Work.run(Jsr237Work.java:61)
at
 org.apache.tuscany.sca.core.work.ThreadPoolWorkManager$DecoratingWork.run(
 ThreadPoolWorkManager.java:205)
at
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
 ThreadPoolExecutor.java:650)
at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java
 :675)
at java.lang.Thread.run(Thread.java:595)


 the wsdl file generated has the snippet as follows:

 <wsdl:binding name="ComposeSOAP11Binding" type="ns0:ComposePortType">
   <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
   <wsdl:operation name="start">
     <soap:operation soapAction="urn:start" style="document" />
     <wsdl:input>
       <soap:body use="literal" />
     </wsdl:input>
     <wsdl:output>
       <soap:body use="literal" />
     </wsdl:output>
     <wsdl:fault name="Exception">
       <soap:fault name="Exception" use="literal" />
     </wsdl:fault>
   </wsdl:operation>
 </wsdl:binding>

 Regards
 Sandeep.

Getting spec clarity for ServiceReference.getConversationID()

2008-03-06 Thread Simon Laws
TUSCANY-2055 has raised an issue where we need some spec clarity. Namely, in
relation to the function ServiceReference.getConversationID(), the Java
Annotations and API V1 spec says a few things.

521 1.6.6.2. Accessing Conversation IDs from Clients
522 Whether the conversation ID is chosen by the client or is generated by
the system, the client may access
523 the conversation ID by calling ServiceReference.getConversationID().

924 • getConversationID() - Returns the id supplied by the user that will be
associated with
925 conversations initiated through this reference.

946 • getConversationID() - Returns the identifier for this conversation. If
a user-defined identity had
947 been supplied for this reference then its value will be returned;
otherwise the identity generated by
948 the system when the conversation was initiated will be returned.

As I said in my JIRA comment, my interpretation is that the Conversation
object represents the current conversation, so you should always go there to
get the current conversation ID. The get/setConversationID() methods on the
ServiceReference allow the user to provide a conversation ID that will
subsequently be used for new conversations. Hence you won't get the current
conversation ID by calling getConversationID() on the ServiceReference; you'll
just get whatever you set there manually.
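
To make the distinction concrete, here's a minimal sketch of what I mean
using the org.osoa.sca API. The CounterService interface, the reference name
and the chosen ID are made up for illustration:

import org.osoa.sca.ComponentContext;
import org.osoa.sca.ServiceReference;
import org.osoa.sca.annotations.Context;
import org.osoa.sca.annotations.Conversational;
import org.osoa.sca.annotations.Remotable;

public class ConversationIdClient {

    @Conversational
    @Remotable
    public interface CounterService {
        void increment();
    }

    @Context
    protected ComponentContext componentContext;

    public void demonstrate() {
        ServiceReference<CounterService> ref =
            componentContext.getServiceReference(CounterService.class, "counterReference");

        // A user-supplied ID only affects conversations started after this call
        ref.setConversationID("my-chosen-id");
        ref.getService().increment();   // initiates the conversation

        // The ID of the conversation that is actually running
        Object current = ref.getConversation().getConversationID();

        // Just echoes whatever was set above (or null if nothing was set)
        Object supplied = ref.getConversationID();
    }
}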

Do any of the people here who are directly involved in the Java spec know
what is intended? If not I'll mail the spec list with the question.

Let me know.

Simon


Re: Getting spec clarity for ServiceReference.getConversationID()

2008-03-06 Thread Simon Laws
On Thu, Mar 6, 2008 at 3:49 PM, Simon Laws [EMAIL PROTECTED]
wrote:

 TUSCANY-2055 has raised an issue where we need some spec clarity. Namely,
 in relation to the function ServiceReference.getConversationID(), the Java
 Annotations and API V1 spec says a few things.

 521 1.6.6.2. Accessing Conversation IDs from Clients
 522 Whether the conversation ID is chosen by the client or is generated by
 the system, the client may access
 523 the conversation ID by calling ServiceReference.getConversationID().

 924 • getConversationID() - Returns the id supplied by the user that will
 be associated with
 925 conversations initiated through this reference.

 946 • getConversationID() - Returns the identifier for this conversation.
 If a user-defined identity had
 947 been supplied for this reference then its value will be returned;
 otherwise the identity generated by
 948 the system when the conversation was initiated will be returned.

 As I said in my JIRA comment, my interpretation is that the Conversation
 object represents the current conversation, so you should always go there to
 get the current conversation ID. The get/setConversationID() methods on the
 ServiceReference allow the user to provide a conversation ID that will
 subsequently be used for new conversations. Hence you won't get the current
 conversation ID by calling getConversationID() on the ServiceReference;
 you'll just get whatever you set there manually.

 Do any of the people here who are directly involved in the Java spec know
 what is intended? If not I'll mail the spec list with the question.

 Let me know.

 Simon


Looking through the OASIS JIRA I notice that Simon Nash has submitted
JAVA-31 [1] against the OASIS SCA Java TC to cover this.

Simon

[1] http://www.osoa.org/jira/browse/JAVA-31


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Simon Laws
On Fri, Feb 29, 2008 at 5:37 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Comments inline.

  A) Contribution workspace (containing installed contributions):
  - Contribution model representing a contribution
  - Reader for the contribution model
  - Workspace model representing a collection of contributions
  - Reader/writer for the workspace model
  - HTTP based service for accessing the workspace
  - Web browser client for the workspace service
  - Command line client for the workspace service
  - Validator for contributions in a workspace
 
  I started looking at step D). Having a rest from URLs :-) In the context
  of this thread the node can lose its connection to the domain, and hence
  the factory and the node interface slim down. So a runtime that loads a
  set of contributions and a composite becomes:
 
  create a node
  add some contributions (addContribution) and mark a composite for
  starting(currently called addToDomainLevelComposite).
  start the node
  stop the node
 
  You could then recycle (destroy) the node and repeat if required.
 
  This all sounds like a suggestion Sebastien made about 5 months ago ;-) I
  have started to check in an alternative implementation of the node
  (node2-impl). I haven't changed any interfaces yet so I don't break any
  existing tests (and the code doesn't run yet!).
 
  Anyhow, I've been looking at the workspace code for parts A and B that
  has recently been committed. It would seem to be fairly representative of
  the motivating scenario [1]. I don't have detailed questions yet but
  interestingly it looks like contributions, composites etc. are exposed as
  HTTP resources. Sebastien, it would be useful to have a summary of your
  thoughts on how it is intended to hang together and how these will be
  used.

 I've basically created three services:

 workspace - Provides access to a collection of links to contributions,
 their URI and location. Also provides functions to get the list of
 contribution dependencies and validate a contribution.

 composites - Provides access to a collection of links to the composites
 present in the domain composite. Also provides a function returning a
 particular composite once it has been 'built' (by CompositeBuilder),
 i.e. its references, properties etc have been resolved.

 nodes - Provides access to a collection of links to composites
 describing the implementation.node components which represent SCA nodes.

 There's another file upload service that I'm using to upload
 contribution files and other files to some storage area but it's just
 temporary.

 I'm using binding.atom to expose the above collections as editable
 ATOM-Pub collections (and ATOM feeds of contributions, composites, nodes).

 Here's how I'm using these services as an SCA domain administrator:

 1. Add one or more links to contributions to the workspace. They can be
 anywhere accessible on the network through a URL, or local on disk. The
 workspace just keeps track of the list.

 2. Add one or more composites to the composites collection. They become
 part of the domain composite.

 3. Add one or more composites declaring SCA nodes to the nodes
 collection. The nodes are described as SCA components of type
 implementation.node. A node component names the application composite
 that is assigned to run on it (see implementation-node-xml for an
 example).

 4. Point my Web browser to the various ATOM collections to get:
 - lists of contributions, composites and nodes
 - list of contributions that are required by a given contribution
 - the source of a particular composite
 - the output of a composite built by CompositeBuilder

 Here, I'm hoping that the work you've started to assign endpoint info
 to domain model [2] will help CompositeBuilder produce the correct
 fully resolved composite.

 5. Pick a node, point my Web browser to its composite description and
 write down:
 - $node = URL of the composite describing the node
 - $composite = URL of the application composite that's assigned to it
 - $contrib = URL of the list of contribution dependencies.

 6. When you have node2-impl ready :) from the command line do:
 sca-node $node $composite $contrib
 this should start the SCA node, which can get its description, composite
 and contributions from these URLs.

 or for (6) start the node directly from my Web browser as described in
 [1], but one step at a time... that can come later when we have the
 basic building blocks working OK :)
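
(A minimal sketch, for anyone who would rather script against these ATOM
collections than use a browser. Plain JDK HTTP is enough; the collection URL
below is hypothetical and depends on where the workspace-admin app is
actually serving its contribution collection.)

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WorkspaceFeedDump {
    public static void main(String[] args) throws Exception {
        // hypothetical address of the workspace contribution collection
        URL feed = new URL("http://localhost:8080/workspace");
        HttpURLConnection conn = (HttpURLConnection) feed.openConnection();
        conn.setRequestProperty("Accept", "application/atom+xml");
        BufferedReader in =
            new BufferedReader(new InputStreamReader(conn.getInputStream()));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.println(line);   // raw ATOM feed of contribution links
        }
        in.close();
    }
}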


 
  I guess these HTTP resources bring a deployment dimension.
 
  Local - Give the node contribution URLs that point to the local file
 system
  from where the node reads the contribution (this is how it has worked to
  date)
  Remote - Give it contribution URLs that point out to HTTP resources so
  the node can read the contributions from wherever they are stored on the
  network
 
  Was that the intention?

 Yes. I don't always want to have to upload contributions to some server
 or even have to copy them 

Re: Tuscany participation at Google Summer of Code (GSoC) 2008

2008-03-06 Thread Simon Laws
On Thu, Mar 6, 2008 at 5:40 PM, Raymond Feng [EMAIL PROTECTED] wrote:

 Improving the XQuery component implementation type could be a good
 candidate.

 Thanks,
 Raymond

 --
 From: Luciano Resende [EMAIL PROTECTED]
 Sent: Friday, February 29, 2008 5:52 PM
 To: tuscany-dev tuscany-dev@ws.apache.org
 Subject: Tuscany participation at Google Summer of Code (GSoC) 2008

  The Apache Software Foundation is participating in the Google Summer of
  Code program [1] as a mentoring organization. I think this is a good
  opportunity for us and I'd like to use this thread to discuss possible
  innovative and challenging projects that could attract the students
  participating in the program. Maybe we could start by defining some
  themes, and then projects around these themes; then, once we have a
  couple of projects, we could use the wiki to create a small description
  of each project.
 
  Possible themes :
 
Tuscany Extensions (new bindings and implementations)
Web 2.0
 
  Thoughts ?
 
  [1] http://code.google.com/soc/2008/
  [2] http://wiki.apache.org/general/SummerOfCode2008
 
  --
  Luciano Resende
  Apache Tuscany Committer
  http://people.apache.org/~lresende
  http://lresende.blogspot.com/
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


- some monitoring interceptors to help people work out what's going on in
the runtime
- resurrect the XSLT composite diagram drawer (don't remember what it was
called) so we don't have to keep doing them by hand
- doing a slick AJAX GUI based on the workspace stuff Sebastien has been
making.

Simon


Re: Investigating assignment of endpoint information to the domain model was: Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-06 Thread Simon Laws
On Wed, Mar 5, 2008 at 12:52 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Wed, Mar 5, 2008 at 6:01 AM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

  Simon Laws wrote:
   Thanks Sebastien. Hopefully some insight on the puzzle inline...
  
   Simon
  
   On Mon, Mar 3, 2008 at 9:57 PM, Jean-Sebastien Delfino 
  [EMAIL PROTECTED]
   wrote:
  
   I apologize in advance for the inline comment puzzle, but you had
   started with a long email in the first place :)
  
  
   no problem at all. Thanks for you detailed response.
  
   snip...
  
  
   I'm happy with workspace.configuration.impl. However applying default
   binding configuration to bindings in a composition doesn't have much
  to
   do with the workspace so I'd suggest to push it down to assembly,
   possible if you use a signature like I suggested above.
  
  
   Ok I can do that.
  
  
   B) The algorithm (A) that calculates service endpoints based on node
   default
   binding configurations depends on knowing the protocol that a
  particular
   binding is configured to use.
   That part I don't get :) We could toy with the idea that SCA bindings
   are not the right level of abstraction and that we need a transport
   concept (or scheme or protocol, e.g. http) and the ability for
  multiple
   bindings (e.g. ws, atom, json) to share the same transport... But
  that's
   a whole different discussion IMO.
  
   Can we keep this simply on a binding basis? and have a node declare
  this:
  
   component ...
 implementation.node ...
 service...
   binding.ws uri=http://localhost:1234/services/
   binding.jsonrpc uri=http://localhost:1234/services/
   binding.atom uri=http://localhost:/services/
   /component
  
   Then the binding.ws uri=... declaration can provide the default
  config
   for all binding.ws on that node, binding.jsonrpc for all
  binding.json,
   binding.atom for all binding.atom etc. As you can see in this
  example,
   different bindings could use different ports... so, trying to share a
   common transport will probably be less functional if it forces the
   bindings sharing that transport to share a single port.
  
  
   This is OK until you bring policy into the picture. A policy might
  affect
   the scheme a binding relies on so you may more realistically end up
  with..
  
   component ...
 implementation.node ...
 service...
   binding.ws uri=http://localhost:1234/services/
    binding.ws uri=https://localhost:443/services/
   binding.jsonrpc uri=http://localhost:1234/services/
   binding.atom uri=http://localhost:/services/
   /component
  
  And any particular binding.ws, for example, might be required to be
  defaulted with http://..., https://..., or even not defaulted at all if
  it's going to use jms:... The issue with policies of course is that they
  are not, currently, applied until later on when the bindings are actually
  activated. So just looking at the model you can tell it has associated
  intents/policy but not what the implications are for the endpoint.
  
  We can ignore this in the first instance I guess and run with the
  restriction that you can't apply policy that affects the scheme to
  bindings inside the domain. But I'd be interested in your thoughts on the
  future solution nonetheless. You will notice from the code that I haven't
  actually done anything inside the bindings but just proposed that we will
  have to ask binding-specific questions at some point during URL creation.
  
 
  Well, I think you're raising an interesting issue, but it seems to be
  independent of any of this node business, more like a general issue with
  the impact of policies on specified binding URIs.


 I agree that if the binding URI were completed based on the processing of
 the build phase then this conversation would be independent of the default
 values provided by nodes. This is not currently the case AFAIUI. The policy
 model is built and matched at the build phase but the policy sets are not
 applied until the binding runtime is created. For example, the
 Axis2ServiceProvider constructor is involved in setting the binding URI at
 the moment. So in proposing an extension I was suggesting a new place where
 binding-specific operations related to generating the URI could be housed
 independently of the processing that happens when the providers are created.
 In this way we would kick off this URL processing earlier on.
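
 Purely as a strawman (none of these type or method names exist in the code
 today, they are made up for illustration), the kind of extension point I
 have in mind might look something like:

 import java.util.List;

 import org.apache.tuscany.sca.assembly.Binding;
 import org.apache.tuscany.sca.policy.Intent;

 public interface BindingURIBuilder {

     // The binding type this builder knows how to configure
     Class<? extends Binding> getBindingType();

     // Given the binding, its effective intents and the default base URI
     // contributed by the node, work out the scheme/port and return the URI
     // the binding should be configured with at build time, well before the
     // binding providers are created
     String buildURI(Binding binding, List<Intent> intents, String nodeDefaultURI);
 }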


 
  If I understand correctly, and I'm taking the store tutorial Catalog
  component as an example to illustrate the issue:
 
  component name=CatalogServiceComponent
service name=Catalog intents=ns:myEncryptionIntent
  binding.ws uri=http://somehost:8080/catalog/
/service
  /component
 
  would in fact translate to:
 
  component name=CatalogComponent
service name=Catalog intents=myEncryptionIntent
  binding.ws uri=https://localhost:443/catalog/
/service

Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-07 Thread Simon Laws
On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Jean-Sebastien Delfino wrote:
  Simon Laws wrote:
 
  I've been running the workspace code today with a view to integrating
 the
  new code in assembly which calculates service endpoints i.e. point4
  above.
 
  I think we need to amend point 4 to make this work properly..
 
  4. Point my Web browser to the various ATOM collections to get:
  - lists of contributions, composites and nodes
  - list of contributions that are required by a given contribution
  - the source of a particular composite
  - the output of a composite after the domain composite has been built
 by
  CompositeBuilder
 
  Looking at the code in DeployableCompositeCollectionImpl I see that on
  doGet() it builds the request composite. What the last point  needs to
  do is
 
  - read the whole domain
  - set up all of the service URIs for each of the included composites
  taking
  into account the node to which each composite is assigned
  - build the whole domain using CompositeBuilder
  - extract the required composite from the domain and serialize it out.
 
  Yes, exactly!
 
 
  Are you changing this code or can I put this in?
 
  Just go ahead, I'll update and merge if I have any other changes in the
  same classes.
 

 Simon, a quick update: I've done an initial bring-up of node2-impl. It's
 still a little rough but you can give it a try if you want.

 The steps to run the store app for example with node2 are as follows:

 1) use workspace-admin to add the store and assets contributions to the
 domain;

 2) add the store composite to the domain composite using the admin as
 well;

 3) start the StoreLauncher2 class that I just added to the store module;

 4) that will start an instance of node2 with all the node config served
 from the admin app.

 So the next step is to integrate your node allocation code with
 workspace-admin and that will complete the story. Then we'll be able to
 remove all the currently hardcoded endpoint URIs from the composites.

 I'll send a more detailed description and steps to run more scenarios
 later on Friday.

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Ok, sounds good. I've done the uri integration although there are some
issues we need to discuss. First I'll update with your code, commit my
changes and then post here about the issues.

Regards

Simon


Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-07 Thread Simon Laws
On Fri, Mar 7, 2008 at 4:18 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

  Jean-Sebastien Delfino wrote:
   Simon Laws wrote:
  
   I've been running the workspace code today with a view to integrating
  the
   new code in assembly which calculates service endpoints i.e. point4
   above.
  
   I think we need to amend point 4 to make this work properly..
  
   4. Point my Web browser to the various ATOM collections to get:
   - lists of contributions, composites and nodes
   - list of contributions that are required by a given contribution
   - the source of a particular composite
   - the output of a composite after the domain composite has been built
  by
   CompositeBuilder
  
   Looking at the code in DeployableCompositeCollectionImpl I see that
  on
   doGet() it builds the request composite. What the last point  needs
  to
   do is
  
   - read the whole domain
   - set up all of the service URIs for each of the included composites
   taking
   into account the node to which each composite is assigned
   - build the whole domain using CompositeBuilder
   - extract the required composite from the domain and serialize it
  out.
  
   Yes, exactly!
  
  
   Are you changing this code or can I put this in?
  
   Just go ahead, I'll update and merge if I have any other changes in
  the
   same classes.
  
 
  Simon, a quick update: I've done an initial bring-up of node2-impl. It's
  still a little rough but you can give it a try if you want.
 
  The steps to run the store app for example with node2 are as follows:
 
  1) use workspace-admin to add the store and assets contributions to the
  domain;
 
  2) add the store composite to the domain composite using the admin as
  well;
 
  3) start the StoreLauncher2 class that I just added to the store module;
 
  4) that will start an instance of node2 with all the node config served
  from the admin app.
 
  So the next step is to integrate your node allocation code with
  workspace-admin and that will complete the story. Then we'll be able to
  remove all the currently hardcoded endpoint URIs from the composites.
 
  I'll send a more detailed description and steps to run more scenarios
  later on Friday.
 
  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
  Ok, sounds good. I've done the uri integration although there are some
 issues we need to discuss. First I'll update with your code, commit my
 changes and then post here about the issues.

 Regards

 Simon

I've now checked in my changes (last commit was 634762) to integrate the URI
calculation code with the workspace. I've run the new store launcher
following Sebastien's instructions from a previous post to this thread. I
don't seem to have broken it too much although I'm not seeing any prices for
the catalog items.

Issues with the URI generation code

I had to turn model resolution back on by uncommenting a line in
ContributionContentProcessor.resolve. Otherwise the JavaImplementation types
are not read and
compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
composite, null); can't generate default services. I then had to turn it back
off to make the store sample work. I need some help on this one.

If you hand craft services it seems to be OK, although I have noticed,
looking at the generated SCDL, that it seems to be assuming that all
generated service names will be based on the implementation classname
regardless of whether the interface is marked as @Remotable or not. Feels
like a bug somewhere so I am going to look at that next.

To get Java implementation resolution to work I needed to hack in the Java
factories setup in the DeployableCompositeCollectionImpl.initialize()
method. This is not very good and raises the bigger question about the
setup in here. It's creating a set of extension points in parallel to those
created by the runtime running this component. Can we either use the
registry created by the underlying runtime or do similar generic setup?

The code doesn't currently distinguish between those services that are
@Remotable and those that aren't.
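
Just to illustrate the distinction the URI generation code needs to make
(these interfaces are made up for illustration, they are not from the store
sample): only the remotable one should end up with a generated endpoint URI.

import org.osoa.sca.annotations.Remotable;

// Remotable interface: a candidate for a binding URI / remote endpoint
@Remotable
public interface CatalogQuery {
    String[] listItems();
}

// Local interface (no @Remotable): callable only within the same JVM, so no
// endpoint URI should be generated for a service exposing it
interface CartAudit {
    void record(String item);
}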

Simon


Re: [continuum] BUILD FAILURE: Apache Tuscany SCA Implementation Project

2008-03-07 Thread Simon Laws
Sorry folks. That's me. Looks like I missed a pom change.

Simon


Re: svn commit: r635435 - in /incubator/tuscany/java/sca/modules: assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/ assembly/src/test/java/org/apache/tuscany/sca/assembly/builder/im

2008-03-10 Thread Simon Laws
On Mon, Mar 10, 2008 at 5:43 AM, [EMAIL PROTECTED] wrote:

 Author: jsdelfino
 Date: Sun Mar  9 22:43:19 2008
 New Revision: 635435

 URL: http://svn.apache.org/viewvc?rev=635435view=rev
 Log:
 Fixed algorithm in CompositeConfigurationBuilder to produce correct URIs,
 in particular avoid adding binding name to itself, and consider binding URI
 when specified in the composite in the single service case too. Integrated
 CompositeConfigurationBuilder in the main build() now that it covers all
 cases. Adjusted the domain-impl, the callable reference resolution and the
 core-spring reference resolution code to the new URI form.

 Modified:

  
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java

  
 incubator/tuscany/java/sca/modules/assembly/src/test/java/org/apache/tuscany/sca/assembly/builder/impl/CalculateBindingURITestCase.java

  
 incubator/tuscany/java/sca/modules/core-spring/src/main/java/org/apache/tuscany/sca/core/spring/assembly/impl/BeanReferenceImpl.java

  
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/context/CallableReferenceImpl.java

  
 incubator/tuscany/java/sca/modules/domain-impl/src/main/java/org/apache/tuscany/sca/domain/impl/SCADomainImpl.java

  
 incubator/tuscany/java/sca/modules/workspace-admin/src/main/java/org/apache/tuscany/sca/workspace/admin/impl/DeployableCollectionImpl.java

 Modified:
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java?rev=635435r1=635434r2=635435view=diff

 ==
 ---
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 (original)
 +++
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 Sun Mar  9 22:43:19 2008
 @@ -52,9 +52,9 @@
  import org.apache.tuscany.sca.policy.PolicySetAttachPoint;

  public class CompositeConfigurationBuilderImpl {
 -String SCA10_NS = http://www.osoa.org/xmlns/sca/1.0;;
 -String BINDING_SCA = binding.sca;
 -QName BINDING_SCA_QNAME = new QName(SCA10_NS, BINDING_SCA);
 +private final static String SCA10_NS = 
 http://www.osoa.org/xmlns/sca/1.0;;
 +private final static String BINDING_SCA = binding.sca;
 +private final static QName BINDING_SCA_QNAME = new QName(SCA10_NS,
 BINDING_SCA);

 private AssemblyFactory assemblyFactory;
 private SCABindingFactory scaBindingFactory;
 @@ -81,9 +81,10 @@
  * @param composite
  * @param problems
  */
 -public void configureComponents(Composite composite) {
 +public void configureComponents(Composite composite) throws
 CompositeBuilderException {
 configureComponents(composite, null);
 configureSourcedProperties(composite, null);
 +configureBindingURIs(composite, null, null);
 }

 /**
 @@ -124,8 +125,6 @@
 // Create default SCA binding
 if (service.getBindings().isEmpty()) {
 SCABinding scaBinding = createSCABinding();
 -
 -
 service.getBindings().add(scaBinding);
 }

 @@ -136,33 +135,6 @@
 if (binding.getName() == null) {
 binding.setName(service.getName());
 }
 -
 -String bindingURI;
 -if (binding.getURI() == null) {
 -if (compositeServices.size()  1) {
 -// Binding URI defaults to parent URI / binding
 name
 -bindingURI = String.valueOf(binding.getName());
 -if (parentURI != null) {
 -bindingURI = URI.create(parentURI +
 '/').resolve(bindingURI).toString();
 -}
 -} else {
 -// If there's only one service then binding URI
 defaults
 -// to the parent URI
 -if (parentURI != null) {
 -bindingURI = parentURI;
 -} else {
 -bindingURI = String.valueOf(binding.getName
 ());
 -}
 -}
 -} else {
 -// Combine the specified binding URI with the
 component URI
 -bindingURI = binding.getURI();
 -if (parentURI != null) {
 -bindingURI = URI.create(parentURI +
 '/').resolve(bindingURI).toString();
 -}
 -}
 -
 -binding.setURI(bindingURI);

Re: svn commit: r635435 - in /incubator/tuscany/java/sca/modules: assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/ assembly/src/test/java/org/apache/tuscany/sca/assembly/builder/im

2008-03-11 Thread Simon Laws
Comments inline

snip...


 and that's OK I think :) as binding.ws and binding.jsonrpc could end up
 on different port numbers for example (depending on their node
 configuration).


And they might not.



 Actually, I was struggling to understand why we needed this test for
 duplicate names at all.

 Does the spec forbids two bindings with the same name?


No, but it says that if the name is used as the default binding URI then
only one binding for a service can use this default.



 By trying hard to detect duplicate names early on (without having all
 the info about what the binding URIs will end up being), won't we raise
 errors even for cases that should work?


I agree that we don't have all of the information required to carry out this
test. Hence the TODO in the code.


 One example of the issues I was running into:
 service...
   binding.jsonrpc/
   binding.ws uri=/whatever
 /service
 throwing a duplicate name exception as the two bindings had the same name.

 On a fun note, I think that:
 service name=aservice
   binding.jsonrpc name=whatever/
   binding.ws uri=/whatever
 /service
 would probably not have thrown an exception, but should have :)


Hmmm, not sure, but you could be right.


 So my recommendation would be to not try too hard to detect these
 duplicates early on based on binding names (as the detection algorithm
 is too fragile at that point), instead detect duplicates at the very end
 when we see that we end up with duplicate URIs.


+1. I'll add a new method to scan the model for duplicate URIs that can be
called independently.
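
Something along these lines (illustrative only -- not the exact method or
exception type I'll commit): walk the fully built model and only complain
when two bindings really do end up on the same URI.

import java.util.HashSet;
import java.util.Set;

import org.apache.tuscany.sca.assembly.Binding;
import org.apache.tuscany.sca.assembly.Component;
import org.apache.tuscany.sca.assembly.ComponentService;
import org.apache.tuscany.sca.assembly.Composite;

class DuplicateBindingURICheck {

    // Run this after all binding URIs have been calculated
    static void check(Composite composite) {
        Set<String> seen = new HashSet<String>();
        for (Component component : composite.getComponents()) {
            for (ComponentService service : component.getServices()) {
                for (Binding binding : service.getBindings()) {
                    String uri = binding.getURI();
                    if (uri != null && !seen.add(uri)) {
                        // placeholder exception; the real code would report
                        // this through whatever the builder uses for problems
                        throw new IllegalStateException("Duplicate binding URI: " + uri);
                    }
                }
            }
        }
    }
}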




 
  Also can you give a quick example of the problems you found relating to
  "avoid adding binding name to itself, and consider binding URI when
  specified in the composite in the single service case too" so I can
  understand the other code changes.

 One issue was that the code was not considering the binding URI when
 there was only one service on a component, for example IIRC:
 component name=Store
   service name=Widget
 binding.http uri=/ui/
   /service
 /component
 was bound to /Store (the component URI) instead of /ui.

 I understand that the binding name should be omitted from the computed
 URI in the single service case, but the binding URI should be considered
 if specified, leading to the requirement to distinguish between binding
 name (always known, not always considered) and binding URI (not always
 specified, always considered when specified).


Ok, thank you. I had interpreted the spec incorrectly here. I had mistakenly
read assembly spec line 2375 as "Where a component has only a single
service, the value of the Service Binding URI is null" instead of what it
actually says, which is "Where a component has only a single service, the
*default* value of the Service Binding URI is null".
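
So, for my own benefit, my reading of the defaulting rules after this change
is roughly the following (illustrative only -- the real logic lives in
CompositeConfigurationBuilderImpl and may differ in detail):

import java.net.URI;

class BindingURIDefaulting {

    static String defaultURI(String parentURI, String bindingName,
                             String specifiedBindingURI, boolean singleService) {
        if (specifiedBindingURI != null) {
            // an explicitly specified binding URI is always considered,
            // resolved against the parent (component) URI when there is one
            return parentURI == null
                ? specifiedBindingURI
                : URI.create(parentURI + '/').resolve(specifiedBindingURI).toString();
        }
        if (singleService) {
            // single service: the default is just the parent URI
            return parentURI != null ? parentURI : bindingName;
        }
        // multiple services: parent URI + '/' + binding name
        return parentURI == null
            ? bindingName
            : URI.create(parentURI + '/').resolve(bindingName).toString();
    }
}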



 About the adding the binding name to itself part, this is a little
 complicated. IIRC the issue was that CompositeConfigurationBuilder would
 first convert:
 component name=Foo
   service...
 binding.sca/
   /service
 /component

 into:
 component name=Foo
   service...
 binding.sca uri=Foo/
   /service
 /component

 Then when an SCANode would consume that, CompositeBuilder.build would
 turn it into:
 component name=Foo
   service...
 binding.sca uri=Foo/Foo/
   /service
 /component

 as I think the old CompositeConfigurationBuilder code in that case
 used to concatenate the component URI and the binding URI.


OK, I see. So this was a combination of the old and the new causing
problems. I see that you now call the old after the new has run, so I am
assuming that your changes make this behave now. Let me know if not.



 
  Thanks
 
  Simon
 

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Suggestion for Replying with comments to a specific committ

2008-03-11 Thread Simon Laws
On Tue, Mar 11, 2008 at 8:33 AM, ant elder [EMAIL PROTECTED] wrote:

 On Tue, Mar 11, 2008 at 12:20 AM, Luciano Resende [EMAIL PROTECTED]
 wrote:

  It would be great if people could start changing the subject when
  replying with comments to a given commit; this would allow others to
  better identify what's going on and possibly jump in and help on the
  discussion.
 
  Just my 2c
 
 
 I think it depends on the type of comment being made. For some things
 keeping the subject is useful - i.e. a reply - for others it probably
 isn't, and either forwarding the commit mail to the dev list with a new
 subject or else just having a new email that contains a link to the
 commit email in the archives might be better.

   ...ant


As a recent offender I can probably see where Luciano is going with this.
I do think though that it depends on what sort of comment is being made. If
we are making comments specifically about the committed change then I'm not
sure a new title adds much. For example, the generic "I have some questions
about the change you just made" is not really helpful, and the more specific
"The changes you made relating to blah and blah don't look right to me"
could, I see, help to arrange email more easily but is in danger of turning
into a long title by repeating the commit comment or, indeed, the contents
of the post.

If however the post is somewhat tangential then I completely agree that a
new title or, as Ant suggests, a new mail referencing the commit, is
entirely appropriate.

Simon


Re: Proposed resolution for TUSCANY-2055

2008-03-11 Thread Simon Laws
On Tue, Mar 11, 2008 at 3:35 PM, Simon Nash [EMAIL PROTECTED] wrote:

 I would like to resolve TUSCANY-2055 based on the most recent comment
 that I appended referring to the resolution of OASIS issue JAVA-31.

 Does anyone object to my marking this resolved?  If not, I will go
 ahead and do this later this week.

   Simon


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


I did it earlier today but would welcome some review of the test case as it
now stands.

Simon


Re: svn commit: r636186 - /incubator/tuscany/java/sca/itest/callablereferences-ws/src/main/resources/CallableReferenceWsReturnTest.composite

2008-03-12 Thread Simon Laws
On Wed, Mar 12, 2008 at 3:01 AM, [EMAIL PROTECTED] wrote:

 Author: lresende
 Date: Tue Mar 11 20:01:48 2008
 New Revision: 636186

 URL: http://svn.apache.org/viewvc?rev=636186view=rev
 Log:
 Changing HTTP port in use, to avoid build issues on Continuum

 Modified:

  
 incubator/tuscany/java/sca/itest/callablereferences-ws/src/main/resources/CallableReferenceWsReturnTest.composite

 Modified:
 incubator/tuscany/java/sca/itest/callablereferences-ws/src/main/resources/CallableReferenceWsReturnTest.composite
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/itest/callablereferences-ws/src/main/resources/CallableReferenceWsReturnTest.composite?rev=636186r1=636185r2=636186view=diff

 ==
 ---
 incubator/tuscany/java/sca/itest/callablereferences-ws/src/main/resources/CallableReferenceWsReturnTest.composite
 (original)
 +++
 incubator/tuscany/java/sca/itest/callablereferences-ws/src/main/resources/CallableReferenceWsReturnTest.composite
 Tue Mar 11 20:01:48 2008
 @@ -28,7 +28,7 @@
binding.sca /
/service
reference name=beta
 -   binding.ws uri=http://localhost:8080/Beta; /
 +   binding.ws uri=http://localhost:8085/Beta; /
/reference
/component

 @@ -36,10 +36,10 @@
implementation.java
class=
 org.apache.tuscany.sca.itest.callablerefwsreturn.BetaImpl /
service name=Beta
 -   binding.ws uri=http://localhost:8080/Beta; /
 +   binding.ws uri=http://localhost:8085/Beta; /
/service
reference name=gamma
 -   binding.ws uri=http://localhost:8080/Gamma; /
 +   binding.ws uri=http://localhost:8085/Gamma; /
/reference
/component

 @@ -47,7 +47,7 @@
implementation.java
class=
 org.apache.tuscany.sca.itest.callablerefwsreturn.GammaImpl /
service name=Gamma
 -   binding.ws uri=http://localhost:8080/Gamma; /
 +   binding.ws uri=http://localhost:8085/Gamma; /
/service
/component




 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Ooops, thanks Luciano.

Simon


[PROPOSAL] Using new Workspace in samples/calculator-distributed Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-12 Thread Simon Laws
I like the look of the workspace code Sebastien has been writing and I
propose to try it out on samples/calculator-distributed.

In particular I'd like to help Felix, who is hitting the common filesystem
restriction of the current domain implementation [1].

Let me know if anyone has any concerns.

I'll report back with what I learn. There are other modules that rely on
distributed support:

itest/callable-references
itest/domain
itest/osgi-tuscany/tuscany-3rdparty
itest/osgi-tuscany/tuscany-runtime
samples/calculator-distributed
tools/eclipse/plugins/runtime

I'm happy to think about those if samples/calculator-distributed goes OK.


Regards

Simon

[1] http://www.mail-archive.com/tuscany-user%40ws.apache.org/msg02610.html

On Mon, Mar 10, 2008 at 6:07 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Jean-Sebastien Delfino wrote:
  Simon Laws wrote:
  On Fri, Mar 7, 2008 at 4:18 PM, Simon Laws [EMAIL PROTECTED]
  wrote:
 
 
  On Fri, Mar 7, 2008 at 12:23 PM, Jean-Sebastien Delfino 
  [EMAIL PROTECTED] wrote:
 
  Jean-Sebastien Delfino wrote:
  Simon Laws wrote:
  I've been running the workspace code today with a view to
 integrating
  the
  new code in assembly which calculates service endpoints i.e. point4
  above.
 
  I think we need to amend point 4 to make this work properly..
 
  4. Point my Web browser to the various ATOM collections to get:
  - lists of contributions, composites and nodes
  - list of contributions that are required by a given contribution
  - the source of a particular composite
  - the output of a composite after the domain composite has been
 built
  by
  CompositeBuilder
 
  Looking at the code in DeployableCompositeCollectionImpl I see that
  on
  doGet() it builds the request composite. What the last point  needs
  to
  do is
 
  - read the whole domain
  - set up all of the service URIs for each of the included
 composites
  taking
  into account the node to which each composite is assigned
  - build the whole domain using CompositeBuilder
  - extract the required composite from the domain and serialize it
  out.
  Yes, exactly!
 
  Are you changing this code or can I put this in?
  Just go ahead, I'll update and merge if I have any other changes in
  the
  same classes.
 
  Simon, a quick update: I've done an initial bring-up of node2-impl.
  It's
  still a little rough but you can give it a try if you want.
 
  The steps to run the store app for example with node2 are as follows:
 
  1) use workspace-admin to add the store and assets contributions to
 the
  domain;
 
  2) add the store composite to the domain composite using the admin as
  well;
 
  3) start the StoreLauncher2 class that I just added to the store
  module;
 
  4) that will start an instance of node2 with all the node config
 served
  from the admin app.
 
  So the next step is to integrate your node allocation code with
  workspace-admin and that will complete the story. Then we'll be able
 to
  remove all the currently hardcoded endpoint URIs from the composites.
 
  I'll send a more detailed description and steps to run more scenarios
  later on Friday.
 
  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
  Ok, sounds good. I've done the uri integration although there are
 some
  issues we need to discuss. First I'll update with your code, commit my
  changes and then post here about the issues.
 
  Regards
 
  Simon
 
  I've now checked in my changes (last commit was 634762) to integrate
  the URI
  calculation code with the workspace. I've run the new store launcher
  following Sebastien's instructions from a previous post to this thread.
 I
  don't seem to have broken it too much although I'm not seeing any
  prices for
  the catalog items.
 
  I was seeing that issue too before, it's a minor bug in the property
  writing code, which is not writing property values correctly.
 
  Issues with the URI generation code
 
  I have to turn model resolution back on by uncommenting a line in
  ContributionContentProcessor.resolve. Otherwise the JavaImplementation
  types
  are not read and
  compositeConfiguationBuilder.calculateBindingURIs(defaultBindings,
  composite, null); can't generate default services. I then had to tun
  it back
  off to make the store sample work. I need some help on this one.
 
  I'm investigating now.
 
 
  If you hand craft services it seems to be OK although I have noticed,
  looking at the generated SCDL, that it seems to be assuming that all
  generated service names will be based on the implementation classname
  regardless of whether the interface is marked as @Remotable or not.
 Feels
  like a bug somewhere so am going to look at that next.
 
  OK
 
 
  To get Java implementation resolution to work I needed to hack in the
  Java
  factories setup in the DeployableCompositeCollectionImpl.initialize()
  method.  This is not very

IRC with Venkat about tidying contribution processing...

2008-03-13 Thread Simon Laws
I chatted with Venkat earlier today on #tuscany about changes he proposed to
contribution processing. This is input to the wider "what do we do about
contribution processing and the workspace" debate.



Venkat: I am trying to clean up the contribution service impl a bit
slaws: so you want to talk a bit about your plans?
Venkat: yes
slaws: ok - we could also take a look together at the changes sebastien has
made to contribution processing
Venkat: basically I am wondering if contribution ever needs to depend on the
'assembly' or 'policy' or 'definitions' module
Venkat: for now am starting with cleaning up for decoupling the 'definitions'
dependency
slaws: i'm with you there. It should depend on as little as possible. Not
sure we can mitigate assembly but lets see
Venkat: right now... we are doing a bit of preprocessing of composite files
to add the 'applicablePolicySets' to various SCA artifacts
slaws: y - i saw that
Venkat: for this we need to take note of all the PolicySets that are getting
contributed...
Venkat: i.e. when a definitions.xml is read... we would like to dive in and
weed out the policysets
Venkat: then these policysets will have to be processed against every
composite that is read... and whichever policyset applies
Venkat: to whichever sca artifact in the composite... that policyset's name
has to get into the 'applicablePolicySets' attribute
Venkat: so all of this is now being done in the Contribution Service
Venkat: so the contribution service needs to know when it's a definition
that was read...
Venkat: and then needs to weed the policysets out of it.. and then when
processing composites it has to use this..
Venkat: so that brings in the dependencies.. i.e. the contribution service
needs to know about SCADefinitions, PolicySet and so on
Venkat: am wondering if I could decouple this and tear it out of the
ContributionService..
Venkat: so here is what I am thinking of..
slaws: ok - go ahead
Venkat: when artifacts are read by the contribution service... it could
publish events to interested listeners...
Venkat: it just about sends events that have the URL that was processed and
the model that was read... (need not know anything of the actual model type)
Venkat: the runtime listens to these events and if there are definitions
read.. it weeds out policysets... and aggregates them
Venkat: the composite preprocessing will move to the
CompositeDocumentProcessor
Venkat: wherein before the reading of the document, the processor will
preprocess and add the applicable policysets..
Venkat: now how does the CompositeDocumentProcessor get hold of the
policysets... since it's the runtime that creates this processor
Venkat: it should be fine for it to inject the policySets list that it is
aggregating...
Venkat: the only ugly thing here is probably the PolicySet list.. which is a
sort of sink... a reference to it
Venkat: is passed to the CompositeDocumentProcessor during its creation.. and
then during the contribution processing.. it's the same sink
Venkat: into which the runtime dumps the policysets weeded out..
Venkat: so that's what is running in my mind... for this specific
decoupling... over to your thoughts :)

slaws: ok - sounds like you are thinking about a much more modular approach
to definitions and policy processing. Can we enumerate the individual parts
of that processing here so I can see them clearly..
slaws: let me make a list based on what you have just said...
slaws: 1 - read definitions.xml
slaws: 2 - aggregate policy sets
slaws: 3 - apply policy sets to composite model
Venkat: (to composite xml)
slaws: ok - 3 - apply policy sets to composite.xml
slaws: i didn't quite understand the bit about
slaws: "Venkat: into which the runtime dumps the policysets weeded out.."
slaws: can you say a little more
Venkat: it's the "2 - aggregate policysets"... it's right now done by the
contribution service... I want to move that to the runtime
Venkat: so instead of the contrib svc doing it.
Venkat: it will simply pass a SCADefinitions model to the runtime... as part
of an event
Venkat: the runtime will look at this SCADefinitions and aggregate the
PolicySets alone
slaws: and what are the sources of the policy sets that are 
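
To make the event idea above a bit more concrete, here is a rough sketch of
what such a listener could look like. None of these types exist today; the
names are invented for illustration, and only PolicySet is a real Tuscany
class.

import java.net.URL;
import java.util.List;

import org.apache.tuscany.sca.policy.PolicySet;

// Fired by the contribution service after it reads an artifact; it only
// hands over the artifact URL and the model object, without needing to
// know the model's type.
interface ContributionArtifactListener {
    void artifactRead(URL artifactURL, Object model);
}

// Runtime-side listener that aggregates policy sets into the shared list
// which is also handed to the CompositeDocumentProcessor when it is created.
class PolicySetAggregatingListener implements ContributionArtifactListener {

    private final List<PolicySet> domainPolicySets;

    PolicySetAggregatingListener(List<PolicySet> domainPolicySets) {
        this.domainPolicySets = domainPolicySets;
    }

    public void artifactRead(URL artifactURL, Object model) {
        // if the model is an SCADefinitions, weed out its policy sets and
        // add them to the shared list (details omitted in this sketch)
    }
}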

Re: Back to the work...

2008-03-14 Thread Simon Laws
On Fri, Mar 14, 2008 at 4:55 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Douglas Leite wrote:
  Hello Community!
 
 I was not so active in the Tuscany project in the last months, because I
 was busy finishing my Computer Science college degree, and preparing some
 things related to my master's degree. However, things are easier now, and
 I would like to contribute again.

 I am going to take a look at the impl.data.xml module, and as soon as I
 can, I will contribute more patches to drive this to completion. :-)
 

 Great! welcome back Douglas!

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Hey Douglas, good to see you back!

Simon


Re: ASF Headers, was Re: svn commit: r636903 - in /incubator/tuscany/java/sca: modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/ samples/calculator-distributed/ samples/calcul

2008-03-14 Thread Simon Laws
On Fri, Mar 14, 2008 at 12:04 AM, Luciano Resende [EMAIL PROTECTED]
wrote:

 Are we targeting this for our SCA 1.2 release? Could you please
 update the ASF headers on the composite files and anywhere else needed.

 On Thu, Mar 13, 2008 at 3:21 PM,  [EMAIL PROTECTED] wrote:
  Author: slaws
   Date: Thu Mar 13 15:21:31 2008
   New Revision: 636903
 
   URL: http://svn.apache.org/viewvc?rev=636903view=rev
   Log:
   Convert the calculator-distributed sample over to the new workspace
 model for the domain
 
   Added:
 
 incubator/tuscany/java/sca/samples/calculator-distributed/cloud.composite
 (with props)
 
 incubator/tuscany/java/sca/samples/calculator-distributed/domain.composite
 (with props)
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/LaunchCalculatorNodeA.java
   (with props)
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/LaunchCalculatorNodeB.java
   (with props)
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/LaunchCalculatorNodeC.java
   (with props)
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/LaunchDomain.java
- copied, changed from r636668,
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/DomainNode.java
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/resources/domain/
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/resources/domain/cloud.composite
   (with props)
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/test/java/calculator/CalculatorDistributedTestCase.java
- copied, changed from r636668,
 incubator/tuscany/java/sca/samples/calculator-distributed/src/test/java/calculator/DomainInMemoryTestCase.java
 
 incubator/tuscany/java/sca/samples/calculator-distributed/workspace.xml
 (with props)
   Removed:
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/CalculatorNode.java
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/main/java/node/DomainNode.java
 
 incubator/tuscany/java/sca/samples/calculator-distributed/src/test/java/calculator/DomainInMemoryTestCase.java
   Modified:
 
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAReferenceBindingProvider.java
 
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAServiceBindingProvider.java
  incubator/tuscany/java/sca/samples/calculator-distributed/build.xml
  incubator/tuscany/java/sca/samples/calculator-distributed/pom.xml
 
   Modified:
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAReferenceBindingProvider.java
   URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAReferenceBindingProvider.java?rev=636903r1=636902r2=636903view=diff
 
  
 ==
   ---
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAReferenceBindingProvider.java
 (original)
   +++
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAReferenceBindingProvider.java
 Thu Mar 13 15:21:31 2008
   @@ -180,7 +180,7 @@
   + reference.getName());
   }
 
   -if (nodeFactory.getNode() == null) {
   +if ((nodeFactory != null)  (nodeFactory.getNode() ==
 null)) {
   throw new IllegalStateException(No distributed
 domain available for component:  + component
   .getName()
   +  and reference: 
 
   Modified:
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAServiceBindingProvider.java
   URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAServiceBindingProvider.java?rev=636903r1=636902r2=636903view=diff
 
  
 ==
   ---
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAServiceBindingProvider.java
 (original)
   +++
 incubator/tuscany/java/sca/modules/binding-sca/src/main/java/org/apache/tuscany/sca/binding/sca/impl/RuntimeSCAServiceBindingProvider.java
 Thu Mar 13 15:21:31 2008
   @@ -73,7 +73,16 @@
   // - distributed domain in which to look for remote
 endpoints
   // - remotable interface on the service
   if (distributedProviderFactory != null) {
   -if ((this.nodeFactory != null)  (
 this.nodeFactory.getNode() != null)) {
   +
   +URI 

Build Failure? Re: svn commit: r636985 - in /incubator/tuscany/java/sca/modules: assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/ assembly/src/main/java/org/apache/tuscany/sca/assembly/

2008-03-14 Thread Simon Laws
I'm getting:

org.apache.tuscany.sca.contribution.service.ContributionReadException:
javax.xml
.xpath.XPathExpressionException: javax.xml.transform.TransformerException:
Extra
 illegal tokens: 'http://www.osoa.org/xmlns/sca/1.0', ':', 'binding.sca'
at org.apache.tuscany.sca.policy.xml.PolicySetProcessor.read
(PolicySetPr
ocessor.java:109)
at org.apache.tuscany.sca.policy.xml.PolicySetProcessor.read
(PolicySetPr
ocessor.java:61)
at
org.apache.tuscany.sca.contribution.processor.ExtensibleStAXArtifactP
rocessor.read(ExtensibleStAXArtifactProcessor.java:83)
at
org.apache.tuscany.sca.definitions.xml.SCADefinitionsProcessor.read(S
CADefinitionsProcessor.java:90)
at
org.apache.tuscany.sca.definitions.xml.SCADefinitionsProcessor.read(S
CADefinitionsProcessor.java:49)
at
org.apache.tuscany.sca.contribution.processor.ExtensibleStAXArtifactP
rocessor.read(ExtensibleStAXArtifactProcessor.java:83)
at
org.apache.tuscany.sca.definitions.xml.SCADefinitionsDocumentProcesso
r.read(SCADefinitionsDocumentProcessor.java:120)
at
org.apache.tuscany.sca.assembly.xml.ResolvePolicyTestCase.testResolve
ConstrainingType(ResolvePolicyTestCase.java:115)

and am assuming it's related to this check-in. I'm investigating, but in the
meantime can you check whether anything was missed?

Thanks

Simon



On Fri, Mar 14, 2008 at 3:48 AM, [EMAIL PROTECTED] wrote:

 Author: rfeng
 Date: Thu Mar 13 20:48:47 2008
 New Revision: 636985

 URL: http://svn.apache.org/viewvc?rev=636985&view=rev
 Log:
 Fix for TUSCANY-2078

 Modified:

  
 incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java

  
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/ComponentProperty.java

  
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/PropertyUtil.java

  
 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/impl/ComponentPropertyImpl.java

  
 incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionServiceImpl.java

  
 incubator/tuscany/java/sca/modules/policy-xml/src/main/java/org/apache/tuscany/sca/policy/xml/PolicySetProcessor.java

  
 incubator/tuscany/java/sca/modules/policy/src/main/java/org/apache/tuscany/sca/policy/PolicySet.java

  
 incubator/tuscany/java/sca/modules/policy/src/main/java/org/apache/tuscany/sca/policy/impl/PolicySetImpl.java

 Modified:
 incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java?rev=636985&r1=636984&r2=636985&view=diff

 ==
 ---
 incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
 (original)
 +++
 incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
 Thu Mar 13 20:48:47 2008
 @@ -31,6 +31,9 @@
  import javax.xml.stream.XMLStreamException;
  import javax.xml.stream.XMLStreamReader;
  import javax.xml.stream.XMLStreamWriter;
 +import javax.xml.xpath.XPath;
 +import javax.xml.xpath.XPathExpressionException;
 +import javax.xml.xpath.XPathFactory;

  import org.apache.tuscany.sca.assembly.AssemblyFactory;
  import org.apache.tuscany.sca.assembly.Binding;
 @@ -75,7 +78,9 @@
  * @version $Rev$ $Date$
  */
  public class CompositeProcessor extends BaseAssemblyProcessor implements
 StAXArtifactProcessorComposite {
 -
 +// FIXME: to be refactored
 +private XPathFactory xPathFactory = XPathFactory.newInstance();
 +
 /**
  * Construct a new composite processor
  *
 @@ -235,7 +240,33 @@
 // Read a componentproperty
 componentProperty =
 assemblyFactory.createComponentProperty();
 property = componentProperty;
 -componentProperty.setSource(getString(reader,
 SOURCE));
 +String source = getString(reader, SOURCE);
 +if(source!=null) {
 +source = source.trim();
 +}
 +componentProperty.setSource(source);
 +if (source != null) {
 +// $name/...
 +if (source.charAt(0) == '$') {
 +int index = source.indexOf('/');
 +if (index == -1) {
 +// Tolerating $prop
 +source = source + "/";
 +

Re: [SCA 1.2] Changing trunk pom version

2008-03-14 Thread Simon Laws
On Thu, Mar 13, 2008 at 6:58 PM, Luciano Resende [EMAIL PROTECTED]
wrote:

 Now that we are near the release of Java SCA 1,2, I'd like to propose
 changing the trunk pom version to 2-incubating-SNAPSHOT around the
 same time we create the SCA 1.2 release branch.

 Thoughts ?

 --
 Luciano Resende
 Apache Tuscany Committer
 http://people.apache.org/~lresende http://people.apache.org/%7Elresende
 http://lresende.blogspot.com/

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


Are the implications of having 2-incubating... that we are due a major
revision change? In that case, what are our plans for:

- development for this new major revision. I imagine for example we might
want to look at tidying our SPIs but did you have something specific in
mind?

- support for our 1.x codebase. If the implication of 2-... is that we are
going to do some big breaking changes then we probably need a 1.x branch
post 1.2 to allow us to support, for a while,  anyone on the current code.

So the answer here may not be to go directly to 2 on the back of 1.2 but to
get 1.2 out of the door and then branch to 1.x and 2 if that is what people
think is appropriate, i.e. treat release 1.2 and the move toward a new major
revision as separate issues.

Simon


Re: Build Failure? Re: svn commit: r636985 - in /incubator/tuscany/java/sca/modules: assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/ assembly/src/main/java/org/apache/tuscany/sca/assem

2008-03-14 Thread Simon Laws
On Fri, Mar 14, 2008 at 9:20 AM, Simon Laws [EMAIL PROTECTED]
wrote:

 I'm getting.

 org.apache.tuscany.sca.contribution.service.ContributionReadException:
 javax.xml
 .xpath.XPathExpressionException: javax.xml.transform.TransformerException:
 Extra
  illegal tokens: 'http://www.osoa.org/xmlns/sca/1.0', ':', 'binding.sca'
 at org.apache.tuscany.sca.policy.xml.PolicySetProcessor.read
 (PolicySetPr
 ocessor.java:109)
 at org.apache.tuscany.sca.policy.xml.PolicySetProcessor.read
 (PolicySetPr
 ocessor.java:61)
 at
 org.apache.tuscany.sca.contribution.processor.ExtensibleStAXArtifactP
 rocessor.read(ExtensibleStAXArtifactProcessor.java:83)
 at
 org.apache.tuscany.sca.definitions.xml.SCADefinitionsProcessor.read(S
 CADefinitionsProcessor.java:90)
 at
 org.apache.tuscany.sca.definitions.xml.SCADefinitionsProcessor.read(S
 CADefinitionsProcessor.java:49)
 at
 org.apache.tuscany.sca.contribution.processor.ExtensibleStAXArtifactP
 rocessor.read(ExtensibleStAXArtifactProcessor.java:83)
 at
 org.apache.tuscany.sca.definitions.xml.SCADefinitionsDocumentProcesso
 r.read(SCADefinitionsDocumentProcessor.java:120)
 at
 org.apache.tuscany.sca.assembly.xml.ResolvePolicyTestCase.testResolve
 ConstrainingType(ResolvePolicyTestCase.java:115)

 and am assuming it's related to this checkin. Am investigating but in the
 mean time can you check if anything was missed.

 Thanks

 Simon



 On Fri, Mar 14, 2008 at 3:48 AM, [EMAIL PROTECTED] wrote:

  Author: rfeng
  Date: Thu Mar 13 20:48:47 2008
  New Revision: 636985
 
  URL: http://svn.apache.org/viewvc?rev=636985view=rev
  Log:
  Fix for TUSCANY-2078
 
  Modified:
 
   
  incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
 
   
  incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/ComponentProperty.java
 
   
  incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/PropertyUtil.java
 
   
  incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/impl/ComponentPropertyImpl.java
 
   
  incubator/tuscany/java/sca/modules/contribution-impl/src/main/java/org/apache/tuscany/sca/contribution/service/impl/ContributionServiceImpl.java
 
   
  incubator/tuscany/java/sca/modules/policy-xml/src/main/java/org/apache/tuscany/sca/policy/xml/PolicySetProcessor.java
 
   
  incubator/tuscany/java/sca/modules/policy/src/main/java/org/apache/tuscany/sca/policy/PolicySet.java
 
   
  incubator/tuscany/java/sca/modules/policy/src/main/java/org/apache/tuscany/sca/policy/impl/PolicySetImpl.java
 
  Modified:
  incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
  URL:
  http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java?rev=636985r1=636984r2=636985view=diff
 
  ==
  ---
  incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
  (original)
  +++
  incubator/tuscany/java/sca/modules/assembly-xml/src/main/java/org/apache/tuscany/sca/assembly/xml/CompositeProcessor.java
  Thu Mar 13 20:48:47 2008
  @@ -31,6 +31,9 @@
   import javax.xml.stream.XMLStreamException;
   import javax.xml.stream.XMLStreamReader;
   import javax.xml.stream.XMLStreamWriter;
  +import javax.xml.xpath.XPath;
  +import javax.xml.xpath.XPathExpressionException;
  +import javax.xml.xpath.XPathFactory;
 
   import org.apache.tuscany.sca.assembly.AssemblyFactory;
   import org.apache.tuscany.sca.assembly.Binding;
  @@ -75,7 +78,9 @@
   * @version $Rev$ $Date$
   */
   public class CompositeProcessor extends BaseAssemblyProcessor
  implements StAXArtifactProcessorComposite {
  -
  +// FIXME: to be refactored
  +private XPathFactory xPathFactory = XPathFactory.newInstance();
  +
  /**
   * Construct a new composite processor
   *
  @@ -235,7 +240,33 @@
  // Read a componentproperty
  componentProperty =
  assemblyFactory.createComponentProperty();
  property = componentProperty;
  -componentProperty.setSource(getString(reader,
  SOURCE));
  +String source = getString(reader, SOURCE);
  +if(source!=null) {
  +source = source.trim();
  +}
  +componentProperty.setSource(source);
  +if (source != null) {
  +// $name/...
  +if (source.charAt(0) == '$') {
  +int index = source.indexOf

calculator-distributed continuum failure

2008-03-14 Thread Simon Laws
I notice there is another problem in Continuum at the moment, but I still
haven't fixed the calculator-distributed fault. I've spent time installing a
build on Linux and getting Eclipse up and running. It's some kind of XML
parsing problem, but interestingly it happens when the workspace is run from
Maven and not when it is run from Eclipse, so I'll go and check what versions
of things Maven uses. Anyhow, I'm posting this as I have to shoot off, so feel
free to disable the test while I continue to look at it if you want the build
to finish.

Simon


Workpool files added from TUSCANY-1863, TUSCANY-1907,

2008-03-14 Thread Simon Laws
I've added most of the files attached to TUSCANY-1863 and TUSCANY-1907 to svn.
I've taken the liberty of removing what seem to be work files, and I've
added ASF headers where I can. I've put the new assembly files into the
contribution-updater module for now while we learn how it hangs together.

I've not added the new projects

modules/databinding-job
modules/contribution-updater
modules/contribution-updater-impl
demo/workpool-distributed.

to the build as we need to work with Giorgio to bring them up to speed with
the latest trunk and integrate them with new features such as the workspace.
Currently I'm getting compile errors.

I'm going to close off TUSCANY-1863/1907 and we can open new JIRAs as we go
along and fix up the code against the new trunk.

Regards

Simon


Release checklist and process

2008-03-16 Thread Simon Laws
I've put my notes from release 1.1 up at [1]. This is a further development of
Ant's original notes, and many of the commands here are from Raymond's script
[2]. Thanks guys.

There is a general high-level checklist and then a detailed step-by-step
guide. This is just a brain dump from my R1.1 experience, so consider
yourselves invited to improve it. As we are starting release 1.2 now, it seems
like a good opportunity to test these notes, plug any holes and generally
improve things for the next person.

Is this of any use?

Simon

[1] http://cwiki.apache.org/confluence/display/TUSCANY/Making+releases
[2]
http://svn.apache.org/repos/asf/incubator/tuscany/java/etc/release-sca.sh


Checking the TUSCANY-2077 test in? Re: svn commit: r637621 - in /incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core: assembly/ invocation/

2008-03-17 Thread Simon Laws
Hi Simon

Do you have Daniel's test in a position where you could check it in?

Simon

On Sun, Mar 16, 2008 at 5:59 PM, [EMAIL PROTECTED] wrote:

 Author: nash
 Date: Sun Mar 16 10:59:45 2008
 New Revision: 637621

 URL: http://svn.apache.org/viewvc?rev=637621&view=rev
 Log:
 Fix for TUSCANY-2077

 Modified:

  
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/assembly/EndpointReferenceImpl.java

  
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKCallbackInvocationHandler.java

  
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKInvocationHandler.java

  
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/RuntimeWireInvoker.java

 Modified:
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/assembly/EndpointReferenceImpl.java
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/assembly/EndpointReferenceImpl.java?rev=637621&r1=637620&r2=637621&view=diff

 ==
 ---
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/assembly/EndpointReferenceImpl.java
 (original)
 +++
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/assembly/EndpointReferenceImpl.java
 Sun Mar 16 10:59:45 2008
 @@ -149,9 +149,11 @@
 @Override
 public Object clone() throws CloneNotSupportedException {
 EndpointReferenceImpl copy = (EndpointReferenceImpl)super.clone();
 +/* [nash] no need to copy callback endpoint
 if (callbackEndpoint != null) {
 copy.callbackEndpoint =
 (EndpointReference)callbackEndpoint.clone();
 }
 +*/
 if (parameters != null) {
 copy.parameters = (ReferenceParameters)parameters.clone();
 }

 Modified:
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKCallbackInvocationHandler.java
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKCallbackInvocationHandler.java?rev=637621&r1=637620&r2=637621&view=diff

 ==
 ---
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKCallbackInvocationHandler.java
 (original)
 +++
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKCallbackInvocationHandler.java
 Sun Mar 16 10:59:45 2008
 @@ -98,7 +98,7 @@
 }

 try {
 -return invoke(chain, args, wire);
 +return invoke(chain, args, wire, wire.getSource());
 } catch (InvocationTargetException e) {
 Throwable t = e.getCause();
 if (t instanceof NoRegisteredCallbackException) {

 Modified:
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKInvocationHandler.java
 URL:
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKInvocationHandler.java?rev=637621&r1=637620&r2=637621&view=diff

 ==
 ---
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKInvocationHandler.java
 (original)
 +++
 incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core/invocation/JDKInvocationHandler.java
 Sun Mar 16 10:59:45 2008
 @@ -67,6 +67,7 @@
 protected boolean conversational;
 protected ExtendedConversation conversation;
 protected MessageFactory messageFactory;
 +protected EndpointReference source;
 protected EndpointReference target;
 protected RuntimeWire wire;
 protected CallableReference? callableReference;
 @@ -98,14 +99,12 @@

 protected void init(RuntimeWire wire) {
 if (wire != null) {
 -/* [scn] no need to clone because the wire doesn't get
 modified
 try {
 -// Clone the wire so that reference parameters can be
 changed
 -this.wire = (RuntimeWire)wire.clone();
 +// Clone the endpoint reference so that reference
 parameters can be changed
 +source = (EndpointReference)wire.getSource().clone();
 } catch (CloneNotSupportedException e) {
 throw new ServiceRuntimeException(e);
 }
 -[scn] */
 initConversational(wire);
 }
 }
 @@ -152,7 +151,7 @@
 }

 // send the invocation down the wire
 -Object result = invoke(chain, args, wire);
 +Object result = invoke(chain, args, wire, source);

 return result;
 }
 @@ -262,10 

Re: Checking the TUSCANY-2077 test in? Re: svn commit: r637621 - in /incubator/tuscany/java/sca/modules/core/src/main/java/org/apache/tuscany/sca/core: assembly/ invocation/

2008-03-17 Thread Simon Laws
On Mon, Mar 17, 2008 at 1:16 PM, Simon Nash [EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  Hi Simon
 
  Do you have Daniel's test in a position where you could check it in?
 
 I do have his test running, but for some reason it never failed
 for me, even when the printlns showed the incorrect conversation ID
 was being used.  So I just used the printlns to debug the problem
 and verify my fix.

   Simon

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



Ah, I see. Fair enough.

Simon


Re: Tests for Tuscany running under OSGi

2008-03-17 Thread Simon Laws
On Mon, Mar 17, 2008 at 2:49 PM, ant elder [EMAIL PROTECTED] wrote:

 On Mon, Mar 17, 2008 at 12:20 PM, Simon Nash [EMAIL PROTECTED] wrote:

 snip

 I tried to build itest/osgi-tuscany to see what its time and space
  overheads are, but I ran into multiple errors (incorrect pom and some
  tests failing).  Is anyone else able to get this to build cleanly?
 
 
 If you're seeing failures like:

 Unresolved package in bundle 9: package; ((package=
 org.apache.tuscany.sca.node.launcher

 then yes i also see that now. I guess its yet another break due to trunk
 changes and osgi-tuscany needing to be updated as its not in the build.

   ...ant


Can we automate the updating of the OSGi artifacts to match the trunk
status?

Simon


Re: [PROPOSAL] Using new Workspace in samples/calculator-distributed Re: Domain/Contribution Repository was: Re: SCA contribution packaging schemes: was: SCA runtimes

2008-03-18 Thread Simon Laws
I got the code done last week but I'm only just now finishing up the
build.xml file. So, as promised, here's what I did (a bit of a long post, but
I think I got it all).

Firstly to get more familiar with the workspace I followed Sebastien's
instructions from the Domain/Contribution repository thread [1] and ran up
the workspace to have a play.

You can use the latest tutorial modules to see the end to end integration,
with the following steps:

1. Start tutorial/domain/.../LaunchTutorialAdmin.

2. Open http://localhost:9990/ui/composite in your Web browser. You should
see all the tutorial contributions and deployables that I've added to that
domain.

3. Click the feeds in the composite install image to see the resolved
composites.

4. Start all the launch programs in tutorial/nodes, you can start them in
any order you want.

5. Open tutorial/assets/tutorial.html in your Web browser, follow the links
to the various store implementations.

The workspace allows you to organize the relationships between
contributions/composites, the domain composite that describes the whole
application, and the nodes that will run the composites. It processes all of
the contributions that have been provided, the composites they contain, and
the association of composites with the domain and with nodes, and produces
fully resolved composites in terms of the contributions that are required to
run them and the service and reference URIs that they will use.

This resolved composite information is available from the workspace through
composite-specific feeds. From each feed you can get URLs to the required
contributions and the composite. In fact, what happens each time you do a GET
on the composite URL is that all of the composites assigned to the domain
are read and the domain composite is built in full using the composite
builder. The individual composite that was requested is then extracted and
returned. In this way policy matching, cross-domain wiring, autowiring etc.
are managed at the domain level using the same code used by the nodes to
build individual composites.

This is very similar in layout to what is happening with our current
domain/node implementation, where you add contributions to the domain and
nodes run the resulting composites. However, there is a big difference here:
the implication is now that the domain is fully configured before you start
the nodes, as the workspace is responsible for configuring service/reference
URIs based on prior knowledge of the node configurations. Previously you could
start nodes and have them register with the domain without having to provide
this knowledge manually to the domain. I guess automatic node registration
could be rolled into this if we want.

In making the calculator-distributed sample work I wanted to be able to test
the sample in our Maven build, so having a set of HTTP forms (which the
workspace does provide) to fill in is interesting but not that useful. So I
immediately went looking for the files that the workspace writes, to see if
I could create those and install them pre-configured, ready for the test to
run. I used the tutorial files as templates and made the following to match
the calculator-distributed scenario.

Firstly there is a file (workspace.xml) [2] that describes each
contribution's location and URI:

<workspace xmlns="http://tuscany.apache.org/xmlns/sca/1.0"
           xmlns:ns1="http://tuscany.apache.org/xmlns/sca/1.0">
  <contribution location="file:./target/classes/nodeA" uri="nodeA"/>
  <contribution location="file:./target/classes/nodeB" uri="nodeB"/>
  <contribution location="file:./target/classes/nodeC" uri="nodeC"/>
  <contribution location="file:./target/classes/cloud" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"/>
</workspace>

Then there is a file (domain.composite) [3] that is a serialized version of
the domain composite, i.e. what you would get from the spec's
getDomainLevelComposite() method. It shows which composites are deployed
at the domain level:

<composite name="domain.composite"
  targetNamespace="http://tuscany.apache.org/xmlns/sca/1.0"
  xmlns="http://www.osoa.org/xmlns/sca/1.0"
  xmlns:ns1="http://www.osoa.org/xmlns/sca/1.0">
  <include name="ns2:CalculatorA" uri="nodeA" xmlns:ns2="http://sample"/>
  <include name="ns2:CalculatorB" uri="nodeB" xmlns:ns2="http://sample"/>
  <include name="ns2:CalculatorC" uri="nodeC" xmlns:ns2="http://sample"/>
</composite>

Lastly there is a file (cloud.composite) [4] that is another SCA composite
that describes the nodes that are going to run the composites:

<composite name="cloud.composite"
  targetNamespace="http://tuscany.apache.org/xmlns/sca/1.0"
  xmlns="http://www.osoa.org/xmlns/sca/1.0"
  xmlns:ns1="http://www.osoa.org/xmlns/sca/1.0">
  <include name="ns2:NodeA" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"
           xmlns:ns2="http://sample/cloud"/>
  <include name="ns2:NodeB" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"
           xmlns:ns2="http://sample/cloud"/>
  <include name="ns2:NodeC" uri="http://tuscany.apache.org/xmlns/sca/1.0/cloud"
           xmlns:ns2="http://sample/cloud"/>
</composite>

Can't get build.xml to work with calculator-distributed

2008-03-19 Thread Simon Laws
I'm trying to run the calculator-distributed sample with the workspace
changes from an Ant build.xml. I'm getting:

runDomain:
 [java] 19-Mar-2008 11:23:38
org.apache.tuscany.sca.workspace.admin.launcher
.DomainAdminLauncher main
 [java] INFO: Apache Tuscany SCA Domain Administration starting...
 [java] 19-Mar-2008 11:23:39
org.apache.tuscany.sca.workspace.admin.launcher
.DomainAdminLauncherUtil collectJARFiles
 [java] INFO: Runtime classpath: 153 JARs from C:\simon\tuscany\sca-
java-1.2
\distribution\target\apache-
tuscany-sca-1.2-incubating-SNAPSHOT.dir\tuscany-sca-
1.2-incubating-SNAPSHOT\lib
 [java] 19-Mar-2008 11:23:39
org.apache.tuscany.sca.workspace.admin.launcher
.DomainAdminLauncher main
 [java] SEVERE: SCA Domain Administration could not be started
 [java] java.lang.ClassNotFoundException:
org.apache.tuscany.sca.workspace.a
dmin.launcher.DomainAdminLauncherBootstrap
 [java] at java.lang.Class.forName(Class.java:163)
 [java] at
org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
uncher.main(DomainAdminLauncher.java:53)
 [java] at node.LaunchDomain.main(LaunchDomain.java:30)
 [java] Exception in thread main java.lang.ClassNotFoundException:
org.apa
che.tuscany.sca.workspace.admin.launcher.DomainAdminLauncherBootstrap
 [java] at java.lang.Class.forName(Class.java:163)
 [java] at
org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
uncher.main(DomainAdminLauncher.java:53)
 [java] at node.LaunchDomain.main(LaunchDomain.java:30)
 [java] Java Result: 1

Now the classpath looks OK to me in that it includes the Tuscany manifest
jar, so it should have all the dependencies. However, I took a look at the
launcher code that gets used in the workspace and noticed that it has changed
recently. Can someone (Sebastien?) explain what the intention is here?
Should I be setting TUSCANY_HOME or something? I seem to be able to get
further if I add individual module jars to the classpath, but I don't really
want to do that for all the jars for this sample.

Thanks

Simon


Re: definitions.xml and the SCA Domain over a distributed runtime

2008-03-19 Thread Simon Laws
Hi Venkat

I think that definitions.xml can be provided to Tuscany in two ways: either
in a contribution or in an extension library. I also think that the contents
of definitions.xml files provided in either of these ways should be added to
the domain-wide pool of intents and policy sets and should be applied to
composites in the domain as appropriate.

Is this correct?

I think at the moment the code only treats the definitions.xml added with
extensions as being of domain scope. Definitions.xml files added within a
contribution are only processed in the context of that contribution.

Is my reading of the code correct?

If I'm correct on these two points, we need to fix the case where the
definitions.xml file comes within a contribution. I think this is
independent of whether a node running a composite is remote or not, as a node
may require multiple contributions in order to support a single composite,
as your scenario suggests.

I've put some comments in line.

Simon

[1]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/workspace-admin/src/main/java/org/apache/tuscany/sca/workspace/admin/impl/DeployableCompositeCollectionImpl.java

On Wed, Mar 19, 2008 at 10:11 AM, Venkata Krishnan [EMAIL PROTECTED]
wrote:

 Hi,

 I am trying to run the bigbank sample from a distirbution and postulating
 a
 multiple contirbutions situation as follow :

 Let contribution CA and contribution CB each have their definitions.xmlthat
 defines some policysets.  Now, can the composites defined in CB be able to
 use the policysets defined in CA ?


I think they should be able to .



 If so, is there a discipline that needs to be followed in the order of
 adding these contributions i.e. should CA be added first and then CB ?


The code is like this at the moment when it comes to running a composite,
i.e. the contributions have to be added in the right order, but it would be
good if that were not the case. More importantly the implication is that we
need to load ALL of the contributions that are required before any
composites are processed.


 In a distributed runtime, where CA and CB are added and deployed on two
 different nodes, would the node that has CB should try to pull down parts
 (just the defintions.xml) or whole of CA ?


It might need other things from CA so I would suggest that the whole of CA
is given to the node.



 Finally, if definitions are going to be applicable to an entire domain,
 which I believe should be case, then how do we ensure that all
 definitions.xml contributed are first read and processed before composites
 are read and processed and how do we make this consolidate / aggregated
 definitions available to all nodes in the domain ?


I think we have to look at the ideas in the workspace. Here all of the
contributions are expected to be available before any nodes start running
composites. I put some code into the workspace to calculate the URIs of
all service bindings before any nodes run [1]; take a look at the doGet()
method. To work, this relies on reading all of the contributions required to
run the configured domain. This would seem to be a good point at which to
pull all definitions.xml files out of all contributions and aggregate the
policy sets before individual composites are processed.
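
(Just to make the ordering I mean concrete - a toy sketch, with strings
standing in for the real contribution and policy set types, nothing
Tuscany-specific:)

// Toy illustration of the ordering only; strings stand in for contributions,
// definitions.xml contents and policy sets - these are not Tuscany types.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DefinitionsOrdering {
    public static void main(String[] args) {
        // contribution URI -> policy sets declared in that contribution's definitions.xml
        Map<String, List<String>> contributions = new LinkedHashMap<String, List<String>>();
        contributions.put("CA", Arrays.asList("SecurityPolicySet"));
        contributions.put("CB", Arrays.asList("LoggingPolicySet"));

        // 1. read ALL of the contributions and pool their policy sets first
        List<String> domainPolicySets = new ArrayList<String>();
        for (List<String> policySets : contributions.values()) {
            domainPolicySets.addAll(policySets);
        }

        // 2. only then process the composites, so a composite in CB can use a
        //    policy set declared in CA regardless of the order in which the
        //    contributions were added
        System.out.println("Building composites against pooled policy sets: " + domainPolicySets);
    }
}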



 Thanks

 - Venkat



Re: Can't get build.xml to work with calculator-distributed

2008-03-19 Thread Simon Laws
On Wed, Mar 19, 2008 at 5:57 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  I'm trying to run the calculator-distribute sample with the workspace
  changes from an ant build.xml. I'm getting
 
  runDomain:
   [java] 19-Mar-2008 11:23:38
  org.apache.tuscany.sca.workspace.admin.launcher
  .DomainAdminLauncher main
   [java] INFO: Apache Tuscany SCA Domain Administration starting...
   [java] 19-Mar-2008 11:23:39
  org.apache.tuscany.sca.workspace.admin.launcher
  .DomainAdminLauncherUtil collectJARFiles
   [java] INFO: Runtime classpath: 153 JARs from C:\simon\tuscany\sca-
  java-1.2
  \distribution\target\apache-
  tuscany-sca-1.2-incubating-SNAPSHOT.dir\tuscany-sca-
  1.2-incubating-SNAPSHOT\lib
   [java] 19-Mar-2008 11:23:39
  org.apache.tuscany.sca.workspace.admin.launcher
  .DomainAdminLauncher main
   [java] SEVERE: SCA Domain Administration could not be started
   [java] java.lang.ClassNotFoundException:
  org.apache.tuscany.sca.workspace.a
  dmin.launcher.DomainAdminLauncherBootstrap
   [java] at java.lang.Class.forName(Class.java:163)
   [java] at
  org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
  uncher.main(DomainAdminLauncher.java:53)
   [java] at node.LaunchDomain.main(LaunchDomain.java:30)
   [java] Exception in thread main java.lang.ClassNotFoundException:
  org.apa
  che.tuscany.sca.workspace.admin.launcher.DomainAdminLauncherBootstrap
   [java] at java.lang.Class.forName(Class.java:163)
   [java] at
  org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
  uncher.main(DomainAdminLauncher.java:53)
   [java] at node.LaunchDomain.main(LaunchDomain.java:30)
   [java] Java Result: 1
 
  Now the classpath looks ok to me in that it includes the tuscany
 manifest
  jar so should have all the dependencies.

 Does the manifest reference tuscany-workspace-admin-1.2-incubating.jar?

 The distribution I built yesterday didn't have it, but I saw some
 commits from Luciano changing the distro assembly files yesterday...

 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


Yeah, I should have said that I made those changes locally. The manifest
references the jars. Should it still work with a reference to the manifest
jar, or do I need to go and set TUSCANY_HOME, which is mentioned in the code?
If it should work as is I'll investigate more. I just didn't want to spend
the time until I knew that I was going in the right direction.

Simon


Re: Keeping up with the dev list and the flood of JIRA messages

2008-03-20 Thread Simon Laws
On Wed, Mar 19, 2008 at 10:14 PM, Raymond Feng [EMAIL PROTECTED] wrote:

 I have a mail rule/filter set up to route the JIRA messages into a
 separate
 folder in my inbox.

 Thanks,
 Raymond
 --
 From: Jean-Sebastien Delfino [EMAIL PROTECTED]
 Sent: Wednesday, March 19, 2008 2:40 PM
 To: tuscany-dev tuscany-dev@ws.apache.org
 Subject: Keeping up with the dev list and the flood of JIRA messages

  Hi,
 
  Just curious, are people able to keep up with the list discussions in
 the
  middle of that flood of JIRA messages?
 
  Is everybody routing JIRAs to a separate folder? I'm finding it
 difficult
  to see through the traffic without doing that.
 
  Thoughts? Can we improve this to make it easier for people to follow?
  --
  Jean-Sebastien
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 I like to see the JIRA coming in on the dev list. I manually filter so
that I at least get a sense of what's going on.

Simon


Re: Verification Testing

2008-03-20 Thread Simon Laws
On Wed, Mar 19, 2008 at 8:26 PM, Dan Becker [EMAIL PROTECTED] wrote:

 Simon Nash wrote:
  Kevin Williams wrote:
  I am thinking of adding a new test bucket specifically for
  verification testing against the specification set.  I believe it
  would add value to the project and may also be a place where
  developers new to Tuscany might contribute.  Does this sound like a
  reasonable idea?

 +1

 I think it is very useful and will be a good way to make piece-of-mind
 regression tests.

 --
 Thanks, Dan Becker

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


+1 excellent idea

Simon


Re: Build error on java\sca -- Error extracting plugin descriptor: 'No mojo descriptors found in plugin.

2008-03-20 Thread Simon Laws
On Thu, Mar 20, 2008 at 8:35 AM, Vamsavardhana Reddy [EMAIL PROTECTED]
wrote:

 Forgot to mention in my earlier post.  I am using Maven 2.0.6, Sun JDK
 1.5.0on Windows XP.

 ++Vamsi

 On Thu, Mar 20, 2008 at 2:02 PM, Vamsavardhana Reddy [EMAIL PROTECTED]
 wrote:

  I am hitting a build error on trunk, i.e. java/sca.  The error message
 is
  Error extracting plugin descriptor: 'No mojo descriptors found in
 plugin.
  Any hints on how to resolve this problem?  Output from command window is
  given below.
 
 
  [INFO]
 
 -
  ---
  [INFO] Building Apache Tuscany SCA Definitions Shade Transformer for
  Distributio
  n Bundle
  [INFO]task-segment: [install]
  [INFO]
 
 -
  ---
  [INFO] [plugin:descriptor]
  [INFO] Using 2 extractors.
  [INFO] Applying extractor for language: java
  [INFO] Extractor for language: java found 0 mojo descriptors.
  [INFO] Applying extractor for language: bsh
  [INFO] Extractor for language: bsh found 0 mojo descriptors.
  [INFO]
  
  [ERROR] BUILD ERROR
  [INFO]
  
  [INFO] Error extracting plugin descriptor: 'No mojo descriptors found in
  plugin.
  '
 
  [INFO]
  
  [INFO] For more information, run Maven with the -e switch
  [INFO]
  
  [INFO] Total time: 7 minutes 56 seconds
  [INFO] Finished at: Thu Mar 20 13:50:19 IST 2008
  [INFO] Final Memory: 63M/118M
  [INFO]
  
 


Hi Vamsi

I'm not seeing this problem. Are you sure you want to build the
distribution?

Simon


Re: Can't get build.xml to work with calculator-distributed

2008-03-20 Thread Simon Laws
On Thu, Mar 20, 2008 at 7:46 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Luciano Resende wrote:
  Do you still see issues after revision #639171 ? If so, could you
  please give me the names of missing jars ? I have tried to capture the
  differences in [1]
 
  [1]
 http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Release+-+Java+SCA+1.2#Release-JavaSCA1.2-Modulesincludedinthedistribution
 
  On Wed, Mar 19, 2008 at 11:29 PM, Jean-Sebastien Delfino
  [EMAIL PROTECTED] wrote:
  Simon Laws wrote:
On Wed, Mar 19, 2008 at 5:57 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:
   
Simon Laws wrote:
I'm trying to run the calculator-distribute sample with the
 workspace
changes from an ant build.xml. I'm getting
   
runDomain:
 [java] 19-Mar-2008 11:23:38
org.apache.tuscany.sca.workspace.admin.launcher
.DomainAdminLauncher main
 [java] INFO: Apache Tuscany SCA Domain Administration
 starting...
 [java] 19-Mar-2008 11:23:39
org.apache.tuscany.sca.workspace.admin.launcher
.DomainAdminLauncherUtil collectJARFiles
 [java] INFO: Runtime classpath: 153 JARs from
 C:\simon\tuscany\sca-
java-1.2
\distribution\target\apache-
tuscany-sca-1.2-incubating-SNAPSHOT.dir\tuscany-sca-
1.2-incubating-SNAPSHOT\lib
 [java] 19-Mar-2008 11:23:39
org.apache.tuscany.sca.workspace.admin.launcher
.DomainAdminLauncher main
 [java] SEVERE: SCA Domain Administration could not be started
 [java] java.lang.ClassNotFoundException:
org.apache.tuscany.sca.workspace.a
dmin.launcher.DomainAdminLauncherBootstrap
 [java] at java.lang.Class.forName(Class.java:163)
 [java] at
org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
uncher.main(DomainAdminLauncher.java:53)
 [java] at node.LaunchDomain.main(LaunchDomain.java:30)
 [java] Exception in thread main
 java.lang.ClassNotFoundException:
org.apa
   
 che.tuscany.sca.workspace.admin.launcher.DomainAdminLauncherBootstrap
 [java] at java.lang.Class.forName(Class.java:163)
 [java] at
org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
uncher.main(DomainAdminLauncher.java:53)
 [java] at node.LaunchDomain.main(LaunchDomain.java:30)
 [java] Java Result: 1
   
Now the classpath looks ok to me in that it includes the tuscany
manifest
jar so should have all the dependencies.
Does the manifest reference
 tuscany-workspace-admin-1.2-incubating.jar?
   
The distribution I built yesterday didn't have it, but I saw some
commits from Luciano changing the distro assembly files
 yesterday...
   
--
Jean-Sebastien
   
   
 -
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
   
   
Yeah, I should have said that I made those changes locally. The
 manifest
references the jars. Should is still work with a reference to the
 manifest
jar or do I need to go and set TUSCANY_HOME which is mentioned in
 the code.
If it should work as is I'll investigate more. I just didn't want to
 spend
the time until I knew that I was going in the right direction.
   
Simon
   
 
   I started to fix this issue in revision r639167 (trunk) and r639170 (
 1.2
   branch) although I'm still having problems with tuscany-sca-manifest
 as
   it's missing a number of JARs.
 
   As a result of these changes the domain admin app can now be started
 as:
   java -jar .../modules/tuscany-node2-launcher-1.2-incubating.jar domain
 
   --
 
 
  Jean-Sebastien
 

 The maintenance of manifest/pom.xml and bundle/pom.xml is really error
 prone :(

 I fixed the errors I could see in these poms, added some missing JARs
 and removed obsolete references to the old feed binding JARs.

 I also fixed incorrect class names in calculator-distributed/build.xml.

 I am able to start the domain and nodes from calculator-distributed with
 these fixes (SVN revision r639187) but then I'm seeing a weird NPE in
 the SDO runtime:

  [java] Caused by: java.lang.NullPointerException
  [java] at
 commonj.sdo.impl.HelperProvider.getDefaultContext(HelperProvider.java:379)
  [java] at
 org.apache.tuscany.sca.databinding.sdo.SDODataBinding.introspect(
 SDODataBinding.java:61)
  [java] at

 org.apache.tuscany.sca.databinding.DefaultDataBindingExtensionPoint$LazyDataBinding.introspect
 (DefaultDataBindingExtensionPoint.java:191)
  [java] at

 org.apache.tuscany.sca.databinding.DefaultDataBindingExtensionPoint.introspectType
 (DefaultDataBindingExtensionPoint.java:246)
  [java] at
 ...

 It is not specific to calculator-distributed, as I can see the same
 exception in other samples.

 Any idea?
 --
 Jean-Sebastien

 -
 To unsubscribe, e-mail

New dependency on Drools

2008-03-20 Thread Simon Laws
I've just committed the patch from TUSCANY-2099. The patch is a few more
steps on the way to getting the workpool demo running with the latest code
and it introduces a new dependency on Drools. It's ASL2 licenses but I want
to call it out here in case anyone has any concerns.

Thanks

Simon


Re: New dependency on Drools

2008-03-20 Thread Simon Laws
On Thu, Mar 20, 2008 at 1:04 PM, Simon Laws [EMAIL PROTECTED]
wrote:

 I've just committed the patch from TUSCANY-2099. The patch is a few more
 steps on the way to getting the workpool demo running with the latest code
 and it introduces a new dependency on Drools. It's ASL2 licenses but I want
 to call it out here in case anyone has any concerns.

 Thanks

 Simon


I should have said this only affects trunk and is not a 1.2 dependency.

Simon


Re: Can't get build.xml to work with calculator-distributed

2008-03-20 Thread Simon Laws
On Thu, Mar 20, 2008 at 12:56 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Thu, Mar 20, 2008 at 7:46 AM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

  Luciano Resende wrote:
   Do you still see issues after revision #639171 ? If so, could you
   please give me the names of missing jars ? I have tried to capture the
   differences in [1]
  
   [1]
  http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Release+-+Java+SCA+1.2#Release-JavaSCA1.2-Modulesincludedinthedistribution
  
   On Wed, Mar 19, 2008 at 11:29 PM, Jean-Sebastien Delfino
   [EMAIL PROTECTED] wrote:
   Simon Laws wrote:
 On Wed, Mar 19, 2008 at 5:57 PM, Jean-Sebastien Delfino 
 [EMAIL PROTECTED] wrote:

 Simon Laws wrote:
 I'm trying to run the calculator-distribute sample with the
  workspace
 changes from an ant build.xml. I'm getting

 runDomain:
  [java] 19-Mar-2008 11:23:38
 org.apache.tuscany.sca.workspace.admin.launcher
 .DomainAdminLauncher main
  [java] INFO: Apache Tuscany SCA Domain Administration
  starting...
  [java] 19-Mar-2008 11:23:39
 org.apache.tuscany.sca.workspace.admin.launcher
 .DomainAdminLauncherUtil collectJARFiles
  [java] INFO: Runtime classpath: 153 JARs from
  C:\simon\tuscany\sca-
 java-1.2
 \distribution\target\apache-
 tuscany-sca-1.2-incubating-SNAPSHOT.dir\tuscany-sca-
 1.2-incubating-SNAPSHOT\lib
  [java] 19-Mar-2008 11:23:39
 org.apache.tuscany.sca.workspace.admin.launcher
 .DomainAdminLauncher main
  [java] SEVERE: SCA Domain Administration could not be
  started
  [java] java.lang.ClassNotFoundException:
 org.apache.tuscany.sca.workspace.a
 dmin.launcher.DomainAdminLauncherBootstrap
  [java] at java.lang.Class.forName(Class.java:163)
  [java] at
 org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
 uncher.main(DomainAdminLauncher.java:53)
  [java] at node.LaunchDomain.main(LaunchDomain.java:30)
  [java] Exception in thread main
  java.lang.ClassNotFoundException:
 org.apa

  che.tuscany.sca.workspace.admin.launcher.DomainAdminLauncherBootstrap
  [java] at java.lang.Class.forName(Class.java:163)
  [java] at
 org.apache.tuscany.sca.workspace.admin.launcher.DomainAdminLa
 uncher.main(DomainAdminLauncher.java:53)
  [java] at node.LaunchDomain.main(LaunchDomain.java:30)
  [java] Java Result: 1

 Now the classpath looks ok to me in that it includes the tuscany
 manifest
 jar so should have all the dependencies.
 Does the manifest reference
  tuscany-workspace-admin-1.2-incubating.jar?

 The distribution I built yesterday didn't have it, but I saw some
 commits from Luciano changing the distro assembly files
  yesterday...

 --
 Jean-Sebastien


  -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


 Yeah, I should have said that I made those changes locally. The
  manifest
 references the jars. Should is still work with a reference to the
  manifest
 jar or do I need to go and set TUSCANY_HOME which is mentioned in
  the code.
 If it should work as is I'll investigate more. I just didn't want
  to spend
 the time until I knew that I was going in the right direction.

 Simon

  
I started to fix this issue in revision r639167 (trunk) and r639170
  (1.2
branch) although I'm still having problems with tuscany-sca-manifest
  as
it's missing a number of JARs.
  
As a result of these changes the domain admin app can now be started
  as:
java -jar .../modules/tuscany-node2-launcher-1.2-incubating.jardomain
  
--
  
  
   Jean-Sebastien
  
 
  The maintenance of manifest/pom.xml and bundle/pom.xml is really error
  prone :(
 
  I fixed the errors I could see in these poms, added some missing JARs
  and removed obsolete references to the old feed binding JARs.
 
  I also fixed incorrect class names in calculator-distributed/build.xml.
 
  I am able to start the domain and nodes from calculator-distributed with
  these fixes (SVN revision r639187) but then I'm seeing a weird NPE in
  the SDO runtime:
 
   [java] Caused by: java.lang.NullPointerException
   [java] at
  commonj.sdo.impl.HelperProvider.getDefaultContext(HelperProvider.java
  :379)
   [java] at
  org.apache.tuscany.sca.databinding.sdo.SDODataBinding.introspect(
  SDODataBinding.java:61)
   [java] at
 
  org.apache.tuscany.sca.databinding.DefaultDataBindingExtensionPoint$LazyDataBinding.introspect
  (DefaultDataBindingExtensionPoint.java:191)
   [java] at
 
  org.apache.tuscany.sca.databinding.DefaultDataBindingExtensionPoint.introspectType
  (DefaultDataBindingExtensionPoint.java:246)
   [java

Re: Build error on java\sca -- Error extracting plugin descriptor: 'No mojo descriptors found in plugin.

2008-03-20 Thread Simon Laws
On Thu, Mar 20, 2008 at 12:37 PM, Simon Laws [EMAIL PROTECTED]
wrote:



 On Thu, Mar 20, 2008 at 8:35 AM, Vamsavardhana Reddy [EMAIL PROTECTED]
 wrote:

  Forgot to mention in my earlier post.  I am using Maven 2.0.6, Sun JDK
  1.5.0on Windows XP.
 
  ++Vamsi
 
  On Thu, Mar 20, 2008 at 2:02 PM, Vamsavardhana Reddy 
  [EMAIL PROTECTED]
  wrote:
 
   I am hitting a build error on trunk, i.e. java/sca.  The error message
  is
   Error extracting plugin descriptor: 'No mojo descriptors found in
  plugin.
   Any hints on how to resolve this problem?  Output from command window
  is
   given below.
  
  
   [INFO]
  
  -
   ---
   [INFO] Building Apache Tuscany SCA Definitions Shade Transformer for
   Distributio
   n Bundle
   [INFO]task-segment: [install]
   [INFO]
  
  -
   ---
   [INFO] [plugin:descriptor]
   [INFO] Using 2 extractors.
   [INFO] Applying extractor for language: java
   [INFO] Extractor for language: java found 0 mojo descriptors.
   [INFO] Applying extractor for language: bsh
   [INFO] Extractor for language: bsh found 0 mojo descriptors.
   [INFO]
  
  
   [ERROR] BUILD ERROR
   [INFO]
  
  
   [INFO] Error extracting plugin descriptor: 'No mojo descriptors found
  in
   plugin.
   '
  
   [INFO]
  
  
   [INFO] For more information, run Maven with the -e switch
   [INFO]
  
  
   [INFO] Total time: 7 minutes 56 seconds
   [INFO] Finished at: Thu Mar 20 13:50:19 IST 2008
   [INFO] Final Memory: 63M/118M
   [INFO]
  
  
  
 

 Hi Vamsi

 I'm not seeing this problem. Are you sure you want to build the
 distribution?

 Simon

I take it back; I realize that the issue you have here is with the new shade
transformer that we have, which lives in tools/maven/maven-definitions. I
believe this is only used during the bundle build, which I expect you don't
need to do, so you should be safe to skip this module for now.

Unfortunately I still have no idea why it's failing for you.

Regards

Simon


Re: [SCA 1.2] TUSCANY-2115 branch cleanup

2008-03-25 Thread Simon Laws
On Mon, Mar 24, 2008 at 4:24 AM, Luciano Resende [EMAIL PROTECTED]
wrote:

 On Sun, Mar 23, 2008 at 8:54 PM, Jean-Sebastien Delfino
 [EMAIL PROTECTED] wrote:
  Luciano Resende wrote:
As part of TUSCANY-2115 [1] I have some local changes to remove the
following projects :
   ...
   - tutorial/nodes-jee
 
   I'm OK with excluding nodes-jee
 
   I'll move tutorial/nodes-jee/catalog-webapp to tutorial as that one
   actually works.

 Sure, please let me know when you are done with these changes by
 adding a comment to TUSCANY-2115

 
   ...
   - demos/workpool-distributed
 
   I thought that demo was working (although I just tried it and am
 getting
   errors), but I find it really interesting. What needs to be done to
 keep
   it in the release?
 

 I thought this was a work in progress from Giorgio, please let me know
 if this is actually working and ready for the release.


   --
   Jean-Sebastien
 
   -
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, e-mail: [EMAIL PROTECTED]
 
 



 --
 Luciano Resende
 Apache Tuscany Committer
 http://people.apache.org/~lresende http://people.apache.org/%7Elresende
 http://lresende.blogspot.com/

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


Hi

The workpool demo is a work in progress and won't be in 1.2.

The modules

  - modules/node
  - modules/node-api
  - modules/node-impl

need to stay, as we took the decision on the release contents IRC not to roll
out the new workspace to all of the modules that currently depend on the
existing domain/node implementation.

Simon


Re: Component Service and WebSphere.

2008-03-25 Thread Simon Laws
On Tue, Mar 25, 2008 at 11:11 AM, Sandeep Raman [EMAIL PROTECTED]
wrote:

 Hi ,

 I followed the same blog. But culdnt get it working.
 What else can i try out.


 Regards,
 Sandeep.

 Simon Laws [EMAIL PROTECTED] wrote on 03/25/2008 04:31:23 PM:

  On Tue, Mar 25, 2008 at 10:33 AM, Sandeep Raman [EMAIL PROTECTED]
  wrote:
 
   Hi,
  
   I have deployed a composite application as a war in tomcat and am able
 to
   get the component service given using binding.ws Uri coming up. The
 same
   war in websphere doesnt come up with the context root.
   Is there anything I need to do to deploy a web application based on
   tuscany in websphere.
  
   Regards
   Sandeep
   =-=-=
   Notice: The information contained in this e-mail
   message and/or attachments to it may contain
   confidential or privileged information. If you are
   not the intended recipient, any dissemination, use,
   review, distribution, printing or copying of the
   information contained in this e-mail message
   and/or attachments to it are strictly prohibited. If
   you have received this communication in error,
   please notify us by reply e-mail or telephone and
   immediately and permanently delete the message
   and any attachments. Thank you
  
  
  
  Hi Sandeep
 
  Sebastien made some notes of how to get Tuscany apps working in
 WebSphere
  [1]. Can you take a look and see if they help
 
  Regards
 
  Simon
 
  [1]
 
 http://jsdelfino.blogspot.com/2007/10/how-to-use-apache-tuscany-with.html

  ForwardSourceID:NT9736
 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain
 confidential or privileged information. If you are
 not the intended recipient, any dissemination, use,
 review, distribution, printing or copying of the
 information contained in this e-mail message
 and/or attachments to it are strictly prohibited. If
 you have received this communication in error,
 please notify us by reply e-mail or telephone and
 immediately and permanently delete the message
 and any attachments. Thank you



Hi Sandeep

Can you provide some more information about the problem you are having? For
example:

How far did you get through the instructions on Sebastien's blog before you
had problems?
What errors are you seeing in the application server logs?
What version of the application server are you running with?
What version of the Tuscany code are you working with?
Have you tried the same steps with the calculator-webapp sample?

Thanks

Simon


Re: Composite Builder and some questions?

2008-03-25 Thread Simon Laws
Hi Giorgio

Sorry for the slow response; I've been out for a few days. Some comments inline.

Simon

On Fri, Mar 21, 2008 at 5:27 PM, Giorgio Zoppi [EMAIL PROTECTED]
wrote:

 Hi,
 the next patch for my demo it will be in the CompositeBuilder. I have
 to do refactoring in this
 area to allow a fine grain on updating a composite. Are there things
 that i should give a particular attention? (I've already have a patch
 but i'd like to discuss on this area before creating a jira).


Can you say a little more about fine-grained updating and the impact on the
composite builder? For example, are you looking at how to build individual
parts of a composite rather than a full composite?


 I've another question. What happens if i stop a composite in a node
 and there's an incoming call to a component inside that composite?
 That call is put in a queue or not?


It depends on what sort of binding you have. For example, if the binding is a
JMS binding you would expect the message to be held on a queue provided by
the messaging infrastructure until the composite/node is (re-)started. For
other bindings, such as binding.ws, the sending reference will not be able to
connect to the web service endpoint that the stopped service would normally
provide, and message delivery will fail with a suitable error message at the
client.
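
(For the binding.ws case that just means the caller sees a runtime exception
it can catch. A sketch only - CalculatorService is an assumed example
interface, and the exception shown is the general SCA one rather than
anything binding-specific:)

// Sketch: a client guarding a call to a service whose hosting composite/node
// may have been stopped. CalculatorService is an assumed example interface.
import org.osoa.sca.ServiceRuntimeException;
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Remotable;

@Remotable
interface CalculatorService {
    double add(double n1, double n2);
}

public class CalculatorClientImpl {

    @Reference
    protected CalculatorService calculatorService;

    public double safeAdd(double n1, double n2) {
        try {
            return calculatorService.add(n1, n2);
        } catch (ServiceRuntimeException e) {
            // With binding.ws the endpoint is simply unreachable while the target
            // composite is stopped, so the invocation fails rather than being queued
            System.err.println("Service currently unavailable: " + e.getMessage());
            throw e;
        }
    }
}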



 Cheers,
 Giorgio.

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: [NOTICE] Giorgio Zoppi voted as Tuscany committer

2008-03-26 Thread Simon Laws
On Wed, Mar 26, 2008 at 11:22 AM, Vamsavardhana Reddy [EMAIL PROTECTED]
wrote:

 Congratulations Giorgio!!

 ++Vamsi

 On Wed, Mar 26, 2008 at 2:31 PM, ant elder [EMAIL PROTECTED] wrote:

  The Tuscany PPMC and Incubator PMC have voted for Giorgio Zoppi to
 become
  a
  Tuscany committer.
 
  Could you submit an Apache CLA so i can get your userid and access
 sorted
  out, you can find out about the Contributor License Agreement at
  http://www.apache.org/licenses/#clas
 
 
  Congratulations and welcome Giorgio!
 
...ant
 


Congrats Giorgio and welcome.

Simon


Re: Tuscany composite validation

2008-03-26 Thread Simon Laws
Hi Hasan

Adriano is correct about the XSD validation. I've made some more comments
inline. Looking at the range of questions you are asking, maybe what we could
do is create an itest to cover the range of validation features that Tuscany
should support, and we can concentrate there on improving the usability story
and of course on developing the APIs to deliver it.

Regards

Simon

On Tue, Mar 25, 2008 at 4:50 PM, Adriano Crestani 
[EMAIL PROTECTED] wrote:

 Hi Hasan,

 As far as I know, the validation is done by SCA on composite files. It
 uses
 the tuscany-sca.xsd file. You can find it at:

 https://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/assembly-xsd/src/main/resources

 Regards,
 Adriano Crestani

 On Tue, Mar 25, 2008 at 8:44 AM, Hasan Muhammad [EMAIL PROTECTED] wrote:

  Hi Simon,
 
  I was wondering whether tuscany does any validation of the composites
 and
  if
  so, to what extent? If not, what is the api (if any exists) that we can
  use
  to do validation ourselves? If not the api, then how can we obtain
  information to do this validation? We would to know this in light of
  Workspace and ContributionManager.


Currently there are two main types of validation that occur on composites.

- XSD validation - as Adriano points out, see the schemas in
modules/assembly-xsd. These are applied when contributions are read. If you
look at the ReallySmallRuntimeBuilder you can see how these XSDs are loaded
and also what you would have to do to load your own schema for validation
purposes.

- Programmatic validation - the assembly builder checks that composites
are properly specified as far as possible w.r.t. the rules from the SCA
specification, e.g. missing or duplicate names, reference/service matching
etc. The same builder code is used regardless of how the runtime is being
started. So, for example, in the new workspace code the builder is called
when a configured composite is requested in the
DeployableCompositeCollectionImpl.doGet() method.

In the case of the assembly builders you will notice in the builder code that
a CompositeBuilderMonitor is used to capture any validation issues. The
monitor is called through a local warning() method, so it looks like this
could do with a bit of a clean-up. This is the extent of the API we have for
this at the moment. The workspace code is not great in this respect and just
logs validation errors to the underlying logger infrastructure. If you want to
capture validation errors, this is where you will be plugging in.
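
(If you just want to see what the XSD check alone would catch, the JDK's
schema validator will do it outside the runtime. A standalone sketch, nothing
Tuscany-specific, and the two file names are just placeholders:)

// Standalone sketch of validating a composite file against the SCA schema.
// "tuscany-sca.xsd" and "Calculator.composite" are placeholder file names.
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class CompositeXsdCheck {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("tuscany-sca.xsd"));
        Validator validator = schema.newValidator();
        // Throws a SAXException with line/column information if the composite
        // is not valid against the schema
        validator.validate(new StreamSource(new File("Calculator.composite")));
        System.out.println("Composite is schema-valid");
    }
}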



 
  Also, can we get a list of all error/warning messages related to the
  particular contribution and the respected category? By category, i mean
  whether the error/warning is for schema validation, or implementation
 type
  error, etc.


We don't maintain message catalogs at the moment so we have to search the
code to find the messages. For example, the output (with a little editing)
of

grep -R --include=*.java warning *

assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Duplicate component name: " + composite.getName()
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Property not found for component property: " + component.getName()
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Component property mustSupply attribute incompatible with property: " + component
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("No value configured on a mustSupply property: " + component.getName()
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Component property many attribute incompatible with property: " + component
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("No type specified on component property: " + component.getName()
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Reference not found for component reference: " + component.getName()
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Component reference multiplicity incompatible with reference multiplicity: " + component
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Component reference interface incompatible with reference interface: " + component
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Service not found for component service: " + component.getName()
assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java:
warning("Component service interface incompatible with service interface: "

Re: Adding conversational intents as described in the assembly spec

2008-03-26 Thread Simon Laws
On Wed, Mar 26, 2008 at 10:13 AM, Vamsavardhana Reddy [EMAIL PROTECTED]
wrote:

 I started investigating TUSCANY-2112. The approach I am taking is to make
 something like the following work without having to annotate the Java
 interfaces with @Conversational etc.

<component name="MyConvServiceComponent">
    <implementation.java class="org.apache.tuscany.sca.mytest.MyConvServiceImpl"/>
    <service name="MyConvService" requires="sca:conversational">
        <interface.java interface="org.apache.tuscany.sca.mytest.MyConvService"
            callbackInterface="org.apache.tuscany.sca.mytest.MyConvCallback"/>
        <operation name="endConversation" requires="tuscany:endsConversation"/>
        <binding.ws/>
        <callback>
            <binding.ws/>
        </callback>
    </service>
</component>

<component name="MyConvClientComponent">
    <implementation.java class="org.apache.tuscany.sca.mytest.MyConvClientImpl"/>
    <reference name="myConvService" target="MyConvServiceComponent"
        requires="sca:conversational">
        <interface.java interface="org.apache.tuscany.sca.mytest.MyConvService"
            callbackInterface="org.apache.tuscany.sca.mytest.MyConvCallback"/>
        <operation name="endConversation" requires="tuscany:endsConversation"/>
        <binding.ws uri="/MyConvServiceComponent"/>
    </reference>
</component>

 I have tried a fix in Axis2ServiceBindingProvider and
 Axis2ReferenceBindingProvider (where I noticed some TODOs) to set the
 conversation-related flags based on the intents. But then there seem to be
 some other problems, like the intents not being propagated to references
 etc., for which I have created a JIRA. Even when I set the flags explicitly
 through the debugger by changing the values, I ended up with a
 "pass-by-value not allowed" exception due to some object serialization
 problems. It appears the intent processing should be done at a higher level
 than the bindings, perhaps when the component instance is created!

 Some questions I have:
 1. About setting the endsConversation intent on callback methods: the
 <callback> tag does not seem to allow <operation> tags inside. Should these
 intents be set under the <service> tag itself? In that case we will need to
 qualify the operation name so that it is recognized as coming from the
 callback interface.
 2. Should the callback interface inherit the intents from the service? Or
 should the intents be set on the callback instead?


Hi Vamsi

The intent processing needs to be generic, i.e. not tied to the web service
bindings if possible, and applied to the interfaces described on services in
the assembly model. The objective is to have
org.apache.tuscany.sca.interfacedef.Interface.isConversational() return true
when the intent is present. However, I note that Raymond has made the
following comment against this method:

// FIXME: [rfeng] We need to re-consider the conversational as an intent

So the time is now nigh ;-) Maybe it's the case that we flip this round and
introduce intents on the services and references based on the settings in
any of:

- the Java interface
- the WSDL interface
- intents in the composite file

and then check for these intents in the places that currently check for the
interface flags being set.
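
Roughly the kind of check I mean, sketched with plain QNames rather than the
real policy model types - the intent name and the idea of handing in the
attached intent QNames as a list are just assumptions for illustration:

import java.util.List;
import javax.xml.namespace.QName;

// Sketch only: treat a service or reference as conversational when the
// sca:conversational intent appears among its attached intents, rather
// than relying on a flag on the interface.
public class ConversationalIntentCheck {

    private static final QName CONVERSATIONAL =
        new QName("http://www.osoa.org/xmlns/sca/1.0", "conversational");

    public static boolean isConversational(List<QName> attachedIntents) {
        return attachedIntents.contains(CONVERSATIONAL);
    }
}

The places that currently look at Interface.isConversational() could then
switch over to a check of this shape once the intents are populated from the
Java interface, the WSDL interface or the composite file.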

We need some advice from the policy experts as to where this kind of
processing could take place. It would seem to belong to the build phase but
we are talking about a very specific intent here so it's not generic
intent/policy processing.

You could go ahead and create an itest (conversations-intent?) for these
policy-driven cases so we can start running up a (non-)working example.

Regards

Simon


Re: Error in WebSphere for Component Service(Urgent for me)

2008-03-27 Thread Simon Laws
On Thu, Mar 27, 2008 at 12:25 PM, Sandeep Raman [EMAIL PROTECTED]
wrote:

 Hi,

 I get this error:

 Error 404: SRVE0190E: File not found: /ComposerService

 when touching my URL:
 http://172.19.103.18:9080/LOSComposite/ComposerService
 in WebSphere while calling my composite application with a binding.ws
 specified. The endpoints are established and I get the information in the
 log. No errors appear.

 When I go to the address
 http://172.19.103.18:9080/LOSComposite/ComposerService?wsdl
 I get
 Invalid at the top level of the document. Error processing resource
 'http://172.19.103.18:9080/LOSComposite/ComposerService...
 /wsdl:definitions

 I am using WebSphere 6.1.0.3 with patch 6.1.0.9 as specified in
 Sebastien's blog.
 Can someone give me a solution on how I can resolve this? It is a bit
 urgent for me as I have a demo for this tomorrow.
 Regards
 Sandeep.




Hi Sandeep

Is this error being reported from the application you have attached to
TUSCANY-2144? If so, I'll try and run it up here. If not, can you attach the
application you are having problems with?

Can you tell me if you have successfully started and accessed
samples/calculator-webapp on the WebSphere configuration you have? If not, I
would like us to work together to get that working first.

Regards

Simon


Re: [VOTE] Release Tuscany Java SCA 1.2-incubating (RC2)

2008-03-27 Thread Simon Laws
On Wed, Mar 26, 2008 at 8:25 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Luciano Resende wrote:
  Please review and vote on the 1.2 release artifacts of Tuscany SCA for
 Java.
 
  The artifacts are available for review at:
  http://people.apache.org/~lresende/tuscany/sca-1.2-RC2/
 
 
  This includes the signed binary and source distributions, the RAT
 report,
  and the Maven staging repository.
 
  The eclipse updatesite for the Tuscany Eclipse plugins is available at:
  http://people.apache.org/~lresende/tuscany/sca-1.2-RC2/updatesite/
 
  The release tag is available at :
  http://svn.apache.org/repos/asf/incubator/tuscany/tags/java/sca/1.2-RC2/
 
  If you do find issues with the release candidate that you think need
  to be fixed and lead to a -1, please review and fix them in the 1.2 branch
  or raise JIRAs targeting the 1.2 release.
 

 It looks like the script used to change 1.2-incubating-SNAPSHOT to
 1.2-incubating missed some files in the Eclipse plugins. I've created
 JIRA TUSCANY-2142 [1] to track that issue and describe a workaround.

 [1] http://issues.apache.org/jira/browse/TUSCANY-2142
 --
 Jean-Sebastien


I made a start by reviewing the binary distro.

I've been through all the samples and revisited the samples table on the
release page [1]. I also created a demos table, and I created JIRAs where I
found problems.

Licenses
===

Some reformatting didn't get copied back from 1.1, primarily putting all the
jars on separate lines with complete version numbers. These are the diffs I
found:

LICENSE                     /lib
                            geronimo-activation_1.0.2_spec-1.1.jar
jaxen-1.1-1.jar             jaxen-1.1.1.jar
Groovy                      groovy-all-minimal-1.5.4.jar
Jython                      jython-2.2.jar
jsr181-api                  jsr181-api-1.0-MR1.jar
jsr250-api                  jsr250-api-1.0.jar
mail                        mail-1.4.jar
saaj-api                    saaj-api-1.3.jar
wsdl4j                      wsdl4j-1.6.2.jar
backport-util-concurrent:   backport-util-concurrent-2.2.jar
serp                        serp-1.12.0.jar
axion                       axion-1.0-M3-dev.jar
javacc                      javacc-3.2.jar
howl                        howl-1.0.1-1.jar
dojotoolkit
activeio-core-3.0.0-incubator.jar
activemq-core-4.1.1.jar
addressing-1.3.mar
axis2-adb-codegen-1.3.jar
maven-artifact-2.0.2.jar
maven-artifact-manager-2.0.2.jar
maven-error-diagnostics-2.0.2.jar
maven-model-2.0.2.jar
maven-profile-2.0.2.jar
maven-project-2.0.2.jar
maven-repository-metadata-2.0.2.jar
maven-settings-2.0.2.jar
opensaml-1.1.jar
wagon-file-1.0-alpha-7.jar
wagon-http-lightweight-1.0-alpha-6.jar
wagon-provider-api-1.0-alpha-6.jar
xbean-2.1.0.jar
xmlParserAPIs-2.6.2.jar

Junk files
===

./demos/bigbank/src/main/resources/web/dojo
./tutorial/store-db/target/cart-db/log
./tutorial/store-eu/target/cart-eu-db/log
./tutorial/store-supplier/target/cart-db/log
./samples/calculator-webapp/target/war/work
./samples/calculator-ws-webapp/target/war/work
./samples/chat-webapp/target/war/work
./samples/feed-aggregator-webapp/target/war/work
./samples/helloworld-dojo-webapp/target/war/work
./samples/helloworld-jsonrpc-webapp/target/war/work
./samples/helloworld-ws-sdo-webapp/target/war/work
./tutorial/store-db/target/cart-db/tmp
./tutorial/store-eu/target/cart-eu-db/tmp
./tutorial/store-supplier/target/cart-db/tmp

RAT
===

The output is hard to read as it reports on all the dojo files. I did spot:

 ===
 
==./modules/interface-java-jaxws/src/test/java/org/apache/tuscany/sca/interfacedef/java/jaxws/Bean.java
 ===
 package org.apache.tuscany.sca.interfacedef.java.jaxws;

public interface Bean<T> {
    public T getP1();
}

 ===
 
==./modules/interface-java-jaxws/src/test/java/org/apache/tuscany/sca/interfacedef/java/jaxws/Bean1.java
 ===
 package org.apache.tuscany.sca.interfacedef.java.jaxws;

public class Bean1 {
    private String p1;
    private int p2;

    public String getP1() {
        return p1;
    }
    public void setP1(String p1) {
        this.p1 = p1;
    }
    public int getP2() {
        return p2;
    }
    public void setP2(int p2) {
        this.p2 = p2;
    }
}

 ===
 
==./modules/interface-java-jaxws/src/test/java/org/apache/tuscany/sca/interfacedef/java/jaxws/Bean2.java
 
