Re: [sca-java-integration-branch] Missing WSDL type imports cause asymmetrical behaviour

2007-03-14 Thread Simon Laws

On 3/13/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

The ClassCastException will only happen if we try to get the child element
out of the response wrapper (which is an instance of AnyTypeDataObjectImpl
because the model is not registered with the SDO runtime) and cast it to a
generated SDO interface. If the ServiceFactory is not registered, the data
(from the inline schema of the WSDL) is represented by
org.apache.tuscany.sdo.impl.AnyTypeDataObjectImpl, which has no problem
converting to XML and back. But the child of AnyTypeDataObjectImpl is then a
BasicSequence instead of the real PersonType, since the types are not
known.

It's very common to define XSD elements and types in the inline schemas
of a WSDL. These need to be registered so the SDO runtime can work correctly. I
could try to make your special case work (your WSDL only has element
definitions), but I don't see much value in it.
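The failure Raymond describes can be illustrated without Tuscany at all: the unwrapping code blindly casts whatever child the wrapper hands back, and when the model is unregistered that child is a generic sequence object rather than the generated interface. A minimal stand-in (PersonType and the List here are illustrative; they are not the Tuscany classes):

```java
import java.util.List;

public class CastSketch {
    // Stand-in for a generated SDO interface such as PersonType.
    interface PersonType { String getFirstName(); }

    public static void main(String[] args) {
        // With no factory registered, the wrapper's child is a generic
        // object (BasicSequence in Tuscany); a List stands in for it here.
        Object child = List.of("firstName", "Simon");
        try {
            // The proxy performs this blind cast, which fails at runtime.
            PersonType p = (PersonType) child;
            System.out.println(p.getFirstName());
        } catch (ClassCastException e) {
            System.out.println("ClassCastException caught");
        }
    }
}
```

The real fix, as Raymond says, is registering the generated factory so the child actually is the generated type.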

Thanks,
Raymond

- Original Message -
From: Simon Laws [EMAIL PROTECTED]
To: tuscany-dev tuscany-dev@ws.apache.org
Sent: Tuesday, March 13, 2007 2:51 PM
Subject: [sca-java-integration-branch] Missing WSDL type imports cause
asymmetrical behaviour


 Hi

 I remember Dan discussing this on the list before but I can't put my
 finger
 on the thread

 If I run the databinding WS tests without including the factory for the
 WSDL
 XML types in the composite...

   <dbsdo:import.sdo factory=
 "org.apache.tuscany.sca.itest.databinding.services.ServicesFactory"/>

 Then the web service call fails with the following:

 java.lang.ClassCastException: org.apache.tuscany.sdo.util.BasicSequence
 incompatible with org.apache.tuscany.sca.itest.databinding.types.PersonType
at $Proxy26.greetPersonType(Unknown Source)
at
 org.apache.tuscany.sca.itest.sdodatabinding.GreeterServiceClientImpl.
 greetPersonType(GreeterServiceClientImpl.java:47)
 ...

 In the case where I am getting the error, Tuscany can't convert the
 wrapped return type from the call back to the type expected in the client.
 This is
 clearly something to do with Tuscany not having the correct types
 registered
 (adding the above import fixes it). But the thing that confuses me is
that
 it only fails on processing the return message in the client. It has
quite
 happily

  1/ Converted the request message into a wrapped message on the client
  2/ Converted the wrapped message back into the required message on the
 server
  3/ Converted the response message into a wrapped response on the server

 only to fail at the final hurdle when trying to unwrap the response on the
 client. So it seems to be able to do quite a lot without the imported
 information. Why is it required for the last step?

 I'm happy to look into this if there is not an obvious answer. If
nothing
 else we need better error messages here.

 Regards

 Simon



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi Raymond, thanks for the info.


I still don't understand how this works with the request message (as opposed
to the response message). At the target component the message handlers (?)
are converting from the XML to the wrapped object and from there to the
child object successfully. What extra information is available at the target
component for request processing that isn't available for response
processing at the client component?

Simon


Databinding itest reorg proposal

2007-03-14 Thread Simon Laws

I think we need to reorg the itest directory for databinding a bit, and I
need some help to get the maven bit correct as I'm a bit of a maven novice.
I'm looking at the integration branch at the moment, as that is where the
tests are checked in, but I don't see why these tests can't go in the trunk
as well.

Currently we have:

itest
  databinding
  sdo
  jaxb
  transformers (not sure what this is but assume it tests the data type
transformer logic)


From previous threads on this subject, [1] and [2], we talked about changing

to something like

itest
  databinding
  generator
  common
  sdo
  jaxb
  interop
  transformers (not sure what this is but assume it tests the data type
transformer logic)

Where common holds XSDs, WSDLs and files generated from them that are common
between the tests, interop holds tests that look at cross-databinding
testing, and generator holds a test generator. This change means moving most
of the WSDL and XML files from where they are now out into common and also
replacing test source files with generated versions.

Interop would depend on sdo and jaxb, which in turn depend on common. Can we
do this in maven simply by adding suitable dependency lines in the module
poms? In particular I will need to copy some schema and WSDL files from one
module to another. I notice that various build helper plugins are used in
various poms. Is there a recommended one for copying resources in from
dependencies?
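One option (a sketch only; the plugin and its parameters are real, but the artifact id, includes and output directory are guesses at what the reorg would use) is the maven-dependency-plugin's unpack-dependencies goal, which unpacks a dependency's jar into a local directory during the build:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-common-resources</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>unpack-dependencies</goal>
      </goals>
      <configuration>
        <!-- only unpack the shared 'common' artifact (hypothetical id) -->
        <includeArtifactIds>common</includeArtifactIds>
        <includes>**/*.xsd,**/*.wsdl</includes>
        <outputDirectory>${project.build.directory}/common-resources</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Bound to generate-resources like this, the shared XSD/WSDL files from common would land under target/common-resources before the tests compile.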

I have created a test generator to build the test source and configuration
automatically. As we have quite a number of datatypes to test (potentially
in the 100s) I made some velocity templates to create the 8 files required
by the test. Initially I intended to throw the generator away having run it,
but now feel that it might be useful as we extend the tests. So I want to
run it more than once, but not every time the test is compiled/run. Is this a
candidate for a maven profile? Or is there another mechanism for running
the generator occasionally?
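A profile would fit the "occasionally" requirement: bind the generator to a profile so it only runs when named on the command line. A sketch, assuming exec-maven-plugin drives a hypothetical generator main class:

```xml
<profiles>
  <profile>
    <id>generate-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <executions>
            <execution>
              <phase>generate-test-sources</phase>
              <goals>
                <goal>java</goal>
              </goals>
              <configuration>
                <!-- hypothetical generator entry point -->
                <mainClass>org.apache.tuscany.itest.TestGenerator</mainClass>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

With this, mvn -Pgenerate-tests generate-test-sources runs the generator, while a plain mvn install skips it.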

Regards

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg13947.html
[2] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg15261.html


[sca-java-integration-branch] databinding sdo test case failure

2007-03-14 Thread Simon Laws

Just took an update from svn, and removed my mvn repository, and am getting
failures in the sdo databinding tests starting with...

Running org.apache.tuscany.databinding.sdo.SDOExceptionHandlerTestCase
Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.09 sec <<< FAILURE!
testGetFaultType(org.apache.tuscany.databinding.sdo.SDOExceptionHandlerTestCase)
 Time elapsed: 0.03 sec <<< ERROR!
java.lang.ExceptionInInitializerError
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:167)
   at
org.apache.tuscany.databinding.sdo.SDOExceptionHandlerTestCase.setUp(
SDOExceptionHandlerTestCase.java:47)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:232)
   at junit.framework.TestSuite.run(TestSuite.java:227)

Is this just me?

Simon


Re: [sca-java-integration-branch] databinding sdo test case failure

2007-03-14 Thread Simon Laws

On 3/14/07, Simon Laws [EMAIL PROTECTED] wrote:


[snip]


Apparently it was just me. I went in and cleaned/built the module in question
independently, as opposed to from a higher level, and the problem went away. I
have now started getting this

at java.lang.J9VMInternals.initialize(J9VMInternals.java:167)

problem in other modules, so something in my build is out of line. Time for
another complete refresh, I think.

Simon



Checkstyle in testing/sca

2007-03-14 Thread Simon Laws

I note that there are some checkstyle/pmd plugin configurations in the
testing/sca pom. Can anyone tell me if these are actually running? I've not
seen any indication in the mvn builds that I'm doing that they are. Maybe it's
just that the code is perfect, or that I'm not configuring mvn properly!

Outside of the test hierarchy there is a profile (sourcecheck, I think) for
this. Why is this different in the testing modules?

Thanks

Simon


Re: Checkstyle in testing/sca

2007-03-14 Thread Simon Laws

On 3/14/07, Luciano Resende [EMAIL PROTECTED] wrote:


I probably won't answer your question about the differences, but I usually
just run mvn -Psourcecheck. I have also found it very useful to use the
checkstyle and pmd plugins inside the IDE (Eclipse, in my case).

On 3/14/07, Simon Laws [EMAIL PROTECTED] wrote:

 [snip]




--
Luciano Resende
http://people.apache.org/~lresende


Hi Luciano

Thanks for that. Doing mvn -Psourcecheck does at least give me some output!
It also gives an error...

Embedded error: Could not find resource
'C:\simon\Projects\Tuscany\java\java-integration\testing\sca\itest\databindings\sdo/.ruleset'.

Is this important?

Simon


Re: Native M3 Release Candidate

2007-03-14 Thread Simon Laws

On 3/14/07, Pete Robbins [EMAIL PROTECTED] wrote:


On 14/03/07, ant elder [EMAIL PROTECTED] wrote:

 I've just given this a try with the windows binary builds and following
 the
 getting started instructions to run the calculator sample.

 The first try failed as libxml2 and iconv are missing. I see now it does
 mention these in the SDO system prereqs section, but a note pointing that
 out in the "Getting Tuscany SDO for C++ working with the binary release on
 Windows" section would make this a bit clearer.


I've added some extra words for this.

Thanks for checking it out,

--
Pete



Pete

Have tried SDO on Fedora Core 5 and it compiled and ran according to the
GettingStarted instructions. Have to go now, but will try SCA also.

Simon


Re: Native M3 Release Candidate

2007-03-15 Thread Simon Laws

On 3/15/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


ant elder wrote:
 I was using libxml2 2.6.27, seeing you had 2.6.24 i went looking for
that
 but can't find a pre-compiled win32 version for that, so I tried
 2.6.23 and
 using that the sample runs fine.

   ...ant


Would it help to have on our Wiki a page with actual links to the
(Windows) dependency downloads that people have been successful with?
This way users won't have to fish for distributions of libxml for
example that work for us.

What do you think?

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

The getting started pages in the release have dependency lists with
quite a lot of detail in them, but I like the idea of having somewhere we
can record the links to dependencies and update it with any funnies that
people find. This page was started a while back (
http://cwiki.apache.org/confluence/display/TUSCANY/SCA+CPP+Dependencies) but
not finished. We could tidy that up and put more accurate links in.

Simon


Re: Native M3 Release Candidate

2007-03-15 Thread Simon Laws

On 3/15/07, Simon Laws [EMAIL PROTECTED] wrote:




[snip]


Just compiling up SCA (on a fresh Fedora Core 6 machine this time). A few
comments from following the instructions so far.

Building core SCA
 Without Ant and the JDK on the path, the tools build fails.

Looking at the getting started page, I'm not sure what to do once the core has
been built. I know from experience that I need to build a sample and, before
this, build the appropriate extensions, but it's not obvious from the SCA
getting started page. It would be good to give an example of the very first
thing someone might like to try after building the core, e.g. now go build
the C++ extensions and try the CPPCalculator sample. Or is there some other
test I should try to prove that the compile was successful?

Tuscany SCA Extensions

Links don't work as doc is missing.

Samples
Small point - could you extend the link highlighting from getting started
to samples getting started.

Tuscany Samples - Getting Started

Samples Dependencies table 1.
Doc appears to be missing from the doc directory, so the links at the top of
the table don't work.
HTTPDBigBang link doesn't work.
AlertAggregator link doesn't work.

Just downloading a JDK so will let you know how I get on once done.

Simon


Re: Checkstyle in testing/sca

2007-03-15 Thread Simon Laws

On 3/14/07, Luciano Resende [EMAIL PROTECTED] wrote:


I've seen some recent commits from Raymond and myself around cleaning up
checkstyle and pmd violations in the sca-java-integration branch. And, as I
said before, having the checkstyle and pmd plugins directly integrated in
the IDE helps identify violations right when you are coding.

On 3/14/07, ant elder [EMAIL PROTECTED] wrote:

 [snip]

 I don't think we used the sourcecheck stuff in the sca-java-integration
 branch, I never use it anyway, and i guess from what you're seeing
others
 don't either.

...ant




--
Luciano Resende
http://people.apache.org/~lresende


OK thanks everyone. I'll run the new code I have through mvn -Psourcecheck
and see if I can get the IDE integration up and running.

Simon


Re: Native M3 Release Candidate

2007-03-15 Thread Simon Laws

On 3/15/07, ant elder [EMAIL PROTECTED] wrote:


So that prompted me to check the licenses. Zlib and libxml2 look fine, I
think, though they do need to be added to the Tuscany LICENSE and NOTICE
files if the Tuscany code is using those APIs, whether or not Tuscany Native
is distributing them. Looks like iconv is LGPL, which isn't ASF friendly. Is
the Tuscany Native code actually using the iconv APIs?

   ...ant

On 3/15/07, Pete Robbins [EMAIL PROTECTED] wrote:

 On 15/03/07, ant elder [EMAIL PROTECTED] wrote:
 
  [snip]
 
 
  Sure that would be useful but IMHO even better would be to just
 distribute
  these dependencies with the binary distro. I know not everyone agrees
 with
  that though.
 
...ant
 

 I really think that is a bad thing to do even considering the License
 issues
 of re-distributing some of them.

 --
 Pete




You might want to ignore this as I didn't strictly follow the instructions.

I installed the IBM JDK 5 and dependencies on my Fedora Core 6 box (only because
that's what I have on my FC5 box, which has worked fine with Tuscany Native
in the past). I am getting strange results from scagen. In the CppCalculator
test it generated, for example:

...
float CalculatorImpl_CalculatorService_Proxy::add(float arg0, float arg1)
{
   tuscany::sca::Operation operation("add");
   operation.addParameter("arg1", &amp;arg0);
   operation.addParameter("arg2", &amp;arg1);
   float ret;
   operation.setReturnValue(&amp;ret);
   target-&gt;invoke(operation);
   return *(float*)operation.getReturnValue();
}
...

I.e. it seems to be XML-escaping the generated code (&amp; where & should
appear, -&gt; where -> should appear). I suspect this is some setting
somewhere, as I've not had this before and Sebastien has just run
successfully. I'll look at it a little closer later when I have more time.

Simon


Re: Databinding itest reorg proposal

2007-03-15 Thread Simon Laws

On 3/15/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Comments inline

Raymond Feng wrote:
 +1.

 transformers could go under interop because it's there to test
 transformations across databindings at the transformer level.


+1 from me, I'm assuming you meant move the tests currently in
transformer to interop and not creating interop/transformer.



I'll look at combining the transformer tests into interop.


Thanks,
 Raymond

 - Original Message - From: Simon Laws
 [EMAIL PROTECTED]
 To: tuscany-dev tuscany-dev@ws.apache.org
 Sent: Wednesday, March 14, 2007 4:06 AM
 Subject: Databinding itest reorg proposal


  [snip]

 Interop would depend on sdo and jaxb which in turn depend on common.
 Can we
 do this in maven simply by adding suitable dependency lines in the
 module
 poms.

Yes

 In particular I will need to copy some schema and wsdl files from one
 module to another.
 I notice that various build helper plugins are used in
 various poms. Is there a recommended one for  copying resources in from
 dependencies.

I apologize if I missed it in an earlier thread but why would the build
need to copy the files around?



No problem. I wasn't very clear. The question was motivated by needing to
use all of the resources (XSDs etc.) in the common module in the other test
modules. I.e. I want to use the same XSD across tests but don't want to
create manual copies. How do the processors in each test access the common
resources?




 I have created a test generator to build the test source and
 configuration
 automatically. As we have quite a number of datatypes to tests
 (potentially
 in the 100s) I made some velocity templates to create the 8 files
 required
 by the test. Initially I intended to throw the generator away having
 run it
 but now feel that it might be useful as we extend the tests. So I
 want to
 run it more than once but not every time the test is compiled/run.

Wouldn't it be simpler to just run it as part of the build and not check
in the generated tests, to make sure that they are always up to date?



I had originally thought that parts of the test would have to be hand crafted,
but I expect as we add more types this will be impractical, so maybe you are
right.


Is this a

 candidate for a maven profile? Or is there another mechanism for
 runinning
 the generator occasionally?


I was looking for the generator but couldn't find it, can you point us
to it? Thanks.



Sorry about that. I am trying to check it in at the moment. I was just trying
to work out this resource dependency thing.



Regards


 Simon

 [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg13947.html
 [2] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg15261.html




--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Databinding itest reorg proposal

2007-03-15 Thread Simon Laws

On 3/15/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


[snip]
Simon Laws wrote:

  In particular I will need to copy some schema and wsdl files from
one
  module to another.
  I notice that various build helper plugins are used in
  various poms. Is there a recommended one for  copying resources in
 from
  dependencies.

 I apologize if I missed it in an earlier thread but why would the build
 need to copy the files around?


 No problem. I wasn't very clear. The question was motivated by needing
to
 use all of the resource (XSDs etc) in the common module in  the other
 test
 modules. I.e. I  want to use the same XSD across tests but don't want to
 create manual copies. How do the processors in each test access the
 common
 resources?

As always, there's probably many ways to do it, but here's one way.

Add this to your pom:
<dependency>
    <groupId>...</groupId>
    <artifactId>common</artifactId>
    <version>...</version>
    <scope>test</scope>
</dependency>

Put abc.wsdl in common/src/main/resources.

And in your test case:
getClass().getClassLoader().getResourceAsStream("abc.wsdl")

Hope this helps.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Thanks Jean-Sebastien.

This might be useful actually. The direction I took was to add a dependency
on common and then unpack common into the test project.

I also decided to try out your suggestion of running the test generation
every time. Getting everything in the right place at the right time has been
interesting, and I know a lot more about maven now! There must be easier ways
of doing this though.

Basically the scheme is that common holds all the XSDs, the generation
templates and a small program that reads a configuration file and runs the
templates. The test projects (I've only been playing with SDO at present -
see sdogen) have nothing in them apart from the generation configuration
file and the pom. The pom reads the configuration file, generates the test,
compiles it and runs it.

The wrinkle at the moment is that when you change the configuration file to
add more types to a test project you have to run maven twice in that test
project. This is because the test generation step generates the pom itself
(the pom directs the SDO generation processing and has to have lines
generated into it). The second time you run it you pick up the newly
generated pom and the test should run as intended. You then check in this new
pom and carry on.

Looking at your suggested code above I expect we can actually do all the
hard work in the generate program and also drive the sdo generation step
from here. This would remove the need to generate the pom and hence the need
to run maven twice when you update the config file. Anyhow I didn't really
want to make a career out of building test generators so it can probably
stay as it is for the time being. The next task is to do the real testing.

I haven't included this generation stuff in the databindings pom, so
hopefully you will see no difference when running tests at present. You can
run it, though, just by referencing common and sdogen.

Regards

Simon


Re: svn move, was: Databinding itest reorg proposal

2007-03-16 Thread Simon Laws

On 3/15/07, Jeremy Boynes [EMAIL PROTECTED] wrote:


On Mar 15, 2007, at 3:34 PM, Simon Laws wrote:
 I forgot to mention that the reason that so many XML files have
 suddenly
 appeared is that I've take the files that currently live in /
 interop and
 renamed and refactored them.

Thanks for explaining as this did look a bit odd.

One way to avoid that is to use svn move to move the files rather
than adding them again. When you do that, SVN shows that the file was
copied from somewhere else in the repo and so it is fairly clear that
it isn't a new work but just a derivative. This also has the
advantage that the history of the file is maintained so users can
track changes even across the move. It has an even bigger benefit in
that it makes life easier for the lawyers, and a happy lawyer is much
nicer to have than a grumpy one :-)

Some IDEs which grew up with CVS don't seem to realize that SVN
allows them to just move things rather than delete old and add new
(losing history in the process). If that's the case, then you can
still get the benefits of moving through the svn command.
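Jeremy's suggestion in command form, for anyone else following along (the paths and commit messages are illustrative; the key point is that svn move records a copy-with-history plus a delete):

```
# move within a working copy, then commit both halves together
svn move itest/interop/src/test/resources/greeting.wsdl \
         itest/databinding/common/src/main/resources/greeting.wsdl
svn commit -m "Move shared WSDL under databinding/common"

# afterwards, history survives the move
svn log itest/databinding/common/src/main/resources/greeting.wsdl
```

An IDE that only knows delete-then-add loses this history, which is exactly the problem Jeremy describes.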

--
Jeremy


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Ah, apologies for that. I have to admit that I'm a cvs person at heart, so I'm
just getting to grips with svn. I just looked up svn move and got that "why
didn't I look there first" feeling, so I'll remember that for next time.
Thanks for taking the time to explain.

Regards

Simon


Re: svn move, was: Databinding itest reorg proposal

2007-03-19 Thread Simon Laws

On 3/16/07, Jeremy Boynes [EMAIL PROTECTED] wrote:


 Ah, apologies for that. I have to admit that I'm a cvs person at heart,
 so I'm just getting to grips with svn. I just looked up svn move and got
 that "why didn't I look there first" feeling, so I'll remember that for
 next time. Thanks for taking the time to explain.

If you're new to Subversion I highly recommend reading this:
   http://svnbook.red-bean.com/

There's an Appendix on Subversion for CVS Users :-)
--
Jeremy


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Cool. Thanks Jeremy


Re: [VOTE] Release Milestone 3 of Tuscany SCA Native and Tuscany SDO C++

2007-03-20 Thread Simon Laws

On 3/20/07, Geoffrey Winn [EMAIL PROTECTED] wrote:


I've downloaded the SDO src distribution on XP and it builds and runs as
advertised.

+1 from me.

Geoff.

On 20/03/07, ant elder [EMAIL PROTECTED] wrote:

 +1

...ant

 On 3/16/07, Pete Robbins [EMAIL PROTECTED] wrote:
 
  Please vote to approve the release of milestone 3 of Tuscany SCA
Native
  and
  Tuscany SDO C++.
 
  The SDO release includes performance improvements (30-40%) along with
  improvements to robustness.
  The SCA release includes support for C++, Python and Ruby languages
and
  sca,
  webservice and REST bindings.
 
  The distribution artifacts are here:
 
 - linux and Mac OS X (source only) -
 http://people.apache.org/~robbinspg/M3-RC4/linux-macosx/
 - windows (source and binary) -
 http://people.apache.org/~robbinspg/M3-RC4/win32/
 
  The RAT tool output for the release artifacts is here:
  http://people.apache.org/~robbinspg/M3-RC4/
 
  The SDO release is tagged here
 
 

http://svn.apache.org/repos/asf/incubator/tuscany/tags/cpp-sdo-1.0.incubating-M3-RC1/
  The SCA release is tagged here
 
 

http://svn.apache.org/repos/asf/incubator/tuscany/tags/native-sca-1.0.incubating-M3-RC4/
 
  Thank you.
 
  --
  Pete
 



Pete

I installed the new RCs on my Fedora Core 6 box this evening

+1 for SDO

I still get this strange effect with SCA where scagen inserts XML-escaped
strings from the scagen XSL into the generated CPP files. I have the IBM
java2-i386-50 JDK installed. Has anyone else tried with that? I might try
with a different version and see if that has the desired effect.

Regards

Simon


Re: [VOTE] Release Milestone 3 of Tuscany SCA Native and Tuscany SDO C++

2007-03-21 Thread Simon Laws

On 3/21/07, Andrew Borley [EMAIL PROTECTED] wrote:


On 3/21/07, Pete Robbins [EMAIL PROTECTED] wrote:

 On 21/03/07, Pete Robbins [EMAIL PROTECTED] wrote:
  On 20/03/07, Simon Laws [EMAIL PROTECTED] wrote:
    [snip]
   
   Pete
  
   I installed the new RCs on my Fedora Core 6 box this evening
  
   +1 for SDO
  
   I still get this strange effect with SCA where scagen inserts URL
 encoded
   strings from the scagen XSL into generated CPP files. I have the IBM
   java2-i386-50 JDK installed. Has anyone else tried with that? I
might
 try
   with a different version and see if that has the desired effect.
  
   Regards
  
   Simon
  
 
  Sorry Simon I've never seen anything like this on any of my 3 systems
  with various Java SDKs. Not sure I've tried with that IBM Java though.
 


I think I have seen this on my Linux system when I tried using the IBM JDK 5
a while ago - I reverted to JDK 1.4.2 and it works fine.
Have you tried an earlier JDK, Simon?
We should raise a JIRA and add this to the readme, I guess.

Cheers
Andy


OK, so after a complete reinstall on my FC5 machine with the same JDK the URL
encoding problem does not appear, so there must be some character encoding
setting on my other machine that I'm not seeing. Anyhow, not a problem with
the SCA release candidate, but a warning note somewhere would be good.

The basic samples that I tried worked like a dream. Nice one. I can't try
the WS samples here because of some other longstanding issue on this box
(this is why I was trying on my nice clean FC6 machine in the first place)
but I'll give it the benefit of the doubt assuming someone else is able to
run them on linux. So +1 from me for SCA now also.

The only slight gotcha for the unwary: when building SCA with
./build_scanative.sh it assumes you want the C++ extension and hence expects
JAVA_HOME and ANT_HOME to be set, which I didn't have. Anyhow, once set it
all went OK.

Simon


Re: maven dependency plugin un pack problem in samples

2007-03-21 Thread Simon Laws

On 3/21/07, kelvin goodson [EMAIL PROTECTED] wrote:


There's an example in

http://svn.apache.org/viewvc/incubator/tuscany/java/sdo/pom.xml?view=markup
Hope that helps,
Kelvin.


On 21/03/07, muhwas [EMAIL PROTECTED] wrote:

 Hi,

 I am trying to run the hello world web service sample. I
 am following the instructions given with the sample.
 According to the instructions:

 1. To build the sample issue:

 mvn

 2. Set up the Tuscany standalone runtime environment
 using the following command:

 mvn dependency:unpack

 After completion there should be a target\distribution
 subdirectory created that has the Tuscany standalone
 runtime.

 But when I run mvn dependency:unpack I am
 getting an error:

 [0] inside the definition for plugin: 'maven-dependency-plugin' specify the
 following:

 <configuration>
   ...
   <artifactItems>VALUE</artifactItems>
 </configuration>

 Can somebody please tell me what I should specify in
 the artifactItems to fix this problem.

 thank you,
 muhwas

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Hi Muhwas

Looking at your question it seems you are trying the M2 SCA distribution. I
had a go at downloading the sca bin and sample jars and get the same effect
as you, i.e. the unpack stage doesn't work. I don't know how the poms, as
configured in the downloads, are intended to work. I expect someone involved
in M2 will be able to give us the details. However I can make it work
manually (I tried the standalone calculator sample as it's a little simpler
to start with).

I downloaded tuscany-sca-1.0-incubator-M2-bin.zip and unpacked it to
/sca-bin
I downloaded tuscany-sca-1.0-incubator-M2-samples.zip and unpacked it to
/samples
cd \samples
mvn
cd \samples\standalone\calculator (I just did this to be as faithful to the
readme as I could)
java -jar ..\..\..\sca-bin\bin\launcher.jar target\sample-calculator.jar

produces
3 + 2=5.0
3 - 2=1.0
3 * 2=6.0
3 / 2=1.5
Which looks like the right kind of thing.

I'm not able to use mvn in the calculator directory directly unless I mess
around with the pom hierarchy so maybe this is a bug.
Not sure why the unpack doesn't work. You can do an unpack-dependencies and
this will get you all the class files but you would have to work out your
java classpath and get the launcher up and running.
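For reference, the kind of configuration the plugin error is asking for looks roughly like the sketch below. This is not the actual M2 pom; the Tuscany coordinates here are illustrative guesses, and the key point is simply that when a goal like dependency:unpack is run from the command line, the plugin generally reads its artifactItems from a plugin-level configuration block in the pom:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <configuration>
        <artifactItems>
          <artifactItem>
            <!-- hypothetical coordinates: substitute the real M2 runtime artifact -->
            <groupId>org.apache.tuscany.sca</groupId>
            <artifactId>tuscany-standalone</artifactId>
            <version>1.0-incubator-M2</version>
            <type>zip</type>
            <!-- unpack into target/distribution, as the sample readme expects -->
            <outputDirectory>${project.build.directory}/distribution</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With something like this in place, `mvn dependency:unpack` should know what to unpack and where.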

Anyhow. I hope that helps. I'm sure an M2 expert will be on soon and can
tell us the real answer.

Regards

Simon


Re: Revolutions or a Mess!!

2007-03-22 Thread Simon Laws

On 3/21/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:


Hi,

I am glad you brought this point up.

You mentioned the constant confrontation between two sets of people. I
would say, unfortunately, this has been caused by a lack of diversity in
the community.

I hope most of these confrontations are based on technical differences.

For the first group, who I understand all work for a specific vendor,
the push has always been just on simple spec compliance, with no
emphasis on value adds, end user experience etc. I am not saying that
spec compliance is an inconsequential issue; however, there are other
issues that are as important. To give an example, some of the work that
has been happening on distributed heterogeneous federation is something
which I think is a key differentiator for Tuscany. This has been
discussed heavily on the list, and unfortunately only three of the
committers, who don't work for a specific vendor, took any interest
in this discussion.

I sincerely hope the technical direction within Tuscany is not purely
shaped by a given vendor's aspirations for getting their product suite
SCA-compliant. Otherwise, independent committers like me would be
wasting our time on this.

I would say we need to generate better community interest and bring in
more independent committers, so that there is a better balance on
technical debates. Unfortunately, these constant conflicts have been
putting people off. I don't think the differences are irreconcilable;
however, there should be a willingness from both sides to have an open
discussion, leaving other vested interests out, purely based on what is
best for us as a community to build Tuscany.

Thanks
Meeraj

-Original Message-
From: Davanum Srinivas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 21, 2007 1:31 PM
To: tuscany-dev@ws.apache.org
Subject: Revolutions or a Mess!!

Folks,

I don't really like what's going on. Too many conflicts between people.
Whatever the issue of the day, I see two sets of people in constant
confrontation. The constant branching/merging is not healthy or
productive. Is there any effort or hope of reconciliation or should we
start looking at other options?

Thanks,
dims

--
Davanum Srinivas :: http://wso2.org/ :: Oxygen for Web Services
Developers



This message has been checked for all email viruses by MessageLabs.


*

You can find us at www.voca.com

*
This communication is confidential and intended for
the exclusive use of the addressee only. You should
not disclose its contents to any other person.
If you are not the intended recipient please notify
the sender named above immediately.

Registered in England, No 1023742,
Registered Office: Voca Limited
Drake House, Three Rivers Court,
Homestead Road, Rickmansworth,
Hertfordshire, WD3 1FX. United Kingdom

VAT No. 226 6112 87


This message has been checked for all email viruses by MessageLabs.


Hi


I'm looking at this as someone who has been nibbling away at the edges of
the java implementation for a while. I'm not currently a user, nor have I
been a full-on developer of the java code. I've spent my time working on a
PHP implementation of SCA (or at least a simplified version of SCA) and on
the Tuscany Native SCA implementation. I guess my personal motivation is to
understand how SCA is realized in more than one environment and how they
play together. For that reason (and because it's pretty cool stuff) I am
interested in the success of the Java implementation and have started
getting involved.

Primarily I want to see SCA be accepted as a useful tool in all of its
guises. If we only support a small subset of our potential users with SCA
then we are not really living up to the promise of SOA. For this to happen I
think we need people to actually use the software, we need them to feed back
requirements, problems, enhancements etc and we need to encourage them to
get involved directly in the community.

I realize that Tuscany is in incubation and we may be looking for the
tech-savvy user, but we can't just wish for this. To make this happen we need an
effective development environment where working software can be released
regularly in a form that is consumable by users with a wide variety of
requirements. It's great that we get to build interesting stuff but we
really don't do ourselves justice if people can't download and run it easily
and update it regularly.

Looking at the mail list I see people trying to do good work in a project
they believe in. By careful design we have a runtime with a core and
extensions which build on 

Re: ServerSide Presentation and Demo

2007-03-22 Thread Simon Laws

On 3/22/07, Venkata Krishnan [EMAIL PROTECTED] wrote:


Hi Jim,

 Thanks for sharing this information - it's really useful.

- Venkat

On 3/22/07, Jim Marino [EMAIL PROTECTED] wrote:

 Hi,

 We just finished the ServerSide demo and I figured I'd send a mail to
 the list outlining how it went...

 We had the slot following the opening keynote and were up against Rod
 (Spring) and Patrick (OpenJPA) as the other  two talks. I was
 surprised to find that the ballroom was pretty full. I gave the talk
 and the demo showing end-to-end federated deployment and reaction
 seemed very positive.  Meeraj gets the hero award for staying up to
 an obscene hour in the morning to implement a JMS-based discovery
 service as we encountered last-minute hiccups with JXTA.

 My observations are:

 - After speaking with people after the presentation, feedback on the
 value of SCA was consistent. Specifically, they thought the
 programming model was nice but not a differentiator. What people got
 excited about was being able to dynamically provision services to
 remote nodes and have a representation of their service network.  In
 this respect, I think the demo worked well. Two people said they need
 what the demo showed for projects they currently have underway.

 - People asked how SCA is different than Spring.  They reacted
 positively when I said federation and distributed wiring. Related
 to this, people get dependency injection (i.e. it's old-hat) and just
 seem to assume that is the way local components obtain references.

 - People seemed to react positively when I compared SCA to Microsoft WCF

 - People liked the idea of heterogeneous service networks and support
 for components written in different languages, particularly C++.

 - People didn't ask about web services. People were nodding their
 heads (in agreement) when I talked about having the runtime select
 alternative bindings such as AMQP and JMS.

 - People want modularity and choice. Two areas they wanted choice in
 was databinding and persistence. They liked the fact that we are not
 locked into one databinding solution and that we have JPA
 integration. (as an aside, they also liked that SDO can be used
 without SCA). Spring integration was also popular.

 - People also liked the idea of a 2MB kernel download. One person
 mentioned they only want to download what they intend to use and not
 a lot of extra clutter.

 - People wanted to know how SCA is different than an ESB. I basically
 described it using the switch vs. router metaphor and how a component
 implementation type can be a proxy for an ESB. Related to this and
 point-to-point wires, people thought wire optimization by the
 Controller was cool.

 - People seemed to be more interested in running Tuscany as a
 standalone edge server or embedded in an OSGi container. I didn't get
 any questions about running Tuscany in a Servlet container or J2EE
 application server. This seems to be consistent with there being a
 number of talks on server-side OSGi.

 My big takeaway is that we need to make the demo a reality.

 Jim










Jim,

Nice one. Thanks for the summary. Did the conference record the talk? Would
be good to see it. Noting your comment and recent mails about the last
minute changes to get JMS working in short order, is everything checked in
that's needed to run the demo? Looking back I see several notes on build
instructions and it would be pretty cool to give it a spin.

Can I ask a question about support for components written in different
languages? Did people specifically say they were interested in C++? Did they
mention other languages (and, if so, which ones)?

Presumably the sweet spot is the ability to show components implemented in
various languages all acting as part of a single SCA Domain. How big a deal
do you think this ability to draw a picture of your heterogeneous service
network (in SCDL) is, vs some of the other things you mention like the
standalone edge server or selectable bindings? I'm asking this question
because, as you know, I like the idea and from your notes it seems the
audience likes the idea, but I'm interested to know how much interest there
was for this vs other things.

I imagine, from reading your closing comments, you have a whole stack of
ideas now in your head about what needs doing next. This would seem like a
great opportunity for us all to look at what technical challenges lie ahead
and to have a discussion about how, as a community, we step up to meeting
some of them. How do we do this? Do we start some threads on individual
items? A thread on the grand plan that then splits into areas of people's
interest? Having this summary is great because it really pushes on what we
really need to focus on, i.e. making something that is useful to our
(potential) users. We need to convert it into technical 

Re: ServerSide Presentation and Demo

2007-03-22 Thread Simon Laws

On 3/22/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:


Simon,

All the work that was done for the demo has been committed. I posted a
set of build instructions to get the demo running for Mario. However,
the information is scattered across multiple emails. I can collate them
and repost it to the list, if that helps.

Thanks
Meeraj


A question of federation - was: Planning kernel release 2.0

2007-03-22 Thread Simon Laws

On 3/22/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:


Hi,

Now that the SPI is getting stable and we have the initial end-to-end
story for federation working, I would suggest we plan for the final
release for kernel 2.0, with emphasis on federation and user experience.
I was thinking about aiming for a beta in June in time for TSSJS
Barcelona and the final release for August. Maybe we can have couple of
alpha releases from now and June as well. These are the features, I
would like to see in 2.0.

1. Tidy up anything required in physical model, now that it is starting
to take good shape.
2. Tidy up generators from logical to physical model.
3. Fix the JXTA discovery issues, also investigate other discovery
protocols.
4. Federation end-to-end fully completed, this would include, maybe,
profiles advertising their capabilities and the information being used
in intent-based autowiring etc.
5. Intent-based auto wiring
6. Emphasis on end user experience in terms of ease of use.
7. Assembly service, this kind of now related to the generators that
have been introduced in the last week or so
8. Artifact management, especially mobile code when we target components
to remote profiles.

Also, now the SPI has started settling in, we need to start looking at
binding and container extensions as well. Some of the bindings I would
be interested in are,

1. JMS
2. AMQP
3. Hessian

Ta
Meeraj



Hi Meeraj



From my perspective, having demonstrable code in June would be spot on as I
have to speak on SCA then and would consider a demo if we could do it.

I don't have the knowledge yet to comment on the details of your proposal
(hence the new subject) but a question. From a future demo point of
view I would like to show various runtime options some of which are not
federated  examples some of which are. Can I miss out the federation bit if
I want to? For example, I would potentially like to show a variety of
scenarios

- Hello world. the simplest possible single process example to get people
into how SCA works
- Standalone domain (a single VM)
service provision (perhaps an AJAX style example where an SCA composite
provides services to the browser)
service consumption (backend service access providing content to my
AJAX service)
- Federated domain (multiple VM)
How SCA describes many connected composites.

I'm just starting now to look at how all the kernel stuff works so I expect
all this will become clear soon enough (I found your previous posts giving
explanations btw - so am starting from there)

Regards

Simon


Re: ServerSide Presentation and Demo

2007-03-22 Thread Simon Laws

On 3/22/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:


Simon,

My reply to Mario has all the detail to run the demo.

Ta
Meeraj


Re: Compilation status

2007-03-22 Thread Simon Laws

On 3/22/07, Luciano Resende [EMAIL PROTECTED] wrote:


I think this issue will be raised again and again every time new members
come to try Tuscany trunk, and this is very bad for a project that is
trying
to build a community. Also, quoting an article Jim Marino sent from
Martin Fowler about Continuous Integration [1]:

Automated environments for builds are a common feature of systems. The
Unix
world has had make for decades, the Java community developed Ant, the .NET
community has had Nant and now has MSBuild. Make sure you can build and
launch your system using these scripts using a single command.

A common mistake is not to include everything in the automated build.

also, this has been already discussed in various other threads [2] [3].

Based on this, I'll start a VOTE around a proposal to get a set of
profiles
to allow for building the trunk in a more automated way.

[1] - http://www.martinfowler.com/articles/continuousIntegration.html
[2] - http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg14658.html
[3] -
http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg15303.html


On 3/22/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 I hate to bring up this issue again, but I really share the pain that Mario
 just went through. Don't we think we have room for improvements to build the
 stuff in a much simpler fashion? To me, to have a build for a bundle which
 consists of a set of the modules working together at the same level would be
 really helpful for the poor guys. It's very difficult to manually coordinate
 the build across modules even with published SNAPSHOTs (which I don't see
 happening frequently, and it's also very hard because a collection of
 SNAPSHOTs doesn't really establish a baseline for those who want to try the
 latest code).

 I (assume that I) understand all the rationales and principles for
 modularization. But I'm really scared by the user experience. Where is the
 reasonable middle ground?

 Thanks,
 Raymond

 - Original Message -
 From: Antollini, Mario [EMAIL PROTECTED]
 To: tuscany-dev@ws.apache.org
 Sent: Thursday, March 22, 2007 6:57 AM
 Subject: RE: Compilation status


 Meeraj,

 Finally, I was able to generate the server.star.jar file.

 This is the compilation order that worked for me:

 java/spec/commonj/
 java/spec/sca-api-r1.0/
 java/sca/kernel/
 java/sca/runtime/
 java/sca/services/
 java/sca/contrib/discovery/
 java/sca/contrib/discovery/jms
 java/sca/console/
 java/sca/core-samples/
 java/distribution/sca/demo.app
 java/distribution/sca/demo/

 Disclaimer: I have been struggling with the compilation for two days, so I
 cannot fully guarantee that the order of the above list is the actual
 order. If anyone is able to compile this exact way, please let us know.

 BTW, java/sca/extensions/ cannot be compiled for now.
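For anyone repeating this, the order above can be driven by a small script. This is only a sketch under two assumptions not stated in the mail: that a plain `mvn install` in each directory is the right per-module command, and that it is run from the root of the checkout:

```shell
#!/bin/sh
# Build the modules in the order reported above; stop at the first failure.
set -e
for m in \
    java/spec/commonj \
    java/spec/sca-api-r1.0 \
    java/sca/kernel \
    java/sca/runtime \
    java/sca/services \
    java/sca/contrib/discovery \
    java/sca/contrib/discovery/jms \
    java/sca/console \
    java/sca/core-samples \
    java/distribution/sca/demo.app \
    java/distribution/sca/demo
do
    echo "=== building $m ==="
    (cd "$m" && mvn install)
done
```

Note that java/sca/extensions is deliberately omitted, matching the caveat above.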

 Besides the good news, I was not able to start the servers (take a look
 at the attachment to see the errors)

 Do you have any idea what could be happening?

 Thanks and regards,
 Mario


 -Original Message-
 From: Meeraj Kunnumpurath [mailto:[EMAIL PROTECTED]
 Sent: Thursday, March 22, 2007 10:13 AM
 To: tuscany-dev@ws.apache.org
 Subject: RE: Compilation status

 Mario,

 AFAIK extensions in trunk are still in a bit of flux. If you want to
 run the demo, you don't need to run the extensions (the demo uses the Java
 container with local bindings). I will try to post a definitive list of
 tasks to build and run the demo later in the day, which will be useful

 Ta
 Meeraj

 -Original Message-
 From: Antollini, Mario [mailto:[EMAIL PROTECTED]
 Sent: Thursday, March 22, 2007 12:29 PM
 To: tuscany-dev@ws.apache.org
 Subject: Compilation status

 Meeraj,



 I just wanted you to know that I am still not able to compile the code I
 checked out from SVN. The main problem is located in the *extensions*
 project. I have been modifying the pom files within this project but I
 did not manage to get it compiled yet.



 Basically, the main problems are related to inconsistencies between
 parent references (e.g. axis2's root project is using groupId
 *org.apache.tuscany.sca.axis2* while the plugin subproject states that
 its parent is *org.apache.tuscany.sca.extensions.axis2*).



 Any tips about this?



 Thanks,

 Mario



Re: Build structure - having cake and still eating

2007-03-22 Thread Simon Laws

On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:


On Mar 22, 2007, at 10:21 AM, Raymond Feng wrote:

 +1.

 I think it's in line with the proposal in my response to Meeraj.

 One question: For a bundle to reference a module in the Tuscany
 source tree, do we really have to copy (or use svn:externals
 property) if it points to a location (under trunk, tags, or
 branches) in the Tuscany tree? I think a relative path for the
 module will work.

It will.

The difference would be that with a ../.. type relative path someone
can't just check out the assembly module, they need to check out the
whole tree from some common root. With an external they could just
check out the assembly module and the source for the dependency would
be checked out as well. Of course, then they might have multiple
copies of the dependency source to manage.

Either works and it would be up to the users of the assembly module to
choose which style they prefer.
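For concreteness, a hypothetical svn:externals definition on the assembly
module directory (the local name and target URL below are illustrative);
with this set, checking out the assembly module alone also fetches the
dependency source:

```
# externals.txt -- hypothetical definition, applied with:
#   svn propset svn:externals -F externals.txt .
kernel https://svn.apache.org/repos/asf/incubator/tuscany/trunk/java/sca/kernel
```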

--
Jeremy


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Sounds like a good compromise to me


stupidquestion
When you talk about flattening the module hierarchy do you mean this
literally in svn (which I like the sound of as I can never find anything in
all the nested dirs - my inexperience showing) or is this some virtual
flattening?
/stupidquestion

Simon


Re: A question of federation - was: Planning kernel release 2.0

2007-03-22 Thread Simon Laws

On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:


On Mar 22, 2007, at 8:47 AM, Simon Laws wrote:
 Ok, cool, so I can run a simple app in a single VM. Let me try it
 out.

Just to set expectations, I don't think the system configuration in
the default runtime has been switched over to the federated deployer
yet. So if you run the calc sample on that runtime then it will still
be using the old stuff.

If you want to experiment with the federated stuff, you would need to
use the scdl from the master profile in the demo.

--
Jeremy


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

So I get the option to include/exclude federation by modifying the system
configuration? Nice. Not saying I don't want federation, but it's nice to have
the option.

S


Re: Compilation status

2007-03-22 Thread Simon Laws

On 3/22/07, Antollini, Mario [EMAIL PROTECTED] wrote:



Simon,

This is the script I am using right now. The script file must be located
one level above the java directory:

echo
echo java/spec/commonj/ ...
echo
cd ./java/spec/commonj/
mvn install
echo
echo java/spec/sca-api-r1.0/ ...
echo
cd ../../../java/spec/sca-api-r1.0/
mvn install
echo
echo java/sca/kernel/ ...
echo
cd ../../../java/sca/kernel/
mvn install
echo
echo java/sca/runtime/ ...
echo
cd ../../../java/sca/runtime/
mvn install
echo
echo java/sca/services/ ...
echo
cd ../../../java/sca/services/
mvn install
echo
echo java/sca/contrib/discovery/ ...
echo
cd ../../../java/sca/contrib/discovery/
mvn install
echo
echo java/sca/contrib/discovery/jms ...
echo
cd ../../../../java/sca/contrib/discovery/jms
mvn install
echo
echo java/sca/console/ ...
echo
cd ../../../java/sca/console/
mvn install
echo
echo java/sca/core-samples/ ...
echo
cd ../../../java/sca/core-samples/
mvn install
echo
echo java/distribution/sca/demo.app ...
echo
cd ../../../java/distribution/sca/demo.app
mvn install
echo
echo java/distribution/sca/demo/ ...
echo
cd ../../../../java/distribution/sca/demo/
mvn install
cd ../../../../
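The same sequence can be driven by a loop, which avoids the error-prone
chains of relative cd paths. This is only a sketch: the module list is
copied from the script above, and the MVN variable and build_all function
are assumptions of mine, not part of the original script:

```shell
# Sketch of a loop-based equivalent of the script above.
# MVN is an assumed override hook (not in the original script) so the
# build command can be swapped out, e.g. for testing.
MVN="${MVN:-mvn install}"

# Build each listed module from a fixed base directory instead of
# chaining fragile ../../.. relative paths.
build_all() {
  base="$1"
  shift
  for module in "$@"; do
    echo "$module ..."
    ( cd "$base/$module" && $MVN ) || return 1
  done
}

# Module order copied from the script above (paths relative to java/):
MODULES="spec/commonj spec/sca-api-r1.0 sca/kernel sca/runtime \
sca/services sca/contrib/discovery sca/contrib/discovery/jms \
sca/console sca/core-samples distribution/sca/demo.app distribution/sca/demo"

# Usage (run from one level above the java directory):
#   build_all ./java $MODULES
```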


Mario


-Original Message-
From: Simon Laws [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 22, 2007 1:43 PM
To: tuscany-dev@ws.apache.org
Subject: Re: Compilation status

On 3/22/07, Luciano Resende [EMAIL PROTECTED] wrote:

 I think this issue will be raised again and again every time new members
 come to try Tuscany trunk, and this is very bad for a project that is
 trying to build a community. Also, to quote an article by Martin Fowler
 about Continuous Integration that Jim Marino sent [1]:

 Automated environments for builds are a common feature of systems. The
 Unix world has had make for decades, the Java community developed Ant,
 the .NET community has had Nant and now has MSBuild. Make sure you can
 build and launch your system using these scripts using a single command.

 A common mistake is not to include everything in the automated build.

 Also, this has already been discussed in various other threads [2] [3].

 Based on this, I'll start a VOTE around a proposal to get a set of
 profiles
 to allow for building the trunk in a more automated way.

 [1] - http://www.martinfowler.com/articles/continuousIntegration.html
 [2] -
http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg14658.html
 [3] -
 http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg15303.html


 On 3/22/07, Raymond Feng [EMAIL PROTECTED] wrote:
 
  Hi,
 
  I hate to bring up this issue again, but I really share the pain that
  Mario just went through. Don't we think we have room for improvements to
  build the stuff in a much simpler fashion? To me, having a build for a
  bundle which consists of a set of modules working together at the same
  level would be really helpful for the poor guys. It's very difficult to
  manually coordinate the build across modules even with published
  SNAPSHOTs (which I don't see happening frequently, and it's also very
  hard because a collection of SNAPSHOTs doesn't really establish a
  baseline for those who want to try the latest code).
 
  I (assume that I) understand all the rationales and principles for
  modulization. But I'm really scared by the user experience. Where is
  the reasonable middle ground?
 
  Thanks,
  Raymond
 
  - Original Message -
  From: Antollini, Mario [EMAIL PROTECTED]
  To: tuscany-dev@ws.apache.org
  Sent: Thursday, March 22, 2007 6:57 AM
  Subject: RE: Compilation status
 
 
  Meeraj,
 
  Finally, I was able to generate the server.star.jar file.
 
  This is compilation order that worked for me:
 
  java/spec/commonj/
  java/spec/sca-api-r1.0/
  java/sca/kernel/
  java/sca/runtime/
  java/sca/services/
  java/sca/contrib/discovery/
  java/sca/contrib/discovery/jms
  java/sca/console/
  java/sca/core-samples/
  java/distribution/sca/demo.app
  java/distribution/sca/demo/
 
  Disclaimer: I have been struggling with the compilation for two days, so I
  cannot fully guarantee that the order of the above list is the actual
  order. If anyone is able to compile this exact way, please let us know.
 
  BTW, java/sca/extensions/ cannot be compiled for now.
 
  Besides the good news, I was not able to start the servers (take a
look
  at the attachment to see the errors)
 
  Do you have any idea what could be happening?
 
  Thanks and regards,
  Mario
 
 
  -Original Message-
  From: Meeraj Kunnumpurath [mailto:[EMAIL PROTECTED]
  Sent: Thursday, March 22, 2007 10:13 AM
  To: tuscany-dev@ws.apache.org
  Subject: RE: Compilation status
 
  Mario,
 
  AFAIK extensions in trunk is still in a bit of a flux. If you want
to
  run the demo, you don't need to run the extensions (the demo uses
Java
  container with local bindings), I will try to post a definitive list
of
  tasks to build and run the demo later in the day, which will be
useful
  to Simon as well.
 
  Ta

Re: Build structure - having cake and still eating

2007-03-22 Thread Simon Laws

On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:


On Mar 22, 2007, at 11:19 AM, Simon Laws wrote:
 stupidquestion
 When you talk about flattening the module hierarchy do you mean this
 literally in svn (which I like the sound of as I can never find
 anything in
 all the nested dirs - my inexperience showing) or is this some virtual
 flattening?
 /stupidquestion

Not a stupidquestion at all. We can do either or both ...
mvnStuffAsBackgound
The logical/virtual tree here is the parent structure in the pom.
Maven projects can inherit project definitions (stuff like
dependencies, repo locations, plugins to use) from another project by
specifying it in their parent element - this can be used to avoid
repetition of stuff like dependency versions, plugin configurations
and so on.

This is actually independent of the physical directory structure,
although often the pom in one directory is used as the parent for
modules that it builds. This conflates physical structure and logical
structure, and although it seemed like a good idea at the time I don't
think it is working any more.
/mvnStuffAsBackground

I think we should first flatten the logical hierarchy so that all
independent module groups inherit from a global parent. This would be
org.apache.tuscany:sca:1.0-incubating.  By independent I mean
things that could be released independently - these could be groups
of tightly coupled modules (such as the current kernel or runtime)
or individual modules such as http.jetty
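As a sketch of what that flattened logical hierarchy would look like in a
module's pom.xml (the child coordinates below are illustrative; http.jetty
is just the example module named above), each independent module group
would point its parent element directly at the global parent:

```xml
<!-- sketch: a module group inheriting directly from the global parent -->
<project>
  <parent>
    <groupId>org.apache.tuscany</groupId>
    <artifactId>sca</artifactId>
    <version>1.0-incubating</version>
  </parent>
  <!-- hypothetical child coordinates -->
  <groupId>org.apache.tuscany.sca</groupId>
  <artifactId>http.jetty</artifactId>
  <version>1.0-SNAPSHOT</version>
</project>
```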

I started on this for the modules that were part of 2.0-alpha but
have not done it yet for the modules in extensions or services that
were not. I think we should do this now for the others - I'll make a
start on the ones used in the demo.

An orthogonal issue is how we lay out the physical directory tree. It
is probably simpler to get rid of midlevel parents like extensions
and services altogether and just have a flat structure under
sca - I think that would help make things easier to find.

We started doing that with a gradual migration of stuff from
services to extensions but I think doing this gradually has
probably just added to the confusion. I'd suggest we give up on the
gradual approach and move everything under contrib until we can fix
the logical structure as above.

To summarize:
1) move everything that does not logically depend on
org.apache.tuscany:sca:1.0-incubating to contrib
2) update each module or group in contrib to be logically independent
3) once the module is independent move it to the flat structure under
sca so it is easy to find

I'm going to get started with the modules used in the demo.
--
Jeremy

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Jeremy, this sounds like a simpler approach than what is there now. I like
the idea, but a question:

1) move everything that does not logically depend on
org.apache.tuscany:sca:1.0-incubating to contrib

From your previous definition, do you mean those things that are not
considered to be independent, or things that could be independent but
just aren't packaged that way now? I assume that you mean the latter,
as your next point is to go ahead and make them independent.

Also, is the global parent version 1.0-incubating or 1.0-incubating-SNAPSHOT?
I note that right now you have it without SNAPSHOT but its children with
SNAPSHOT. Are you just saying that the global parent doesn't get
packaged/released per se, SNAPSHOT or not?

S


Re: Build structure - having cake and still eating

2007-03-22 Thread Simon Laws

On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:


On Mar 22, 2007, at 12:31 PM, Simon Laws wrote:
 Jeremy. This sounds like a simpler approach than what is there now.
 I like
 the idea but a question.

 1) move everything that does not logically depend on
 org.apache.tuscany:sca:1.0-incubating to contrib

 from your previous definition do you mean those things that are not
 considered to be independent. Or do you mean things that could be
 independent but just aren't packaged that way now. I assume that
 you mean
 the latter as your next point is to go ahead and make them
 independent.

Yes things aren't really independent due to the intermediate poms in
the physical directory tree.


 Also is the global parent version 1.0-incubating or 1.0-incubating-
 SNAPSHOT.
 I note that now you have it without SNAPSHOT but its children with
 SNAPSHOT.
 Are you just saying that the global parent doesn't get packaged/
 released
 per-se SNAPSHOT or not.

This is the pom defined in the tag here:
   https://svn.apache.org/repos/asf/incubator/tuscany/tags/java/pom/sca/1.0-incubating

which is a stable artifact - one we have voted to release but are
waiting for the IPMC to approve. It will not move.

[[ BIG NAG TO OUR MENTORS, PLEASE CAN YOU HELP BY VOTING ON THIS
THREAD]]
   http://mail-archives.apache.org/mod_mbox/incubator-general/200703.mbox/[EMAIL PROTECTED]

The things in trunk are not stable and so have a SNAPSHOT version.
--
Jeremy


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Ok, I get it. Thanks Jeremy.


S


Re: Tag for TSSS demo code

2007-03-26 Thread Simon Laws

On 3/26/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:


Simon,

Did you start ActiveMQ before you started the master?

Ta
Meeraj


From: Simon Laws [EMAIL PROTECTED]
Reply-To: tuscany-dev@ws.apache.org
To: tuscany-dev@ws.apache.org
Subject: Re: Tag for TSSS demo code
Date: Mon, 26 Mar 2007 17:35:24 +0100

On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:

I've created a tag corresponding to the code used to build the demo
(r520715) and added a trivial pom to build the lot. To build:
$ svn co https://svn.apache.org/repos/asf/incubator/tuscany/tags/java/tsss-demo
$ mvn install

I did not change the versions in the poms so they are using the same
ones as trunk. To avoid conflict in the snapshot repo we should not
deploy jars built from this.

--
Jeremy

On Mar 22, 2007, at 7:39 AM, Meeraj Kunnumpurath wrote:

  Jeremy,
 
  This is the definitive list, thanks to Mario.
 
  java/spec/commonj/
  java/spec/sca-api-r1.0/
  java/sca/kernel/
  java/sca/runtime/
  java/sca/services/
  java/sca/contrib/discovery/
  java/sca/contrib/discovery/jms
  java/sca/console/
  java/sca/core-samples/
  java/distribution/sca/demo.app
  java/distribution/sca/demo/
 
  Ta
  Meeraj
 
  -Original Message-
  From: Jeremy Boynes [mailto:[EMAIL PROTECTED]
  Sent: Thursday, March 22, 2007 1:52 PM
  To: tuscany-dev@ws.apache.org
  Subject: Re: Compilation status
 
  I think we should tag and deploy SNAPSHOTs of the revision used for
  the
  demo - that way people can build as much or as little as they wish.
If
  you can post the list, I get those modules tagged and deployed later
  today.
 
  --
  Jeremy
 
  On Mar 22, 2007, at 6:13 AM, Meeraj Kunnumpurath wrote:
 
  Mario,
 
  AFAIK extensions in trunk is still in a bit of a flux. If you want
to
  run the demo, you don't need to run the extensions (the demo uses
  Java
 
  container with local bindings), I will try to post a definitive list
  of tasks to build and run the demo later in the day, which will be
  useful to Simon as well.
 
  Ta
  Meeraj
 
  -Original Message-
  From: Antollini, Mario [mailto:[EMAIL PROTECTED]
  Sent: Thursday, March 22, 2007 12:29 PM
  To: tuscany-dev@ws.apache.org
  Subject: Compilation status
 
  Meeraj,
 
 
 
  I just wanted you to know that I am still not able to compile the
  code
 
  I checked out from SVN. The main problem is located in the
  *extensions* project. I have been modifying the pom files within
this
  project but I did not manage to get it compiled yet.
 
 
 
  Basically, the main problems are related to inconsistencies between
  parent references (e.g.; axis2's root project is using groupId
  *org.apache.tuscany.sca.axis2* while the plugin subproject states
  that
 
  its parent is *org.apache.tuscany.sca.extensions.axis2*).
 
 
 
  Any tips about this?
 
 
 
  Thanks,
 
  Mario
 
 
 
 
-
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi Jeremy

I'm giving the TSSS demo distribution module you created a spin...

I checked it out and along came all of the svn external dependencies
I did a mvn install at the top level and got a clean build
I took the zip that was generated in the demo project and unzipped it (I
just guessed that this was what was required)

It doesn't run straight off though, as there are a number of dependencies
missing from the various directories. I'm working my way through them now,
e.g. I'm adding the following to the boot directory of the master profile
  tuscany-jms-discovery-0.1

Re: Tag for TSSS demo code

2007-03-26 Thread Simon Laws

On 3/26/07, Simon Laws [EMAIL PROTECTED] wrote:




On 3/26/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:

 Simon,

 Did you start ActiveMQ before you started the master?

 Ta
 Meeraj


 From: Simon Laws [EMAIL PROTECTED]
 Reply-To: tuscany-dev@ws.apache.org
 To: tuscany-dev@ws.apache.org
 Subject: Re: Tag for TSSS demo code
 Date: Mon, 26 Mar 2007 17:35:24 +0100
 
 On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:
 
 I've created a tag corresponding to the code used to build the demo
 (r520715) and added a trivial pom to build the lot. To build:
 $ svn co https://svn.apache.org/repos/asf/incubator/tuscany/tags/java/tsss-demo
 $ mvn install
 
 I did not change the versions in the poms so they are using the same
 ones as trunk. To avoid conflict in the snapshot repo we should not
 deploy jars built from this.
 
 --
 Jeremy
 
 On Mar 22, 2007, at 7:39 AM, Meeraj Kunnumpurath wrote:
 
   Jeremy,
  
    This is the definitive list, thanks to Mario.
  
   java/spec/commonj/
   java/spec/sca-api-r1.0/
   java/sca/kernel/
   java/sca/runtime/
   java/sca/services/
   java/sca/contrib/discovery/
   java/sca/contrib/discovery/jms
   java/sca/console/
   java/sca/core-samples/
   java/distribution/sca/demo.app
   java/distribution/sca/demo/
  
   Ta
   Meeraj
  
   -Original Message-
   From: Jeremy Boynes [mailto:[EMAIL PROTECTED]
   Sent: Thursday, March 22, 2007 1:52 PM
   To: tuscany-dev@ws.apache.org
   Subject: Re: Compilation status
  
   I think we should tag and deploy SNAPSHOTs of the revision used for

   the
   demo - that way people can build as much or as little as they wish.
 If
   you can post the list, I get those modules tagged and deployed
 later
   today.
  
   --
   Jeremy
  
   On Mar 22, 2007, at 6:13 AM, Meeraj Kunnumpurath wrote:
  
   Mario,
  
   AFAIK extensions in trunk is still in a bit of a flux. If you want
 to
   run the demo, you don't need to run the extensions (the demo uses
   Java
  
   container with local bindings), I will try to post a definitive
 list
   of tasks to build and run the demo later in the day, which will be
   useful to Simon as well.
  
   Ta
   Meeraj
  
   -Original Message-
   From: Antollini, Mario [mailto:[EMAIL PROTECTED]
   Sent: Thursday, March 22, 2007 12:29 PM
   To: tuscany-dev@ws.apache.org
   Subject: Compilation status
  
   Meeraj,
  
  
  
   I just wanted you to know that I am still not able to compile the
   code
  
   I checked out from SVN. The main problem is located in the
   *extensions* project. I have been modifying the pom files within
 this
   project but I did not manage to get it compiled yet.
  
  
  
   Basically, the main problems are related to inconsistencies
 between
   parent references (e.g.; axis2's root project is using groupId
   *org.apache.tuscany.sca.axis2* while the plugin subproject states
   that
  
   its parent is *org.apache.tuscany.sca.extensions.axis2*).
  
  
  
   Any tips about this?
  
  
  
   Thanks,
  
   Mario
  
  
  
  
 -
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, e-mail: [EMAIL PROTECTED]
  
  
  
  
 -
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, e-mail: [EMAIL PROTECTED]
  
  

  
  
 -
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, e-mail: [EMAIL PROTECTED]
  
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 Hi Jeremy
 
 I'm giving the tss demo distribution module you created a spin...
 
 I checked it out and along came all of the svn external dependencies
 I did a mvn install at the top level and got a clean build
 I took the zip that was generated in the demo project and unzipped it
 (I
 just guessed that this was what was required)
 
 It doesn't

Re: [VOTE] Adopt a near-zero-tolerance Be Nice policy

2007-03-26 Thread Simon Laws

On 3/26/07, Davanum Srinivas [EMAIL PROTECTED] wrote:


Touché :)

On 3/26/07, Frank Budinsky [EMAIL PROTECTED] wrote:
 +1, and here's a first test case of saying what I really think. I hope
 nobody is going to slam me :-)

 I think Ant's suggestion should go without saying. The fact that we need
 to have a vote as juvenile as this one, makes it hard for any of us to
 maintain our dignity.

 Frank.


 kelvin goodson [EMAIL PROTECTED] wrote on 03/26/2007 02:38:51
 PM:

  +1
 
  On 26/03/07, ant elder [EMAIL PROTECTED] wrote:
  
   I'd like to have a near-zero-tolerance Be Nice policy on the
Tuscany
   mailing lists where we don't allow participants to slam anyone's
 posts.
   When
   replying to email we need to do it in a way that maintains the
 original
   authors dignity.
  
   We've some tough things to work out over the next days and we're
only
   going
   to achieve any real consensus if we all feel the environment allows
us
 to
   say what we really think.
  
   I'd really like to see everyone vote on this, surely if nothing else
 this
   is
   something we can all agree on?
  
   Here's my +1.
  
  ...ant
  


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




--
Davanum Srinivas :: http://wso2.org/ :: Oxygen for Web Services Developers

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

+1


Re: Tag for TSSS demo code

2007-03-26 Thread Simon Laws

On 3/26/07, Antollini, Mario [EMAIL PROTECTED] wrote:


Simon,

I had the same problem as you. I solved it by starting ActiveMQ first.

The final steps are:

1 - Download ActiveMQ:
http://people.apache.org/repo/m2-incubating-repository/org/apache/activemq/apache-activemq/4.1.0-incubator/apache-activemq-4.1.0-incubator.zip
2 - uncompress it somewhere and start it (...\bin\activemq.bat)
3 - compile the TSSS demo
4 - uncompress
...\tuscany\java\distribution\sca\demo\target\demo-2.0-alpha2-incubating-SNAPSHOT-bin.zip
5 - go to the bin directory of the uncompressed file and run:
java -Dtuscany.adminPort=2000 -jar start.server.jar master

Mario

-Original Message-
From: Simon Laws [mailto:[EMAIL PROTECTED]
Sent: Monday, March 26, 2007 2:41 PM
To: tuscany-dev@ws.apache.org
Subject: Re: Tag for TSSS demo code

On 3/26/07, Simon Laws [EMAIL PROTECTED] wrote:



 On 3/26/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:
 
  Simon,
 
  Did you start ActiveMQ before you started the master?
 
  Ta
  Meeraj
 
 
  From: Simon Laws [EMAIL PROTECTED]
  Reply-To: tuscany-dev@ws.apache.org
  To: tuscany-dev@ws.apache.org
  Subject: Re: Tag for TSSS demo code
  Date: Mon, 26 Mar 2007 17:35:24 +0100
  
  On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:
  
  I've created a tag corresponding to the code used to build the
demo
  (r520715) and added a trivial pom to build the lot. To build:
  $ svn co https://svn.apache.org/repos/asf/incubator/tuscany/tags/java/tsss-demo
  $ mvn install
  
  I did not change the versions in the poms so they are using the
same
  ones as trunk. To avoid conflict in the snapshot repo we should
not
  deploy jars built from this.
  
  --
  Jeremy
  
  On Mar 22, 2007, at 7:39 AM, Meeraj Kunnumpurath wrote:
  
Jeremy,
   
 This is the definitive list, thanks to Mario.
   
java/spec/commonj/
java/spec/sca-api-r1.0/
java/sca/kernel/
java/sca/runtime/
java/sca/services/
java/sca/contrib/discovery/
java/sca/contrib/discovery/jms
java/sca/console/
java/sca/core-samples/
java/distribution/sca/demo.app
java/distribution/sca/demo/
   
Ta
Meeraj
   
-Original Message-
From: Jeremy Boynes [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 22, 2007 1:52 PM
To: tuscany-dev@ws.apache.org
Subject: Re: Compilation status
   
I think we should tag and deploy SNAPSHOTs of the revision used
for
 
the
demo - that way people can build as much or as little as they
wish.
  If
you can post the list, I get those modules tagged and deployed
  later
today.
   
--
Jeremy
   
On Mar 22, 2007, at 6:13 AM, Meeraj Kunnumpurath wrote:
   
Mario,
   
AFAIK extensions in trunk is still in a bit of a flux. If you
want
  to
run the demo, you don't need to run the extensions (the demo
uses
Java
   
container with local bindings), I will try to post a
 definitive
  list
of tasks to build and run the demo later in the day, which
will be
useful to Simon as well.
   
Ta
Meeraj
   
-Original Message-
From: Antollini, Mario [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 22, 2007 12:29 PM
To: tuscany-dev@ws.apache.org
Subject: Compilation status
   
Meeraj,
   
   
   
I just wanted you to know that I am still not able to compile
the
code
   
I checked out from SVN. The main problem is located in the
*extensions* project. I have been modifying the pom files
within
  this
project but I did not manage to get it compiled yet.
   
   
   
Basically, the main problems are related to inconsistencies
  between
parent references (e.g.; axis2's root project is using groupId
*org.apache.tuscany.sca.axis2* while the plugin subproject
states
that
   
its parent is *org.apache.tuscany.sca.extensions.axis2*).
   
   
   
Any tips about this?
   
   
   
Thanks,
   
Mario
   
   
   
   
 
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail:
[EMAIL PROTECTED

Re: [Discussion] Tuscany kernel modulization

2007-03-26 Thread Simon Laws

On 3/26/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

By reading through a bunch of e-mails on this mailing list and adding my
imagination, I put together a conceptual diagram at the following wiki
page
to illustrate the kernel modulization.


http://cwiki.apache.org/confluence/display/TUSCANY/Kernel+Modulization+Design+Discussions

This diagram is merely for discussion purposes and only reflects my
understanding so far. By no means is it meant to be complete and
accurate.
But I hope we can use it as the starting point for the technical
discussion.

Going through this exercise, I started to see the value of the efforts,
which would lead to greater adoption of Tuscany/SCA by various embedders,
including other Apache projects, and simplification of the kernel so that
the community can work together. You might take the following items as my
crazy brainstorming:

1) Allow different ways to populate the assembly model: from SCDL, from
other Domain Specific Languages (DSL) or even from another model such as
the
Spring context. On the other hand, make it possible to convert the SCA assembly
model to be executed by other frameworks such as Spring.

2) Improve the federation story so that we can plugin different federation
mechanisms, such as P2P discovery-based or repository-based schemes.

3) Bootstrap the Tuscany kernel without the Java container in case that
either the hosting runtime is resource-constrained (for example, cellular
phones or network appliances) or it doesn't need the support for POJOs
(for example, scripting for Web 2.0).

4) Provide more flexibility to integrate with different hosting
environments
with a subset of kernel modules.
...

I guess I throw out enough seeds for thoughts and now it's your turn :-).

Thanks,
Raymond



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

A simple question to start, Raymond: can you give a quick rundown of the
colour coding? I see that the colours group related items but some of the
groups I'm not sure about. Is it intended to relate the modules to what is
in the kernel now?

Simon


Re: Tag for TSSS demo code

2007-03-26 Thread Simon Laws

On 3/27/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:


Sorry for the late replies, Simon. I am offsite in India for the next two
weeks.

Regarding the SCDL, there was a post earlier in the list with the SCDL
(from
me). You can use the calculator scdl in core-samples and remove the client
component, also each component needs to be targeted with a runtimeId
attribute. Obviously, you will have to start the targeted node on a
different JMX port, before you deploy the SCDL.

On the exception, IIRC, you can register a formatter that will print the
required information.

HTH
Meeraj


From: Simon Laws [EMAIL PROTECTED]
Reply-To: tuscany-dev@ws.apache.org
To: tuscany-dev@ws.apache.org
Subject: Re: Tag for TSSS demo code
Date: Mon, 26 Mar 2007 22:54:43 +0100

On 3/26/07, Antollini, Mario [EMAIL PROTECTED] wrote:

Simon,

I had the same problem as you. I solved it by starting ActiveMQ first.

The final steps are:

1 - Download ActiveMQ:
http://people.apache.org/repo/m2-incubating-repository/org/apache/activemq/apache-activemq/4.1.0-incubator/apache-activemq-4.1.0-incubator.zip
2 - uncompress it somewhere and start it (...\bin\activemq.bat)
3 - compile the TSSS demo
4 - uncompress
...\tuscany\java\distribution\sca\demo\target\demo-2.0-alpha2-incubating-SNAPSHOT-bin.zip
5 - go to the bin directory of the uncompressed file and run:
java -Dtuscany.adminPort=2000 -jar start.server.jar master

Mario

-Original Message-
From: Simon Laws [mailto:[EMAIL PROTECTED]
Sent: Monday, March 26, 2007 2:41 PM
To: tuscany-dev@ws.apache.org
Subject: Re: Tag for TSSS demo code

On 3/26/07, Simon Laws [EMAIL PROTECTED] wrote:
 
 
 
  On 3/26/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:
  
   Simon,
  
   Did you start ActiveMQ before you started the master?
  
   Ta
   Meeraj
  
  
   From: Simon Laws [EMAIL PROTECTED]
   Reply-To: tuscany-dev@ws.apache.org
   To: tuscany-dev@ws.apache.org
   Subject: Re: Tag for TSSS demo code
   Date: Mon, 26 Mar 2007 17:35:24 +0100
   
   On 3/22/07, Jeremy Boynes [EMAIL PROTECTED] wrote:
   
   I've created a tag corresponding to the code used to build the
demo
   (r520715) and added a trivial pom to build the lot. To build:
   $ svn co https://svn.apache.org/repos/asf/incubator/tuscany/tags/java/tsss-demo
   $ mvn install
   
   I did not change the versions in the poms so they are using the
same
   ones as trunk. To avoid conflict in the snapshot repo we should
not
   deploy jars built from this.
   
   --
   Jeremy
   
   On Mar 22, 2007, at 7:39 AM, Meeraj Kunnumpurath wrote:
   
 Jeremy,

 This is the definitive list, thanks to Mario.

 java/spec/commonj/
 java/spec/sca-api-r1.0/
 java/sca/kernel/
 java/sca/runtime/
 java/sca/services/
 java/sca/contrib/discovery/
 java/sca/contrib/discovery/jms
 java/sca/console/
 java/sca/core-samples/
 java/distribution/sca/demo.app
 java/distribution/sca/demo/

 Ta
 Meeraj

 -Original Message-
 From: Jeremy Boynes [mailto:[EMAIL PROTECTED]
 Sent: Thursday, March 22, 2007 1:52 PM
 To: tuscany-dev@ws.apache.org
 Subject: Re: Compilation status

 I think we should tag and deploy SNAPSHOTs of the revision
used
for
  
 the
 demo - that way people can build as much or as little as they
wish.
   If
 you can post the list, I'll get those modules tagged and deployed
   later
 today.

 --
 Jeremy

 On Mar 22, 2007, at 6:13 AM, Meeraj Kunnumpurath wrote:

 Mario,

 AFAIK extensions in trunk are still in a bit of flux. If you
want
   to
 run the demo, you don't need to run the extensions (the demo
uses
 Java

 container with local bindings), I will try to post a
 definitive
   list
 of tasks to build and run the demo later in the day, which
will be
 useful to Simon as well.

 Ta
 Meeraj

 -Original Message-
 From: Antollini, Mario [mailto:[EMAIL PROTECTED]
 Sent: Thursday, March 22, 2007 12:29 PM
 To: tuscany-dev@ws.apache.org
 Subject: Compilation status

 Meeraj,



 I just wanted you to know that I am still not able to compile
the
 code

 I checked out from SVN. The main problem is located in the
 *extensions* project. I have been modifying the pom files
within
   this
 project but I did not manage to get it compiled yet.



 Basically, the main problems are related to inconsistencies
   between
 parent references (e.g., axis2's root project is using
groupId
 *org.apache.tuscany.sca.axis2* while the plugin subproject
states
 that

 its parent is *org.apache.tuscany.sca.extensions.axis2*).



 Any tips about this?



 Thanks,

 Mario


 This message has been checked for all email viruses by
   MessageLabs

Re: Tag for TSSS demo code

2007-03-27 Thread Simon Laws

On 3/27/07, Simon Laws [EMAIL PROTECTED] wrote:




On 3/27/07, Meeraj Kunnumpurath [EMAIL PROTECTED] wrote:

 Sorry for late replies Simon, I am offsite in India for the next two
 weeks.

 Regarding the SCDL, there was a post earlier in the list with the SCDL
 (from
 me). You can use the calculator scdl in core-samples and remove the
 client
 component, also each component needs to be targeted with a runtimeId
 attribute. Obviously, you will have to start the targeted node on a
 different JMX port, before you deploy the SCDL.

 On the exception, IIRC, you can register a formatter that will print the

 required information.

 HTH
 Meeraj



Re: Discovery update

2007-03-27 Thread Simon Laws

On 3/27/07, Antollini, Mario [EMAIL PROTECTED] wrote:


Meeraj,

I finally got JXTA working! The problem was that the message being sent
was null...

In JxtaDiscoverService.java the code for sending the message was:

public int sendMessage(final String runtimeId, final XMLStreamReader content)
        throws DiscoveryException {

    if (content == null) {
        throw new IllegalArgumentException("Content id is null");
    }

    PeerID peerID = null;
    if (runtimeId != null) {
        peerID = peerListener.getPeerId(runtimeId);
        if (peerID == null) {
            throw new DiscoveryException("Unrecognized runtime " + runtimeId);
        }
    }

    String message = null;
    try {
        StaxUtil.serialize(content);    // note: the return value is discarded here
    } catch (XMLStreamException ex) {
        throw new DiscoveryException(ex);
    }



So, note that the return value of StaxUtil.serialize(content) is never
assigned to message.

Besides that, remember that when you try to contribute the SCDL (via the
browser), there is an exception since it is trying to send the message
to the peer called slave and there is no such peer in the network.
Therefore, I did another modification to the sendMessage method in order
to send the message to all the peers (just to see if it works). So, the
working piece of code is:


public int sendMessage(String runtimeId, final XMLStreamReader content)
        throws DiscoveryException {

    runtimeId = null;

    if (content == null) {
        throw new IllegalArgumentException("Content id is null");
    }

    PeerID peerID = null;
    if (runtimeId != null) {
        peerID = peerListener.getPeerId(runtimeId);
        if (peerID == null) {
            throw new DiscoveryException("Unrecognized runtime " + runtimeId);
        }
    }

    String message = null;
    try {
        message = StaxUtil.serialize(content);    // fixed: keep the serialized message
    } catch (XMLStreamException ex) {
        throw new DiscoveryException(ex);
    }



Note that I removed the final keyword from the runtimeId parameter in
order to set it to null in the first statement of the method (to allow
broadcast of the message).
In addition to that, I changed StaxUtil.serialize(content); to
 message = StaxUtil.serialize(content);
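The failure mode is easy to reproduce outside JXTA: a method that returns its result, called as if it wrote into the caller's variable, leaves that variable null. A minimal self-contained sketch (serialize here is a stand-in for StaxUtil.serialize, not the real API):

```java
public class DiscardedReturnSketch {
    // Stand-in for StaxUtil.serialize(...): it returns the serialized
    // form rather than mutating anything the caller holds.
    static String serialize(String content) {
        return "<msg>" + content + "</msg>";
    }

    public static void main(String[] args) {
        String message = null;
        serialize("hello");           // bug: return value silently discarded
        if (message != null) throw new AssertionError("message should still be null");

        message = serialize("hello"); // fix: assign the returned value
        if (!"<msg>hello</msg>".equals(message)) throw new AssertionError("bad message");
        System.out.println("ok");
    }
}
```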

And that is all I did and after pressing the Contribute SCDL button, I
saw in both slaves' console window a system.out I added to the
processQuery(ResolverQueryMsg queryMessage) method in
TuscanyQueryHandler.java.

So, now it is important to know why the runtimeId arrives with a value
equal to slave. I had already tried to figure it out and sent you an
email, remember? I am copying it here just in case:


Now, I was trying to understand where the wrong target name comes from
and I think the problem could be that the AssemblyServiceImpl class is
setting the wrong id in the include method:
.
// create physical wire definitions
for (ComponentDefinition? child :
type.getDeclaredComponents().values()) {
  URI id = child.getRuntimeId()
.

Since it finally invokes marshallAndSend(id, context), which in
turn invokes the
discoveryService.sendMessage(id.toASCIIString(), pcsReader) method,
which ends up in an invocation to JxtaDiscoveryService.sendMessage(...)
with the wrong runtimeId (i.e., slave).

So, as you can see, it seems that the problem comes from some place
outside of the scope of JXTA and I am not experienced enough to deal
with such issue. Do you have any idea where the slave id is being
wrongly set?


Ok, I hope it is all useful and if you need any further help, please do
not hesitate to contact me.

Best regards,
Mario



-Original Message-
From: Meeraj Kunnumpurath [mailto:[EMAIL PROTECTED]
Sent: Sunday, March 25, 2007 5:52 PM
To: tuscany-dev@ws.apache.org
Subject: RE: Discovery update

Thanks Mario. If you have any more queries, pls post to the list.

Ta
Meeraj


From: Antollini, Mario [EMAIL PROTECTED]
Reply-To: tuscany-dev@ws.apache.org
To: tuscany-dev@ws.apache.org
Subject: RE: Discovery update
Date: Sun, 25 Mar 2007 07:53:39 -0700

Meeraj,

You were right, it is not working yet. I am still struggling with it.
I'll come back to you as soon as I have any news about it.

Regards,
Mario

-Original Message-
From: Meeraj Kunnumpurath [mailto:[EMAIL PROTECTED]
Sent: Friday, March 23, 2007 8:16 PM
To: tuscany-dev@ws.apache.org
Subject: RE: Discovery update

Mario,

By hard-coding the runtime id of the target peer, did the message
actually reach the intended peer? I.e., did you see any log messages
on
the console window of slave1 or slave2?

Thanks
Meeraj

  -Original Message-
  From: Antollini, Mario [mailto:[EMAIL PROTECTED]
  Sent: 23 March 2007 21:02
  To: tuscany-dev@ws.apache.org
  Subject: RE: Discovery update
 
  Meeraj,
 
  I got the JXTA working for sending messages. However, what I
  did was just find the error and patch it, so I just
  

Re: Merge improved databinding code into trunk

2007-03-28 Thread Simon Laws

On 3/28/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

I'll go ahead to commit the last piece which integrates the databinding
framework with the latest core if there are no other concerns.

The new picture will be:

kernel/core: will depend on databinding-framework (the dependency would be
removed as the core is further decomposed)
services/databinding/databinding-framework: Databinding SPIs and built-in
transformers for XML
kernel/databinding: Databinding-related WirePostProcessors and Interceptors

Thanks,
Raymond

- Original Message -
From: Raymond Feng [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Friday, March 16, 2007 10:38 PM
Subject: Merge improved databinding code into trunk


 Hi,

 As you might have noticed on the ML, I have improved the databinding
code
 in the sca-java-integration branch over time. I would like to merge the
 changes back to the trunk and bring up the databinding support again in
 the trunk.

 Here is the summary of the improvements:

 1. Minimize the usage of @DataType by aggressively introspecting the Java
 classes

 2. Data transformation for business exceptions

 3. Add copy() support for JAXB and AXIOM

 4. More databindings and transformers such as:
   * JSON databinding
   * SDO -- AXIOM using OMDataSource
   * JavaBean -- XMLStreamReader

 5. More unit and integration test cases for better coverage
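As an illustration of the kind of transformation item 4 refers to, here is a plain-StAX sketch of converting between a JavaBean-style value and XML read through an XMLStreamReader. This is not the Tuscany transformer API; the Person bean and the helper methods are made up for the example:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class BeanXmlSketch {
    // A trivial bean standing in for an application data type.
    static class Person {
        String name;
    }

    // Bean -> XML string (a real transformer would emit StAX events instead).
    static String toXml(Person p) {
        return "<person><name>" + p.name + "</name></person>";
    }

    // XML -> bean, pulling events from an XMLStreamReader.
    static Person fromXml(String xml) throws Exception {
        XMLStreamReader reader =
            XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
        Person p = new Person();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "name".equals(reader.getLocalName())) {
                p.name = reader.getElementText();
            }
        }
        return p;
    }

    public static void main(String[] args) throws Exception {
        Person in = new Person();
        in.name = "Simon";
        Person out = fromXml(toXml(in));   // round-trip through XML
        if (!"Simon".equals(out.name)) throw new AssertionError();
        System.out.println(out.name);
    }
}
```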

 To avoid the disruption, I'll stage them as follows:
 1) Add a databinding-framework module under
 java/sca/services/databinding to hold the updated code in spi and
core.
 2) Update individual databindings
 3) Integrate the databinding pieces with the latest core

 Thanks,
 Raymond



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi


If you are updating databinding functionality in the trunk I would like to
copy the databinding itests from the integration branch into the trunk. We
can use the itests to test the updated databinding functionality. It also
gives us a chance to shake down the tests a little more. Does that sound OK
to everyone?

Regards

Simon


Re: [VOTE] Use single version for all Java/SCA modules and enable building all modules together

2007-03-29 Thread Simon Laws

On 3/29/07, Ignacio Silva-Lepe [EMAIL PROTECTED] wrote:


If I understand your clarification correctly, this vote is about putting
out
a single release with a certain number of modules in it, and with each
module having the same version number. In particular, this vote does
not set a cast-in-stone precedent about how future releases will be put
together. Also, it is assumed that consensus will eventually be able to
be reached about what the modules in the release will be (without this
assumption, this vote seems pointless to me).

While it is not clear to me that this is the best approach to follow from
an open source project point of view, with the constraints and assumptions
above it seems to me to be reasonably safe to vote +1.

So, if the constraints and assumptions above are true, then here's
my +1.

Thanks

On 3/29/07, ant elder [EMAIL PROTECTED] wrote:

 I wasn't clear enough when starting this vote, let me try to fix that:

 I intended this vote to *be* the short term reality, I.e. about getting
a
 release out from trunk with everyone contributing.

 It may well be right that a single module version doesn't scale, in the
 longer term maybe we need to adopt one of the many proposals that have been
 made
 on this subject. We can revisit all this once we've managed to get the
 next
 release out.

 The specifics of what extensions are included in this release is left
out
 of
 this vote and can be decided in the release plan discussion. All this
vote
 is saying is that all the modules that are to be included in this next
 release will have the same version and that a top level pom.xml will
exist
 to enable building all those modules at once. We don't have so many
 extensions planned for the next release so I think this will scale ok for

 now.

 Does that help at all? Would any more of you vote for this now?

 If there isn't a reasonably large majority one way or the other I'm fine
 with forgetting this vote and going back to the drawing board for a
 different approach. I'd really prefer not to have to do that though as
we
 need to start finding some ways to make progress, so lets keep this
going
 till tomorrow to see how the votes look then.

   ...ant

 On 3/28/07, Raymond Feng [EMAIL PROTECTED] wrote:
 
  Hi, Bert.
 
  I think I'm with you on the proposal. The rule of the game should be
 very
  simple as follows:
 
  Let's agree on a set of modules that are supposed to work together for
 the
  next target (a release or a demo), then define an assembly and
 pom.xml to
  enforce the cohesions.
 
  Thanks,
  Raymond
 
  - Original Message -
  From: Bert Lamb  [EMAIL PROTECTED]
  To: tuscany-dev@ws.apache.org
  Sent: Wednesday, March 28, 2007 1:28 PM
  Subject: Re: [VOTE] Use single version for all Java/SCA modules and
 enable
  building all modules together
 
 
   Would something like what I outlined in this email[1] be more
amenable
   to people voting against this proposal?
  
   -Bert
  
   [1]
 http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html
  
   On 3/28/07, Meeraj Kunnumpurath [EMAIL PROTECTED]
wrote:
   I have expressed my views on all modules sharing the same version
and
 a
   top
   down build in quite a bit of detail in my previous emails on the
same
   subject. Unfortunately, I will have to vote -1 on this.
  
   Meeraj
  
  
   From: Jim Marino [EMAIL PROTECTED]
   Reply-To: tuscany-dev@ws.apache.org
   To: tuscany-dev@ws.apache.org
   Subject: Re: [VOTE] Use single version for all Java/SCA modules
and
   enable
   building all modules together
   Date: Wed, 28 Mar 2007 12:19:53 -0700
   
   
   On Mar 28, 2007, at 12:51 AM, ant elder wrote:
   
   Here's the vote on this I said [1] I'd start to get closure on
this
   issue.
   
   The proposal is to have top-level pom for the Java SCA project
that
   enables
   building all the modules together - kernel, services, runtimes,
   extensions
   etc, and for that to work all those modules need to use the same
   version
   name.
   
   Here's my +1.
   
  ...ant
   
   [1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/
   msg16024.html
   
   
   
   There has been no proposal for how to resolve the issue
  about  building
   extensions using multiple versions of kernel and how modules  on
   different
   release schedules requiring different levels of kernel  or plugins
  will
   be
   handled.
   
   Until we can come up with a solution for these issues, I feel I
  have  to
   vote against the proposal.
   
   -1
   
   Jim
   
  
 -
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, e-mail: [EMAIL PROTECTED]
   
  
   _
   Match.com - Click Here To Find Singles In Your Area Today!
   http://msnuk.match.com/
  
  
  
-
   To unsubscribe, e-mail: [EMAIL PROTECTED]
   For additional commands, 

Re: Unpack issues with Tuscany

2007-03-30 Thread Simon Laws

On 3/29/07, Brian Fox [EMAIL PROTECTED] wrote:


Hi,
I'm one of the Maven developers next-door at apache and the main
developer for the maven-dependency-plugin. We've had a few requests
recently from Tuscany users who have problems with the instructions or
with the pom. (I haven't found the instructions yet so I can't be
positive) You can see this thread for more info:
http://www.nabble.com/mvn-dependency%3Aunpack-tf3436260s177.html#a9580702

It seems that the instructions indicate to run mvn
dependency:unpack, however the POM isn't setup correctly to do this.
I recently added a FAQ to the site as well as more examples for this
use case here:
http://maven.apache.org/plugins/maven-dependency-plugin/faq.html#question

I figured I'd pop in to see if we can try to get this fixed up so the
users don't have a bad experience both with Tuscany and Maven. Let me
know if there's anything I can do to help.

-Brian

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi Brian, thanks for dropping by. I worked with Muhwas to get him a

workaround so he could get our M2 tests running. I couldn't get the Tuscany M2
samples to work either following the instructions as documented in the
readme. It specifically says to do mvn dependency:unpack but I can't find
anywhere that the maven-dependency-plugin is configured in the pom hierarchy
for our M2 release. What is the intended behaviour if you do mvn
dependency:unpack against a pom with no maven-dependency-plugin
configuration? Should it do anything sensible?

I wasn't very close to the M2 release but I know people tried the samples
before cutting the tag so something strange is going on. Anyhow I suspect
that it's a Tuscany problem and not a Maven problem but it needs more
investigation. If nothing else we need to fix our documentation!

Thanks again for posting this.

Simon


Re: Tuscany Unpack issues

2007-03-30 Thread Simon Laws

On 3/30/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi, Brian.

It's so nice of you to remind us. IIRC, the original problem was due to
the
newer versions of the maven-dependency-plugin as we referenced the
SNAPSHOT
version of the plugin in M2 driver. I think we have fixed the wrong
configuration in the latest code but I'll double-check.

BTW, do you know if there is a way in the pom.xml to use LATEST RELEASED
plugin? Maven takes the latest SNAPSHOTs if the version is not explicitly
specified. As we know, it's very risky for a release to depend on
SNAPSHOTs.

Thanks,
Raymond


- Original Message -
From: Brian Fox [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Thursday, March 29, 2007 4:31 PM
Subject: Tuscany Unpack issues


 Hi,
 I'm one of the Maven developers next-door at apache and the main
 developer for the maven-dependency-plugin. We've had a few requests
 recently from Tuscany users who have problems with the instructions or
 with the pom. (I haven't found the instructions yet so I can't be
 positive) You can see this thread for more info:

http://www.nabble.com/mvn-dependency%3Aunpack-tf3436260s177.html#a9580702

 It seems that the instructions indicate to run mvn
 dependency:unpack, however the POM isn't setup correctly to do this.
 I recently added a FAQ to the site as well as more examples for this
 use case here:

http://maven.apache.org/plugins/maven-dependency-plugin/faq.html#question

 I figured I'd pop in to see if we can try to get this fixed up so the
 users don't have a bad experience both with Tuscany and Maven. Let me
 know if there's anything I can do to help.

 -Brian

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi Raymond, snap. I just replied on the other thread. So what you are

saying is that our M2 poms are effectively wrong now. So we
need to do a point release to fix it or document a workaround on the
download page I guess. We should probably put a note up on the download page
now anyhow. Want me to go do that?

Regards

Simon


Re: Unpack issues with Tuscany

2007-03-30 Thread Simon Laws

On 3/30/07, Brian E. Fox [EMAIL PROTECTED] wrote:


Simon,
The dependency:unpack and copy goals are intended to be used by binding
to a phase in the pom. They use custom objects (ArtifactItems) that
can't be defined on the CLI. If you want to run it from the CLI, you
have to first add the configuration to the pom as described on the link
I sent below.

Another alternative that doesn't require pom config would be
unpack-dependencies but that will grab all dependencies...there are some
flags that could be used to filter them that are doc'd on the plugin
page.

Based on what it looks like you're doing, you want unpack but just need
to put the config in the pom. Alternatively, you could add it to the
build and make it happen for the user automatically.
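The pom configuration Brian describes would look roughly like the following sketch, based on the maven-dependency-plugin documentation; the artifact coordinates, execution id, and output directory are placeholders, not values from the Tuscany build:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <!-- bind the unpack goal to a build phase so it can use artifactItems -->
          <id>unpack-sample</id>
          <phase>package</phase>
          <goals>
            <goal>unpack</goal>
          </goals>
          <configuration>
            <artifactItems>
              <artifactItem>
                <!-- placeholder coordinates: substitute the real artifact -->
                <groupId>org.apache.tuscany</groupId>
                <artifactId>some-artifact</artifactId>
                <version>1.0</version>
                <outputDirectory>${project.build.directory}/unpacked</outputDirectory>
              </artifactItem>
            </artifactItems>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With the goal bound to a phase like this, the unpack happens automatically during `mvn package`, so users never need to invoke `dependency:unpack` from the command line.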

-Original Message-
From: Simon Laws [mailto:[EMAIL PROTECTED]
Sent: Friday, March 30, 2007 12:03 PM
To: tuscany-dev@ws.apache.org
Subject: Re: Unpack issues with Tuscany

On 3/29/07, Brian Fox [EMAIL PROTECTED] wrote:

 Hi,
 I'm one of the Maven developers next-door at apache and the main
 developer for the maven-dependency-plugin. We've had a few requests
 recently from Tuscany users who have problems with the instructions or

 with the pom. (I haven't found the instructions yet so I can't be
 positive) You can see this thread for more info:
 http://www.nabble.com/mvn-dependency%3Aunpack-tf3436260s177.html#a9580
 702

 It seems that the instructions indicate to run mvn
 dependency:unpack, however the POM isn't setup correctly to do this.
 I recently added a FAQ to the site as well as more examples for this
 use case here:
 http://maven.apache.org/plugins/maven-dependency-plugin/faq.html#quest
 ion

 I figured I'd pop in to see if we can try to get this fixed up so the
 users don't have a bad experience both with Tuscany and Maven. Let me
 know if there's anything I can do to help.

 -Brian

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Hi Brian, thanks for dropping by. I worked with Muhwas to get him a
workaround so he could get our M2 tests running. I couldn't get the Tuscany
M2 samples to work either following the instructions as documented in
the readme. It specifically says to do mvn dependency:unpack but I
can't find anywhere that the maven-dependency-plugin is configured in
the pom hierarchy for our M2 release. What is the intended behaviour if
you do mvn dependency:unpack against a pom with no
maven-dependency-plugin configuration? Should it do anything sensible?

I wasn't very close to the M2 release but I know people tried the
samples before cutting the tag so something strange is going on. Anyhow
I suspect that it's a Tuscany problem and not a Maven problem but it
needs more investigation. If nothing else we need to fix our
documentation!

Thanks again for posting this.

Simon

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Great, thanks for that info Brian. Will take a look at it and see if we

can come up with a suitable fix.

Regards

Simon


Re: Merge improved databinding code into trunk

2007-04-02 Thread Simon Laws

On 3/29/07, Venkata Krishnan [EMAIL PROTECTED] wrote:


Hi Raymond,

Once you have done this, I'd like to get started with syncing up the trunk
for complex and many-valued properties since this depends on the
databinding
framework to transform property definitions in SCDLs to Java objects.

- Venkat

On 3/28/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 I'll go ahead to commit the last piece which integrates the databinding
 framework with the latest core if there are no other concerns.

 The new picture will be:

 kernel/core: will depend on databinding-framework (the dependency would
be
 removed as the core is further decomposed)
 services/databinding/databinding-framework: Databinding SPIs and
built-in
 transformers for XML
 kernel/databinding: Databinding related WirePostProcessors and
 Interceptors

 Thanks,
 Raymond

 - Original Message -
 From: Raymond Feng [EMAIL PROTECTED]
 To: tuscany-dev@ws.apache.org
 Sent: Friday, March 16, 2007 10:38 PM
 Subject: Merge improved databinding code into trunk


  Hi,
 
  As you might have noticed on the ML, I have improved the databinding
 code
  in the sca-java-integration branch over time. I would like to merge
the
  changes back to the trunk and bring up the databinding support again
in
  the trunk.
 
  Here is the summary of the improvements:
 
   1. Minimize the usage of @DataType by aggressively introspecting the
java
  classes
 
  2. Data transformation for business exceptions
 
  3. Add copy() support for JAXB and AXIOM
 
  4. More databindings and transformers such as:
* JSON databinding
* SDO -- AXIOM using OMDataSource
* JavaBean -- XMLStreamReader
 
  5. More unit and integration test cases for better coverage
 
  To avoid the disruption, I'll stage them as follows:
  1) Add a databinding-framework module under
  java/sca/services/databinding to hold the updated code in spi and
 core.
  2) Update individual databindings
  3) Integrate the databinding pieces with the latest core
 
  Thanks,
  Raymond
 


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Hi

I'm now in the process of copying the Databindings itest from the
integration branch to the trunk. I've copied tests for only a few simple
types but it doesn't work yet as there are problems with the axis extension
(WorkContext param missing from some method signatures). Once extension
problems are ironed out we can back fill with more complex type testing,
transformer tests etc. I don't have much time this week so will move it
along as and when I can grab 5 mins. It's not plumbed into the higher level
itest pom yet so it shouldn't break anything.

Regards

Simon


Which samples work?

2007-04-11 Thread Simon Laws

Ok, so back in from Easter hols and I've debugged through the Calculator
sample with the newly organized trunk. You guys have been busy! I'd like to
get some more of the samples on line and hence learn more about how it
works. I was thinking of having a crack at composite-impl because this looks
like it doesn't have complicated bindings. Is this a good one to pick? Is
someone else working on it?

Regards

Simon


Re: Which samples work?

2007-04-11 Thread Simon Laws

On 4/11/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Simon Laws wrote:
 Ok, so back in from Easter hols and I've debugged through the Calculator
 sample with the newly organized trunk. You guys have been busy! I'd
 like to
 get some more of the samples on line and hence learn more about how it
 works. I was thinking of having a crack at composite-impl because this
 looks
 like it doesn't have complicated bindings. Is this a good one to pick?
Is
 someone else working on it?

 Regards

 Simon


Yes, it looks like a good one to me. Once bindings have been brought
back up then we'll be able to run many more samples, but for now it's
probably best to start with the samples that only exercise composition
and Java components.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

great - thanks jean-sebastien. I'll give it a go.


Simon


Eclipse project generation

2007-04-11 Thread Simon Laws

To debug the calculator sample I did a mvn -Peclipse eclipse:eclipse in
java/sca and imported the calculator sample project, and all dependency
projects, into Eclipse. Is there a way of generating the workspace
automatically to avoid the import step? Looking at the eclipse plugin
documentation there doesn't seem to be an option.

Regards

Simon


Re: Eclipse project generation

2007-04-11 Thread Simon Laws

On 4/11/07, Luciano Resende [EMAIL PROTECTED] wrote:


Look at how people are doing it in Apache Abdera:
https://svn.apache.org/repos/asf/incubator/abdera/java/trunk/BUILDING

I think you can use :

mvn -Declipse.workspace=/path/to/workspace eclipse:eclipse



On 4/11/07, Simon Laws [EMAIL PROTECTED] wrote:

 To debug the calculator sample I did a mvn -Peclipse eclipse:eclipse in
 java/sca and imported the calculator sample project and all dependency
 projects, into eclipse.  Is there a way of generating the workspace
 automatically to avoid the import step? Looking at the eclipse plugin
 documentation there doesn't seem to be an option.

 Regards

 Simon




--
Luciano Resende
http://people.apache.org/~lresende


Thanks Luciano for pointing that out. I tried it and it seems to be trying
to do the right sort of thing but not quite getting all the way, i.e. the
resulting workspace doesn't load with projects in it. As their advice is to
start with the manual import I'll stick with what I have for now and
investigate alternatives in the future.

Regards

Simon


re: Project conventions

2007-04-12 Thread Simon Laws

On 4/12/07, haleh mahbod [EMAIL PROTECTED]  wrote:


Can you please add conventions for Java SCA to this page

http://cwiki.apache.org/confluence/display/TUSCANY/SCA+Java+Development

This makes it easier for contributors to find the information.

Thank you,
Haleh

On 4/11/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:

 [snip]
 Simon Laws wrote:
 
  3/ package names within the modules don't always match the module
name
  which makes it tricky to find classes sometimes. I don't have loads of

  examples here  but the problem I have was trying to find
   o.a.t.api.SCARuntime
 This is in the host-embedded module. Is it practical to suggest
that
  package names to at least contain the module name?
 

 I'm planning to start fixing some of these package names later this
 evening. For example the packages in implementation-java-runtime
 currently start with o.a.t.core, I'll rename them to
 o.a.t.implementation.java

 --
 Jean-Sebastien


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Hi

We have several Java pages on the wiki with develop* in the title. I added
another one yesterday as the others are not linked off the Java SCA menu and
I didn't notice them. They contain two types of information. Some is
infrastructure development information, like coding standards. Others contain
information for those wanting to develop applications. I've drawn together
the information that I think is to do with infrastructure development under
the Developer Guide link [1]. This is linked from the guides section of
the menu (I've left the old page in place for comparison but it should be
removed in due course). It obviously needs more work but we can collect info
here.

There was another developer guide page [2] which is to do with building
applications. This needs to be moved under the user guide section but I
think there is still a general confusion here because we have a hierarchy of
documents called "SCA Java" [3] and one called "Java SCA" [4]. We need to
combine these two and merge the information as appropriate. This is not
helped as the top level link called SCA Java actually links to Java SCA :-(.
Anyhow, I know people have been working in SCA Java [5] but that this isn't
linked from the top level. I'm happy to get stuck in and help sort this out
(and give Shelita some help) but I'm in the middle of something just at the
moment, so if someone else wants to make a start feel free.

Regards

Simon

[1] http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA+Developer+Guide

[2] http://cwiki.apache.org/confluence/display/TUSCANY/SCA+Developer+Guide
[3] http://cwiki.apache.org/confluence/display/TUSCANY/SCA+Java
[4] http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA
[5] http://www.mail-archive.com/[EMAIL PROTECTED]/msg16117.html


Re: JAVA_SCA_M2 slides

2007-04-12 Thread Simon Laws

On 4/11/07, Salvucci, Sebastian [EMAIL PROTECTED] wrote:


Hello,

I uploaded a document to
http://cwiki.apache.org/confluence/download/attachments/47512/TuscanyJAV
ASCA.pdf which contains some slides about Java SCA Runtime. They are
based on M2 but perhaps some graphics could serve to be reused for the
web site or future documentation.

Regards,



Sebastian Salvucci



Hi Sebastian


They look pretty cool to me. I particularly like the 3Dness of them. At the
level you have described the code I don't think it would be too much of a
problem to bring the diagrams into line with how the code is now. What tool
did you use to prepare the diagrams? Are you happy to share the source?

Regards

Simon


Composites implementing components problem

2007-04-12 Thread Simon Laws

I'm trying to bring the composite-impl sample up. The sample uses nested
composite files and it fails trying to wire up the references from a top
level component (which is implemented in a separate composite - see [1]) to
another component.

The failure happens during the connect phase of  DeployerImpl.deploy(). Here
it loops round all of the references specified in the model for the
component in question and then goes to the component implementation to get
the reference definition so it can subsequently create a wire. Here is the
top of the loop from DeployerImpl.connect() (I added some comments here to
highlight the points of interest)

   // for each of the references specified in the SCDL for the component
   for (ComponentReference ref : definition.getReferences()) {
       List<Wire> wires = new ArrayList<Wire>();
       String refName = ref.getName();
       // get the definition of the reference which is described by the
       // component implementation
       org.apache.tuscany.assembly.Reference refDefinition =
           getReference(definition.getImplementation(), refName);
       assert refDefinition != null;

So when it comes to SourceComponent [1] it finds that the component is
implemented by another composite. When this information is read into the
model by the CompositeProcessor there is code that specifically reads the
implementation.composite element, i.e.

   } else if
(IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {

   // Read an implementation.composite
   Composite implementation =
factory.createComposite();
   implementation.setName(getQName(reader, NAME));
   implementation.setUnresolved(true);
   component.setImplementation(implementation);

Now all this does as far as I can see is create a composite type with just
the composite name in it (I assume that the intention is to resolve this
later on). Hence the connect step fails because the component implementation
in our example has nothing in it. Specifically it has none of the reference
definition information that it would have to look in the other composite
file to get.

The problem is I'm not sure when this information is intended to be linked
up. During the resolve phase when this component implementation is reached
the resolver just finds a composite with nothing in it and, as far as I can
tell, just ignores it. How does the system know that this implementation
refers to a composite defined elsewhere rather than just defining a
composite with nothing in it?

I would assume at the resolve or optimize stages this should happen so that
we have a complete model when it comes time to build the runtime. Maybe we
need a new type or flag to indicate that this is a composite implementing a
component.  I'll keep plugging away but if someone could give me a pointer
that would be great?

[1]
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite
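
To make the resolve idea concrete, here is a small self-contained sketch (the class and method names are illustrative, not the actual Tuscany API) of how a resolve step could swap the empty placeholder Composite created while reading implementation.composite for a fully loaded composite registered under the same name:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the assembly model's Composite.
class Composite {
    final String name;          // qualified name of the composite
    boolean unresolved = true;  // placeholder until resolved
    Composite(String name) { this.name = name; }
}

// Hypothetical resolver: maps composite names to fully loaded models.
class ModelResolver {
    private final Map<String, Composite> registry = new HashMap<String, Composite>();

    void register(Composite c) { registry.put(c.name, c); }

    // Return the fully loaded model if one is registered; otherwise hand
    // the unresolved placeholder back so the caller can report a problem.
    Composite resolve(Composite placeholder) {
        Composite resolved = registry.get(placeholder.name);
        return resolved != null ? resolved : placeholder;
    }
}

public class ResolveSketch {
    public static void main(String[] args) {
        ModelResolver resolver = new ModelResolver();

        // The inner composite, fully loaded from its own .composite file.
        Composite inner = new Composite("InnerComposite");
        inner.unresolved = false;
        resolver.register(inner);

        // The placeholder created while reading implementation.composite.
        Composite placeholder = new Composite("InnerComposite");
        Composite result = resolver.resolve(placeholder);
        System.out.println(result.unresolved); // prints false: placeholder replaced
    }
}
```

If the resolve step never runs, or the inner composite was never registered, the placeholder stays unresolved and the later connect phase sees an implementation with no reference definitions in it, which matches the failure described above.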


Re: Wiring of Services and References ?

2007-04-12 Thread Simon Laws

On 4/11/07, Luciano Resende [EMAIL PROTECTED] wrote:


I'm trying to run the echo-binding testcases after updating the binding
implementation, but I'm getting the exception below.
Are we actually wiring services and references  ?

java.lang.IllegalStateException: java.lang.NullPointerException
    at org.apache.tuscany.api.SCARuntime.start(SCARuntime.java:170)
    at org.apache.tuscany.binding.echo.EchoBindingTestCase.setUp(EchoBindingTestCase.java:35)
    at junit.framework.TestCase.runBare(TestCase.java:132)
    at junit.framework.TestResult$1.protect(TestResult.java:110)
    at junit.framework.TestResult.runProtected(TestResult.java:128)
    at junit.framework.TestResult.run(TestResult.java:113)
    at junit.framework.TestCase.run(TestCase.java:124)
    at junit.framework.TestSuite.runTest(TestSuite.java:232)
    at junit.framework.TestSuite.run(TestSuite.java:227)
    at org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:35)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
Caused by: java.lang.NullPointerException
    at org.apache.tuscany.core.implementation.PojoAtomicComponent.start(PojoAtomicComponent.java:217)
    at org.apache.tuscany.host.embedded.SimpleRuntimeImpl.start(SimpleRuntimeImpl.java:158)
    at org.apache.tuscany.host.embedded.DefaultSCARuntime.startup(DefaultSCARuntime.java:50)
    at org.apache.tuscany.api.SCARuntime.start(SCARuntime.java:168)
    ... 15 more

--
Luciano Resende
http://people.apache.org/~lresende


Luciano

Did you get past this? I'm getting an NPE in composite-impl where the system
is trying to create wires (just posted on it). Be interested to know if you
solved your problem?

Regards

Simon


Re: Wiring of Services and References ?

2007-04-12 Thread Simon Laws

On 4/12/07, Venkata Krishnan [EMAIL PROTECTED] wrote:


Hi,

If the calculator sample is working then I suppose the wiring is in place
isn't it ?   Let me go and try this one.

- Venkat

On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 4/11/07, Luciano Resende [EMAIL PROTECTED] wrote:
 
  I'm trying to run the echo-binding testcases after updating the
binding
  implementation, but I'm getting the exception below.
  Are we actually wiring services and references  ?
 
  java.lang.IllegalStateException: java.lang.NullPointerException
      at org.apache.tuscany.api.SCARuntime.start(SCARuntime.java:170)
      at org.apache.tuscany.binding.echo.EchoBindingTestCase.setUp(EchoBindingTestCase.java:35)
      at junit.framework.TestCase.runBare(TestCase.java:132)
      at junit.framework.TestResult$1.protect(TestResult.java:110)
      at junit.framework.TestResult.runProtected(TestResult.java:128)
      at junit.framework.TestResult.run(TestResult.java:113)
      at junit.framework.TestCase.run(TestCase.java:124)
      at junit.framework.TestSuite.runTest(TestSuite.java:232)
      at junit.framework.TestSuite.run(TestSuite.java:227)
      at org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:35)
      at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
      at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
      at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
      at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
      at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
      at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
  Caused by: java.lang.NullPointerException
      at org.apache.tuscany.core.implementation.PojoAtomicComponent.start(PojoAtomicComponent.java:217)
      at org.apache.tuscany.host.embedded.SimpleRuntimeImpl.start(SimpleRuntimeImpl.java:158)
      at org.apache.tuscany.host.embedded.DefaultSCARuntime.startup(DefaultSCARuntime.java:50)
      at org.apache.tuscany.api.SCARuntime.start(SCARuntime.java:168)
      ... 15 more
 
  --
  Luciano Resende
  http://people.apache.org/~lresende
 
 Luciano

 Did you get past this. I'm getting an NPE in composite-impl where the
 system
 is trying to create wires (just posted on it). Be interested to know if
 you
 solved your problem?

 Regards

 Simon



Hi

I did go back and try the calculator with the latest code from head and it
still works for me. I am getting output from the CompositeUtil about the
problems it has found.

Composite configuration problem:
[EMAIL PROTECTED]

I've not looked into this one specifically but it doesn't stop the test
passing. I do get more of these problem reports in the composite-impl test
that I'm playing with and it is a real problem in that case. Still looking
at what's going on.

Regards

Simon


Re: JAVA_SCA_M2 slides

2007-04-12 Thread Simon Laws

On 4/12/07, Salvucci, Sebastian [EMAIL PROTECTED] wrote:


Hi Simon,
I'm happy to share the source. I just used PowerPoint for preparing the
diagrams. The ppt document is already uploaded; you can find it at:
http://cwiki.apache.org/confluence/download/attachments/47512/TuscanyJAV
ASCA.ppt
Regards,

+sebastian


-Original Message-
From: Simon Laws [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 4:40 AM
To: [EMAIL PROTECTED]
Subject: Re: JAVA_SCA_M2 slides

On 4/11/07, Salvucci, Sebastian [EMAIL PROTECTED] wrote:

 Hello,

 I uploaded a document to

http://cwiki.apache.org/confluence/download/attachments/47512/TuscanyJAV
 ASCA.pdf which contains some slides about Java SCA Runtime. They are
 based on M2 but perhaps some graphics could serve to be reused for the
 web site or future documentation.

 Regards,



 Sebastian Salvucci



 Hi Sebastian

They look pretty cool to me. I particularly like the 3Dness of them. At
the
level you have described the code I don't think it would be too much of
a
problem to bring the diagrams into line with how the code is now. What
tool
did you use to prepare the diagrams? Are you happy to share the source?

Regards

Simon

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Great, thanks for pointing that out Sebastian. It would be good to have a
go at moving them forward to reflect the new code base. Are you looking
at head now or are you using the released code in M2? Maybe what we could do
is make a page on the wiki, document each slide/part of the
infrastructure, and then look at how the code works now. Then update the
diagrams accordingly. I'm just learning how the latest code works so it's
all good educational stuff.

Raymond has, in the past, been working on an architecture guide [1] but I
know that this is a little out of date too. Maybe we can join forces and
help him out.

[1]
http://cwiki.apache.org/confluence/display/TUSCANY/Tuscany+Architecture+Guide


Re: Wiring of Services and References ?

2007-04-12 Thread Simon Laws

On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Jean-Sebastien Delfino wrote:
 [snip]
 Simon Laws wrote:

 Composite configuration problem:
 [EMAIL PROTECTED]

 I've not looked into this one specifically but it doesn't stop the test
 passing. I do get more of these problem reports in the composite-impl
 test
 that I'm playing with and it is a real problem in that case. Still
 looking
 at whats going on.

 Regards

 Simon

 Simon,

 This output is produced by temporary test code that I added to
 CompositeUtil to help track problems in most of .composite files,
 which are not all following the SCA assembly XML spec 1.0. In
particular:
 - declare a targetNamespace
 - use qnames to name referenced composites
 - name included composites in the content of the include element
 instead of a name attribute
 - use target attributes on references instead of naming the target
 in the element content
 - use promote attributes on composite services and references

 So before testing, you need to make sure that the .composite files are
 correct. These print statements should help you detect that there are
 problems (even if they just dump the model objects at the moment).
 Then to debug you can set a breakpoint on the line that does the
 print, and figure what 's wrong in the test case from there.

Simon,

I just committed a change to print more debug info when encountering a
composite configuration problem. Hope this helps.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Great, thanks for doing that Sebastien. I gave the new code a spin
and the extra info is interesting. Looking at the CompositeUtil code it
looks like it is always comparing the component definition from the SCDL
with the implementation. Do you have any objection if I go in and add some
more info to the problem model to record and report the exact nature of the
problem found?

Simon


Re: Composites implementing components problem

2007-04-12 Thread Simon Laws

On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Simon Laws wrote:
 I'm trying to bring the composite-impl sample up. The sample uses nested
 composite files and if fails trying to wire up the references from a top
 level component (which is implemented in a separate composite - see
 [1]) to
 another component.

 The failure happens during the connect phase of
 DeployerImpl.deploy(). Here
 it loops round all of the references specified in the model for the
 component in question and then goes to the component implementation to
 get
 the reference definition so it can subsequently create a wire. Here is
 the
 top of the loop from DeployerImpl.connect() (I added some comments
 here to
 highlight the points of interest)

// for each  the references specified in the SCDL for the
 component
for (ComponentReference ref : definition.getReferences()) {
 List<Wire> wires = new ArrayList<Wire>();
String refName = ref.getName();
// get the definition of the reference which is described
 by the
 component implementation
org.apache.tuscany.assembly.Reference refDefinition =
 getReference(definition.getImplementation(), refName);
assert refDefinition != null;

 So when it comes to SourceComponent [1] it finds that the component is
 implemented by another composite. When this information is read into the
 model by the CompositeProcessor there is code that specifically reads
the
 implementation.composite element, i.e.

} else if
 (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {

// Read an implementation.composite
Composite implementation =
 factory.createComposite();
implementation.setName(getQName(reader,
 NAME));
implementation.setUnresolved(true);
component.setImplementation(implementation);

 Now all this does as far as I can see is create a composite type with
 just
 the composite name in it (I assume that the intention is to resolve this
 later on). Hence the connect step fails because the component
 implementation
 in our example has nothing in it. Specifically it has none of the
 reference
 definition information that it would have to look in the other composite
 file to get.

 The problem is I'm not sure when this information is intended to be
 linked
 up. During the resolve phase when this component implementation is
 reached
 the resolver just finds a composite with nothing in it and, as far as
 I can
 tell, just ignores it. How does the system know that this implementation
 refers to a composite defined elsewhere rather than just defining a
 composite with nothing in it?

 I would assume at the resolve or optimize stages this should happen so
 that
 we have a complete model when it comes time to build the runtime.
 Maybe we
 need a new type or flag to indicate that this is a composite
 implementing a
 component.  I'll keep plugging away but if someone could give me a
 pointer
 that would be great?

 [1]

http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite



Simon,

This code:
   // Read an implementation.composite
   Composite implementation =
factory.createComposite();
   implementation.setName(getQName(reader, NAME));
   implementation.setUnresolved(true);
   component.setImplementation(implementation);
creates a reference to the named composite marked Unresolved.

Later in the CompositeProcessor.resolve method, we resolve the
Implementations of all the Components in the Composite, including
references to other Composites, as follows:
   // Resolve component implementations, services and references
for (Component component: composite.getComponents()) {
constrainingType = component.getConstrainingType();
constrainingType = resolver.resolve(ConstrainingType.class,
constrainingType);
component.setConstrainingType(constrainingType);

Implementation implementation = component.getImplementation();
implementation = resolveImplementation(implementation,
resolver);
component.setImplementation(implementation);

resolveContracts(component.getServices(), resolver);
resolveContracts(component.getReferences(), resolver);
}

Hope this helps.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Thanks Sebastien, that's really helpful. Thanks also for making some fixes
to the SCDL. I've made some more changes to make the reference names match
and I'm now able to get past the problem point in my mail above. Not quite
there yet but getting further.

Re: Wiring of Services and References ?

2007-04-12 Thread Simon Laws

On 4/12/07, Venkata Krishnan [EMAIL PROTECTED] wrote:


Hi Simon,

Thanks for considering expanding the 'problem model'.  In fact I was
wondering if every SCA Object such as a Composite, Component, Reference,
etc. has a bunch of 'error contexts' encapsulated within it.  So if
something is wrong with a Reference defn, you simply add the problematic
'reference defn. instance' (as it is being done now) and also the Error
Context, say something like 'MULTIPLICY_TO_TARGET_MISMATCH'.  Then for each of
these 'Error Contexts' the SCA Object also encapsulates what information
would be relevant to output as Error Information.  So if you'd ask for an
error message from the Reference Object given the context
'MULTIPLICY_TO_TARGET_MISMATCH' it would provide you the multiplicity and
target settings.

So we simply stack the problematic SCA Objects and simply ask for Error
Information at the end.

Not sure if this would complicate things... it was just a thought... maybe
there are better options.

- Venkat



On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
 
  Jean-Sebastien Delfino wrote:
   [snip]
   Simon Laws wrote:
  
   Composite configuration problem:
   [EMAIL PROTECTED]
  
   I've not looked into this one specifically but it doesn't stop the
 test
   passing. I do get more of these problem reports in the
composite-impl
   test
   that I'm playing with and it is a real problem in that case. Still
   looking
   at whats going on.
  
   Regards
  
   Simon
  
   Simon,
  
   This output is produced by temporary test code that I added to
   CompositeUtil to help track problems in most of .composite files,
   which are not all following the SCA assembly XML spec 1.0. In
  particular:
   - declare a targetNamespace
   - use qnames to name referenced composites
   - name included composites in the content of the include element
   instead of a name attribute
   - use target attributes on references instead of naming the target
   in the element content
   - use promote attributes on composite services and references
  
   So before testing, you need to make sure that the .composite files
are
   correct. These print statements should help you detect that there
are
   problems (even if they just dump the model objects at the moment).
   Then to debug you can set a breakpoint on the line that does the
   print, and figure what 's wrong in the test case from there.
  
  Simon,
 
  I just committed a change to print more debug info when encountering a
  composite configuration problem. Hope this helps.
 
  --
  Jean-Sebastien
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
  Great, thanks Sebastien.Thanks for doing that. I gave the new code a
 spin
 and the extra info is interesting. Looking at the CompositeUtil code is
 looks like it is always comparing the component definition from the SCDL
 with the implementation. Do you have any objection if I go in and add
some
 more info to the  problem model to record and report the exact nature of
 the
 problem found?

 Simon

 Simon



I like the idea that when capturing the state of the object we capture the
error state as well. A question though: were you thinking of doing this with
the model objects in the assembly or, when you say SCA Object, do you mean
passing the info through to the runtime objects somehow? If the latter, I'm not
sure how you would do this as I assume they are not around until after the
build stage. If the former then that sounds OK to me. It would be a more
detailed version of the unresolved flag, i.e. I tried to resolve it but
found errors.

As an aside, why is that flag called unresolved? Shouldn't it be called
resolved? I struggle with the unresolved=false double negative. (Not a
major point - feel free to ignore me :-)

Regards

Simon
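
As a purely illustrative sketch of the idea discussed above (all names here are made up for the example, not Tuscany code), a problem model that pairs each flagged model object with an error context might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical error contexts an SCA model object could be flagged with.
enum ErrorContext { MULTIPLICITY_TO_TARGET_MISMATCH, UNRESOLVED_IMPLEMENTATION }

// A problem pairs the offending model object with the context describing
// the exact nature of what went wrong.
class Problem {
    final Object modelObject;   // e.g. a Reference or Component instance
    final ErrorContext context;
    Problem(Object modelObject, ErrorContext context) {
        this.modelObject = modelObject;
        this.context = context;
    }
    String report() { return context + ": " + modelObject; }
}

// Stack up the problematic objects during the build and ask for all the
// error information at the end, as suggested.
class ProblemCollector {
    private final List<Problem> problems = new ArrayList<Problem>();
    void add(Object modelObject, ErrorContext context) {
        problems.add(new Problem(modelObject, context));
    }
    List<String> reports() {
        List<String> out = new ArrayList<String>();
        for (Problem p : problems) {
            out.add(p.report());
        }
        return out;
    }
}

public class ProblemModelSketch {
    public static void main(String[] args) {
        ProblemCollector collector = new ProblemCollector();
        // "reference someService" is a made-up stand-in for a real model object.
        collector.add("reference someService", ErrorContext.MULTIPLICITY_TO_TARGET_MISMATCH);
        System.out.println(collector.reports().get(0));
    }
}
```

This keeps the reporting decoupled from the build: the build phase only records which object failed and why, and the caller decides at the end how to render each report.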


Re: Composites implementing components problem

2007-04-12 Thread Simon Laws

On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Simon Laws wrote:
 On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  I'm trying to bring the composite-impl sample up. The sample uses
 nested
  composite files and if fails trying to wire up the references from
 a top
  level component (which is implemented in a separate composite - see
  [1]) to
  another component.
 
  The failure happens during the connect phase of
  DeployerImpl.deploy(). Here
  it loops round all of the references specified in the model for the
  component in question and then goes to the component implementation
to
  get
  the reference definition so it can subsequently create a wire. Here
is
  the
  top of the loop from DeployerImpl.connect() (I added some comments
  here to
  highlight the points of interest)
 
 // for each  the references specified in the SCDL for the
  component
 for (ComponentReference ref : definition.getReferences()) {
  List<Wire> wires = new ArrayList<Wire>();
 String refName = ref.getName();
 // get the definition of the reference which is described
  by the
  component implementation
 org.apache.tuscany.assembly.Reference refDefinition =
  getReference(definition.getImplementation(), refName);
 assert refDefinition != null;
 
  So when it comes to SourceComponent [1] it finds that the
 component is
  implemented by another composite. When this information is read
 into the
  model by the CompositeProcessor there is code that specifically reads
 the
  implementation.composite element, i.e.
 
 } else if
  (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {
 
 // Read an implementation.composite
 Composite implementation =
  factory.createComposite();
 implementation.setName(getQName(reader,
  NAME));
 implementation.setUnresolved(true);
 
 component.setImplementation(implementation);
 
  Now all this does as far as I can see is create a composite type with
  just
  the composite name in it (I assume that the intention is to resolve
 this
  later on). Hence the connect step fails because the component
  implementation
  in our example has nothing in it. Specifically it has none of the
  reference
  definition information that it would have to look in the other
 composite
  file to get.
 
  The problem is I'm not sure when this information is intended to be
  linked
  up. During the resolve phase when this component implementation is
  reached
  the resolver just finds a composite with nothing in it and, as far as
  I can
  tell, just ignores it. How does the system know that this
 implementation
  refers to a composite defined elsewhere rather than just defining a
  composite with nothing in it?
 
  I would assume at the resolve or optimize stages this should happen
so
  that
  we have a complete model when it comes time to build the runtime.
  Maybe we
  need a new type or flag to indicate that this is a composite
  implementing a
  component.  I'll keep plugging away but if someone could give me a
  pointer
  that would be great?
 
  [1]
 

http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite

 
 

 Simon,

 This code:
// Read an implementation.composite
Composite implementation =
 factory.createComposite();
implementation.setName(getQName(reader,
 NAME));
implementation.setUnresolved(true);
component.setImplementation(implementation);
 creates a reference to the named composite marked Unresolved.

 Later in the CompositeProcessor.resolve method, we resolve the
 Implementations of all the Components in the Composite, including
 references to other Composites, as follows:
// Resolve component implementations, services and references
 for (Component component: composite.getComponents()) {
 constrainingType = component.getConstrainingType();
 constrainingType = resolver.resolve(ConstrainingType.class,
 constrainingType);
 component.setConstrainingType(constrainingType);

 Implementation implementation =
 component.getImplementation();
 implementation = resolveImplementation(implementation,
 resolver);
 component.setImplementation(implementation);

 resolveContracts(component.getServices(), resolver);
 resolveContracts(component.getReferences(), resolver);
 }

 Hope this helps.

 --
 Jean-Sebastien


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Thanks Sebastien, That's really helpful. Thanks also for making some
 fixes

Re: Composites implementing components problem

2007-04-12 Thread Simon Laws

On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:




On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
 
  Simon Laws wrote:
   I'm trying to bring the composite-impl sample up. The sample uses
  nested
   composite files and if fails trying to wire up the references from
  a top
   level component (which is implemented in a separate composite - see
   [1]) to
   another component.
  
   The failure happens during the connect phase of
   DeployerImpl.deploy(). Here
   it loops round all of the references specified in the model for the

   component in question and then goes to the component implementation
 to
   get
   the reference definition so it can subsequently create a wire. Here
 is
   the
   top of the loop from DeployerImpl.connect() (I added some comments
   here to
   highlight the points of interest)
  
  // for each  the references specified in the SCDL for the
   component
  for (ComponentReference ref : definition.getReferences()) {
   List<Wire> wires = new ArrayList<Wire>();
  String refName = ref.getName();
  // get the definition of the reference which is
 described
   by the
   component implementation
  org.apache.tuscany.assembly.Reference refDefinition =
   getReference(definition.getImplementation(), refName);
  assert refDefinition != null;
  
   So when it comes to SourceComponent [1] it finds that the
  component is
   implemented by another composite. When this information is read
  into the
   model by the CompositeProcessor there is code that specifically
 reads
  the
   implementation.composite element, i.e.
  
  } else if
   (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {
  
  // Read an implementation.composite
  Composite implementation =
   factory.createComposite();
  implementation.setName(getQName(reader,
   NAME));
  implementation.setUnresolved(true);
  
  component.setImplementation(implementation);
  
   Now all this does as far as I can see is create a composite type
 with
   just
   the composite name in it (I assume that the intention is to resolve
  this
   later on). Hence the connect step fails because the component
   implementation
   in our example has nothing in it. Specifically it has none of the
   reference
   definition information that it would have to look in the other
  composite
   file to get.
  
   The problem is I'm not sure when this information is intended to be
   linked
   up. During the resolve phase when this component implementation is
   reached
   the resolver just finds a composite with nothing in it and, as far
 as
   I can
   tell, just ignores it. How does the system know that this
  implementation
   refers to a composite defined elsewhere rather than just defining a

   composite with nothing in it?
  
   I would assume at the resolve or optimize stages this should happen
 so
   that
   we have a complete model when it comes time to build the runtime.
   Maybe we
   need a new type or flag to indicate that this is a composite
   implementing a
   component.  I'll keep plugging away but if someone could give me a
   pointer
   that would be great?
  
   [1]
  
 
 
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite
 
  
  
 
  Simon,
 
  This code:
    // Read an implementation.composite
    Composite implementation = factory.createComposite();
    implementation.setName(getQName(reader, NAME));
    implementation.setUnresolved(true);
    component.setImplementation(implementation);
  creates a reference to the named composite marked Unresolved.
 
  Later in the CompositeProcessor.resolve method, we resolve the
  Implementations of all the Components in the Composite, including
  references to other Composites, as follows:
    // Resolve component implementations, services and references
    for (Component component : composite.getComponents()) {
        constrainingType = component.getConstrainingType();
        constrainingType = resolver.resolve(ConstrainingType.class, constrainingType);
        component.setConstrainingType(constrainingType);

        Implementation implementation = component.getImplementation();
        implementation = resolveImplementation(implementation, resolver);
        component.setImplementation(implementation);

        resolveContracts(component.getServices(), resolver);
        resolveContracts(component.getReferences(), resolver);
    }
 
  Hope this helps.
 
  --
  Jean-Sebastien
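The two snippets above can be condensed into a minimal, self-contained sketch of the read-then-resolve pattern: the reader creates a placeholder composite carrying only a name and an "unresolved" flag, and a later resolve pass swaps it for the fully loaded model. The `Composite` type and the registry here are simplified stand-ins for illustration, not the actual Tuscany assembly classes.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-in for the assembly model's Composite type.
class Composite {
    final String name;
    final boolean unresolved;
    Composite(String name, boolean unresolved) {
        this.name = name;
        this.unresolved = unresolved;
    }
}

public class ResolveSketch {
    // Registry mapping composite names to their fully loaded models.
    static final Map<String, Composite> registry = new HashMap<>();

    // Mirrors the resolve step: swap an unresolved placeholder (created by
    // the reader with just a name) for the fully populated model, if known.
    static Composite resolve(Composite impl) {
        if (impl != null && impl.unresolved) {
            Composite resolved = registry.get(impl.name);
            if (resolved != null) {
                return resolved;
            }
        }
        return impl; // already resolved, or name not found
    }

    public static void main(String[] args) {
        Composite full = new Composite("OuterComposite", false);
        registry.put("OuterComposite", full);
        Composite placeholder = new Composite("OuterComposite", true);
        System.out.println(resolve(placeholder) == full); // prints true
    }
}
```

If the name is not in the registry, the placeholder is returned unchanged, which matches the symptom Simon describes: a composite with nothing in it silently survives the resolve phase.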

Re: Composites implementing components problem

2007-04-12 Thread Simon Laws

On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:




On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:



 On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:
 
  Simon Laws wrote:
   On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
  
   Simon Laws wrote:
I'm trying to bring the composite-impl sample up. The sample uses
 
   nested
composite files and it fails trying to wire up the references
  from
   a top
level component (which is implemented in a separate composite -
  see
[1]) to
another component.
   
The failure happens during the connect phase of
DeployerImpl.deploy(). Here
it loops round all of the references specified in the model for
  the
component in question and then goes to the component
  implementation to
get
the reference definition so it can subsequently create a wire.
  Here is
the
top of the loop from DeployerImpl.connect() (I added some
  comments
here to
highlight the points of interest)
   
    // for each of the references specified in the SCDL for the component
    for (ComponentReference ref : definition.getReferences()) {
        List<Wire> wires = new ArrayList<Wire>();
        String refName = ref.getName();
        // get the definition of the reference which is described
        // by the component implementation
        org.apache.tuscany.assembly.Reference refDefinition =
            getReference(definition.getImplementation(), refName);
        assert refDefinition != null;
   
So when it comes to SourceComponent [1] it finds that the
   component is
implemented by another composite. When this information is read
   into the
model by the CompositeProcessor there is code that specifically
  reads
   the
implementation.composite element, i.e.
   
    } else if (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {

        // Read an implementation.composite
        Composite implementation = factory.createComposite();
        implementation.setName(getQName(reader, NAME));
        implementation.setUnresolved(true);
        component.setImplementation(implementation);
   
Now all this does as far as I can see is create a composite type
  with
just
the composite name in it (I assume that the intention is to
  resolve
   this
later on). Hence the connect step fails because the component
implementation
in our example has nothing in it. Specifically it has none of the
reference
definition information that it would have to look in the other
   composite
file to get.
   
The problem is I'm not sure when this information is intended to
  be
linked
up. During the resolve phase when this component implementation
  is
reached
the resolver just finds a composite with nothing in it and, as
  far as
I can
tell, just ignores it. How does the system know that this
   implementation
refers to a composite defined elsewhere rather than just defining
  a
composite with nothing in it?
   
I would assume at the resolve or optimize stages this should
  happen so
that
we have a complete model when it comes time to build the runtime.
 
Maybe we
need a new type or flag to indicate that this is a composite
implementing a
component.  I'll keep plugging away but if someone could give me
  a
pointer
that would be great?
   
[1]
   
  
  
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite
  
   
   
  
   Simon,
  
   This code:
    // Read an implementation.composite
    Composite implementation = factory.createComposite();
    implementation.setName(getQName(reader, NAME));
    implementation.setUnresolved(true);
    component.setImplementation(implementation);
   creates a reference to the named composite marked Unresolved.
  
   Later in the CompositeProcessor.resolve method, we resolve the
   Implementations of all the Components in the Composite, including
   references to other Composites, as follows:
    // Resolve component implementations, services and references
    for (Component component : composite.getComponents()) {
        constrainingType = component.getConstrainingType();
        constrainingType = resolver.resolve(ConstrainingType.class, constrainingType);
        component.setConstrainingType(constrainingType);

        Implementation implementation = component.getImplementation();
        implementation = resolveImplementation(implementation, resolver);
        component.setImplementation(implementation

Re: Composites implementing components problem

2007-04-12 Thread Simon Laws

On 4/12/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi, Simon.

For the composite component, it's probably not necessary to attach the
wire.
Can you try to add the following code in DeployerImpl.connect() to see if
it
helps?

public void connect(Map<SCAObject, Object> models,
                    org.apache.tuscany.assembly.Component definition)
    throws WiringException {
    // Skip the composite
    if (definition.getImplementation() instanceof Composite) {
        return;
    }
    // End of skip

    ...
}

Thanks,
Raymond

- Original Message -
From: Simon Laws [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 12:19 PM
Subject: Re: Composites implementing components problem


 On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:



 On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:
 
 
 
  On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:
  
   Simon Laws wrote:
On 4/12/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
   
Simon Laws wrote:
 I'm trying to bring the composite-impl sample up. The sample
 uses
  
nested
composite files and it fails trying to wire up the
   from
a top
 level component (which is implemented in a separate composite
-
   see
 [1]) to
 another component.

 The failure happens during the connect phase of
 DeployerImpl.deploy(). Here
 it loops round all of the references specified in the model
for
   the
 component in question and then goes to the component
   implementation to
 get
 the reference definition so it can subsequently create a wire.
   Here is
 the
 top of the loop from DeployerImpl.connect() (I added some
   comments
 here to
 highlight the points of interest)

// for each  the references specified in the SCDL for
the
 component
for (ComponentReference ref : definition.getReferences
())
   {
List<Wire> wires = new ArrayList<Wire>();
String refName = ref.getName();
// get the definition of the reference which is
   described
 by the
 component implementation
org.apache.tuscany.assembly.Reference refDefinition
=
 getReference(definition.getImplementation(), refName);
assert refDefinition != null;

 So when it comes to SourceComponent [1] it finds that the
component is
 implemented by another composite. When this information is
read
into the
 model by the CompositeProcessor there is code that
specifically
   reads
the
 implementation.composite element, i.e.

} else if
 (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {

// Read an implementation.composite
Composite implementation =
 factory.createComposite();

 implementation.setName(getQName(reader,
  
 NAME));
implementation.setUnresolved(true);

component.setImplementation(implementation);

 Now all this does as far as I can see is create a composite
type
   with
 just
 the composite name in it (I assume that the intention is to
   resolve
this
 later on). Hence the connect step fails because the component
 implementation
 in our example has nothing in it. Specifically it has none of
 the
 reference
 definition information that it would have to look in the other
composite
 file to get.

 The problem is I'm not sure when this information is intended
to
   be
 linked
 up. During the resolve phase when this component
implementation
   is
 reached
 the resolver just finds a composite with nothing in it and, as
   far as
 I can
 tell, just ignores it. How does the system know that this
implementation
 refers to a composite defined elsewhere rather than just
 defining
   a
 composite with nothing in it?

 I would assume at the resolve or optimize stages this should
   happen so
 that
 we have a complete model when it comes time to build the
 runtime.
  
 Maybe we
 need a new type or flag to indicate that this is a composite
 implementing a
 component.  I'll keep plugging away but if someone could give
me
   a
 pointer
 that would be great?

 [1]

   
  
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite
   


   
Simon,
   
This code:
   // Read an implementation.composite
   Composite implementation = factory.createComposite();
   implementation.setName(getQName(reader, NAME));
   implementation.setUnresolved(true);
   component.setImplementation(implementation);
creates a reference to the named composite marked Unresolved.
   
Later

Re: How to tell if a component reference is promoted by a composite reference?

2007-04-13 Thread Simon Laws

On 4/13/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

With the current assembly model, how can we tell if a component reference is
promoted by a composite reference? I can get all the promoted references
from CompositeReference, but not the other way around.

Thanks,
Raymond


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

If the model is a true reflection of what appears in the SCDL then I assume
you would have to loop over all the references in the composite that
contains the component in question and test whether the component reference
is being promoted. I know that isn't telling you anything you don't already
know, but it's just an excuse to add a related question to this thread.

Is the philosophy with the assembly model to have it represent precisely
what appears in the SCDL, or is there room to include value-add, for example,
links from component references to the composite references that promote
them?

Simon
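The loop-over-and-test approach described above could be sketched as a reverse lookup over the forward "promotes" links. The model types here are hypothetical simplifications for illustration, not the real assembly model API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model types; the real assembly classes differ.
class ComponentReference {
    final String name;
    ComponentReference(String name) { this.name = name; }
}

class CompositeReference {
    final List<ComponentReference> promotedReferences = new ArrayList<>();
}

public class PromotionLookup {
    // Reverse lookup: scan every composite reference's forward "promotes"
    // list to find the one promoting a given component reference.
    // Returns null if the component reference is not promoted.
    static CompositeReference promotedBy(List<CompositeReference> compositeRefs,
                                         ComponentReference ref) {
        for (CompositeReference cr : compositeRefs) {
            if (cr.promotedReferences.contains(ref)) {
                return cr;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        ComponentReference ref = new ComponentReference("service");
        CompositeReference cref = new CompositeReference();
        cref.promotedReferences.add(ref);
        List<CompositeReference> all = new ArrayList<>();
        all.add(cref);
        System.out.println(promotedBy(all, ref) == cref); // prints true
    }
}
```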


Re: Composites implementing components problem

2007-04-13 Thread Simon Laws

On 4/12/07, Simon Laws [EMAIL PROTECTED] wrote:




On 4/12/07, Raymond Feng  [EMAIL PROTECTED] wrote:

 Hi, Simon.

 For the composite component, it's probably not necessary to attach the
 wire.
 Can you try to add the following code in DeployerImpl.connect() to see
 if it
 helps?

public void connect(Map<SCAObject, Object> models,
 org.apache.tuscany.assembly.Component definition)
 throws WiringException {
 // Skip the composite
 if(definition.getImplementation() instanceof Composite) {
 return;
 }
 // End of skip

 ...
 }

 Thanks,
 Raymond

 - Original Message -
 From: Simon Laws  [EMAIL PROTECTED]
 To:  [EMAIL PROTECTED]
 Sent: Thursday, April 12, 2007 12:19 PM
 Subject: Re: Composites implementing components problem


  On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:
 
 
 
  On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:
  
  
  
   On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:
   
Simon Laws wrote:
 On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED]
 wrote:

 Simon Laws wrote:
  I'm trying to bring the composite-impl sample up. The sample

  uses
   
 nested
composite files and it fails trying to wire up the
 references
from
 a top
  level component (which is implemented in a separate
 composite -
see
  [1]) to
  another component.
 
  The failure happens during the connect phase of
  DeployerImpl.deploy(). Here
  it loops round all of the references specified in the model
 for
the
  component in question and then goes to the component
implementation to
  get
  the reference definition so it can subsequently create a
 wire.
Here is
  the
  top of the loop from DeployerImpl.connect() (I added some
comments
  here to
  highlight the points of interest)
 
 // for each  the references specified in the SCDL for
 the
  component
 for (ComponentReference ref :
 definition.getReferences())
{
List<Wire> wires = new ArrayList<Wire>();
 String refName = ref.getName();
 // get the definition of the reference which is
described
  by the
  component implementation
org.apache.tuscany.assembly.Reference refDefinition =
    getReference(definition.getImplementation(), refName);
 assert refDefinition != null;
 
  So when it comes to SourceComponent [1] it finds that the
 component is
  implemented by another composite. When this information is
 read
 into the
  model by the CompositeProcessor there is code that
 specifically
reads
 the
  implementation.composite element, i.e.
 
 } else if
  (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {
 
 // Read an
 implementation.composite
 Composite implementation =
  factory.createComposite();
 
  implementation.setName(getQName(reader,
   
  NAME));
 implementation.setUnresolved
 (true);
 
 component.setImplementation(implementation);
 
  Now all this does as far as I can see is create a composite
 type
with
  just
  the composite name in it (I assume that the intention is to
resolve
 this
  later on). Hence the connect step fails because the
 component
  implementation
  in our example has nothing in it. Specifically it has none
 of
  the
  reference
  definition information that it would have to look in the
 other
 composite
  file to get.
 
  The problem is I'm not sure when this information is
 intended to
be
  linked
  up. During the resolve phase when this component
 implementation
is
  reached
  the resolver just finds a composite with nothing in it and,
 as
far as
  I can
  tell, just ignores it. How does the system know that this
 implementation
  refers to a composite defined elsewhere rather than just
  defining
a
  composite with nothing in it?
 
  I would assume at the resolve or optimize stages this should

happen so
  that
  we have a complete model when it comes time to build the
  runtime.
   
  Maybe we
  need a new type or flag to indicate that this is a composite
  implementing a
  component.  I'll keep plugging away but if someone could
 give me
a
  pointer
  that would be great?
 
  [1]
 


http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite


 
 

 Simon,

 This code:
// Read an implementation.composite
Composite implementation = factory.createComposite();
implementation.setName(getQName(reader

Re: Composites implementing components problem

2007-04-13 Thread Simon Laws

On 4/13/07, Simon Laws [EMAIL PROTECTED] wrote:




On 4/12/07, Simon Laws [EMAIL PROTECTED]  wrote:



 On 4/12/07, Raymond Feng  [EMAIL PROTECTED] wrote:
 
  Hi, Simon.
 
  For the composite component, it's probably not necessary to attach the
  wire.
  Can you try to add the following code in DeployerImpl.connect() to see
  if it
  helps?
 
public void connect(Map<SCAObject, Object> models,
  org.apache.tuscany.assembly.Component definition)
  throws WiringException {
  // Skip the composite
  if(definition.getImplementation() instanceof Composite) {
  return;
  }
  // End of skip
 
  ...
  }
 
  Thanks,
  Raymond
 
  - Original Message -
  From: Simon Laws  [EMAIL PROTECTED]
  To:  [EMAIL PROTECTED]
  Sent: Thursday, April 12, 2007 12:19 PM
  Subject: Re: Composites implementing components problem
 
 
   On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:
  
  
  
   On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:
   
   
   
On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:

 Simon Laws wrote:
  On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED]
  wrote:
 
  Simon Laws wrote:
   I'm trying to bring the composite-impl sample up. The
  sample
   uses

  nested
composite files and it fails trying to wire up the
  references
 from
  a top
   level component (which is implemented in a separate
  composite -
 see
   [1]) to
   another component.
  
   The failure happens during the connect phase of
   DeployerImpl.deploy(). Here
   it loops round all of the references specified in the
  model for
 the
   component in question and then goes to the component
 implementation to
   get
   the reference definition so it can subsequently create a
  wire.
 Here is
   the
   top of the loop from DeployerImpl.connect() (I added some
 comments
   here to
   highlight the points of interest)
  
  // for each  the references specified in the SCDL
  for the
   component
  for (ComponentReference ref :
  definition.getReferences())
 {
List<Wire> wires = new ArrayList<Wire>();
  String refName = ref.getName();
  // get the definition of the reference which is
 described
   by the
   component implementation
org.apache.tuscany.assembly.Reference refDefinition =
    getReference(definition.getImplementation(), refName);
  assert refDefinition != null;
  
   So when it comes to SourceComponent [1] it finds that
  the
  component is
   implemented by another composite. When this information is
  read
  into the
   model by the CompositeProcessor there is code that
  specifically
 reads
  the
   implementation.composite element, i.e.
  
  } else if
   (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {
  
  // Read an
  implementation.composite
  Composite implementation =
   factory.createComposite();
  
   implementation.setName(getQName(reader,

   NAME));
  implementation.setUnresolved
  (true);
  
  component.setImplementation(implementation);
  
   Now all this does as far as I can see is create a
  composite type
 with
   just
   the composite name in it (I assume that the intention is
  to
 resolve
  this
   later on). Hence the connect step fails because the
  component
   implementation
   in our example has nothing in it. Specifically it has none
  of
   the
   reference
   definition information that it would have to look in the
  other
  composite
   file to get.
  
   The problem is I'm not sure when this information is
  intended to
 be
   linked
   up. During the resolve phase when this component
  implementation
 is
   reached
   the resolver just finds a composite with nothing in it
  and, as
 far as
   I can
   tell, just ignores it. How does the system know that this
  implementation
   refers to a composite defined elsewhere rather than just
   defining
 a
   composite with nothing in it?
  
   I would assume at the resolve or optimize stages this
  should
 happen so
   that
   we have a complete model when it comes time to build the
   runtime.

   Maybe we
   need a new type or flag to indicate that this is a
  composite
   implementing a
   component.  I'll keep plugging away but if someone could
  give me
 a
   pointer
   that would be great?
  
   [1]
  
 
 
http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/samples/composite-impl/src/main/resources/OuterComposite.composite

Re: JAVA_SCA_M2 slides

2007-04-13 Thread Simon Laws

On 4/13/07, Salvucci, Sebastian [EMAIL PROTECTED] wrote:


Hi Simon,
I superficially understood the Tuscany architecture by looking at the M2
code and the existing documentation some weeks ago. I'm planning to start
looking at the current status of the code in the next few days.
I agree with you that it's all good educational stuff, so I would like
to join forces on updating/creating diagrams that help with a better
understanding of the Tuscany architecture. As you proposed, a wiki page for
documenting each part of the infrastructure, using Raymond's documentation
as a reference, is a very good place to start.
Keep in touch.
Regards,

+sebastian




-Original Message-
From: Simon Laws [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 11:20 AM
To: [EMAIL PROTECTED]
Subject: Re: JAVA_SCA_M2 slides

On 4/12/07, Salvucci, Sebastian [EMAIL PROTECTED] wrote:

 Hi Simon,
 I'm happy to share the source. I just used PowerPoint for preparing
the
 diagrams. The ppt document is already uploaded; you can find it at:

http://cwiki.apache.org/confluence/download/attachments/47512/TuscanyJAVASCA.ppt
 Regards,

 +sebastian


 -Original Message-
 From: Simon Laws [mailto:[EMAIL PROTECTED]
 Sent: Thursday, April 12, 2007 4:40 AM
 To: [EMAIL PROTECTED]
 Subject: Re: JAVA_SCA_M2 slides

 On 4/11/07, Salvucci, Sebastian [EMAIL PROTECTED] wrote:
 
  Hello,
 
  I uploaded a document to
 

http://cwiki.apache.org/confluence/download/attachments/47512/TuscanyJAVASCA.pdf
  which contains some slides about Java SCA Runtime. They are
  based on M2 but perhaps some graphics could serve to be reused for
the
  web site or future documentation.
 
  Regards,
 
 
 
  Sebastian Salvucci
 
 
 
  Hi Sebastian

They look pretty cool to me. I particularly like the 3Dness of them. At
 the
 level you have described the code I don't think it would be too much
of
 a
 problem to bring the diagrams into line with how the code is now. What
 tool
 did you use to prepare the diagrams? Are you happy to share the
source?

 Regards

 Simon

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Great, thanks for pointing that out Sebastian. It would be good to
have a go at moving them forward to reflect the new code base. Are you
looking at head now or are you using the released code in M2? Maybe what we
could do is make a page on the wiki and document each slide/part of the
infrastructure, then look at how the code works now and update the
diagrams accordingly. I'm just learning how the latest code works so it's
all good educational stuff.

Raymond has, in the past, been working on an architecture guide [1] but I
know that this is a little out of date too. Maybe we can join forces and
help him out.

[1]
http://cwiki.apache.org/confluence/display/TUSCANY/Tuscany+Architecture+Guide

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Ok, I'm not going to get to it today, but in debugging a problem I have at
the moment I have been going through the new code and taking some notes, so
I will try to make a start on something next week.

Regards

Simon


Re: Composites implementing components problem

2007-04-13 Thread Simon Laws

On 4/13/07, Simon Laws [EMAIL PROTECTED] wrote:




On 4/13/07, Simon Laws [EMAIL PROTECTED] wrote:



 On 4/12/07, Simon Laws  [EMAIL PROTECTED]  wrote:
 
 
 
  On 4/12/07, Raymond Feng  [EMAIL PROTECTED] wrote:
  
   Hi, Simon.
  
   For the composite component, it's probably not necessary to attach
   the wire.
   Can you try to add the following code in DeployerImpl.connect() to
   see if it
   helps?
  
public void connect(Map<SCAObject, Object> models,
   org.apache.tuscany.assembly.Component definition)
   throws WiringException {
   // Skip the composite
   if(definition.getImplementation() instanceof Composite) {
   return;
   }
   // End of skip
  
   ...
   }
  
   Thanks,
   Raymond
  
   - Original Message -
   From: Simon Laws  [EMAIL PROTECTED]
   To:  [EMAIL PROTECTED]
   Sent: Thursday, April 12, 2007 12:19 PM
   Subject: Re: Composites implementing components problem
  
  
On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:
   
   
   
On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:



 On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED]
   wrote:
 
  Simon Laws wrote:
   On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED]
   wrote:
  
   Simon Laws wrote:
I'm trying to bring the composite-impl sample up. The
   sample
uses
 
   nested
composite files and it fails trying to wire up the
   references
  from
   a top
level component (which is implemented in a separate
   composite -
  see
[1]) to
another component.
   
The failure happens during the connect phase of
DeployerImpl.deploy(). Here
it loops round all of the references specified in the
   model for
  the
component in question and then goes to the component
  implementation to
get
the reference definition so it can subsequently create a
   wire.
  Here is
the
top of the loop from DeployerImpl.connect() (I added
   some
  comments
here to
highlight the points of interest)
   
   // for each  the references specified in the SCDL
   for the
component
   for (ComponentReference ref :
   definition.getReferences())
  {
List<Wire> wires = new ArrayList<Wire>();
   String refName = ref.getName();
   // get the definition of the reference which
   is
  described
by the
component implementation
org.apache.tuscany.assembly.Reference refDefinition =
    getReference(definition.getImplementation(), refName);
   assert refDefinition != null;
   
So when it comes to SourceComponent [1] it finds that
   the
   component is
implemented by another composite. When this information
   is read
   into the
model by the CompositeProcessor there is code that
   specifically
  reads
   the
implementation.composite element, i.e.
   
   } else if
(IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {
   
   // Read an
   implementation.composite
   Composite implementation =
factory.createComposite();
   
implementation.setName(getQName(reader,
 
NAME));
   implementation.setUnresolved
   (true);
   
   component.setImplementation(implementation);
   
Now all this does as far as I can see is create a
   composite type
  with
just
the composite name in it (I assume that the intention is
   to
  resolve
   this
later on). Hence the connect step fails because the
   component
implementation
in our example has nothing in it. Specifically it has
   none of
the
reference
definition information that it would have to look in the
   other
   composite
file to get.
   
The problem is I'm not sure when this information is
   intended to
  be
linked
up. During the resolve phase when this component
   implementation
  is
reached
the resolver just finds a composite with nothing in it
   and, as
  far as
I can
tell, just ignores it. How does the system know that
   this
   implementation
refers to a composite defined elsewhere rather than just
  
defining
  a
composite with nothing in it?
   
I would assume at the resolve or optimize stages this
   should
  happen so
that
we have a complete model when it comes time to build the
runtime.
 
Maybe we
need a new type or flag to indicate that this is a
   composite
implementing a
component.  I'll keep plugging away but if someone could

Re: Nested composites and callbacks now working, was: Composites implementing components problem

2007-04-14 Thread Simon Laws

On 4/14/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Simon Laws wrote:
 On 4/13/07, Simon Laws [EMAIL PROTECTED] wrote:



 On 4/13/07, Simon Laws [EMAIL PROTECTED] wrote:
 
 
 
  On 4/12/07, Simon Laws  [EMAIL PROTECTED]  wrote:
  
  
  
   On 4/12/07, Raymond Feng  [EMAIL PROTECTED] wrote:
   
Hi, Simon.
   
For the composite component, it's probably not necessary to
attach
the wire.
Can you try to add the following code in DeployerImpl.connect()
to
see if it
helps?
   
public void connect(Map<SCAObject, Object> models,
org.apache.tuscany.assembly.Component definition)
throws WiringException {
// Skip the composite
if(definition.getImplementation() instanceof Composite) {
return;
}
// End of skip
   
...
}
   
Thanks,
Raymond
   
- Original Message -
From: Simon Laws  [EMAIL PROTECTED]
To:  [EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 12:19 PM
Subject: Re: Composites implementing components problem
   
   
 On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:



 On 4/12/07, Simon Laws  [EMAIL PROTECTED] wrote:
 
 
 
  On 4/12/07, Jean-Sebastien Delfino  [EMAIL PROTECTED]
wrote:
  
   Simon Laws wrote:
On 4/12/07, Jean-Sebastien Delfino 
 [EMAIL PROTECTED]
wrote:
   
Simon Laws wrote:
 I'm trying to bring the composite-impl sample up. The
sample
 uses
  
nested
composite files and it fails trying to wire up the
references
   from
a top
 level component (which is implemented in a separate
composite -
   see
 [1]) to
 another component.

 The failure happens during the connect phase of
 DeployerImpl.deploy(). Here
 it loops round all of the references specified in the
model for
   the
 component in question and then goes to the component
   implementation to
 get
 the reference definition so it can subsequently
 create a
wire.
   Here is
 the
 top of the loop from DeployerImpl.connect() (I added
some
   comments
 here to
 highlight the points of interest)

// for each  the references specified in the
 SCDL
for the
 component
for (ComponentReference ref :
definition.getReferences())
   {
List<Wire> wires = new ArrayList<Wire>();
String refName = ref.getName();
// get the definition of the reference
 which
is
   described
 by the
 component implementation

org.apache.tuscany.assembly.Reference refDefinition =
    getReference(definition.getImplementation(), refName);
assert refDefinition != null;

 So when it comes to SourceComponent [1] it finds
 that
the
component is
 implemented by another composite. When this
 information
is read
into the
 model by the CompositeProcessor there is code that
specifically
   reads
the
 implementation.composite element, i.e.

} else if
 (IMPLEMENTATION_COMPOSITE_QNAME.equals(name)) {

// Read an
implementation.composite
Composite implementation =
 factory.createComposite();

 implementation.setName(getQName(reader,
  
 NAME));

 implementation.setUnresolved
(true);

component.setImplementation(implementation);

 Now all this does as far as I can see is create a
composite type
   with
 just
 the composite name in it (I assume that the
 intention is
to
   resolve
this
 later on). Hence the connect step fails because the
component
 implementation
 in our example has nothing in it. Specifically it has
none of
 the
 reference
 definition information that it would have to look
 in the
other
composite
 file to get.

 The problem is I'm not sure when this information is
intended to
   be
 linked
 up. During the resolve phase when this component
implementation
   is
 reached
 the resolver just finds a composite with nothing in
it
and, as
   far as
 I can
 tell, just ignores it. How does the system know that
this
implementation
 refers to a composite defined elsewhere rather than
 just
   
 defining
   a
 composite with nothing in it?

 I would assume at the resolve or optimize stages this
should
   happen so
 that
 we have a complete model when

SDO Databinding test failure

2007-04-15 Thread Simon Laws

I just made the following change in ImportSDOProcessorTestCase to make the
SDO tests work; see the commented-out/new lines below. The loader in this
case dereferences the resolver, so you can't pass in null without getting an
NPE. I'm not sure what the intention is here so I haven't checked this in.
We could either create the DefaultArtifactResolver here, test for null in
the loader and do nothing if null is passed in, or preferably both.

    public void testFactory() throws Exception {
        String xml = "<import.sdo xmlns='http://tuscany.apache.org/xmlns/sca/databinding/sdo/1.0' "
            + "factory='" + MockFactory.class.getName() + "'/>";
        XMLStreamReader reader = getReader(xml);
        assertFalse(inited);
        ImportSDO importSDO = loader.read(reader);
        assertNotNull(importSDO);
        //loader.resolve(importSDO, null);
        loader.resolve(importSDO, new DefaultArtifactResolver());
        assertTrue(inited);
    }

Simon
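Of the two options mentioned, the null-guard in the loader could look roughly like the sketch below. The loader and resolver types here are simplified stand-ins, not the real ImportSDO processor API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "test for null in the loader" option suggested above.
// Types are simplified stand-ins for illustration only.
public class NullSafeLoader {
    final Map<String, Object> resolved = new HashMap<>();

    // Do nothing if no resolver is passed in, instead of dereferencing it
    // and throwing an NPE.
    public void resolve(String artifactName, Map<String, Object> resolver) {
        if (resolver == null) {
            return;
        }
        resolved.put(artifactName, resolver.get(artifactName));
    }

    public static void main(String[] args) {
        NullSafeLoader loader = new NullSafeLoader();
        loader.resolve("PersonType", null); // no NPE with the guard in place
        System.out.println(loader.resolved.isEmpty()); // prints true
    }
}
```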


Re: Project conventions

2007-04-16 Thread Simon Laws

On 4/16/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


[snip]
Simon Laws wrote:

 3/ package names within the modules don't always match the module name,
 which makes it tricky to find classes sometimes. I don't have loads of
 examples here, but one problem I had was trying to find
  o.a.t.api.SCARuntime.
This is in the host-embedded module. Is it practical to suggest that
 package names should at least contain the module name?


Simon, I just fixed this API. The package name is now
o.a.t.host.embedded, matching the module name.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Great, thanks for that.


Loading XSD includes?

2007-04-16 Thread Simon Laws

I'm having problems getting XSD includes to load in the databinding itest so
am interested to know if we are using a different version of
o.a.ws.common.XmlSchema than used to be the case. I'm getting an NPE in this
package because the baseUri in the XmlSchemaCollection is not set up
correctly. I'm going to dig into why this is the case but if anyone knows
why things have changed or is also getting this effect I would be interested
to know.

Simon


Re: Loading XSD includes?

2007-04-16 Thread Simon Laws

On 4/16/07, Simon Laws [EMAIL PROTECTED] wrote:


I'm having problems getting XSD includes to load in the databinding itest
so am interested to know if we are using a different version of
o.a.ws.common.XmlSchema than used to be the case. I'm getting an NPE in
this package because the baseUri in the XmlSchemaCollection is not set up
correctly. I'm going to dig into why this is the case but if anyone knows
why things have changed or is alos getting this effect I would be interested
to know.

Simon



OK, so I have not solved this satisfactorily but I have put in a work around
on my machine. I added the following lines to WSDLDocumentProcessor.read()
(the two lines I added are the ones with the comment above them).

        // Read inline schemas
        Types types = definition.getTypes();
        if (types != null) {
            wsdlDefinition.getInlinedSchemas().setSchemaResolver(new URIResolverImpl());
            for (Object ext : types.getExtensibilityElements()) {
                if (ext instanceof Schema) {
                    Element element = ((Schema)ext).getElement();

                    // TODO: temporary fix to make includes in imported
                    //      schema work. The XmlSchema library was crashing
                    //      because the base uri was not set. This doesn't
                    //      affect imports.
                    XmlSchemaCollection schemaCollection = wsdlDefinition.getInlinedSchemas();
                    schemaCollection.setBaseUri(((Schema)ext).getDocumentBaseURI());

                    wsdlDefinition.getInlinedSchemas().read(element, element.getBaseURI());
                }
            }
        }


The scenario I have is:

WSDL --import--> XSD --include--> XSD

The impact of this change is that the includes are processed BUT they are
processed relative to the base URI I set up, i.e. the URI of the WSDL
document, not the URI of the XML that includes them. I can work round this
for the time being as all the XSDs are in the same directory (and this lets
me get on and work through the rest of the test) but it's not a generic
solution. So this needs some more work to either work out how to configure
the XmlSchema library properly or take a look at the latest version to see
if this apparent problem is fixed.

Simon


Re: Loading XSD includes?

2007-04-16 Thread Simon Laws

On 4/16/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Simon Laws wrote:
 On 4/16/07, Simon Laws [EMAIL PROTECTED] wrote:

 I'm having problems getting XSD includes to load in the databinding
 itest
 so am interested to know if we are using a different version of
 o.a.ws.common.XmlSchema than used to be the case. I'm getting an NPE in
 this package because the baseUri in the XmlSchemaCollection is not
 set up
 correctly. I'm going to dig into why this is the case but if anyone
 knows
 why things have changed or is alos getting this effect I would be
 interested
 to know.

 Simon


 OK, so I have not solved this satisfactorily but I have put in a work
 around
 on my machine. I added the following lines to
 WSDLDocumentProcessor.read()
 (the two lines I added are the ones with the comment above them).

// Read inline schemas
Types types = definition.getTypes();
if (types != null) {
wsdlDefinition.getInlinedSchemas().setSchemaResolver(new
 URIResolverImpl());
for (Object ext : types.getExtensibilityElements()) {
if (ext instanceof Schema) {
Element element = ((Schema)ext).getElement();

// TODO: temporary fix to make includes in
 imported
//   schema work. The XmlSchema library was
 crashing
//   because the base uri was not set. This
 doesn't
//   affect imports.
XmlSchemaCollection schemaCollection =
 wsdlDefinition.getInlinedSchemas();
schemaCollection.setBaseUri
 (((Schema)ext).getDocumentBaseURI());

wsdlDefinition.getInlinedSchemas().read(element,
 element.getBaseURI());
}
}
}


 The scenario I have is:

 WSDL import..--- XSD inlcude..--- XSD

 The impact of this change is that the includes are processed BUT they
are
 processed relative to the base URI I set up, i.e. the URI of of the WSDL
 document not the URI of the XML that includes them. I can work round
this
 for the time being as all the XSDs are in the same directory (and this
 lets
 me get on and work through the rest of the test) but it's not a generic
 solution. So this needs some more work to either work out how to
 configure
 the XmlSchema library properly or take a look at the latest version to
 see
 if this apparent problem is fixed.

 Simon


Simon, what you've done looks like the right fix to me as an XSD include
from an XML schema inlined in a WSDL document should be loaded relative
to the WSDL document.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

It works fine for XSDs included from the WSDL, but I don't think it's
correct for an XSD included in an XML schema which is itself
included/imported into the inline XML schema of a WSDL, i.e. nested
includes. The reason I am concerned is that the databinding schemas don't
now work in the way that they used to. I've not had a chance to chase the
documentation to find out what the correct behaviour is but I think the old
behaviour was correct.

When I say old behaviour here what I mean is that includes are assumed to be
relative to the file that is including them and not relative to some parent
of that file.
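The distinction between the two resolution behaviours can be shown with plain `java.net.URI` resolution. The URLs below are hypothetical stand-ins for the WSDL and imported XSD in the scenario above; the point is that a nested include's `schemaLocation` should resolve against the document containing the include, not against the top-level WSDL's base URI.

```java
import java.net.URI;

public class IncludeResolution {

    // Resolve a schemaLocation relative to a given base document URI,
    // mirroring how an XML schema processor resolves includes.
    static URI resolveRelativeTo(String baseDocument, String schemaLocation) {
        return URI.create(baseDocument).resolve(schemaLocation);
    }

    public static void main(String[] args) {
        // Hypothetical locations for the scenario: WSDL --import--> XSD --include--> XSD
        String wsdl = "http://example.com/wsdl/service.wsdl";
        String importedXsd = "http://example.com/xsd/types.xsd";

        // Expected behaviour: include resolved relative to the including XSD
        System.out.println(resolveRelativeTo(importedXsd, "common.xsd"));
        // -> http://example.com/xsd/common.xsd

        // Behaviour produced by forcing the WSDL's base URI on the collection
        System.out.println(resolveRelativeTo(wsdl, "common.xsd"));
        // -> http://example.com/wsdl/common.xsd
    }
}
```

When the WSDL and the XSDs live in different directories, the two results differ, which is exactly why the workaround only holds while everything sits in one directory.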

Simon


Use of HelperContext to identify SDO databinding?

2007-04-16 Thread Simon Laws

Static SDOs used in the databinding tests with the Axis2 binding are not
being successfully identified as SDOs.
In SDODataBinding.introspect() one of the tests used to identify an SDO from
a Java type is as follows:

   HelperContext context = HelperProvider.getDefaultContext();

   ...

   // FIXME: We need to access HelperContext
   Type type = context.getTypeHelper().getType(javaType);
   if (type == null) {
   return false;
   }

However when the ImportSDO functionality runs it associates the importer
with a HelperContext from the helperContextRegistry.

  helperContext = helperContextRegistry.getHelperContext(id);

This doesn't look right to me as I expect different HelperContexts will be
used. I can't work out how to get to the helperContextRegistry from the
SDODataBinding (and it's getting late) but if someone who knows databindings
could take a look and confirm or not whether this is an issue I can take a
look tomorrow.
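The mismatch described above can be reduced to a toy model: types registered against a per-import context id are invisible to code that only consults the default context. Everything here is illustrative (a `Map`-backed registry standing in for the real helperContextRegistry and HelperContext), not the Tuscany/SDO API.

```java
import java.util.HashMap;
import java.util.Map;

public class ContextRegistry {

    static final String DEFAULT_ID = "default";

    // Each "helper context" is modelled as a simple name -> type map.
    private final Map<String, Map<String, String>> contexts = new HashMap<>();

    // Return the context for an id, creating it on first use
    // (loosely mirroring helperContextRegistry.getHelperContext(id)).
    Map<String, String> getHelperContext(String id) {
        return contexts.computeIfAbsent(id, k -> new HashMap<>());
    }

    public static void main(String[] args) {
        ContextRegistry registry = new ContextRegistry();

        // The import machinery registers the type under a scoped id...
        registry.getHelperContext("composite-1").put("PersonType", "sdo-type");

        // ...but introspection asks the default context and finds nothing:
        System.out.println(registry.getHelperContext(DEFAULT_ID).get("PersonType"));
    }
}
```

The lookup against the default context returns null, which is the toy analogue of `getType(javaType)` returning null and the databinding failing to recognise the static SDO.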

Regards

Simon


Notification of missing extensions

2007-04-17 Thread Simon Laws

I've been caught out a couple of times now by the runtime silently failing
to work properly because I haven't put the correct set of extensions on my
classpath. Locally I have just put a printout in to warn me.

DefaultStAXArtifactProcessorExtensionPoint

    public Object read(XMLStreamReader source) throws ContributionReadException {

        // Delegate to the processor associated with the element qname
        QName name = source.getName();
        StAXArtifactProcessorExtension<?> processor =
            (StAXArtifactProcessorExtension<?>) this.getProcessor(name);
        if (processor == null) {
            System.err.print("No extension processor registered for element "
                + name.toString());
            return null;
        }
        return processor.read(source);
    }

So that if an element is encountered for which there is no loader I get to
find out I've made a mistake. So can someone explain how we should be doing
logging/monitoring in the Tuscany runtime, as it would be good to report back
to users that this is happening.
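In the absence of a settled monitoring story, one conventional option is `java.util.logging`, which at least routes the warning through a configurable channel rather than raw `System.err`. This is a sketch only; the type and method names are placeholders for the Tuscany extension point, not its real API.

```java
import java.util.logging.Logger;
import javax.xml.namespace.QName;

public class MissingProcessorWarning {

    private static final Logger LOG =
        Logger.getLogger(MissingProcessorWarning.class.getName());

    // Warn (and return the message) when no processor is registered for the
    // element; return null when a processor was found and no warning is needed.
    static String warnIfMissing(Object processor, QName name) {
        if (processor == null) {
            String msg = "No extension processor registered for element " + name;
            LOG.warning(msg); // goes through configurable logging, not System.err
            return msg;
        }
        return null;
    }

    public static void main(String[] args) {
        QName unknown = new QName("http://example.com/ns", "unknownElement");
        System.out.println(warnIfMissing(null, unknown));
    }
}
```

A handler or log level set by the hosting environment then decides whether the warning reaches the console, a file, or a monitoring system.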

Regards

Simon


Problem (?) in sdo2om databinding test case

2007-04-17 Thread Simon Laws

Just doing a complete build from scratch (repo clean etc.) and I got the
following in the maven build

Running org.apache.tuscany.databinding.sdo2om.DataObject2OMElementTestCase
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.011 sec <<< FAILURE!
testTransform(org.apache.tuscany.databinding.sdo2om.DataObject2OMElementTestCase)  Time elapsed: 0.891 sec  <<< ERROR!
java.lang.NoSuchFieldError: org/apache/tuscany/databinding/sdo2om/DataObject2OMElementTestCase.context
    at org.apache.tuscany.databinding.sdo2om.DataObject2OMElementTestCase.testTransform(DataObject2OMElementTestCase.java:32)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:615)
   at junit.framework.TestCase.runTest(TestCase.java:168)
   at junit.framework.TestCase.runBare(TestCase.java:134)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)

Is anyone else seeing this or should I be looking to start from scratch
again?

Regards

Simon


Re: Problem (?) in sdo2om databinding test case

2007-04-18 Thread Simon Laws

On 4/17/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

Did you run with mvn clean install? It seems that you have some obsolete
classes in the target folder.

Thanks,
Raymond

- Original Message -
From: Simon Laws [EMAIL PROTECTED]
To: tuscany-dev tuscany-dev@ws.apache.org
Sent: Tuesday, April 17, 2007 10:15 AM
Subject: Problem (?) in sdo2om databinding test case


 Just doing a complete build from scratch (repo clean etc.) and I go the
 following in the maven build

 Running
org.apache.tuscany.databinding.sdo2om.DataObject2OMElementTestCase
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.011sec
  FA
 ILURE!
 testTransform(
 org.apache.tuscany.databinding.sdo2om.DataObject2OMElementTestCase
 )  Time elapsed: 0.891 sec   ERROR!
 java.lang.NoSuchFieldError:
 org/apache/tuscany/databinding/sdo2om/DataObject2OME
 lementTestCase.context
at
 org.apache.tuscany.databinding.sdo2om.DataObject2OMElementTestCase.te
 stTransform(DataObject2OMElementTestCase.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke
 (NativeMethodAccessorImpl.
 java:64)
at sun.reflect.DelegatingMethodAccessorImpl.invoke
 (DelegatingMethodAcces
 sorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:615)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)

 Is anyone else seeing this or should I be looking to start from scratch
 again?

 Regards

 Simon



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Thanks Raymond. You are of course right. Coming back with a fresh eye
this morning it all builds OK.

Thanks

Simon


Re: Intermittent failures in tests using the HTTP services

2007-04-18 Thread Simon Laws

On 4/18/07, ant elder [EMAIL PROTECTED] wrote:


I get intermittent build failures when running the build in the testcases
which use the HTTP service, the error is pasted in below, is anyone else
seeing this?

Its really easy (in my environment) to recreate, change into
sca\modules\binding-ws-axis2 and run mvn, around 50% of the time one of
the
testcases will fail.

This looks just like the problem some of us were seeing with testcases in
the integration branch, that did seem to be a Windows only problem, but
I'm
not sure we ever got to the bottom of it.

   ...ant

Caused by: java.net.BindException: Address already in use: bind
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:198)
    at org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:251)
    at org.mortbay.jetty.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:233)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
    at org.mortbay.jetty.Server.doStart(Server.java:221)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
    at org.apache.tuscany.http.jetty.JettyServer.addServletMapping(JettyServer.java:175)


Ant

I'm on Windows but not seeing this at the moment. The only time I have seen
the message is when I've had another app open on 8080 by accident (a VOIP
app as it happens, which took me by surprise a little). I'll keep an eye out
and report back here if I start to see it.
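When chasing an "Address already in use" failure, a quick diagnostic is to try binding the suspect port yourself before starting the server. The sketch below uses only the JDK; port 8080 is the default mentioned above, but the example grabs an ephemeral port so it runs anywhere.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {

    // Attempt to bind the port: success means it was free, a BindException
    // (surfacing as IOException) means some other process owns it.
    static boolean isFree(int port) {
        try (ServerSocket s = new ServerSocket(port)) {
            return true;   // bind succeeded, socket closed immediately
        } catch (IOException e) {
            return false;  // bind failed: port is already in use
        }
    }

    public static void main(String[] args) throws IOException {
        // Hold an ephemeral port open to simulate the colliding process
        try (ServerSocket holder = new ServerSocket(0)) {
            int port = holder.getLocalPort();
            System.out.println("port " + port + " free while held? " + isFree(port));
        }
    }
}
```

Running a check like this (against 8080) before the testcases start would distinguish a genuinely occupied port, e.g. another app, from a server that failed to release the port between tests.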

Simon


Re: Use of HelperContext to identify SDO databinding?

2007-04-18 Thread Simon Laws

On 4/16/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi, Simon.

I think you hit the point. We still need to flush out the complete design
on
how to scope SDO metadata and how to provide access to the SCA components.
At this moment, I register the SDO types with the default helper context
too
(HelperProvider.getDefaultContext()). Please let me know which test case I
can use to further investigate.

Thanks,
Raymond

- Original Message -
From: Simon Laws [EMAIL PROTECTED]
To: tuscany-dev tuscany-dev@ws.apache.org
Sent: Monday, April 16, 2007 2:04 PM
Subject: Use of HelperContext to indetify SDO databinding?


 Static SDO used in the databinding tests with the Axis2 binding are not
 being successfully identified as SDOs.
 In SDODataBinding.introspect() one of the tests use to identify and SDO
 from
 a Java type is as follows

HelperContext context = HelperProvider.getDefaultContext();

...

// FIXME: We need to access HelperContext
Type type = context.getTypeHelper().getType(javaType);
if (type == null) {
return false;
}

 However when the ImportSDO functionality runs it associates the importer
 with a HelperContext from the helperContextRegistry.

   helperContext = helperContextRegistry.getHelperContext(id);

 This doesn't look right to me as I expect different HelperContexts will
be
 used. I can't work out how to get to the helperContextRegistry from the
 SDODataBinding (and it's getting late) but if someone who knows
 databindings
 could take a look and confirm or not whether this is an issue I can take
a
 look tomorrow.

 Regards

 Simon



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi Raymond


Sorry about the delay, I had to go do something else yesterday. Anyhow I
tracked down the problem this morning. The CompositeProcessor resolves the
artifacts in the composite in the wrong order, i.e. it resolves components
before it has resolved the extensions, e.g. SDO imports, that the components
depend on.

Moving the extension processing loop above the component processing loop in
CompositeProcessor.resolve works for me. Am just testing my build etc. to
make sure there are no detrimental effects but will check this in when done.

Regards

Simon


Re: SCA distribution script

2007-04-19 Thread Simon Laws

On 4/13/07, Paulo Henrique Trecenti [EMAIL PROTECTED] wrote:


Hi,

How can I use the SCA build?
The distribution script and some modules are not complete...
Can I use the SNAPSHOT version for tests of my application?

--
Paulo Henrique Trecenti


Hi Paulo

Just going through cleaning my mail and I notice that you didn't get a
response to this. If you want to use the latest Java SCA code then it's
quite easy. First you have to check out all of the Java source code from the
head of our subversion repository. If you have a command line subversion
client then you can do something like:

svn checkout https://svn.apache.org/repos/asf/incubator/tuscany/java/sca

Once you have the code you can build and run the samples using the maven
build tools. You will need Maven (2.0.4+) and a JDK (5.0+) installed, and
then you do:

cd java/sca
mvn

And it should work. Let us know how you get on.

Regards

Simon


Re: using service name to call a service

2007-04-19 Thread Simon Laws

On 4/13/07, muhwas [EMAIL PROTECTED] wrote:


Hi guys,

I was wondering if there is any way to get a reference
to web service interface using service name (in SCDL
file) only instead of doing

compositeContext.locateService(ClassName.class,composite)

thank you,
muhwas

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi Muhwas


I'm not sure what the options were on M2. In the latest software (in the
trunk of our svn repository) we are moving to implement V1.0 of the SCA
specifications. The emphasis here is on component context although how a
component context is obtained is not specified. In the trunk implementation
we currently have an embedded host container that allows a component context
to be returned in the following way.

SCARuntime.start(my.composite);
ComponentContext context = SCARuntime.getComponentContext
(MyComponentName);


From there the spec says that you should be able to do the following (not
sure it quite works like this yet):

MyService service = context.getService(MyService.class, MyServiceName);

I know this doesn't answer your question re. M2 but hopefully gives you an
idea of where we are going.

Regards

Simon


Re: Notifcation of missing extensions

2007-04-19 Thread Simon Laws

On 4/17/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Comments inline.

Raymond Feng wrote:
 Hi,

 We might have a few options to make it more elegant:

 1) By throwing an exception from the extension point when the key
 doesn't hit an entry.
 2) The caller will check the null and make a decision on how to proceed.


Option (1) covers critical error cases where it's not possible at all to
proceed with reading the SCA assembly XML document.

I don't like much option (2) as it will leave users and application
developers in the dark...

I'd like to suggest a third option:
- Record problems like we have started to do in CompositeUtil, and
expand on that as discussed in [1].
- After a contribution has been scanned and read, inspect the problems
list and if we find severe problems, throw an exception, report warnings
and benign issues using some form of monitoring and/or logging.

Doing this will allow us to defer the handling of these problems out of
the individual ArtifactProcessors, handle them with more context (as
we'll have the whole list), and overall provide better error reporting
to users and application developers, as we'll be able to present a whole
list instead of throwing up on the first error.

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16554.html

 And for the monitoring/logging, we had a monitoring framework (not
 completely activated ATM). The basic idea is as follows:

 1) The class that supports the monitoring will define an interface for
 the monitoring purpose
 2) The class use @Monitor annotation to receive an implementation of
 the interface by injection
 3) A MonitorFactory is responsible to create a Monitor based on the
 interface

 I found it a bit difficult to follow this pattern as the class itself
 have to take care of most of the stuff. Should we explore other
 opportunities such as AOP?

I found our current Monitor stuff difficult to follow as well. I suggest
that we start a new discussion thread to discuss monitoring in general,
and try to come up with something that will be more usable and easier to
adopt through our whole runtime.


 Thanks,
 Raymond

 - Original Message - From: Simon Laws
 [EMAIL PROTECTED]
 To: tuscany-dev tuscany-dev@ws.apache.org
 Sent: Tuesday, April 17, 2007 9:04 AM
 Subject: Notifcation of missing extensions


 I've been caught out a couple of times now by the runtime silently
 failing
 to work properly because I haven't put the correct set of extensions
 on my
 classpath. Locally I have just put a printout in to warn me.

 DefaultStAXArtifactProcessorExtensionPoint

public Object read(XMLStreamReader source) throws
 ContributionReadException {

// Delegate to the processor associated with the element qname
QName name = source.getName();
StAXArtifactProcessorExtension? processor =
 (StAXArtifactProcessorExtension?)this.getProcessor(name);
if (processor == null) {
System.err.print(No extension processor registered for
 element
  + name.toString());
return null;
}
return processor.read(source);
}

 So that if an element is encountered for which there is no loader I
 get to
 find out I've made a mistake. So can someone explain how we should be
 doing
 logging/montoring in the Tuscany runtime as it would be good to
 report back
 to users that this is happening

 Regards

 Simon



 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



I agree that getting all of the missing extension problems out in one
pass is preferable. We need to organize the problem list so that it can
collect errors across the assembly read, resolve and wire phases. Can we
make it a first class citizen of the contribution? I imagine, in the
future, that as contributions are added/removed from a domain we will want
to collate problems, as problems may come and go and have a bearing on how
operable a domain is.

Regards

Simon


Re: Ability to use default binding across top-level-Composites?

2007-04-19 Thread Simon Laws

On 4/18/07, Scott Kurz [EMAIL PROTECTED] wrote:


Sorry, I don't have all the new code set up so let me just ask.

Is it possible with the code in trunk today to, over the default binding,
invoke a service that was deployed in a separate top-level-Composite?

That is, say, to deploy a component in Composite A  that invokes a
component
service which is deployed in a separate top-level-Composite, Composite B?

Maybe what I'm asking is to what extent we've integrated the default
binding
with a concept of a Tuscany domain.

Thanks
Scott



Hi Scott

I don't believe we do anything about SCA domains (as they appear in the
V1.0 spec) yet. I should point out though that Raymond posted recently that
he wanted to start bringing domains into the runtime [1] so now's a good
time to get thoughts/comments in.

Regards

Simon

[1] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg16792.html


Re: Website - Feedback please

2007-04-19 Thread Simon Laws

On 4/18/07, haleh mahbod [EMAIL PROTECTED] wrote:


These are done now:

 1)  Using SDO Java could move to 'user guide' on this page.


+1

2) Code structure can move to get involved or even to the architecture doc


+1 to moving to get involved


On 4/17/07, kelvin goodson [EMAIL PROTECTED] wrote:

 I replied to this thread on the tuscany-user list

 On 12/04/07, haleh mahbod  [EMAIL PROTECTED] wrote:
 
  Kelvin,
  Thanks for your review.  You mentioned that scopes should be the same
 when
  on a given page.
  I agree.  I fixed it.  We now have a  sdo cpp FAQ and a sdo Java FAQ.
I
  also moved text from some of the mentioned threads to the FAQ. The
ones
 I
  did not is because I did not know how to net it down to a question and

 an
  answer.
 
  You mentioned General heading can be called Tuscany or SDO General
  heading. The General heading is more a collection of things that I
 couldn't
  find a good title for on that page :) It is not intended to be general
 for
  Tuscany.
 
  Some suggestion for the SDO Java page:
 
  1)  Using SDO Java could move to 'user guide' on this page.
  2) Code structure can move to get involved or even to the architecture
 doc
 
  If there is agreement, I go ahead and make the changes.
  Haleh
 
 
  On 4/12/07, kelvin goodson [EMAIL PROTECTED] wrote:
  
   Haleh,
  
 thanks for addressing these issues.  One concern I have after a
 quick
   look
   is that on arriving at a pages such as
   http://cwiki.apache.org/TUSCANY/sdo-java.html some of the links on
the
   left
   under a given heading go off to a scope that is not intuitive from
the
   page
   that you were on.  E.g. under the General sidebar heading,  the
FAQ
   link
   for Java SDO goes to a page that I think is intended to contain
 generic
   cross language SDO questions, i.e. up one level of abstraction from
 the
   page
   heading.  It's been that way since version 3 of the file, and I
can't
   work
   out whether it's intended that way,  or just a result of copying the

   content
   from an existing C++ page.
  
   The next sidebar heading below FAQ --- Downloads --- relates to
   downloading SDO Java ( i.e. at the same level of abstraction of the
   current
   page).  I think it would be good if the sidebar headings grouped
links
   at
   the same level of abstraction and made this clear from the heading
 name
   -
   e.g. Tuscany General, or SDO General
  
  
   BTW,  FWIW, this prompted me to just catalogue the set of
tuscany-dev
   notes
   that I have put an FAQ label against when reading the list. I had
 the
   intention of reviewing this list to see what was worth refining into
   well
   formed info snippets some time,  but haven't got to it yet.   Maybe
we
   can
   divide and conquer to add to the website's FAQ set.  Does anyone
else
   have
   anything like this?
  
   http://www.mail-archive.com/[EMAIL PROTECTED]/msg00469.html

   http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg13291.html
   Ron Gavlin's response to tuscany-user of 5th of Jan (cant find a
very
   good
   archive URL for this one)
   My response to Alexander Pankov of  the  26th of Jan on t-user
   (again  URL
   not readily found)
   http://www.mail-archive.com/[EMAIL PROTECTED]/msg00560.html
   http://www.mail-archive.com/[EMAIL PROTECTED]/msg00610.html
   The thread started by Adriano Crestani on 15th Feb mvn problem?
   Unanswered thread from Ignacio on 16th Feb
   Frank's responses to Murtaza Goga in the thread started 20th March
  
  

http://mail-archives.apache.org/mod_mbox/ws-tuscany-user/200703.mbox/[EMAIL 
PROTECTED]
   The thread entitled Root property that is not containment started
 29th
   of
   Jan
   The thread entitled Getting started with Tuscany databinding
started
   on
   10th April
  
   Regards, Kelvin.
  
  
  
  
  
  
  
  
   On 12/04/07, haleh mahbod  [EMAIL PROTECTED] wrote:
   
Hi,
   
As mentioned in [1] I started working on the website and the first

   phase
is
ready for review.
   
My first attempt ended up with something other than what I had
   originally
planned; which was to move content.  Instead, I worked on the
   readability
of
the website.
I have tried to use a consistent look and feel  across the
pages  to
   make
it
easier to find information. In addition, I tried to make
   the  information
available progressively ( allowing the user to decide if they want

 to
learn
more).
   
Here is the layout at a high level for the Tuscany Website (
http://cwiki.apache.org/TUSCANY/)
   
   - Home page - On this page you will find the general stuff that
   apply
   to the whole Tuscany as well as links to each subproject and
   Downloads.
- Each subproject has an overview page, for example SCA
overview
   or
   DAS overview. On the overview page you find a brief
introduction
 to
   the
   subproject and links to resources which are introductory
   information on
the
   subproject, for 

Re: Website - Feedback please

2007-04-19 Thread Simon Laws

On 4/19/07, ant elder [EMAIL PROTECTED] wrote:


On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:

snip/

- I like the list of modules I think we should go with the module name
 from the code and link to a separate
   page for each one. (take a look I've made an example). We can then
 use
 URLs such as
   http://cwiki.apache.org/confluence/display/TUSCANY/binding-ws to
 refer
 directly to the module description(*)


I like the one wiki page with the module name per module and as well, but
do we really want all of those listed on the Java SCA Subproject page?
That page seems more user oriented giving an overview of Java SCA
capabilities, where as all the individual modules are a really deep
implementation detail. For example how about on the Java SCA Subproject
page say Tuscany has a web service binding and links to a web service
page
which talks about the WS capabilities and its that web service page
where
the list of WS related modules could go: binding-ws, binding-ws-xml,
binding-ws-axis2, binding-ws-cxf, interface-wsdl, interface-wsdl-xml,
interface-wsdl-runtime etc. Similar thing for all the other binding,
implementation, databinding, runtime etc.

   ...ant


I agree that this list doesn't need to go on this page but it would be good
to have a straight list somewhere so it's easy to get the low down on a
module. Perhaps in the developer guide, as I had hoped that these module pages
will include design information. I would expect the user docs for the
modules, i.e. what to put in the SCDL to make them work, to go in the User
Guide section. This could have a more user friendly index as suggested.

Simon


Re: [DISCUSS] Next version - What should be in it

2007-04-19 Thread Simon Laws

On 4/19/07, ant elder [EMAIL PROTECTED] wrote:


On 4/19/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Davanum Srinivas wrote:
  Folks,
 
  Let's keep the ball rolling...Can someone please come up with a master
  list of extensions, bindings, services, samples which can then help
  decide what's going to get into the next release. Please start a wiki
  page to document the master list. Once we are done documenting the
  list. We can figure out which ones are MUST, which ones are nice to
  have, which ones are out of scope. Then we can work backwards to
  figure out How tightly or loosely coupled each piece is/should be and
  how we could decouple them if necessary using
  interfaces/spi/whatever...
 
  Quote from Bert Lamb:
  I think there should be a voted upon core set of extensions,
  bindings, services, samples, whatever that should be part of a
  monolithic build.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html
 
  Quote from Ant Elder:
  The specifics of what extensions are included in this release is left
  out of
  this vote and can be decided in the release plan discussion. All this
  vote
  is saying is that all the modules that are to be included in this next
  release will have the same version and that a top level pom.xml will
  exist
  to enable building all those modules at once.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html
 
  Thanks,
  dims
 
 

 Hi all,

 I think we have made good progress since we initially started this
 discussion. We have a simpler structure in trunk with a working top-down
 build. Samples and integration tests from the integration branch have
 been integrated back in trunk and most are now working.

 We have a more modular runtime with a simpler extension mechanism. For
 example we have separate modules for the various models, the core
 runtime and the Java component support. SPIs between the models and the
 rest of the runtime have been refactored and should become more stable.
 We need to do more work to further simplify the core runtime SPIs and
 improve the core runtime but I think this is going in the right
direction.

 I'm also happy to see better support for the SCA 1.0 spec, with support
 for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
 looks like extensions are starting to work again in the trunk, including
 Web Services, Java and scripting components. It shouldn't be too
 difficult to port some of the other extensions - Spring, JMS, JSON-RPC
 -  to the latest code base as well.

 So, the JavaOne conference is in three weeks, would it make sense to try
 to have a Tuscany release by then?

 We could integrate in that release what we already have working in
 trunk, mature and stabilize our SPIs and our extensibility story, and
 this would be a good foundation for people to use, embed or extend.

 On top of that, I think it would be really cool to do some work to:
 - Make it easier to assemble a distributed SCA domain with components
 running on different runtimes / machines.
 - Improve our scripting and JSON-RPC support a little and show how to
 build Web 2.0 applications with Tuscany.
 - Improve our integration story with Tomcat and also start looking at an
 integration with Geronimo.
 - Improve our Spring-based core variant implementation, as I think it's
 a good example to show how to integrate Tuscany with other IoC
containers.
 - Maybe start looking at the equivalent using Google Guice.
 - Start looking again at some of the extensions that we have in contrib
 or sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in
 sandbox, more databindings etc).
 - ...

 I'm not sure we can do all of that in the next few weeks :) but I'd like
 to get your thoughts and see what people in the community would like to
 have in that next release...


I'm not sure we could do all that in three weeks either :)

+1 to a release soon, but to be honest, attempting all the above seems
rushed to me. I think it would be good to focus on a small core of things,
getting them working, tested and documented, and then use that as a
stable base to build on and to attract others in the community to come
help us with all the above work.

The website is starting to look much better these days, but there's still
a lot we can do to give each bit of supported function clear user
documentation. So as one example, for each feature we support - Tomcat,
Jetty, Java components, scripting, Axis2 etc - a page about what it does
and
how to use it. ServiceMix does this quite well I think, eg:
http://incubator.apache.org/servicemix/servicemix-http.html. Once we have
some good doc and an obvious website structure in place it will be much
easier for people adding new function to Tuscany to also add doc to the
website instead of leaving things undocumented.

There's been a ton of work on the runtime code over the last few weeks and
some of it was done a bit hastily just to get things working again, so it
may 

Re: Monitoring, logging, and exceptions (was: Re: Notification of missing extensions)

2007-04-19 Thread Simon Laws

On 4/19/07, Raymond Feng [EMAIL PROTECTED] wrote:


Hi,

I created a prototype to play with aop-based logging and tracing. The
annotation-style aspect development seems to be simpler as it doesn't
require the aspectj compiler. You can see the code at:

http://svn.apache.org/viewvc/incubator/tuscany/sandbox/rfeng/aop-logging

You can download aspectj 1.5.3 from

http://www.eclipse.org/downloads/download.php?file=/tools/aspectj/aspectj-1.5.3.jar
and then run the prototype using test.bat in the root of the project.
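For anyone who can't pull down AspectJ, the idea the prototype demonstrates (keeping tracing out of the application classes) can be sketched with nothing but the JDK's dynamic proxies. All names below are illustrative, not Tuscany or AspectJ APIs:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class TracingProxy {

    // Wrap any interface implementation with entry/exit tracing,
    // without the implementation knowing anything about logging.
    @SuppressWarnings("unchecked")
    public static <T> T trace(Class<T> iface, T target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("enter ").append(method.getName()).append('\n');
            try {
                return method.invoke(target, args);
            } finally {
                log.append("exit ").append(method.getName()).append('\n');
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] {iface}, handler);
    }

    // Illustrative service interface; not part of Tuscany.
    public interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Greeter g = trace(Greeter.class, name -> "Hello " + name, log);
        System.out.println(g.greet("world"));
        System.out.print(log);
    }
}
```

The difference from the AspectJ prototype is that proxies only intercept interface calls and need explicit wrapping, whereas the weaver can advise any join point with no wiring code at all.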

Thanks,
Raymond

- Original Message -
From: Raymond Feng [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Wednesday, April 18, 2007 10:49 AM
Subject: Re: Monitoring, logging, and exceptions (was: Re: Notification of
missing extensions)


 Hi,

 Please see my comments below.

 Thanks,
 Raymond

 - Original Message -
 From: ant elder [EMAIL PROTECTED]
 To: tuscany-dev@ws.apache.org
 Sent: Wednesday, April 18, 2007 3:51 AM
 Subject: Monitoring, logging, and exceptions (was: Re: Notification of
 missing extensions)


 On 4/17/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 snip/

 I found our current Monitor stuff difficult to follow as well. I
suggest
 that we start a new discussion thread to discuss monitoring in
general,
 and try to come up with something that will be more usable and easier
to
 adopt through our whole runtime.


 Starting the new thread for you...

 I agree we should improve monitoring and logging in the runtime.

 I've used AOP before for this type of thing, it's cool, but it does add
 yet another new thing to know about which could be off-putting for new
 developers. How about just using one of the existing logging packages
 that most people are already completely familiar with? Commons Logging
 looks like it's coming to its end, no one really likes java.util.logging,
 so how about SLF4J? It's really easy and nice to use.


 I personally don't like the Commons Logging approach very much due to the
 fact that conflicting versions of it are used by many 3rd-party artifacts.

 With regard to AOP, do we really need to have all the developers learn how
 to use it? I assume we can put together some logging aspects in a separate
 module to take care of most of the logging/monitoring concerns. Other
 modules would not even be aware of the existence of AOP. Isn't it the
 objective of AOP to address cross-cutting concerns without polluting
 the code?

 I also think exception handling could be improved. I don't find the
 current exception formatter design easy to use, and most times stack
 traces end up missing the important bit of information you need. How about
 just using the traditional way of putting everything in the exception
 message and using properties files to allow for I18N?


 I think we might be able to improve the ExceptionFormatter by providing
a
 default formatter which could dump out all the information in the
 exception object. We already have a similar function in
 org.apache.tuscany.assembly.util.PrintUtil and we could further
enhance
 it.

 To support I18N, we could adopt a pattern for the exception so that a
  getter or a field can be recognized as the message id.
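 A minimal sketch of that message-id pattern, assuming a hypothetical
 exception class and bundle (none of these are actual Tuscany types):

```java
import java.text.MessageFormat;
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class I18nExceptionDemo {

    // Hypothetical base exception: the message id is a field a
    // formatter can recognize, per the pattern discussed above.
    public static class TuscanyException extends Exception {
        private final String messageId;
        private final Object[] params;

        public TuscanyException(String messageId, Object... params) {
            super(messageId);
            this.messageId = messageId;
            this.params = params;
        }

        public String getMessageId() { return messageId; }
        public Object[] getParams() { return params; }
    }

    // Inline bundle standing in for a locale-specific .properties file.
    public static class Messages extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {
                {"EXT-0001", "No extension registered for binding {0}"},
            };
        }
    }

    // The formatter resolves the id to localized, parameterized text.
    public static String format(TuscanyException e, ResourceBundle bundle) {
        return MessageFormat.format(bundle.getString(e.getMessageId()), e.getParams());
    }

    public static void main(String[] args) {
        TuscanyException e = new TuscanyException("EXT-0001", "binding.ws");
        System.out.println(format(e, new Messages()));
    }
}
```

 The point is that the exception itself stays locale-free; only the
 formatter touches the bundles, so I18N doesn't leak into runtime code.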

 One thing I've wondered about is having a release specifically targeting
 these RAS-type features. So once we've worked out the strategy for
 logging, exceptions, internationalization etc. we have a release where a
 big focus is on implementing/fixing/testing all these RAS things.


 +1. Enabling RAS is a big effort.

   ...ant




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

On the positive side this seems like an approach we could use that
separates logging/tracing from the application code. However you still need
the aspectj runtime to weave the annotation-based aspects into the
application class files, right? I am attracted to doing tracing with
aspectj so that we can turn it off completely when we are done. However I
support using a more standard approach for logging higher-level and
persistent events, which we might want to plug into other monitoring tools.

Regards

Simon


Re: Website - Feedback please

2007-04-19 Thread Simon Laws

On 4/19/07, ant elder [EMAIL PROTECTED] wrote:


On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 4/19/07, ant elder [EMAIL PROTECTED] wrote:
 
  On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:
 
  snip/
 
  - I like the list of modules I think we should go with the module
 name
   from the code and link to a separate
 page for each one. (take a look I've made an example). We can
 then
   use
   URLs such as
  http://cwiki.apache.org/confluence/display/TUSCANY/binding-ws to
    refer directly to the module description(*)
 
 
   I like having one wiki page with the module name per module as well,
   but do we really want all of those listed on the Java SCA Subproject
   page? That page seems more user-oriented, giving an overview of Java SCA
   capabilities, whereas all the individual modules are a really deep
  implementation detail. For example how about on the Java SCA
 Subproject
  page say Tuscany has a web service binding and links to a web
service
  page
   which talks about the WS capabilities, and it's that web service page
  where
  the list of WS related modules could go: binding-ws, binding-ws-xml,
  binding-ws-axis2, binding-ws-cxf, interface-wsdl, interface-wsdl-xml,
  interface-wsdl-runtime etc. Similar thing for all the other binding,
  implementation, databinding, runtime etc.
 
 ...ant
 
 I agree that this list doesn't need to go on this page but it would be
 good
 to have a straight list somewhere so it's easy to get the low down on a
  module. Perhaps in the developer guide, as I had hoped that these module
 pages
 will include design information.  I would expect the user docs for the
 modules, i.e. what to put in the SCDL to make them work, to go in the
User
 Guide section. This could have a more user friendly index as  suggested


A complete list does sound useful. How about the developer guide links to
something like an architecture page which has a diagram of the runtime, a
bit of a description about it, and the complete list of modules? Eg,
Similar
to [1] but the other way up and more text explaining it.

   ...ant

[1]

http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA+Modulization+Design+Discussions


I think that's spot on. I didn't know the page had been extended to include
the module list. Let's link it into the architecture page (when we decide
which architecture page we are having ;-). We can use module links from this
page to record module design information. Module user information would be
separate of course. So does this hierarchy look right?

Architecture Guide ---> Module list ---> Module technical detail
                             ^
                             |
User Guide ---> Implementation/Binding/Databinding... list ---> Extension
User Guide

We could probably tie the two together as you suggest by indicating which
modules are used to implement an Implementation/Binding/Databinding

Simon


Re: Website - Feedback please

2007-04-20 Thread Simon Laws

On 4/20/07, haleh mahbod [EMAIL PROTECTED] wrote:


Thanks for your comments. I haven't gone through all the details and will
do that tomorrow. However, this caught my eye and I wanted to better
understand your comment.

 Java SCA
  - Architecture Guide
- still pointing to the old one

What do you mean by still pointing to the old one? If you follow the
link you should see this page:

http://cwiki.apache.org/TUSCANY/java-sca-architecture-overview.html

I agree that the content should be updated, but want to make sure you are
seeing this page.


 - DeveloperGuide
- I still think there should be a developer guide as it fits well
under

What is a developer guide and how is the content different than what would
go into the 'get involved' page under development section?

Here is what I was thinking (perhaps it is not right):
User Guide would hold things like:
 -  Installation/setup information
 -  user type documentation (SCA concepts and examples, etc)
 -  How to develop a simple SCA application followed with more
advanced
topics

GetInvolved link would point to information that anyone wanting to
contribute to SCA Java would need to know about, for example, code
structure, hints on development, etc.


Haleh

On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:

 On 4/19/07, ant elder [EMAIL PROTECTED] wrote:
 
  On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:
  
   On 4/19/07, ant elder [EMAIL PROTECTED] wrote:
   
On 4/19/07, Simon Laws [EMAIL PROTECTED] wrote:
   
snip/
   
- I like the list of modules I think we should go with the
 module
   name
 from the code and link to a separate
   page for each one. (take a look I've made an example). We
 can
   then
 use
 URLs such as

  http://cwiki.apache.org/confluence/display/TUSCANY/binding-ws to
 refer
 directly to the module description(*)
   
   
I like the one wiki page with the module name per module and as
 well,
   but
do we really want all of those listed on the Java SCA Subproject
  page?
That page seems more user oriented giving an overview of Java SCA
capabilities, where as all the individual modules are a really
deep
implementation detail. For example how about on the Java SCA
   Subproject
page say Tuscany has a web service binding and links to a web
  service
page
which talks about the WS capabilities and its that web service
 page
where
the list of WS related modules could go: binding-ws,
binding-ws-xml,
binding-ws-axis2, binding-ws-cxf, interface-wsdl,
 interface-wsdl-xml,
interface-wsdl-runtime etc. Similar thing for all the other
binding,
implementation, databinding, runtime etc.
   
   ...ant
   
   I agree that this list doesn't need to go on this page but it would
be
   good
   to have a straight list somewhere so it's easy to get the low down
on
 a
    module. Perhaps in the developer guide, as I had hoped that these
module
   pages
   will include design information.  I would expect the user docs for
the
   modules, i.e. what to put in the SCDL to make them work, to go in
the
  User
   Guide section. This could have a more user friendly index
 as  suggested
 
 
  A complete list does sound useful. How about the developer guide links
 to
  something like an architecture page which has a diagram of the
runtime,
 a
  bit of a description about it, and the complete list of modules? Eg,
  Similar
  to [1] but the other way up and more text explaining it.
 
 ...ant
 
  [1]
 
 

http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA+Modulization+Design+Discussions
 
 I think that's spot on. I didn't know the page had been extended to
 include the module list. Let's link it into the architecture page (when we
 decide which architecture page we are having ;-). We can use module links
 from this page to record module design information. Module user
 information would be separate of course. So does this hierarchy look right?

 Architecture Guide ---> Module list ---> Module technical detail
                              ^
                              |
 User Guide ---> Implementation/Binding/Databinding... list ---> Extension
 User Guide

 We could probably tie the two together as you suggest by indicating
which
 modules are used to implement an Implementation/Binding/Databinding

 Simon



Hi

1/ Architecture Guide.
It was just that the text here has an M2 feel about it, i.e. it refers to
things like Kernel which is not a term used in the code now (core does
appear though). So I think we should go with an architecture page that
describes how the Tuscany runtime is put together. I prefer the level of
detail that you have put on the kernel-specific architecture page [2],
although of course this needs updating as well. From this wide-ranging
discussion of the kernel we can link, in appropriate places, to the technical
details of the individual modules. So, expanding on a previous post, I
would anticipate something like

Re: Question on ModelObject for binding extension

2007-04-20 Thread Simon Laws

On 4/20/07, Snehit Prabhu [EMAIL PROTECTED] wrote:


Hi,
Is there an updated version of this document (Extending Tuscany) that
reflects the current state of the trunk? Most of the classes in the models
shown are nonexistent today. Is the whole programming model depicted here
irrelevant?
thanks
snehit

On 4/11/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Pamela Fong wrote:
  If I choose to use EMF to generate a model to represent my extended
SCDL
  schema, I would also need to generate EMF model to represent
  sca-core.xsdsince the binding schema extends from the core schema. So
  I would end up
  packaging two generated packages within one binding extension. Someone

  else
  comes along adding extension to sca-core and using EMF to generate the
  model
  code, also needs to package the core and the extended packages. How do
  things co-exist in the long run? Or do we just assume all generated
core
  packages should be identical and thus it's ok to have it multiple
  times in
  the classpath?
 
  On 4/10/07, Jean-Sebastien Delfino  [EMAIL PROTECTED] wrote:
 
  Pamela Fong wrote:
   Hi,
  
   I read the article Extending Tuscany by contributing a new
   implementation /
   binding type by Raymond and Jeremy. Got a question about the
   definition of
   ModelObject. The example in the article is a very simple java
  bean-like
   object. This is fine if all we have to deal with is some simple
   attributes
   in the extension. If the binding requires complex SCDL model
  extension,
   defining the ModelObject by hand may not be the best choice (let's
 say
   one
    could have multiple layers of nested elements and arrays etc.). One

   obvious
   alternative would be to generate some model code based on the
  extension
   xsd using EMF. However, since the binding's xsd extends from
   sca-core.xsd,
   generating model code would require the core model, which doesn't
   exist in
   Tuscany. What would be the recommended mechanism to define the
   ModelObject
   in this case?
  
   -pam
  
 
  Hi,
 
  ModelObject does not exist anymore in the latest assembly model in
 trunk
  (now under java/sca/modules/assembly). The assembly model is now
  represented by a set of interfaces, so you have the flexibility to
  implement your model classes however you want, without having to
extend
  a ModelObject class.
 
  You can choose to use EMF or another suitable XML databinding
 technology
  to implement your model classes or, probably simpler, just create
plain
  Java classes that implement the model interfaces. The only
requirement
  for a binding model class is to implement the o.a.t.assembly.Binding
  interface. Then, if you choose the plain java class option, to read

  the model from XML use StAX as we've done for the other bindings (see
  java/sca/modules/binding-ws-xml for an example), it is actually
pretty
  easy thanks to StAX.
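  For what it's worth, the StAX approach looks roughly like this; the
  element name, attribute and model class below are invented for
  illustration and are not the real binding-ws schema or interfaces:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class BindingReaderSketch {

    // Plain Java model class; in Tuscany it would implement the
    // Binding model interface rather than stand alone like this.
    public static class WSBinding {
        public String uri;
    }

    // Pull-parse the binding element and populate the model object.
    public static WSBinding read(String xml) throws XMLStreamException {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        WSBinding binding = new WSBinding();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "binding.ws".equals(reader.getLocalName())) {
                binding.uri = reader.getAttributeValue(null, "uri");
            }
        }
        return binding;
    }

    public static void main(String[] args) throws Exception {
        WSBinding b = read("<binding.ws uri='http://example.com/greeter'/>");
        System.out.println(b.uri);
    }
}
```

  Because StAX is a pull parser, the reader code walks the events it cares
  about and ignores the rest, which is why hand-writing these loaders stays
  manageable.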
 
  Hope this helps.
 
  --
  Jean-Sebastien
 
 
 
 
 

 Right, if multiple extensions choose to use a specific databinding or
 modeling technology for their models they'll need to coordinate if they
 want to share some code.

  I would recommend trying to implement your model using plain Java
  classes first; at least this way you can share more code with the other
 model modules from Tuscany.

 --
 Jean-Sebastien






Hi Snehit,

You are correct, the code base has moved on since the document you refer to
was written. Not only is the code being tidied up but we are trying to
improve the docs as well. I just took a look and Raymond has started a page on how
to extend Tuscany [1] on the project website/wiki. It's just a start and I
don't think that the way extension points are written has settled down 100%
but you get the idea.

Most of the function of the Tuscany SCA runtime implementation is provided
using the extension point/module activator mechanism that Raymond is
starting to describe. If you look at the list of modules in the source code
[2] you see it is starting to grow in length. Not all of these are loaded as
extensions, but all of the implementations, bindings and databindings are.

Picking one at random, say the runtime that supports components implemented
using Java, you can see a module called implementation-java-runtime. Look
inside there and you see a module activator file [3] which in turn refers to
a class that is run automatically by the runtime (using the JDK service
loading mechanism) when it loads all of the extension modules it finds on
the classpath. If you look inside the referenced class [4] you can see what
it has to do to register support for the SCA implementation.java element.
I have to admit that I'm not an expert on how 
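The shape of the mechanism, as far as I can tell, is roughly the following
sketch. The interface and class names are stand-ins, not the actual Tuscany
SPI, and the real runtime discovers the activator class name from the
META-INF/services file via the JDK service loading mechanism rather than
being handed it directly as here:

```java
import java.util.HashMap;
import java.util.Map;

public class ActivatorSketch {

    // Illustrative stand-in for the Tuscany module activator SPI.
    public interface ModuleActivator {
        void start(Map<String, Object> registry);
    }

    // What a module like implementation-java-runtime would supply: a class
    // named in its META-INF/services file that registers its support.
    public static class JavaImplementationActivator implements ModuleActivator {
        public void start(Map<String, Object> registry) {
            registry.put("implementation.java", "java-component-support");
        }
    }

    // The runtime side: instantiate the discovered activator reflectively
    // and let it register whatever its module contributes.
    public static Map<String, Object> boot(String activatorClassName) throws Exception {
        Map<String, Object> registry = new HashMap<>();
        ModuleActivator activator = (ModuleActivator)
                Class.forName(activatorClassName).getDeclaredConstructor().newInstance();
        activator.start(registry);
        return registry;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> registry =
                boot("ActivatorSketch$JavaImplementationActivator");
        System.out.println(registry.keySet());
    }
}
```

So a new extension only needs to ship its activator class and the services
file naming it; nothing in the core runtime has to change.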

Re: [DISCUSS] Next version - What should be in it

2007-04-20 Thread Simon Laws

On 4/20/07, Luciano Resende [EMAIL PROTECTED] wrote:


+1 on focusing on the stability and consumability for the core functions,
other than helping to simplify the runtime further and work on a Domain
concept, I also want to contribute around having a better integration with
App Servers, basically start by bringing back WAR plugin and TC
integration.

+1 on Raymond as Release Manager

On 4/20/07, Raymond Feng [EMAIL PROTECTED] wrote:

 Hi,

 Considering that we want to achieve this in about 3 weeks, I agree that
we
 focus on the stability and consumability for the core functions.

 Other additional features are welcome. We can decide if they will be
part
 of
 the release based on the readiness.

  Are any of you going to volunteer to be the release manager? If not, I
  can give it a try.

 Thanks,
 Raymond

 - Original Message -
 From: Jean-Sebastien Delfino [EMAIL PROTECTED]
 To: tuscany-dev@ws.apache.org
 Sent: Wednesday, April 18, 2007 6:07 PM
 Subject: Re: [DISCUSS] Next version - What should be in it


  Davanum Srinivas wrote:
  Folks,
 
  Let's keep the ball rolling...Can someone please come up with a
master
  list of extensions, bindings, services, samples which can then help
  decide what's going to get into the next release. Please start a wiki
  page to document the master list. Once we are done documenting the
  list, we can figure out which ones are MUST, which ones are nice to
  have, and which ones are out of scope. Then we can work backwards to
  figure out how tightly or loosely coupled each piece is/should be and
  how we could decouple them if necessary using
  interfaces/spi/whatever...
 
  Quote from Bert Lamb:
  I think there should be a voted upon core set of extensions,
  bindings, services, samples, whatever that should be part of a
  monolithic build.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16062.html
 
  Quote from Ant Elder:
  The specifics of what extensions are included in this release is left
 out
  of
  this vote and can be decided in the release plan discussion. All this
  vote
  is saying is that all the modules that are to be included in this
next
  release will have the same version and that a top level pom.xml will
  exist
  to enable building all those modules at once.
  http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16155.html
 
  Thanks,
  dims
 
 
 
  Hi all,
 
  I think we have made good progress since we initially started this
  discussion. We have a simpler structure in trunk with a working
top-down
  build. Samples and integration tests from the integration branch have
 been
  integrated back in trunk and most are now working.
 
  We have a more modular runtime with a simpler extension mechanism. For
  example we have separate modules for the various models, the core
 runtime
  and the Java component support. SPIs between the models and the rest
of
  the runtime have been refactored and should become more stable. We
need
 to
  do more work to further simplify the core runtime SPIs and improve the
  core runtime but I think this is going in the right direction.
 
  I'm also happy to see better support for the SCA 1.0 spec, with
support
  for most of the SCA 1.0 assembly XML, and some of the SCA 1.0 APIs. It
  looks like extensions are starting to work again in the trunk,
including
  Web Services, Java and scripting components. It shouldn't be too
 difficult
  to port some of the other extensions - Spring, JMS, JSON-RPC -  to the
  latest code base as well.
 
  So, the JavaOne conference is in three weeks, would it make sense to
try
  to have a Tuscany release by then?
 
  We could integrate in that release what we already have working in
 trunk,
  mature and stabilize our SPIs and our extensibility story, and this
 would
  be a good foundation for people to use, embed or extend.
 
  On top of that, I think it would be really cool to do some work to:
  - Make it easier to assemble a distributed SCA domain with components
  running on different runtimes / machines.
  - Improve our scripting and JSON-RPC support a little and show how to
  build Web 2.0 applications with Tuscany.
  - Improve our integration story with Tomcat and also start looking at
an
  integration with Geronimo.
  - Improve our Spring-based core variant implementation, as I think
it's
 a
  good example to show how to integrate Tuscany with other IoC
containers.
  - Maybe start looking at the equivalent using Google Guice.
  - Start looking again at some of the extensions that we have in
contrib
 or
  sandboxes (OSGI, ServiceMix, I think there's a Fractal extension in
  sandbox, more databindings etc).
  - ...
 
  I'm not sure we can do all of that in the next few weeks :) but I'd
like
  to get your thoughts and see what people in the community would like
to
  have in that next release...
 
  --
  Jean-Sebastien
 
 

Re: Using Tuscany in a webapp, was: [DISCUSS] Next version - What should be in it

2007-04-22 Thread Simon Laws

On 4/21/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


[snip]
Luciano Resende wrote:
 +1 on focusing on the stability and consumability for the core
functions,
 other than helping on simplifying the runtime further and work on a
 Domain
 concept, I also want to contribute around having a better integration
 with
 App Servers, basically start by bringing back WAR plugin and TC
 integration.


Do we still need a special Tuscany WAR Maven plugin? I was under the
impression that the Tuscany WAR plugin was there to automate the
packaging of all the pieces of Tuscany runtime and their configuration
in a WAR, assuming that it was probably too complicated to do that
packaging by hand.

Before reactivating this machinery, could we take a moment and have a
discussion to understand why this packaging was so complicated that we
needed a special Maven plugin to take care of it? and maybe come up with
a simpler packaging scheme for Web applications and WARs?

To put this discussion in context, I'm going to start with a very simple
question... What do we want to use WARs for? What scenarios do we want
to support in our next release that will require a WAR?

--
Jean-Sebastien



Hi


My desire for WAR packaging was to allow people to easily deploy Tuscany
applications (samples?) to existing web application installations. I.e.
allow Tuscany to be plugged into environments people are familiar with. I
was mulling over how to get some more interesting samples into the Java
space, e.g. porting over Andy's alert aggregator sample from the C++ SCA
implementation, and this is why I happened to be thinking about how to make
it work. But we have also had a recent exchange on the user list on this
subject [1].

I actually hadn't considered the mechanism by which this is achieved. I
wasn't aware this was being done in the past by a mvn plugin; I was just
aware that, certainly in M1, you could see Tuscany samples as WAR files that
were used with Tomcat in the release. It's good that you have started this
discussion. Let's get consensus on whether we should provide it. Also, if
there is an easier and more natural way of providing this integration then
we should investigate that, e.g. if this integration should be host-webapp
or host-tomcat, host-apache etc. then that's fine. If it's already done then
even better.

Regards

Simon

[1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg00839.html


Re: [VOTE] Andy Grove for Tuscany Committer

2007-04-23 Thread Simon Laws

On 4/23/07, Pete Robbins [EMAIL PROTECTED] wrote:


+1

On 23/04/07, Andrew Borley [EMAIL PROTECTED] wrote:

 +1 from me

 On 4/23/07, Luciano Resende [EMAIL PROTECTED] wrote:
  +1 Welcome Andy
 
  On 4/23/07, ant elder [EMAIL PROTECTED] wrote:
  
   +1 from me.
  
  ...ant
  
   On 4/23/07, kelvin goodson [EMAIL PROTECTED] wrote:
   
Andy has taken part in SDO Java and C++ discussions since November
 of
2006,
in particular in the area of the Community Test Suite (CTS).  As
 some of
you
may not follow this closely, I've distilled quite a bit of detail
 from
   the
lists to show Andy's participation.  He ...
   
   
   - been active in creating and resolving numerous JIRAs
   - did some of the work of the initial drop of tests to the SDO
 Java
   CTS from Rogue Wave and in the CTS infrastructure design,
 including
ensuring
   vendor independence.
   - has discovered and offered solutions to a number of anomalies
   between the CTS and the specification
   - developed and contributed tests for testing XML schema choice
   function.
   - provided good insights to the required and permitted
behaviours
 of
   implementations when dealing with elements which are nillable
   - has taken part in discussions for an M1 release of the CTS
   - Initiated discussions on DataHelper formats wrt dates and
 durations
   - developed new test cases for spec section 9.10 -- XML without
   Schema
   to SDO Type and Property
   - solicited input from the Tuscany community with respect to
the
   equivalence or otherwise of null URIs versus empty strings,  in
 order
to
   feed back to the spec group
   - took a significant part in discussions of how to ensure the
CTS
 is
   test harness agnostic, and provided patches to update tests to
 assist
in
   this goal
   - contributed a set of tests for XSD complex types
   - provided support to the community with problems running the
CTS
 and
   with insights into new Junit features
   
Aside from Tuscany, Andy has been active in the SDO Java and C++
specification efforts, and I think he will be a great asset to the
project.
   
Regards, Kelvin.
   
  
 
 
 
  --
  Luciano Resende
  http://people.apache.org/~lresende
 





--
Pete


Looks good to me. +1. Welcome Andy.

Regards

Simon


ApacheCon Europe

2007-04-23 Thread Simon Laws

Hi

At relatively short notice I've sorted out a trip to ApacheCon Europe in
Amsterdam next week. I expect to be there Tuesday evening through Friday and
would love to put faces to the names of any of the Tuscany crowd. So if you
fancy meeting up for a beer/juice/coffee etc. drop me a line and we'll sort
something out.

Also I'm trying to get a BOF together to talk about Tuscany progress etc. If
you want to come along then go and bump up the counter at (
http://wiki.apache.org/apachecon/BirdsOfaFeatherEu07) and hopefully they
will find us a slot.

See you there

Simon


Distributed Composites

2007-04-24 Thread Simon Laws

Following on from the release content thread [1] I'd like to kick off a
discussion on how we resurrect support for a distributed runtime. We had
this feature before the core modularization and I think it would be good to
bring it back again. For me this is about working out how the Tuscany
runtime can be successfully hosted in a distributed environment without
having to recreate what is done very well by existing distributed computing
technologies.

The assembly model specification [3] deals briefly with this in its
discussion of SCA Domains

An SCA Domain represents a complete runtime configuration, potentially
distributed over a series of interconnected runtime nodes.

Here is my quick sketch of the main structures described by the SCA Domain
section of the assembly spec.

SCA Domain
  Logical Services - service to control the domain
  InstalledContribution (1 or more)
    Base URI
    Installed artifact URIs
    Contribution - SCA and non-SCA artefacts contributed to the runtime
      /META-INF/sca-contribution.xml
        deployable (composites)
        import
        export
    Dependent Contributions - references to installed contributions on
      which this one depends
    Deployment Time Composites - composites added into an installed
      contribution
  Virtual Domain Level Composite - the running set of SCA artefacts. Top
    level services are visible outside the domain
    Component, Service, Reference
    Derived from notionally included installed composites
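To make the containment relationships concrete, the hierarchy above could be
modelled as plain data classes. These are purely illustrative, not proposed
Tuscany types:

```java
import java.util.ArrayList;
import java.util.List;

public class DomainSketch {

    // Illustrative containment model following the outline above.
    public static class Contribution {
        public String baseUri;
        public List<String> deployables = new ArrayList<>();   // from sca-contribution.xml
        public List<Contribution> dependencies = new ArrayList<>();
    }

    public static class Domain {
        public List<Contribution> contributions = new ArrayList<>();
        public List<String> domainComposite = new ArrayList<>(); // virtual domain-level composite

        // Installing a contribution notionally includes its deployable
        // composites in the virtual domain-level composite.
        public void deploy(Contribution c) {
            contributions.add(c);
            domainComposite.addAll(c.deployables);
        }
    }

    public static void main(String[] args) {
        Contribution c = new Contribution();
        c.baseUri = "file:///contributions/store/";
        c.deployables.add("store.composite");
        Domain d = new Domain();
        d.deploy(c);
        System.out.println(d.domainComposite);
    }
}
```

The interesting design questions start when this model has to be shared or
partitioned across runtime nodes, which is what the rest of this note is
about.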

The assembly spec says that "A goal of SCA's approach to deployment is that
the contents of a contribution should not need to be modified in order to
install and use the contents of the contribution in a domain." This seems
sensible in that we don't want to have to rewrite composite files every time
we alter the physical set of nodes on which they are to run. Typically in a
distributed environment there is some automation of the assignment of
applications to nodes to cater for resilience, load balancing, throughput
etc.

The assembly spec is not prescriptive about how an SCA Domain should be
mapped and supported across multiple runtime nodes but I guess the starting
point is to consider the set of components a system has, i.e. the set of
(top level?) components that populate the Virtual Domain Composite and
consider them as likely candidates for distributing across runtimes.

So I would expect a manager of a distributed SCA runtime to go through a
number of stages in getting the system up and running.

Define an SCA Domain (looking at the mailing list, Luciano is thinking
along these lines too)
 - domain definition
  - as simple as a file structure (based on hierarchy from assembly
spec) in a shared file system.
  - could implement more complex registry based system
 - allocate nodes to the domain
  - As simple as starting up an SCA runtime on each node in the domain.
  - For more complex scenarios might want to use a
scheduling/management system
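To make the "simple as a file structure" option concrete, here is a rough sketch of what such a shared directory might look like. Every name below is my own invention; the assembly spec does not prescribe a layout.

```shell
# Sketch of a shared-file-system SCA domain; directory names are assumptions,
# not taken from the assembly spec.
rm -rf /tmp/sca-domain-demo
mkdir -p /tmp/sca-domain-demo/contributions/accounts/META-INF
touch /tmp/sca-domain-demo/contributions/accounts/META-INF/sca-contribution.xml
# one directory per runtime node allocated to the domain
mkdir -p /tmp/sca-domain-demo/nodes/nodeA /tmp/sca-domain-demo/nodes/nodeB
find /tmp/sca-domain-demo -mindepth 1 | sort
```

In this picture, adding a contribution to the domain is just copying it under contributions/, and every node sees it immediately; a registry-based implementation would replace the file system with a lookup service.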

Add contributions to the domain
 - Identify contributions required to form running service network
(developers will build/define contributions)
 - Contribute these contributions to the domain
  - in the simple shared file system scenario I would imagine they just
end up on the file system available to all nodes in
the domain.

Add contributions to the Virtual Domain Level Composite
 - At this point I think we have to know where artifacts are physically
going to run
 - It could be that all runtimes load all contributions and only expose
those destined for them, i.e. each node has the full model loaded but
knows which bits it's running.
 - Alternatively we could target each node specifically and ask it to load
a particular installed contribution and so define
   a distributed model.

Manage the Domain
 - Need to be able to target the logical service provided by the domain at
the appropriate runtime node

In order to make this work the sca default binding has to be able to work
remotely across distributed runtimes so we need to select an existing
binding, or create a new one, to do this.

I think in the first instance we should set the bar fairly low, i.e. have the
target be running a sample application across two SCA runtimes supporting
java component implementations. This pretty much picks up where we were with
the distribution support before the core modularization effort.

I'm not sure what the target scenario should be but we could take one of the
samples we already have, e.g. SimpleBigBank which happens to have two simple
java components in its implementations, but we could go with any of them.

Thoughts?

Simon


[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16898.html
[2] http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg16831.html
[3]
http://www.osoa.org/display/Main/Service+Component+Architecture+Specifications


Re: Making clear that modules under contrib do not build

2007-04-24 Thread Simon Laws

On 4/23/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


I think that the java/sca/contrib directory has potential to confuse
people, as many modules under contrib have obsolete pom.xml files, are
not actively maintained and are not building.

I was thinking about renaming the pom.xml files under contrib to
pom.xml.off, for example, and adding a README to this directory indicating
the status of these modules, to make this clear and avoid people
wasting time trying to build and use these old modules, like our old
Tuscany WAR plugin under contrib.

What do people think?

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Just yesterday I had a conversation with someone saying that one of the
contributed bindings was broken, so I think this is a good idea.
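The renaming scheme could be scripted along these lines; a sketch only, with a throwaway directory and invented module name standing in for the real contrib tree:

```shell
# Sketch: disable stale modules by renaming their poms so Maven skips them.
# /tmp/contrib-demo and the module name are made up for illustration.
rm -rf /tmp/contrib-demo
mkdir -p /tmp/contrib-demo/tuscany-war-plugin
touch /tmp/contrib-demo/tuscany-war-plugin/pom.xml
for pom in /tmp/contrib-demo/*/pom.xml; do
  mv "$pom" "$pom.off"   # Maven no longer treats this directory as a module
done
ls /tmp/contrib-demo/tuscany-war-plugin
# → pom.xml.off
```

Restoring a module is then just renaming pom.xml.off back, which keeps the history and contents intact while making the "do not build" status obvious.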

Thanks

Simon


Re: Databinding itests hang, was: svn commit: r531619 - /incubator/tuscany/java/sca/itest/pom.xml

2007-04-24 Thread Simon Laws

On 4/24/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Jean-Sebastien Delfino wrote:
 The databinding itests seem to hang, blocking the build. Is anybody
 else running into this?

 I have moved these itests temporarily out of the build until this is
 resolved.

 [EMAIL PROTECTED] wrote:
 Author: jsdelfino
 Date: Mon Apr 23 14:43:24 2007
 New Revision: 531619

 URL: http://svn.apache.org/viewvc?view=rev&rev=531619
 Log:
 Databinding integration tests hang. Moving them out of the main build
 until this is resolved.

 Modified:
 incubator/tuscany/java/sca/itest/pom.xml

 Modified: incubator/tuscany/java/sca/itest/pom.xml
 URL:

http://svn.apache.org/viewvc/incubator/tuscany/java/sca/itest/pom.xml?view=diff&rev=531619&r1=531618&r2=531619


==

 --- incubator/tuscany/java/sca/itest/pom.xml (original)
 +++ incubator/tuscany/java/sca/itest/pom.xml Mon Apr 23 14:43:24 2007
 @@ -44,7 +44,9 @@
  <module>callback-set-conversation</module>
  <module>contribution</module>
  <module>conversations</module>
 +<!--
  <module>databindings</module>
 +-->
  <module>exceptions</module>
  <module>operation-overloading</module>
  <module>properties</module>



 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]






After having switched to use Tomcat instead of Jetty I'm seeing other
errors (like Tomcat not being initialized). The following code in
DataBindingCase does not look right to me:

private static boolean initalised = false;
private GreeterService greeterClient;

/**
 * Runs before each test method
 */
protected void setUp() throws Exception {
if (!initalised) {
SCARuntime.start("greeter.composite");
super.setUp();
initalised = true;
}
}

/**
 * Runs after each test method
 */
protected void tearDown() {

}


I'm not sure why we have this initialized field, and also the tearDown
method should stop the SCA runtime or this test case is going to break
other test cases running after it, by leaving its SCA runtime started
and associated with the current thread.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Hi


I think you are right that the tearDown method needs some more in it. I'll
go and fix it.
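A sketch of the shape the fix might take. FakeRuntime is an invented stand-in for SCARuntime so the snippet is self-contained and runnable; the real code would call SCARuntime.start/stop. The point is simply that whatever setUp starts, tearDown must stop, so the runtime is never left bound to the current thread.

```java
// Sketch: test lifecycle where tearDown undoes what setUp started.
// FakeRuntime is a made-up stand-in for SCARuntime, not the real API.
public class LifecycleSketch {
    static class FakeRuntime {
        static boolean started;
        static void start(String composite) { started = true; }
        static void stop() { started = false; }
    }

    static void setUp() {
        FakeRuntime.start("greeter.composite"); // start per test, no guard flag
    }

    static void tearDown() {
        FakeRuntime.stop(); // the piece missing from the original tearDown
    }

    public static void main(String[] args) {
        setUp();
        System.out.println("after setUp started=" + FakeRuntime.started);
        tearDown();
        System.out.println("after tearDown started=" + FakeRuntime.started);
    }
}
```

Dropping the static initialised flag also removes the ordering dependency between test methods, at the cost of a runtime start per test.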

My build is locking up on the Tuscany WSDL Integration Tests though. It
took 53 seconds to run the 16 tests in
org.apache.tuscany.sca.itest.WSDLTestCase successfully and then failed after
530 seconds on the org.apache.tuscany.sca.itest.SDOWSDLTestCase with a
SocketTimeoutException. So something funny is going on somewhere.

I'll investigate further, but if anyone has any bright ideas, let me know.

Simon


Re: Next release name? (was: Re: [DISCUSS] Next version - What should be in it)

2007-04-24 Thread Simon Laws

On 4/24/07, kelvin goodson [EMAIL PROTECTED] wrote:


Ant,
  your note is well timed as I've had a couple of off-line chats with
people
in the last week about release naming, particularly with regard to the
effect that a milestone or alpha name can have on uptake of a release.  In
the IRC chat of 16th April [1] we reached a conclusion that given the fact
that a new release candidate had just been posted for consideration, we
would leave naming as it was.  However, I got the impression that in
general
the community was giving me an implicit +0 vote to retaining the M3
release
tag, but the ideal would be to move to a beta1 tag. At the time there was
a
handful of small SDO 2.1 spec features for which we didn't have a first
cut
implementation.  Now this has reduced to just a couple,  and it seemed
that
there was consensus from the discussion that a beta* tag was not
incompatible with this state,  so long as the omissions were documented.

The SDO RC3 has been available for a little while for comment,  but has
not
received much attention.  I have a couple of small non-blocking issues
with
the candidate that I have spotted that I would like to tidy up.  So I
propose that I quickly cut a new 1.0-incubating-beta1 tag from the M3 tag,
make my small fixes (including adopting the incubating name convention
over
the previous incubator convention) post a new candidate and start a vote
on
that candidate. I'd like to do this ASAP and I don't think this is
contentious, but I guess I need to give a little time for reaction before
proceeding, as my actions would not be in accordance with the outcome of
the community discussions; I propose to do this at start of UK business
tomorrow.

Kelvin

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg16772.html

On 24/04/07, ant elder [EMAIL PROTECTED] wrote:

 What are we going to be calling this next SCA release?

 We've had M1 and M2 releases, some alpha kernel releases, DAS are
talking
 about an M3 release and SDO is doing an M3 release although there was
some
 discussion about renaming that to beta1. I think milestone and alpha
 release
 names may discourage people from trying a release as it makes it sound
 unstable. The spec defined SCA APIs are stable now and we're talking
about
 making stable SPIs for this next release, so the Tuscany externals are
 becoming stable and that sounds better than alpha quality to me.

 So how about the next Tuscany SCA release is named beta1? and we could
try
 to get DAS and SDO to also follow that naming?

 Any comments or alternative name suggestions?

...ant



Ant

This is an interesting idea. I think going to beta1 will better describe the
type of release I (we) would like to see. I think though that this does
underline our need to get the supporting material e.g. samples, docs etc. up
to the level we would expect of a beta release. This is not a surprise, it's
been discussed on the release content thread and elsewhere but I think a
naming proposition like this can help focus the mind (separate thread
required to get all this stuff sorted).

So are you suggesting we go to 1.0-incubating-beta1 as Kelvin suggested?

Are there any modules that would be part of a beta release but would not be
named this way? I don't have anything in mind, just asking.

Are there modules that we have in the build that we would choose to leave
out if we call it a beta release?

Simon


Databinding itest locking up?

2007-04-24 Thread Simon Laws

Following Sebastien's post about the databinding test locking up (he went
ahead and removed it from the itest pom to get the build to work) I tried
the test and, once I had changed the poms to depend on http-jetty as they
used to, I got the same effect of the test hanging. The process is sitting
with port 8080 in the CLOSE_WAIT state which basically means that the client
hasn't closed the socket correctly.
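As an aside, the stuck state is easy to spot from the command line. The sketch below runs against a canned sample line (the addresses are made up) rather than live output, so it behaves the same everywhere; in practice you would pipe netstat -an | grep 8080 through the same awk.

```shell
# Sketch: pull the connection-state column out of netstat-style output.
# The sample line is canned; addresses and ports are invented.
sample="tcp4  0  0  127.0.0.1:8080  127.0.0.1:51234  CLOSE_WAIT"
printf '%s\n' "$sample" | awk '{print $NF}'
# → CLOSE_WAIT
```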

I went in and commented out the line

//   requestMC.getOptions().setProperty(HTTPConstants.REUSE_HTTP_CLIENT, Boolean.TRUE);

in Axis2TargetInvoker.invokeTarget, which was introduced by TUSCANY-1218 [1],
and it worked again. Now, Sebastien did point out a fault/feature of the
itest: as checked in, the databinding itests only create one runtime for all
the tests that are run, but neglect to call SCARuntime.stop() at the end. I
have put this change (stopping the runtime) into my local copy and it still
locks up when using REUSE_HTTP_CLIENT.

Is anyone else seeing this kind of behaviour? All of the basic web services
tests work on my box and I don't see that the SDO databinding itest (the
test I'm trying at the moment) is doing anything different.

Also, did the databinding itests run OK after the TUSCANY-1218 change was
applied? We could be dealing with some combination of the effect of that
change and the changes since then.

I should note that, in the past, I have not experienced the connection
refused error that originated 1218. Is there some kind of system setting
that's being toggled off again by the code change (just a stab in the dark)?

Simon

[1] http://issues.apache.org/jira/browse/TUSCANY-1218


Re: Tomcat errors when trying to build the latest trunk

2007-04-24 Thread Simon Laws

On 4/24/07, Simon Nash [EMAIL PROTECTED] wrote:


I'm seeing lots of Tomcat-related test failures when trying to build
the latest trunk code.  I've done a new checkout and cleaned out my
maven repo.  Here's a sample:

Running org.apache.tuscany.binding.axis2.itests.HelloWorldTestCase
log4j:WARN No appenders could be found for logger (
org.apache.axiom.om.util.StAXUtils).
log4j:WARN Please initialize the log4j system properly.
24-Apr-2007 17:35:17 org.apache.catalina.startup.Embedded start
INFO: Starting tomcat server
24-Apr-2007 17:35:18 org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.10
24-Apr-2007 17:35:18 org.apache.catalina.startup.ContextConfig defaultWebConfig
INFO: No default web.xml
24-Apr-2007 17:35:18 org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_0.xsd
24-Apr-2007 17:35:18 org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for
/javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
24-Apr-2007 17:35:18 org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
24-Apr-2007 17:35:18 org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
24-Apr-2007 17:35:19 org.apache.catalina.core.StandardWrapper unload
INFO: Waiting for 1 instance(s) to be deallocated
24-Apr-2007 17:35:19 org.apache.coyote.http11.Http11Protocol destroy
INFO: Stopping Coyote HTTP/1.1 on http-8080
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.014 sec
Running org.apache.tuscany.binding.axis2.Axis2ServiceTestCase
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec
Running org.apache.tuscany.binding.axis2.itests.endpoints.WSDLRelativeURITestCase
24-Apr-2007 17:35:19 org.apache.catalina.startup.Embedded start
INFO: Starting tomcat server
24-Apr-2007 17:35:19 org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.10
24-Apr-2007 17:35:19 org.apache.catalina.startup.ContextConfig defaultWebConfig
INFO: No default web.xml
24-Apr-2007 17:35:19 org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
24-Apr-2007 17:35:19 org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
24-Apr-2007 17:35:23 org.apache.coyote.http11.Http11Protocol destroy
INFO: Stopping Coyote HTTP/1.1 on http-8080
24-Apr-2007 17:35:24 org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: socket closed
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
at java.net.ServerSocket.implAccept(ServerSocket.java:450)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
at java.lang.Thread.run(Thread.java:595)
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.168 sec  FAILURE!
testCalculator(org.apache.tuscany.binding.axis2.itests.endpoints.WSDLRelativeURITestCase)  Time elapsed: 5.137 sec  ERROR!
java.lang.reflect.UndeclaredThrowableException
at $Proxy6.getGreetings(Unknown Source)
at org.apache.tuscany.binding.axis2.itests.HelloWorldOMComponent.getGreetings(HelloWorldOMComponent.java:31)
at org.apache.tuscany.binding.axis2.itests.endpoints.AbstractHelloWorldOMTestCase.testCalculator(AbstractHelloWorldOMTestCase.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:232)
at junit.framework.TestSuite.run(TestSuite.java:227)
at org.junit.internal.runners.OldTestClassRunner.run(OldTestClassRunner.java:35)
at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138)
at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:125)
at org.apache.maven.surefire.Surefire.run(Surefire.java:132)
 

Re: Tomcat errors when trying to build the latest trunk

2007-04-24 Thread Simon Laws

On 4/24/07, Simon Nash [EMAIL PROTECTED] wrote:


Simon Laws wrote:

 On 4/24/07, Simon Nash [EMAIL PROTECTED] wrote:


 I'm seeing lots of Tomcat-related test failures when trying to build
 the latest trunk code.  I've done a new checkout and cleaned out my
 maven repo.  Here's a sample:

 

 Is anyone else experiencing this?

Simon


 Hi Simon,

 I don't see this but I see the itest lockups later on in the build that
 Sebastien also gets.  Did you do a mvn clean also?

It was a fresh checkout so there was no need to do a clean.  The only
things in my local maven repo were the artifacts built by SDO.  I don't
have anything listening on port 80 or 8080.

 I did try running the databinding itest with tomcat and it wasn't very
 happy
 so switched back to jetty for the time being while chasing the lockup
 problem. I know that's not much help to you at the moment but generally
 things feel slightly amiss in the web services/app server area to me.

I noticed that these tests now use Tomcat instead of jetty as previously.
How do I switch between Tomcat and jetty?

   Simon



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

I changed the dependency in the pom from http-tomcat to http-jetty.
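For reference, the switch is just a change to one dependency in the itest pom; a sketch only, as the groupId and version here are guesses and should be copied from the real pom:

```xml
<!-- Sketch: swap http-tomcat for http-jetty in the itest pom.
     groupId and version are assumptions; take them from the actual pom. -->
<dependency>
    <groupId>org.apache.tuscany.sca</groupId>
    <artifactId>http-jetty</artifactId>
    <scope>test</scope>
</dependency>
```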


Simon


Re: Tomcat errors when trying to build the latest trunk

2007-04-24 Thread Simon Laws

On 4/24/07, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


[snip]
Simon Laws wrote:

 I did try running the databinding itest with tomcat and it wasn't very
 happy

Simon, what exceptions are you getting when running the databinding
itest with tomcat? do you have a log?

Thanks.

--
Jean-Sebastien


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Yep, I copied the start of it below. I'm not looking at this at the
moment. I glanced at it and assumed I'd left some extra dependency out in
switching to tomcat. I switched back to jetty where it used to work and am
currently trying to work out why the recent axis changes are now causing
this lockup problem. I can't prove yet that the error below doesn't happen
on Jetty.

24-Apr-2007 20:24:44 org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
24-Apr-2007 20:24:44 org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
24-Apr-2007 20:24:45 org.apache.axis2.deployment.URLBasedAxisConfigurator getAxisConfiguration
INFO: No repository found , module will be loaded from classpath
org.apache.tuscany.databinding.TransformationException: java.lang.RuntimeException: org.eclipse.emf.ecore.resource.Resource$IOWrappedException: Package with uri 'http://apache.org/tuscany/sca/itest/databinding/types' not found.
   at org.apache.tuscany.databinding.sdo.XMLStreamReader2DataObject.transform(XMLStreamReader2DataObject.java:48)
   at org.apache.tuscany.databinding.sdo.XMLStreamReader2DataObject.transform(XMLStreamReader2DataObject.java:34)
   at org.apache.tuscany.databinding.impl.MediatorImpl.mediate(MediatorImpl.java:83)
   at org.apache.tuscany.core.databinding.transformers.Input2InputTransformer.transform(Input2InputTransformer.java:169)
   at org.apache.tuscany.core.databinding.transformers.Input2InputTransformer.transform(Input2InputTransformer.java:46)
   at org.apache.tuscany.databinding.impl.MediatorImpl.mediate(MediatorImpl.java:83)
   at org.apache.tuscany.core.databinding.wire.DataBindingInteceptor.transform(DataBindingInteceptor.java:189)
   at org.apache.tuscany.core.databinding.wire.DataBindingInteceptor.invoke(DataBindingInteceptor.java:86)
   at org.apache.tuscany.binding.axis2.Axis2ServiceBinding.invokeTarget(Axis2ServiceBinding.java:246)
   at org.apache.tuscany.binding.axis2.Axis2ServiceInOutSyncMessageReceiver.invokeBusinessLogic(Axis2ServiceInOutSyncMessageReceiver.java:54)
   at org.apache.axis2.receivers.AbstractInOutSyncMessageReceiver.receive(AbstractInOutSyncMessageReceiver.java:39)
   at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:497)
   at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:328)
   at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:254)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)

Simon

