Policy Framework Scenarios.

2007-11-26 Thread Venkata Krishnan
Hi,

Most of the core policy framework is now implemented, except for the part
that evaluates the 'appliesTo' XPath against the SCDL, which is still a
bit incomplete.  I hope to wrap this up within a week.

Meanwhile, I'd like to vet and evolve what has been done against different
user perspectives... here are the perspectives I could think of.  Could
people kindly help with their opinions and inputs on this?  If any of you
have other scenarios or ways of approaching this, please
pitch in...

A) Perspective of Policy Administrator 

- defines a bunch of intents and policysets for the domain, in the
definitions.xml
- profiles the various binding-types and implementation-types for the
various intents it 'mustProvide' and 'mayProvide'
1) How does the Policy-Admin know, from a binding/impl type, about the
intents that it provides for?  Should every binding/impl type have its own
definitions.xml file where it publishes this information?  The spec says
that there is just this one file for the entire SCA domain - have I got it
wrong?
2) What about the bunch of intents that the spec states must be supported
by every SCA runtime, such as authentication, confidentiality,
integrity, etc.?  Since it makes no sense to have every
binding/impl type define these as well, should we have a global
definitions.xml in the core module where we define them?
3) A binding / implementation type could have its own custom model for
representing policies within policysets and interpreting them.  For example,
the ws-binding-axis2 uses a config param model (which is custom made) and
the ws-policy assertion model (which is a standard) to represent policies.  How
should this model information be communicated to the Policy Admin in a
standard way that is consistent across binding/impl types?  If we allow
every binding/impl type to have its own definitions.xml, then could this also
contain the xsd for the policy model?
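To make perspective A concrete, here is a rough sketch of what a domain-wide definitions.xml might look like. The intent and policySet names and the WS-Policy placeholder are invented for illustration; treat the exact attributes as assumptions to be checked against the SCA 1.0 policy framework spec:

```xml
<!-- Hypothetical domain-wide definitions.xml; names are illustrative only -->
<definitions xmlns="http://www.osoa.org/xmlns/sca/1.0"
             targetNamespace="http://example.org/policy">
  <intent name="confidentiality" constrains="sca:binding">
    <description>Messages must be protected from unauthorized reading.</description>
  </intent>
  <policySet name="SecureWsPolicy" provides="confidentiality"
             appliesTo="sca:binding.ws">
    <!-- binding-specific assertions (e.g. WS-Policy) would go here -->
  </policySet>
</definitions>
```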

B) Perspective of Binding/Impl type developer ...
- defines the intents and xsds for the policy model that the binding/impl
type will use
- defines the StAX processors for loading the policy model that the
binding/impl type will use
- adds code to interpret various policies and exercise them.
1) Do we leave the design for this to every binding / implementation
type, or do we put in a programming model that is common across all
binding/impl types?  I feel it would be better to leave it to the
binding/impl extension, because each extension will have its own way of
implementing various QoS and of interfacing with a QoS infrastructure
as part of its (i.e. the extension's) lifecycle.  For example, the
binding-ws-axis2 injects security-related policies into the axis2-config
at service and client creation time, and does nothing specific during
invocation of service operations.

Sorry about making this very long.

Thanks
- Venkat


Re: Distribution structure for SCA Java 1.1 release (was Re: Sample dependencies not pulled in distribution)

2007-11-26 Thread Rajini Sivaram
Simon,

I did take a look at splitting the Tuscany distribution into bundles in
the hope of defining something which makes sense for OSGi as well as
non-OSGi. I don't really think that makes much sense anymore. Grouping
modules into OSGi bundles using existing maven plugins was far too time
consuming (in terms of the amount of time it took to do a build), and quite
messy.

So I would like to go for a simpler option for OSGi, where the zip/jar
files generated for the Tuscany distribution have a manifest file containing
OSGi bundle manifest entries, so that they can be directly installed into
OSGi (with an easy repackaging option to get rid of samples from the bundle
if the bundle size is too big). I would also like to add OSGi manifest
entries to all jars distributed by Tuscany, including 3rd party jars, so
that we can use the OSGi bundle repository API to install Tuscany into an
OSGi runtime, instead of relying on the Tuscany distribution structure.

I have an Eclipse plugin which shows the dependency graphs based on the
import/export statements generated by the maven-bundle-plugin. I could
compare these with the dependencies you generated (it might help to add
appropriate scopes to the dependencies).



Thank you...

Regards,

Rajini


On 11/23/07, Simon Laws [EMAIL PROTECTED] wrote:

 On Nov 22, 2007 2:51 PM, ant elder [EMAIL PROTECTED] wrote:

  On Nov 22, 2007 1:57 PM, Simon Nash [EMAIL PROTECTED] wrote:
 
  
   Jean-Sebastien Delfino wrote:
  
[snip]
Simon Nash wrote:
   
Samples are very important for beginning users.  For users who have
moved beyond that stage and are doing real development using Tuscany,
samples are not very important.  If people in this category do want
samples, they are likely to just want to refer to samples source code
to cut and paste snippets as necessary.  Having pre-built sample binaries
isn't important for these users, and having the main lib directory
polluted/bloated by samples dependencies is a positive nuisance because
there's no easy way for them to find and remove the redundant files.
   
   
I didn't think we were polluting the lib directory with sample
dependencies, do you have a concrete example?
   
   I thought this thread was discussing the case of a sample having a
   dependency that the runtime does not have.  If there are no such cases
   at present, then the issue doesn't arise.  However, there could be
   such cases in the future as we add more application-style samples,
   and it would be good to have an idea about how such dependencies would
   be handled.
  
   
Having these files in Tuscany's lib directory isn't just wasting a few
bits on the disk.  It can be a problem if their version levels conflict
with other versions of the same code that the user has installed.
For genuine Tuscany dependencies, such conflicts are a real issue
that must be handled carefully in order to get Tuscany to co-exist with
their other software.  For sample dependencies, there is no actual
conflict unless the user needs to run the specific sample that pulled
in the dependency,
   
   
Like I said earlier in the initial thread about sample dependencies, I
don't think that samples should bring dependencies that are not genuine
Tuscany dependencies.
   
   OK, we are agreed about this.  But what if an application-style sample
   does have a non-Tuscany dependency?  This is certainly possible.  Would
   the Tuscany distro include the dependency, or leave it up to the user
   to download it as a prereq to running the sample?
  
but it might take them some time to figure out why putting the Tuscany
lib directory on the classpath is causing other code in their application
to break.
   
I'd suggest structuring the binary distribution as follows:
   
1. Tuscany runtime in modules and its dependencies in lib.
   
   
+1
   
   At the moment we have separate copies of the Tuscany runtime in
   modules and lib and I'm not quite sure why.
   
   
Which JARs are you talking about?
   
   I'm talking about the tuscany-sca-all.jar in the lib directory, which
   is a combination of the contents of the jars in the modules directory.
   The tuscany-sca-manifest.jar refers to the tuscany-sca-all.jar
   as well as referring to all the jars in the modules directory, which
   seems somewhat redundant.
  
   
2. Tuscany samples source, READMEs and build files in samples.
   
   
+1
   
3. Tuscany samples binaries in modules/samples,
   
   
I prefer to have the binaries under samples as well, with their source.
   
   Having them there is more convenient but makes it harder to see how
   much space they are consuming.  I did some investigation, and it
   turns out that these binaries are causing a huge expansion in the
   size of the samples directory.
  
   In the 1.0.1 binary distro, the source under the samples directory
  

[DAS] Convert from DB Schema to model XSDs

2007-11-26 Thread Amita Vadhavkar
http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg25916.html

As a first attempt, I am trying to create a utility/tool to convert from
a DB schema to model XSDs -
RDB DAS has a reusable core part which uses DB metadata to create SDO
Types and properties. Its result can
be fed to Tuscany SDO's XSDHelper to form XSDs.

http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg19374.html
DB Schema -> SDO -> XSD conversion is possible without errors, as the XSD
comes from Types generated by SDO.

Use:
When doing static SDO-model-based data access (not only RDB DAS) and the
model is not available or not up-to-date w.r.t. the DB schema -
e.g. cases of rapid prototyping where the DB schema is undergoing changes.

Limitation:
The DB metadata APIs must support:
DatabaseMetaData.getTables(), getPrimaryKeys(), getCrossReference(),
ResultSetMetaData.getColumnCount(), getTableName(), getSchemaName(),
getColumnName(), getColumnType()
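One core piece of such a tool is a mapping from JDBC column types (reported by the metadata calls above) to XSD simple types. The sketch below is only an illustration of that mapping; the actual types emitted by RDB DAS / Tuscany SDO may differ:

```java
import java.sql.Types;
import java.util.Map;

public class JdbcToXsdTypes {
    // Hypothetical mapping from java.sql.Types codes to XSD built-in types;
    // the real RDB DAS / SDO mapping may differ from these choices.
    private static final Map<Integer, String> XSD = Map.of(
        Types.INTEGER,   "xsd:int",
        Types.BIGINT,    "xsd:long",
        Types.VARCHAR,   "xsd:string",
        Types.DECIMAL,   "xsd:decimal",
        Types.TIMESTAMP, "xsd:dateTime",
        Types.BOOLEAN,   "xsd:boolean");

    public static String xsdType(int jdbcType) {
        // Fall back to xsd:string for unmapped column types
        return XSD.getOrDefault(jdbcType, "xsd:string");
    }
}
```

The column type codes would come from ResultSetMetaData.getColumnType() while walking the tables returned by DatabaseMetaData.getTables().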

Suggestions?

Regards,
Amita

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [NOTICE] Rajini Sivaram voted as Tuscany committer

2007-11-26 Thread Mike Edwards

Rajini,

Belated congratulations,

Yours,  Mike.

ant elder wrote:

The Tuscany PPMC and Incubator PMC have voted for Rajini Sivaram to become a
Tuscany committer.

Congratulations and welcome Rajini!

   ...ant






Re: Data transformation from/to POJO

2007-11-26 Thread Mike Edwards

Raymond,

Where angels fear to tread

My initial thoughts about this mused on why people had spent so much 
time on specs like SDO and JAXB.  If mapping POJOs to XML were simple and 
straightforward, why would we need those large specs?


Perhaps you are right in thinking that there are simple cases that can 
be mapped simply.  But then, what do you do about the more awkward cases?


What I'd like us to consider deeply first is whether we want to create 
(yet) another Java-to-XML mapping specification and, if so, what its 
relationship is to the existing ones.


My initial 2 cents


Yours,  Mike.

Raymond Feng wrote:

Hi,

With the recent development of the online store tutorial, we have 
encountered quite a few issues around the transformation between POJOs 
and other databindings (such as XML, JSON).


Let's take POJO <-> XML as an example. Here is a set of questions 
to be answered.


1) Do we require the POJO to be a strict JavaBean, or do we allow a free-form class?

2) How to read properties from a java object?

The data in a java object can be accessed by the field or by JavaBean 
style getter methods. There are different strategies:


a) Always use JavaBean-style getter methods
b) Always use field access
c) A combination of a) and b)

The other factor is the modifier of a field/method definition. What 
modifiers are allowed? public, protected, default and private?


If a property only has a getter method, should we dump the property into 
the XML? How about transient fields?

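Strategy c) above might look something like the following sketch: try the JavaBean getter first and fall back to direct field access, skipping transient fields. This is an illustration of one possible policy, not Tuscany's actual databinding code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class PropertyReader {
    // Strategy c): prefer the JavaBean getter, fall back to the field.
    // Transient fields are skipped (one possible policy among several).
    public static Object read(Object bean, String name) throws Exception {
        String getter = "get" + Character.toUpperCase(name.charAt(0))
                              + name.substring(1);
        try {
            Method m = bean.getClass().getMethod(getter);
            return m.invoke(bean);
        } catch (NoSuchMethodException e) {
            Field f = bean.getClass().getDeclaredField(name);
            if (Modifier.isTransient(f.getModifiers())) {
                return null; // skip transient fields
            }
            f.setAccessible(true);
            return f.get(bean);
        }
    }
}
```

A getter-only property is visible to this strategy even when there is no backing field, which is one answer to the "dump it into the XML?" question above.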

3) How to write properties to populate the target POJO instance?

a) Use JavaBean setters
b) Use field access
c) A combination of a) and b)

When we convert XML element back to a POJO property, how do we 
instantiate the property instance if the property type is an interface 
or abstract class?


For example,

package com.example;
public class MyBean {
    private MyInterface p1;

    public void setP1(MyInterface p1) {
        this.p1 = p1;
    }

    public MyInterface getP1() {
        return p1;
    }
}

Do we require the XML element to contain an xsi:type attribute (generated 
during POJO->XML conversion) to represent the concrete property type? Such as:


<myBean xsi:type="ns1:MyBean" xmlns:ns1="http://example.com/">
   <p1 xsi:type="ns2:MyInterface" xmlns:ns2="http://example.com/"/>
</myBean>

Thanks,
Raymond




[jira] Commented: (TUSCANY-1918) Support for dynamic containment

2007-11-26 Thread bert.robben (JIRA)

[ 
https://issues.apache.org/jira/browse/TUSCANY-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545477
 ] 

bert.robben commented on TUSCANY-1918:
--

Interesting. How would that work? I don't know too much about OASIS. Would that 
involve becoming a member?

 Support for dynamic containment
 ---

 Key: TUSCANY-1918
 URL: https://issues.apache.org/jira/browse/TUSCANY-1918
 Project: Tuscany
  Issue Type: New Feature
  Components: Java SDO Implementation
Affects Versions: Java-SDO-Next
Reporter: bert.robben

 In SDO, the boundaries of a datagraph are defined by the containment 
 relation. Only objects which can be reached from the root object by following 
 properties that are contained are part of the datagraph. Containment is 
 defined at the type level.
 In cases where applications need to dynamically select what information they 
 want, this fixed containment relationship is an issue. For instance, suppose 
 in a medical context you have defined a number of types to represent 
 patients together with their clinical (e.g. procedures they have taken) and 
 administrative data (for instance their address). The type definition needs 
 to decide on the containment of the clinical and administrative data. However, 
 it is hard to decide whether or not the administrative and clinical data 
 should be contained, because some applications might only need clinical or 
 administrative data and others might need both. In cases where the type 
 system is large or where there are large volumes of data involved (for 
 instance in the example, procedures could have an associated pdf-report 
 property) this becomes a real issue.
 Current solutions within the SDO framework could be (for the interested, 
 there has been a mail thread about this a while ago in the user mailing list)
 - Each app should define its own type with an appropriate containment 
 relation. The downside of this is a proliferation of types.
 - The main types should not have any containment relations. Containment is 
 specified using a synthetic type. Think of this as a special list type that 
 contains its elements. The root of the datagraph then would be an instance of 
 such a list type. All instances that are needed should be put in this flat 
 list.
 I would like to propose an alternative solution. In this solution, 
 containment would not be specified at the type level. Whenever the boundary 
 of a datagraph is needed (for instance when an xml document is to be generated 
 or a datagraph is to be exchanged between, for instance, a client and a 
 server), the application should provide appropriate information that 
 specifies exactly what is part of the graph and what is not. This can be seen 
 as a select clause in SQL, or even better as a set of fetch joins in Hibernate. 
 This would give the application control over exactly what it wants. In the 
 example for instance, the application can easily decide at each point whether 
 or not it would want the address information together with the patient data.
 This proposal would have a number of interesting implications.
 - What is the implication of this for cases where datagraphs are represented 
 as xml documents that should be according to an xml schema?
 - How to deal with links to objects that don't belong to the datagraph? One 
 strategy could be just to drop them. Another one to provide some kind of 
 proxy.
 Interested parties can have a look at our SDO implementation (see also JIRA 
 1527 and 1493) where we try to support this.
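The "select clause" idea in the proposal could be expressed through a small fetch-plan object that callers build per serialization; the API below is entirely hypothetical, invented to illustrate the shape of the proposal, and is not part of any SDO implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical "fetch plan": datagraph boundaries are chosen at
// serialization/exchange time instead of at type-definition time.
public class FetchPlan {
    private final Set<String> includedPaths = new HashSet<>();

    // Include a property path in the graph, e.g. "patient/address".
    public FetchPlan include(String path) {
        includedPaths.add(path);
        return this;
    }

    // Serializers would consult this to decide whether to follow a link.
    public boolean contains(String path) {
        return includedPaths.contains(path);
    }
}
```

An application wanting patient data with addresses but without clinical records would build `new FetchPlan().include("patient/address")` and pass it to the serializer; links outside the plan would be dropped or proxied, per the open question above.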

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.





Re: svn commit: r596692 - /incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java

2007-11-26 Thread Simon Laws
On Nov 23, 2007 11:27 PM, Simon Nash [EMAIL PROTECTED] wrote:


 Simon Laws wrote:

  On Nov 20, 2007 6:47 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
  wrote:
 
 
 Simon Laws wrote:
 
 On Nov 20, 2007 3:59 PM, Jean-Sebastien Delfino [EMAIL PROTECTED]
 wrote:
 
 
 Are you sure that this is the right semantics? Can you help me
 understand why we need to change the naming of the service if there's
 a callback?
 
 Thanks.
 
 
 [EMAIL PROTECTED] wrote:
 
 Author: slaws
 Date: Tue Nov 20 06:35:45 2007
 New Revision: 596692
 
 URL: http://svn.apache.org/viewvc?rev=596692view=rev
 Log:
 TUSCANY-1914
 Construct URLs as ComponentName/ServiceName if callbacks have been added,
 causing the number of services to be greater than 1
 
 Modified:
 
 

 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 
 Modified:
 

 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 
 URL:
 
 
 http://svn.apache.org/viewvc/incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java?rev=596692r1=596691r2=596692view=diff
 

 ==
 
 ---
 

 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 
 (original)
 
 +++
 

 incubator/tuscany/java/sca/modules/assembly/src/main/java/org/apache/tuscany/sca/assembly/builder/impl/CompositeConfigurationBuilderImpl.java
 
 Tue Nov 20 06:35:45 2007
 
  @@ -280,7 +280,8 @@
           String bindingURI;
           if (binding.getURI() == null) {
  -            if (componentServices.size() > 1) {
  +            //if (componentServices.size() > 1) {
  +            if (component.getServices().size() > 1) {
                   // Binding URI defaults to component URI / binding name
                   bindingURI = String.valueOf(binding.getName());
                   bindingURI = URI.create(component.getURI() + '/').resolve(bindingURI).toString();
 
 
 
 
 --
 Jean-Sebastien
 
 
 Maybe I'm getting the wrong end of the stick here.
 
 When a callback is encountered on a component reference a new callback
 service is now created to represent the endpoint of the callback
 [createCallbackService(Component, ComponentReference) in the
 CompositeConfigurationBuilder].
 
 I believe the intention is to treat these new services in the same way as
 any other service that the component may be providing.
 
 There's a difference: they cannot be wired to using <reference target=.../>
 
 They can't be wired, but they can be looked up using createSelfReference()
 and getService().  This is necessary to allow them to be passed as
 service references on the setCallback() API.

 For wiring, createSelfReference(), and getService(), it is always
 possible to identify services by the fully qualified component/service
 name.  In addition, if there is only one service on the component
 (counting both regular explicit services and implicit services for
 callbacks), it should be possible to specify the component name
 alone as a shorthand.
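The defaulting rule just described, which the commit above adjusts, amounts to something like the sketch below. The class and method names are invented for illustration; the real logic lives in CompositeConfigurationBuilderImpl:

```java
import java.net.URI;

public class BindingUriDefaulter {
    // Simplified sketch of the URI-defaulting rule: with more than one
    // service on the component (counting implicit callback services),
    // the binding URI defaults to componentURI/bindingName; with a single
    // service, the component URI alone serves as the shorthand.
    public static String defaultUri(String componentUri, String bindingName,
                                    int serviceCount) {
        if (serviceCount > 1) {
            return URI.create(componentUri + '/')
                      .resolve(bindingName).toString();
        }
        return componentUri;
    }
}
```

This makes the fragility discussed next easy to see: the same service's URI changes shape purely because the service count crossed 1.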

 So far so good.  Now we get to the trickier case of how URIs are
 constructed for these services.  For regular explicit services, the
 approach in the spec is inconsistent with how this works for wiring
 etc. as described above.  The difference is that you either get a
 fully qualified component/service name or you get a shorthand
 component name only.

 The either/or/only part of the last sentence has very undesirable
 consequences in the following scenario:
  1. A component has a single service A exposed via the shorthand URI.
  2. Other bindings or non-SCA clients refer to it using the shorthand URI.
  3. Another service B is added to the component.  Now both A and B
 are exposed via fully qualified URIs.
  4. Everyone who was referring to A by the shorthand URI is now broken.

 It's just as bad in the reverse scenario:
  1. A component has a two services A and B exposed via fully qualified
 URIs.
  2. Other bindings or non-SCA clients refer to A using a fully qualified
 URI.
  3. Service B is removed from the component.  Now A is only exposed via
 a shorthand URI.
  4. Everyone who was referring to A by the fully qualified URI is now
 broken.

 The problem can be solved by always exposing services by the fully
 qualified
 

[jira] Commented: (TUSCANY-1918) Support for dynamic containment

2007-11-26 Thread Frank Budinsky (JIRA)

[ 
https://issues.apache.org/jira/browse/TUSCANY-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545487
 ] 

Frank Budinsky commented on TUSCANY-1918:
-

Hi Bert,

If you want to join the TC and attend the regular telecons, you (or your 
company) need to be an OASIS member. We'd love to have you :-) Alternatively, 
anybody can observe what's going on and provide comments. 

Details can be found at: 
http://lists.oasis-open.org/archives/tc-announce/200710/msg00011.html

Frank.

 Support for dynamic containment
 ---

 Key: TUSCANY-1918
 URL: https://issues.apache.org/jira/browse/TUSCANY-1918
 Project: Tuscany
  Issue Type: New Feature
  Components: Java SDO Implementation
Affects Versions: Java-SDO-Next
Reporter: bert.robben


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.





Re: [Policy Fwk Specs Related] Operations Inheritance

2007-11-26 Thread Mike Edwards

Venkat,

Not exactly sure what you're saying here, but I'll try to help.

First, in general if a component reference has an intent attached and it 
gets promoted by some composite reference, the intent applies to the 
composite reference.


If the promoting composite reference has one or more intents applied to 
it, then those intents get added to any intents from the promoted 
component reference.  Let's assume no clashes of intents (in which case 
it's an error).


The basic idea is true for the intents applied to operations within the 
interface.  They get promoted too - but are applied only to the 
operations they are attached to.  Same merging idea applies as well, if 
there are intents on the composite reference.


Does that short explanation help?

The kind of scenario this supports is where, for performance reasons, only 
one or two operations in a service need encryption and all the rest are 
left unencrypted.
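The merge rule Mike describes can be sketched as a simple set union; the class below is only an illustration of the semantics (intents reduced to names, clash detection omitted), not Tuscany's policy code:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class IntentMerger {
    // Sketch of the rule above: intents on the promoting composite
    // reference are added to those promoted from the component reference.
    // Clashing intents would be an error; that check is omitted here.
    public static Set<String> merge(Set<String> promotedIntents,
                                    Set<String> ownIntents) {
        Set<String> result = new LinkedHashSet<>(promotedIntents);
        result.addAll(ownIntents);
        return result;
    }
}
```

The same union applies per operation when intents are attached to individual operations of the promoted interface.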



Yours,  Mike.

Venkata Krishnan wrote:

Hi,

Looking at getting the policies working on operations of services I am
missing out on what is to be done for the following: -

- For operations on composite services when operations are defined in the
component service that is being promoted by this composite service.  Right
now, I aggregate the operations in the component service over to the
composite service.  Where the composite service specifies an operation
already specified in the component service, I have aggregated the intents
and policysets from the component service operations over to the composite
services'.  The same has been done for references as well.  Is this the right
thing to do?

If somebody has better clarity on this please help.

Thanks

- Venkat






Re: Method names in SCADomain* and SCANode* APIs

2007-11-26 Thread Mike Edwards

Simon,

+1 to have consistency.

Shorter is better.

Yours,  Mike.

Simon Nash wrote:

The following method names in domain-api and node-api include a
reference to either a domain or a node:
 SCADomain.addToDomainLevelComposite()
 SCADomain.removeFromDomainLevelComposite()
 SCADomain.getDomainLevelComposite()
 SCADomainFactory.createSCADomain()
 SCADomainFinder.getSCADomain()
 SCANode.getDomain()
 SCANode.addToDomainLevelComposite()
 SCANodeFactory.createSCANode()
 SCANodeFactory.createNodeWithComposite()

Of these 9 method names, 3 of them refer to SCADomain or SCANode
and 6 of them refer to plain Domain or Node.

I would like to remove the SCA from the 3 method names that
include it.  Since the SCADomain* and SCANode* class names already
include SCA to disambiguate them from other kinds of node and
domain, I don't think there is a need to repeat the SCA in the
method names as well.

What do others think about this?

  Simon






Re: [Policy Fwk Specs Related] Operations Inheritance

2007-11-26 Thread Venkata Krishnan
Thanks Mike.  That pretty much answers my query and is just about what I
have implemented as well.

- Venkat


On Nov 26, 2007 9:41 PM, Mike Edwards [EMAIL PROTECTED]
wrote:

 Venkat,

 Not exactly sure what you're saying here, but I'll try to help.

 First, in general if a component reference has an intent attached and it
 gets promoted by some composite reference, the intent applies to the
 composite reference.

 If the promoting composite reference has one or more intents applied to
 it, then those intents get added to any intents from the promoted
 component reference.  Let's assume no clashes of intents (in which case
 it's an error).

 The basic idea is true for the intents applied to operations within the
 interface.  They get promoted too - but are applied only to the
 operations they are attached to.  Same merging idea applies as well, if
 there are intents on the composite reference.

 Does that short explanation help?

 The kind of scenario this supports is where, for performance reasons, only
 one or two operations in a service need encryption and all the rest are
 left unencrypted.


 Yours,  Mike.

 Venkata Krishnan wrote:
  Hi,
 
  Looking at getting the policies working on operations of services I am
  missing out on what is to be done for the following: -
 
  - For operations on composite services when operations are defined in the
 component service that is being promoted by this composite service.  Right
 now, I aggregate the operations in the component service over to the
 composite service.  Where the composite service specifies an operation
 already specified in the component service, I have aggregated the intents
 and policysets from the component service operations over to the composite
 services'.  The same has been done for references as well.  Is this the
 right thing to do?
 
  If somebody has better clarity on this please help.
 
  Thanks
 
  - Venkat
 





Re: Distribution structure for SCA Java 1.1 release (was Re: Sample dependencies not pulled in distribution)

2007-11-26 Thread Simon Nash

I would like to make a start on improving the modularity of the
distro by building a distro containing only a base SCA runtime.
Sebastien's description of this was
 - base SCA runtime (assembly, policy fwk, impl-java)
If others would like to propose any changes to this list and/or
suggest which maven modules should be included, that would be
helpful.  Otherwise I'll go through the modules myself to make
a first cut and iterate from there.

I think this will be a useful exercise in seeing how small we
can make this basic functionality and its dependencies.  It will
also allow us to explore some of the dependency issues between
modules (official SPIs or implementation code) in a smaller and
simpler world than the whole of Tuscany SCA Java.

For now I'd like to consider this a personal experiment that may
or may not ever get released.  For this reason, I'd like to do this
work in my sandbox.  I haven't yet figured out how to do a sandbox
build that pulls in code from the trunk, without copying it.  Does
anyone else have an example of this that I could look at?

Expect a number of these how to questions as I get further into
this :-)

  Simon

Rajini Sivaram wrote:


Simon,

I did take a look at splitting the Tuscany distribution into bundles in
the hope of defining something which makes sense for OSGi as well as
non-OSGi. I don't really think that makes much sense anymore. Grouping
modules into OSGi bundles using existing maven plugins was far too time
consuming (in terms of the amount of time it took to do a build), and quite
messy.

So I would like to go for a simpler option for OSGi where the the zip/jar
files generated for the Tuscany distribution have a manifest file containing
OSGi bundle manifest entries, so that they can be directly installed into
OSGi (with an easy repackaging option to get rid of samples from the bundle
if the bundle size was too big). I would also like to add OSGi manifest
entries into all jars distributed by Tuscany including 3rd party jars, so
that we can use the OSGi bundle repository API to install Tuscany into an
OSGi runtime, instead of relying on Tuscany distribution structure.
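For illustration, the OSGi bundle manifest entries described above might look like the following fragment. The symbolic name, version, and package names here are invented for the example, not the actual Tuscany values:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.apache.tuscany.sca.core
Bundle-Version: 1.1.0
Export-Package: org.apache.tuscany.sca.core
Import-Package: org.osgi.framework;version="1.3.0"
```

With entries like these in a jar's META-INF/MANIFEST.MF, an OSGi framework can install the jar directly as a bundle, without any repackaging.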

I have an Eclipse plugin which shows the dependency graphs based on the
import/export statements generated by the maven-bundle-plugin. I could
compare these with the dependencies you generated (it might help to add
appropriate scopes to the dependencies).



Thank you...

Regards,

Rajini


On 11/23/07, Simon Laws [EMAIL PROTECTED] wrote:


On Nov 22, 2007 2:51 PM, ant elder [EMAIL PROTECTED] wrote:



On Nov 22, 2007 1:57 PM, Simon Nash [EMAIL PROTECTED] wrote:



Jean-Sebastien Delfino wrote:



[snip]
Simon Nash wrote:



Samples are very important for beginning users.  For users who have
moved beyond that stage and are doing real development using Tuscany,
samples are not very important.  If people in this category do want
samples, they are likely to just want to refer to samples source code
to cut and paste snippets as necessary.  Having pre-built sample
binaries isn't important for these users, and having the main lib
directory polluted/bloated by samples dependencies is a positive
nuisance because there's no easy way for them to find and remove the
redundant files.



I didn't think we were polluting the lib directory with sample
dependencies, do you have a concrete example?



I thought this thread was discussing the case of a sample having a
dependency that the runtime does not have.  If there are no such cases
at present, then the issue doesn't arise.  However, there could be
such cases in the future as we add more application-style samples,
and it would be good to have an idea about how such dependencies would
be handled.



Having these files in Tuscany's lib directory isn't just wasting a
few bits on the disk.  It can be a problem if their version levels
conflict with other versions of the same code that the user has
installed.  For genuine Tuscany dependencies, such conflicts are a
real issue that must be handled carefully in order to get Tuscany
to co-exist with their other software.  For sample dependencies,
there is no actual conflict unless the user needs to run the
specific sample that pulled in the dependency,



Like I said earlier in the initial thread about sample dependencies,
I don't think that samples should bring dependencies that are not
genuine Tuscany dependencies.



OK, we are agreed about this.  But what if an application-style sample
does have a non-Tuscany dependency?  This is certainly possible.
Would the Tuscany distro include the dependency, or leave it up to
the user to download it as a prereq to running the sample?



but it might take them some time to figure out why putting the
Tuscany lib directory on the classpath is causing other code in
their application to break.

I'd suggest structuring the binary distribution as follows:

1. Tuscany runtime in modules and its dependencies in lib.



+1



  At the 

Re: Distribution structure for SCA Java 1.1 release (was Re: Sample dependencies not pulled in distribution)

2007-11-26 Thread Simon Nash


Simon Laws wrote:


On Nov 22, 2007 2:51 PM, ant elder [EMAIL PROTECTED] wrote:

(cut)

I can do some more processing on
(http://people.apache.org/~slaws/dependencies.htm)
to give some help here if required.



What do the columns mean?

  Simon



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Distribution structure for SCA Java 1.1 release (was Re: Sample dependencies not pulled in distribution)

2007-11-26 Thread Simon Laws
On Nov 26, 2007 5:18 PM, Simon Nash [EMAIL PROTECTED] wrote:


 Simon Laws wrote:

  On Nov 22, 2007 2:51 PM, ant elder [EMAIL PROTECTED] wrote:
 
  (cut)
 
  I can do some more processing on
  (http://people.apache.org/~slaws/dependencies.htm)
  to give some help here if required.
 
 
 What do the columns mean?

   Simon



 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]

 Column 1 - A jar on which Tuscany depends
Column 2 - The type of dependency
Column 3 - The Tuscany module that requires the dependency
Column 4 to N - The transitive dependency path

Simon


Release 1.1 JIRA

2007-11-26 Thread Simon Laws
Last week I moved the remaining JIRAs from 1.0.1 into the 1.1 bucket, so we
are ready to start accumulating the JIRAs that you want to see in R1.1. Since
our last general roadmap discussion [1][2] I know people have been making
progress, so I want to start populating 1.1 with the JIRAs that belong there
to get a feel for what we can expect/have left to do. I'm still looking at
cutting an RC on 14th December.

I'll have a sweep through completed JIRAs targeting JAVA-SCA-Next. What I'm
particularly interested in, of course, are the incomplete ones. So if you know
which ones you want in, please move them.

Thanks

Simon

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg25458.html
[2]
http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Roadmap+Discussion


Re: Data transformation from/to POJO

2007-11-26 Thread Simon Nash

Mike has brought up a very good point.  I don't think it would make
sense for Tuscany to invent yet another Java to XML mapping.  What
are the issues if we were to go with what JAXB defines for this?

  Simon

Mike Edwards wrote:


Raymond,

Where angels fear to tread

My initial thoughts about this mused on why people had spent so much 
time on specs like SDO and JAXB.  If mapping POJOs to XML was simple and 
straightforward, why did we need those large specs?


Perhaps you are right in thinking that there are simple cases that can 
be mapped simply.  But then, what do you do about the more awkward cases?


What I'd like us to consider deeply first is whether we want to create 
(yet) another Java <-> XML mapping specification and if so, what is its 
relationship to the existing ones.


My initial 2 cents


Yours,  Mike.

Raymond Feng wrote:


Hi,

With the recent development of the online store tutorial, we encounter 
quite a few issues around the transformation between POJO and other 
databindings (such as XML, JSON).


Let's take the POJO <-> XML case as an example. Here is a set of questions 
to be answered.


1) Do we require the POJO to be a strict JavaBean or free-form class?

2) How to read properties from a java object?

The data in a java object can be accessed by the field or by JavaBean 
style getter methods. There are different strategies:


a) Always use JavaBean-style getter method
b) Always use field access
c) A combination of a & b
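As a sketch of the difference between strategies (a) and (b), the snippet below reads a value through the JavaBean Introspector and through direct field access. The Person class and its property are invented for the example; this is not an existing Tuscany API:

```java
// Sketch: two strategies for reading data out of a POJO.
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Field;

class PropertyAccessDemo {
    public static class Person {
        private String name = "Tuscany";          // private field
        public String getName() { return name; }  // JavaBean getter
    }

    // Strategy (a): JavaBean-style access via the Introspector.
    static Object readViaGetter(Object bean, String property) throws Exception {
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors()) {
            if (pd.getName().equals(property) && pd.getReadMethod() != null) {
                return pd.getReadMethod().invoke(bean);
            }
        }
        return null;
    }

    // Strategy (b): direct field access, including non-public fields.
    static Object readViaField(Object bean, String field) throws Exception {
        Field f = bean.getClass().getDeclaredField(field);
        f.setAccessible(true);  // needed for private/protected modifiers
        return f.get(bean);
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person();
        System.out.println(readViaGetter(p, "name"));
        System.out.println(readViaField(p, "name"));
    }
}
```

Strategy (c) would try one and fall back to the other; the modifier question then becomes which fields `setAccessible` is allowed to open up.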

The other factor is the modifier of a field/method definition. What 
modifiers are allowed? public, protected, default and private?


If a property only has a getter method, should we write the property 
into the XML? How about transient fields?


3) How to write properties to populate the target POJO instance?

a) Use JavaBean setter?
b) Use field
c) Combination of a & b

When we convert XML element back to a POJO property, how do we 
instantiate the property instance if the property type is an interface 
or abstract class?


For example,

package com.example;
public class MyBean {
   private MyInterface p1;

   public void setP1(MyInterface p1) {
   this.p1 = p1;
   }

   public MyInterface getP1() {
   return p1;
   }
}

Do we require that the XML element contain an xsi:type attribute, generated 
during POJO -> XML conversion, to represent the concrete property type? 
Such as:


<myBean xsi:type="ns1:MyBean" xmlns:ns1="http://example.com/">
   <p1 xsi:type="ns2:MyInterface" xmlns:ns2="http://example.com/"/>
</myBean>
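One possible way to handle the interface-typed property question is a registry mapping xsi:type QNames to concrete classes. The registry and the class names below are purely illustrative, not an existing Tuscany mechanism:

```java
// Sketch: resolving a concrete class for an interface-typed property
// from an xsi:type-style hint. The qname-to-class map is hypothetical;
// a real databinding would derive it from namespaces/packages.
import java.util.HashMap;
import java.util.Map;

class XsiTypeDemo {
    public interface MyInterface {}
    public static class MyImpl implements MyInterface {}

    private static final Map<String, Class<?>> REGISTRY =
        new HashMap<String, Class<?>>();
    static {
        REGISTRY.put("ns2:MyImpl", MyImpl.class);
    }

    // Without an xsi:type hint, an interface or abstract property type
    // cannot be instantiated at all.
    static Object instantiate(String xsiType) throws Exception {
        Class<?> cls = REGISTRY.get(xsiType);
        if (cls == null || cls.isInterface()) {
            throw new IllegalArgumentException("No concrete class for " + xsiType);
        }
        return cls.newInstance();
    }

    public static void main(String[] args) throws Exception {
        Object value = instantiate("ns2:MyImpl");
        System.out.println(value instanceof MyInterface);
    }
}
```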

Thanks,
Raymond

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Data transformation from/to POJO

2007-11-26 Thread Raymond Feng

Hi,

I'm in the same boat as Mike and you. The discussion was about how we can 
simplify the data transformation of a subset of POJOs following a strict 
pattern without starting from a formal model such as XSD. I don't know of 
any JAXB implementation that can handle a POJO without JAXB annotations. If 
there is one with reasonable support of a default Java/XML mapping (no XSD 
or annotations required), I would be happy to use it.


Thanks,
Raymond

- Original Message - 
From: Simon Nash [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, November 26, 2007 12:36 PM
Subject: Re: Data transformation from/to POJO



Mike has brought up a very good point.  I don't think it would make
sense for Tuscany to invent yet another Java to XML mapping.  What
are the issues if we were to go with what JAXB defines for this?

  Simon

Mike Edwards wrote:


Raymond,

Where angels fear to tread

My initial thoughts about this mused on why people had spent so much time 
on specs like SDO and JAXB.  If mapping POJOs to XML was simple and 
straightforward, why did we need those large specs?


Perhaps you are right in thinking that there are simple cases that can be 
mapped simply.  But then, what do you do about the more awkward cases?


What I'd like us to consider deeply first is whether we want to create 
(yet) another Java <-> XML mapping specification and if so, what is its 
relationship to the existing ones.


My initial 2 cents


Yours,  Mike.

Raymond Feng wrote:


Hi,

With the recent development of the online store tutorial, we encounter 
quite a few issues around the transformation between POJO and other 
databindings (such as XML, JSON).


Let's take the POJO <-> XML case as an example. Here is a set of questions 
to be answered.


1) Do we require the POJO to be a strict JavaBean or free-form class?

2) How to read properties from a java object?

The data in a java object can be accessed by the field or by JavaBean 
style getter methods. There are different strategies:


a) Always use JavaBean-style getter method
b) Always use field access
c) A combination of a & b

The other factor is the modifier of a field/method definition. What 
modifiers are allowed? public, protected, default and private?


If a property only has a getter method, should we write the property into 
the XML? How about transient fields?


3) How to write properties to populate the target POJO instance?

a) Use JavaBean setter?
b) Use field
c) Combination of a & b

When we convert XML element back to a POJO property, how do we 
instantiate the property instance if the property type is an interface 
or abstract class?


For example,

package com.example;
public class MyBean {
   private MyInterface p1;

   public void setP1(MyInterface p1) {
   this.p1 = p1;
   }

   public MyInterface getP1() {
   return p1;
   }
}

Do we require that the XML element contain an xsi:type attribute, generated 
during POJO -> XML conversion, to represent the concrete property type? Such 
as:


<myBean xsi:type="ns1:MyBean" xmlns:ns1="http://example.com/">
   <p1 xsi:type="ns2:MyInterface" xmlns:ns2="http://example.com/"/>
</myBean>

Thanks,
Raymond

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Policy support for implementation elements

2007-11-26 Thread Raymond Feng
It seems the logging example is taken from the SCA policy framework 
specification. Are you going to propose to the spec group to replace it?


'logging' may not be an accurate term, but I think it would be reasonable to 
model the requirements of monitoring or auditing business activities as 
intents.


Thanks,
Raymond

- Original Message - 
From: Mike Edwards [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Tuesday, October 16, 2007 5:12 AM
Subject: Re: Policy support for implementation elements



Venkat,

I'm sorry to be a party pooper here, but I don't think that the use of 
logging via an intent is a good way of doing things.


The SCA idea of intents is that they are a way of expressing requirements 
that an implementation has (or a service or a reference).


So, for a service, it's possible to say "I need my messages encrypted" by 
using @requires="confidentiality".  For an implementation, one of the 
transaction intents like @requires="managedTransaction" would also be 
reasonable.


Logging is a very different thing, in my opinion.  It seems like something 
that is really a runtime configuration option.  I can see that it is 
useful to tell the runtime to log or not to log, and how much to log, but 
I'm really struggling to see why I'd mark my implementations or my 
composites with metadata about logging.


Now, I'm not against capturing the logging levels within a PolicySet, if 
that is convenient.  But the application of those policies to the runtime 
does look like an act of configuring the runtime itself (start the 
runtime with PolicySets X, Y, Z).



Yours,  Mike.

Venkata Krishnan wrote:

Hi...

I have set up a policyset for JDKLogging and a policy handler for the 
same.
All this is now in a separate module called policy-logging.   I have 
hooked

up the policyhandler in the java impl runtime.  It would be good if folks
can give me their opinions on the hooks I have placed to set up and invoke
policies.


I have modified the calculator sample to include the jdk logging policy.
Here is the policyset that is used

<policySet name="tuscany:JDKLoggingPolicy"
    provides="logging"
    appliesTo="sca:implementation.java"
    xmlns="http://www.osoa.org/xmlns/sca/1.0">
  <tuscany:jdkLogger xmlns:tuscany="http://tuscany.apache.org/xmlns/sca/1.0"
      name="test.logger">
    <logLevel>INFO</logLevel>
    <resourceBundle>CalculatorLogMessages</resourceBundle>
    <useParentHandlers>false</useParentHandlers>
  </tuscany:jdkLogger>
</policySet>
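As a rough sketch of what a policy handler could do with a policyset like this, the code below applies the logLevel and useParentHandlers values to a java.util.logging Logger. The JdkLoggerPolicy holder class is hypothetical, standing in for whatever model the policy-logging module actually builds from the XML:

```java
// Sketch: applying jdkLogger policy values to java.util.logging.
import java.util.logging.Level;
import java.util.logging.Logger;

class JdkLoggingPolicyHandler {
    // Hypothetical in-memory model of the policySet above.
    static class JdkLoggerPolicy {
        String loggerName = "test.logger";
        String logLevel = "INFO";
        boolean useParentHandlers = false;
    }

    // Configure the named logger from the policy values.
    static Logger apply(JdkLoggerPolicy policy) {
        Logger logger = Logger.getLogger(policy.loggerName);
        logger.setLevel(Level.parse(policy.logLevel));
        logger.setUseParentHandlers(policy.useParentHandlers);
        return logger;
    }

    public static void main(String[] args) {
        Logger logger = apply(new JdkLoggerPolicy());
        logger.info("add operation invoked");  // emitted, since INFO >= INFO
    }
}
```

Changing logLevel to "ALL" in the policy would let FINE/FINER/FINEST messages through as well, matching the behaviour described below for the calculator sample.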

These are just a few of the things that we could configure in JDK Logging.
We can grow this to include other things as we go along.

Here is how a component in the calculator sample uses the logging intent.

<component name="AddServiceComponent">
  <implementation.java class="calculator.AddServiceImpl"
      requires="logging"/>
</component>

If you run the sample you will see an 'INFO' level log message now for the
add function.  If you change the 'logLevel' element to 'ALL' in the above
policyset, in the definitions.xml file of the calculator sample, and run
again, you'd see more log statements.

I guess applications can use this policyset structure to define their
logging options. If more options are to be supported, such as specifying
additional log handler classes etc., we need to extend the processor
and the handler class in the policy.logging module.  I'll do this as and
when I get feedback on this.

While the current set up enables logging in the java implementation
extension, the logging intent and policy can also be used to configure
logging within the business logic implementation, since JDK logging
functions at a global level.  So, from the SCA composite level you could
instrument the logging within implementations.  I will add something that
demonstrates this to the calculator sample... hope I am right in my
supposition :)

Thanks

- Venkat



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Distribution structure for SCA Java 1.1 release (was Re: Sample dependencies not pulled in distribution)

2007-11-26 Thread Jean-Sebastien Delfino

[snip]
Simon Nash wrote:

I would like to make a start on improving the modularity of the
distro by building a distro containing only a base SCA runtime.
Sebastien's description of this was
 - base SCA runtime (assembly, policy fwk, impl-java)


[snip]

I haven't yet figured out how to do a sandbox

build that pulls in code from the trunk, without copying it.  Does
anyone else have an example of this that I could look at?


Modules assembled by the Maven assembly plugin are taken out of the 
Maven repository, so you shouldn't need to copy the code to come up with 
a different assembly.


For an example, look at the assemblies under sca/distribution.

I'm also going to put together an assembly for the eclipse plugin, I'll 
give a pointer when it's there.


Hope this helps.
--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Distribution structure for SCA Java 1.1 release (was Re: Sample dependencies not pulled in distribution)

2007-11-26 Thread Jean-Sebastien Delfino

Rajini Sivaram wrote:

Simon,

I did take a look at splitting the Tuscany distribution into bundles with
the hope of defining something which makes sense for OSGi as well as
non-OSGi. I don't really think that makes much sense anymore. Grouping
modules into OSGi bundles using existing maven plugins was far too time
consuming (in terms of the amount of time it took to do a build), and quite
messy.

So I would like to go for a simpler option for OSGi where the zip/jar
files generated for the Tuscany distribution have a manifest file containing
OSGi bundle manifest entries, so that they can be directly installed into
OSGi


+1 from me. I'm glad you reached that conclusion; after thinking about 
it, that was the only option that made sense to me :)


(with an easy repackaging option to get rid of samples from the bundle

if the bundle size was too big).


Didn't quite get that, can you explain?

I would also like to add OSGi manifest

entries into all jars distributed by Tuscany including 3rd party jars, so
that we can use the OSGi bundle repository API to install Tuscany into an
OSGi runtime, instead of relying on Tuscany distribution structure.



Not sure I'd like to go and change 3rd party dependency jars... but that 
triggers a very basic question:


Independent of Tuscany, isn't this something that every OSGi user is 
going to bump into with non-OSGified 3rd party jars? What is the best 
practice in the OSGi community for this issue these days?




I have an Eclipse plugin which shows the dependency graphs based on the
import/export statements generated by the maven-bundle-plugin. I could
compare these with the dependencies you generated (it might help to add
appropriate scopes to the dependencies).



Great, that'll help. Thanks.

--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Data transformation from/to POJO

2007-11-26 Thread Simon Nash

There is a default Java to XML mapping (without annotations) in the
JAXB spec, in addition to the customized mapping (with annotations).
Does the default mapping have unacceptable limitations?  If so, what
are they?

  Simon

Raymond Feng wrote:


Hi,

I'm in the same boat as Mike and you. The discussion was about how we can 
simplify the data transformation of a subset of POJOs following a strict 
pattern without starting from a formal model such as XSD. I don't know of 
any JAXB implementation that can handle a POJO without JAXB annotations. 
If there is one with reasonable support of a default Java/XML mapping (no 
XSD or annotations required), I would be happy to use it.


Thanks,
Raymond

- Original Message - From: Simon Nash [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Monday, November 26, 2007 12:36 PM
Subject: Re: Data transformation from/to POJO



Mike has brought up a very good point.  I don't think it would make
sense for Tuscany to invent yet another Java to XML mapping.  What
are the issues if we were to go with what JAXB defines for this?

  Simon

Mike Edwards wrote:


Raymond,

Where angels fear to tread

My initial thoughts about this mused on why people had spent so much 
time on specs like SDO and JAXB.  If mapping POJOs to XML was simple 
and straightforward, why did we need those large specs?


Perhaps you are right in thinking that there are simple cases that 
can be mapped simply.  But then, what do you do about the more 
awkward cases?


What I'd like us to consider deeply first is whether we want to 
create (yet) another Java <-> XML mapping specification and if so, 
what is its relationship to the existing ones.


My initial 2 cents


Yours,  Mike.

Raymond Feng wrote:


Hi,

With the recent development of the online store tutorial, we 
encounter quite a few issues around the transformation between POJO 
and other databindings (such as XML, JSON).


Let's take the POJO <-> XML case as an example. Here is a set of 
questions to be answered.


1) Do we require the POJO to be a strict JavaBean or free-form class?

2) How to read properties from a java object?

The data in a java object can be accessed by the field or by 
JavaBean style getter methods. There are different strategies:


a) Always use JavaBean-style getter method
b) Always use field access
a) Always use JavaBean-style getter method

The other factor is the modifier of a field/method definition. What 
modifiers are allowed? public, protected, default and private?


If a property only has a getter method, should we write the property 
into the XML? How about transient fields?


3) How to write properties to populate the target POJO instance?

a) Use JavaBean setter?
b) Use field
c) Combination of a & b

When we convert XML element back to a POJO property, how do we 
instantiate the property instance if the property type is an 
interface or abstract class?


For example,

package com.example;
public class MyBean {
   private MyInterface p1;

   public void setP1(MyInterface p1) {
   this.p1 = p1;
   }

   public MyInterface getP1() {
   return p1;
   }
}

Do we require that the XML element contain an xsi:type attribute, generated 
during POJO -> XML conversion, to represent the concrete property 
type? Such as:


<myBean xsi:type="ns1:MyBean" xmlns:ns1="http://example.com/">
   <p1 xsi:type="ns2:MyInterface" xmlns:ns2="http://example.com/"/>
</myBean>

Thanks,
Raymond

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Data transformation from/to POJO

2007-11-26 Thread Jean-Sebastien Delfino

Raymond Feng wrote:

Hi,

I'm in the same boat as Mike and you. The discussion was about how we can 
simplify the data transformation of a subset of POJOs following a strict 
pattern without starting from a formal model such as XSD. I don't know of 
any JAXB implementation that can handle a POJO without JAXB annotations. 
If there is one with reasonable support of a default Java/XML mapping (no 
XSD or annotations required), I would be happy to use it.


Thanks,
Raymond



I think I can guess where that discussion is going so let me try to 
bring a different perspective.


The discussion started from me trying to put together a simple online 
store application. My application has a catalog and a shopping cart.


Both contain Item business objects (representing fruits and vegetables).
I need to flow these Items through local calls, ATOMPub, JSON-RPC
and SOAP.


This online store application is developed as a tutorial and in the 
initial steps I write Item as a simple JavaBean.


I don't have any strong preference for a particular databinding or 
another, but could you guys please help me understand how I go from Item 
to something that actually works in the complete application with the 
bindings I need?


I'm open to change Item, to write some transformation/mediation code if 
there's really no way to flow the same Item through XML and non-XML, or 
whatever other creative solution you find, as long as it's reasonably 
simple (in other words this little data business doesn't become the most 
complicated part of the application).


The current code for the online store is here:
https://svn.apache.org/repos/asf/incubator/tuscany/java/sca/tutorial/assets
https://svn.apache.org/repos/asf/incubator/tuscany/java/sca/tutorial/store

Thanks.
--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Method names in SCADomain* and SCANode* APIs

2007-11-26 Thread Jean-Sebastien Delfino

Simon Nash wrote:

The following method names in domain-api and node-api include a
reference to either a domain or a node:
 SCADomain.addToDomainLevelComposite()
 SCADomain.removeFromDomainLevelComposite()
 SCADomain.getDomainLevelComposite()
 SCADomainFactory.createSCADomain()
 SCADomainFinder.getSCADomain()
 SCANode.getDomain()
 SCANode.addToDomainLevelComposite()
 SCANodeFactory.createSCANode()
 SCANodeFactory.createNodeWithComposite()

Of these 9 method names, 3 of them refer to SCADomain or SCANode
and 6 of them refer to plain Domain or Node.

I would like to remove the SCA from the 3 method names that
include it.  Since the SCADomain* and SCANode* class names already
include SCA to disambiguate them from other kinds of node and
domain, I don't think there is a need to repeat the SCA in the
method names as well.

What do others think about this?

  Simon



+1

--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Data transformation from/to POJO

2007-11-26 Thread Raymond Feng

Hi,

I just did a test to see how JAXB-RI handles the POJO without any 
annotations. The result seems to be promising.


I started with a POJO:

public class MyBean {
    private int age;
    private String name;
    private List<String> notes = new ArrayList<String>();

    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public List<String> getNotes() {
        return notes;
    }
    public void setNotes(List<String> notes) {
        this.notes = notes;
    }
}

The following test case is then successful.

public void testPOJO() throws Exception {
    JAXBContext context = JAXBContext.newInstance(MyBean.class);
    StringWriter writer = new StringWriter();
    MyBean bean = new MyBean();
    bean.setName("Test");
    bean.setAge(20);
    bean.getNotes().add("1");
    bean.getNotes().add("2");
    JAXBElement<Object> element = new JAXBElement<Object>(
        new QName("http://ns1", "bean"), Object.class, bean);

    context.createMarshaller().marshal(element, writer);
    System.out.println(writer.toString());
    Object result = context.createUnmarshaller().unmarshal(
        new StringReader(writer.toString()));

    assertTrue(result instanceof JAXBElement);
    JAXBElement e2 = (JAXBElement) result;
    assertTrue(e2.getValue() instanceof MyBean);
}

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><ns2:bean 
xsi:type="myBean" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns:ns2="http://ns1"><age>20</age><name>Test</name></ns2:bean>



Thanks,
Raymond
- Original Message - 
From: Simon Nash [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, November 26, 2007 1:50 PM
Subject: Re: Data transformation from/to POJO



There is a default Java to XML mapping (without annotations) in the
JAXB spec, in addition to the customized mapping (with annotations).
Does the default mapping have unacceptable limitations?  If so, what
are they?

  Simon

Raymond Feng wrote:


Hi,

I'm in the same boat as Mike and you. The discussion was about how we can 
simplify the data transformation of a subset of POJOs following a strict 
pattern without starting from a formal model such as XSD. I don't know of 
any JAXB implementation that can handle a POJO without JAXB annotations. If 
there is one with reasonable support of a default Java/XML mapping (no XSD 
or annotations required), I would be happy to use it.


Thanks,
Raymond

- Original Message - From: Simon Nash [EMAIL PROTECTED]
To: tuscany-dev@ws.apache.org
Sent: Monday, November 26, 2007 12:36 PM
Subject: Re: Data transformation from/to POJO



Mike has brought up a very good point.  I don't think it would make
sense for Tuscany to invent yet another Java to XML mapping.  What
are the issues if we were to go with what JAXB defines for this?

  Simon

Mike Edwards wrote:


Raymond,

Where angels fear to tread

My initial thoughts about this mused on why people had spent so much 
time on specs like SDO and JAXB.  If mapping POJOs to XML was simple 
and straightforward, why did we need those large specs?


Perhaps you are right in thinking that there are simple cases that can 
be mapped simply.  But then, what do you do about the more awkward 
cases?


What I'd like us to consider deeply first is whether we want to create 
(yet) another Java <-> XML mapping specification and if so, what is its 
relationship to the existing ones.


My initial 2 cents


Yours,  Mike.

Raymond Feng wrote:


Hi,

With the recent development of the online store tutorial, we encounter 
quite a few issues around the transformation between POJO and other 
databindings (such as XML, JSON).


Let's take the POJO <-> XML case as an example. Here is a set of questions 
to be answered.


1) Do we require the POJO to be a strict JavaBean or free-form class?

2) How to read properties from a java object?

The data in a java object can be accessed by the field or by JavaBean 
style getter methods. There are different strategies:


a) Always use JavaBean-style getter method
b) Always use field access
c) A combination of a & b

The other factor is the modifier of a field/method definition. What 
modifiers are allowed? public, protected, default and private?


If a property only has a getter method, should we write the property 
into the XML? How about transient fields?


3) How to write properties to populate the target POJO instance?

a) Use JavaBean setter?
b) Use field
c) Combination of a & b

When we convert XML element back to a POJO property, how do we 
instantiate the property instance if the property type is an interface 
or abstract class?


For example,

package com.example;
public class MyBean {
   private MyInterface p1;

   public void setP1(MyInterface p1) {
   this.p1 = p1;
   }

   public MyInterface getP1() {
   return p1;
   }
}

Do we require that the XML element contain an xsi:type attribute, generated 
during POJO -> XML conversion, to represent the concrete 

Re: Do we still need special handling of callback bindings and wires?

2007-11-26 Thread Greg Dritschler
I missed it when it happened, but it appears that this discussion was
settled (not sure where or how) in favor of not ever creating static wires
for callbacks.  Why is that?  I have no idea what the performance cost is
one way or the other, but I agree with Simon Nash that it seems a bit
strange to have static wires in the forward direction and only a dynamic
wire in the callback direction.

Greg Dritschler

On Aug 21, 2007 11:00 AM, Simon Nash [EMAIL PROTECTED] wrote:

 Comments inline.

   Simon

 Raymond Feng wrote:

  Comments inline.
 
  Thanks,
  Raymond
 
  - Original Message - From: Simon Nash [EMAIL PROTECTED]
  To: tuscany-dev@ws.apache.org
  Sent: Monday, August 20, 2007 5:14 PM
  Subject: Re: Do we still need special handling of callback bindings and
  wires?
 
 
  The short answer is Yes.  The long answer follows below :-)
 
  I'll describe the design approach used by the code in my patch for
  TUSCANY-1496.  Things are moving rapidly in this area with Raymond's
  work to support late binding between references and services, so some
  of this description may need to be updated.
 
 
  It's my turn to update the description now :-)
 
 
  Wires may be reference wires or service wires:
   1. Reference wires connect a source reference to a target binding
  and endpoint.  The source reference could be a callback service's
  pseudo-reference.
   2. Service wires connect a binding endpoint to a service
 implementation.
  The service implementation could be a callback reference's
  pseudo-service.
 
  Reference wires may be static or dynamic:
   1. A static wire targets a specific binding and endpoint (local or
  remote).  Dispatching a call down an invocation chain for this
  wire results in a call to the statically configured binding and
  endpoint for the wire.
   2. A dynamic wire targets a specific binding but an unspecified
  endpoint.  The actual target endpoint is provided at invocation
  time.  Depending on the binding type, dynamic wires may perform
  worse than static wires, or their performance may be the same.
  Some bindings may only support static wires.  Some may only support
  dynamic wires.  Some may support both, with static wires providing
  better performance.
 
 
  I'm not sure why you think it's the binding's job to support static or
  dynamic wires. To me, a dynamic wire needs to be bound to an endpoint
  before it can be used for invocations.
 
 Maybe the terminology static and dynamic is confusing here.  By
 static I mean a wire that is bound to a specific target endpoint and
 all invocations down that wire will go to this pre-bound endpoint.
 By dynamic I mean a wire that is not pre-bound to a specific endpoint,
 allowing each invocation down the wire to specify its target endpoint.

 Some bindings can optimize if they have static knowledge of the
 target.  The local SCA binding is in this category, because static
 pre-knowledge allows the source and target invocation chains to be
 connected (now by means of the binding invoker), so that each invocation
 becomes a direct call through pre-built invocation chains.

 Other bindings perform the same whether or not they have this static
 knowledge.  The Axis2 Web Service binding is in this category, because
 it always creates an Axis2 operation client for each request, and it
 passes the target endpoint into Axis2 as a creation parameter for the
 operation client.

 Requiring all wires to be pre-bound to a target endpoint before they
 can be used for an invocation would require many more wires to be created
 than is necessary.  An extreme case of this is callbacks over Web Services
 from multiple clients to a single service, where the service's callback
 pseudo-reference should not use a separate callback wire for each client
 but should have a single dynamic wire that can invoke any client endpoint.
 Forcing every callback operation to create and bind a runtime wire first
 is unnecessary and will incur both time and space costs.
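The contrast described above can be sketched with two toy wire classes. This is illustrative only, with hypothetical names, not Tuscany's actual runtime SPI:

```java
import java.net.URI;
import java.util.function.BiFunction;

// A static wire is pre-bound to one endpoint at creation time; every
// invocation goes to that endpoint.
final class StaticWire {
    private final BiFunction<Object, URI, Object> invoker;
    private final URI boundEndpoint;

    StaticWire(BiFunction<Object, URI, Object> invoker, URI boundEndpoint) {
        this.invoker = invoker;
        this.boundEndpoint = boundEndpoint;
    }

    Object invoke(Object message) {
        return invoker.apply(message, boundEndpoint); // always the same target
    }
}

// A dynamic wire accepts the target endpoint with each invocation, so one
// wire can serve callbacks to any number of client endpoints.
final class DynamicWire {
    private final BiFunction<Object, URI, Object> invoker;

    DynamicWire(BiFunction<Object, URI, Object> invoker) {
        this.invoker = invoker;
    }

    Object invoke(Object message, URI callbackEndpoint) {
        return invoker.apply(message, callbackEndpoint); // target chosen per call
    }
}
```

The design trade-off in the thread maps onto these shapes: a binding that can pre-build an invocation path (like the local SCA binding) benefits from the static form, while the callback pseudo-reference case wants a single dynamic wire rather than one pre-bound wire per client.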

 
  Service wires are effectively always static since on the service
  side, the binding and endpoint is known.  Every service and binding
  combination has a single service wire that is used by the binding
  provider to invoke the service.
 
  For statically connected references and services (e.g., wired in SCDL,
  using an SCA binding, and locally accessible), static forward wires
  are created.  The core can't fully complete the end-to-end invocation
  chain for the static wire, so the start methods of bindings that
  support local optimization (like the local SCA binding) can complete
  these connections using information provided by the core.
 
 
  Now we support the lazy creation of RuntimeWire/Invocation for a
  reference. I also changed the code to have the RuntimeSCABindingInvoker
  to delegate the call to the first invoker in the target service chain
  instead of trying to merge/connect the two chains together.
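The "delegate rather than merge" idea above can be sketched as follows; the names are hypothetical simplifications for illustration, not the actual Tuscany classes:

```java
// Hypothetical sketch: the SCA binding invoker forwards each message to the
// first invoker of the target service's chain instead of splicing the two
// chains together, so the target can be re-resolved later (late binding).
interface Invoker {
    Object invoke(Object msg);
}

final class SCABindingInvoker implements Invoker {
    private Invoker targetChainHead; // head of the target service's chain

    // May be called again when late binding resolves a new target.
    void bind(Invoker targetChainHead) {
        this.targetChainHead = targetChainHead;
    }

    @Override
    public Object invoke(Object msg) {
        if (targetChainHead == null) {
            throw new IllegalStateException("wire not yet bound to a target");
        }
        return targetChainHead.invoke(msg); // delegate; chains stay intact
    }
}
```

Because the two chains are never merged, rebinding is just a matter of swapping the delegate, whereas merged chains would have to be torn down and rebuilt.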
 
 Lazy creation is fine, as 

Re: Data transformation from/to POJO

2007-11-26 Thread Jean-Sebastien Delfino

Raymond Feng wrote:

Hi,

I just did a test to see how JAXB-RI handles the POJO without any 
annotations. The result seems to be promising.


I started with a POJO:

public class MyBean {
    private int age;
    private String name;
    private List<String> notes = new ArrayList<String>();

    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public List<String> getNotes() {
        return notes;
    }
    public void setNotes(List<String> notes) {
        this.notes = notes;
    }
}

The following test case is then successful.

public void testPOJO() throws Exception {
    JAXBContext context = JAXBContext.newInstance(MyBean.class);
    StringWriter writer = new StringWriter();
    MyBean bean = new MyBean();
    bean.setName("Test");
    bean.setAge(20);
    bean.getNotes().add("1");
    bean.getNotes().add("2");
    JAXBElement<Object> element = new JAXBElement<Object>(
        new QName("http://ns1", "bean"), Object.class, bean);

    context.createMarshaller().marshal(element, writer);
    System.out.println(writer.toString());
    Object result = context.createUnmarshaller().unmarshal(
        new StringReader(writer.toString()));

    assertTrue(result instanceof JAXBElement);
    JAXBElement e2 = (JAXBElement) result;
    assertTrue(e2.getValue() instanceof MyBean);
}

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><ns2:bean 
xsi:type="myBean" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns:ns2="http://ns1"><age>20</age><name>Test</name></ns2:bean>

Good that it seems promising :) what do I need to do to get the JaxB to 
XML transformer to pick up the Item bean in the store tutorial?


--
Jean-Sebastien
