Re: SPI reorg (was: Re: [Discussion] Tuscany kernel modularization)
Hi, I have shared a recent experience in this regard, with comments inline. Thanks - Venkat

On 4/20/07, ant elder <[EMAIL PROTECTED]> wrote:

On 3/29/07, ant elder <[EMAIL PROTECTED]> wrote:
>
> On 3/27/07, Jeremy Boynes <[EMAIL PROTECTED]> wrote:
> >
> > One reason the SPI module is so large is that it does define many of the
> > interfaces for the components in your diagram. I think there is room for a
> > reorganization there to clarify the usage of those interfaces. I would
> > propose we start with that ...
>
> There have been several emails now which I think show some agreement that we
> can do some SPI refactoring. Here's something that could be a start of this
> work:
>
> One area is the extension SPI; on our architecture diagram [1] that's the
> "Extensions" box at the top right. In the past we've had a lot of problems
> with extensions continually getting broken as the SPIs keep changing. This
> has made keeping extensions in a working state a big job, and it has led to
> some donated extensions simply being abandoned.
>
> One of the reasons for this is that the SPIs reflect the requirements of the
> Tuscany runtime more than the requirements of the extension being
> contributed. For example, a container extension needs to enable creating,
> initializing, invoking and destroying a component, along with exposing the
> component's services, references and properties. Those things have remained
> pretty constant even though the SCA specs and the Tuscany runtime and SPI
> have undergone significant changes.
>
> I think we should be able to create SPIs for these types of functions which
> clearly represent the requirements of a particular extension type, and doing
> this would go a long way toward making things more stable. All this code is
> there in the current SPI, so it's mainly just a matter of refactoring parts
> out into separate pieces with clearly defined function and adding adapter
> code so the runtime can use them.
>
> You can even envisage that, if this is successful, it could define a runtime
> extension SPI for all the extensible areas of SCA assembly which could
> eventually be standardized to provide portable SCA extensions, in a way
> similar to JBI.
>
> What do people think, is this worth looking at? If so I'd like to make an
> attempt at doing it for bindings and components, using the Axis2 binding and
> the script container as guinea pigs. This should be pretty transparent to
> the rest of the kernel as, other than the adapter code, it could all be in
> separate modules.

I mentioned on the release discussion thread that I'd bring this thread up
again. The new trunk code has made things better in the SPI area, but I think
there's still a lot that could be improved (IMHO).

The sort of thing I was thinking about was coming up with runtime support and
an SPI package for each extension type that make it clear what methods need to
be implemented for at least minimum functionality. For example, for an
implementation extension, minimum functionality would support services,
references and properties (at least simple-type properties), correctly merging
introspected and sidefile-defined component type information, component
instance lifecycle and scope, and the correct invocation semantics for things
like pass-by-value support. And do all that in a way where the majority of the
code is done generically in the runtime, instead of the extension either not
supporting some of those things or just copying chunks of code from other
extensions to get the support.

I entirely agree with this.
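To make that concrete, here is a rough sketch of what such a minimal
implementation-extension contract could look like. This is not the actual
Tuscany SPI; the interface and type names below are invented purely for
illustration.

// Hypothetical sketch, not the real Tuscany SPI: a minimal contract covering
// component type introspection, instance lifecycle, properties and invocation.
import java.util.Map;

/** Placeholder for introspected services, references and properties. */
interface ComponentTypeInfo {
}

interface ImplementationProvider {

    /** Introspect the implementation artifact; the core merges the result with
        any sidefile-defined component type information. */
    ComponentTypeInfo introspectComponentType(String implementationArtifact);

    /** Create and initialize an instance; scope management stays in the core. */
    Object createInstance(Map<String, Object> propertyValues);

    /** Invoke an operation; pass-by-value semantics are applied by the core. */
    Object invoke(Object instance, String operation, Object[] args);

    /** Destroy the instance when its scope ends. */
    void destroyInstance(Object instance);
}

The point is that an extension only fills in these hooks, while scopes,
databinding and invocation semantics are handled generically by the runtime.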
I remember that one of the 'ceremonious' things we used to do in the old code
base to implement an extension was to copy over quite a bit of code from an
existing one. I guess we must avoid that this time around.

I'd really like to have the .componentType info gathering done in the core.
The only thing that might be specific to implementation extensions here is the
URI for the .componentType file, i.e. for Java implementations this is derived
from the 'class' attribute of implementation.java, and for the script
implementation it is derived from the 'scriptname'. I guess getting this URI
can be done by adding one more SPI method, 'getComponentTypeURI', in the same
way we have 'getArtifactType' for each implementation.

The other thing that I feel could be part of the core itself is the
PropertyValue ObjectFactory and the databinding support that it uses. This can
again be part of the core, and there could just be an SPI method such as
'setProperty(name, value)' implemented by implementation extensions. If there
are some specific transformers that the implementation needs, then those alone
can be contributed by the implementation.

Do others agree this is something we should try to do for the next release? If
so, I thought about starting with new modules for implementation-spi,
binding-spi etc. to avoid changing the existing runtime code for now. And I'd
like to start on the implementation-spi one wit
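As a sketch of the two hooks suggested above, alongside the existing
'getArtifactType' idea: the surrounding interface is invented for illustration
and is not the real Tuscany SPI.

// Hypothetical illustration only; the method names follow the suggestions above.
import java.net.URI;

interface ImplementationExtension {

    /** Identifies the implementation element this extension handles. */
    String getArtifactType();

    /** Where the core should look for the .componentType sidefile, e.g. derived
        from the 'class' attribute of implementation.java, or from 'scriptname'
        for a script implementation. */
    URI getComponentTypeURI();

    /** The core performs the PropertyValue/databinding conversion generically
        and hands the converted value to the extension. */
    void setProperty(String name, Object value);
}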
Re: Represent the recursive composition in runtime
Hi Raymond,

Thanks for bringing this up. I got a feel for this during a recent addition I
made to support configuration of properties for components implemented by a
composite. I would probably have been at a loss to lay out an illustration as
well as you have done :).

1) I guess the 'includes' handling is presently in CompositeUtil in the
assembly module and gets invoked as part of the 'wire' phase.

2) As far as cloning goes, it seems to be a good solution, but I really wonder
if it would give a consistent view of the model. I perceive a 'componentType'
as being singular in an SCA Domain once loaded by the Contribution, and a
Composite being a 'componentType' makes me feel it should also be left
singular. If the configuration of implementation instances with properties and
reference settings happens only at build time for the other implementations we
have done so far, like java-impl, maybe this is the pattern we should follow
for composite implementations too. I feel this common treatment will help us
frame our SPIs consistently.

This is just what occurs to me immediately, and maybe there are some
perspectives I am missing; please let me know of them.

Thanks - Venkat

On 4/21/07, Raymond Feng <[EMAIL PROTECTED]> wrote:

Hi,

In the current code base we use the builder framework to create peer runtime
objects corresponding to the model, and use the runtime metadata to drive the
component interactions. This approach adds complexity and redundancy. I think
it should now be possible to take advantage of the fully-resolved/configured
model directly. To achieve this, we need to normalize the model to represent
the recursive composition in the runtime. There are a few cases:

1) For <include>, we need to merge all components/services/references/properties
from the included composite into the enclosing composite.

2) Two components use the same non-composite implementation, for example two
components implemented by the same Java class. In this case we should have two
component model instances and one implementation model, to avoid duplicate
introspection.

3) Two components are implemented by the same composite. This is the more
interesting case. Please see the diagram at
http://cwiki.apache.org/confluence/display/TUSCANY/Java+SCA+Runtime+Component+Hierarchy .

Path a: Composite1.ComponentB is implemented by Composite3
Path b: Composite2.ComponentD is implemented by Composite3

The service/reference can be promoted to different things:

a: the final target for ComponentF.Reference1 is Composite1.ComponentA
b: the final target for ComponentF.Reference1 is Composite1.Reference1
(pointing to an external service)

The property can be set to different values following different composition
paths:

a: Composite3.ComponentE.Property1 is overridden by
Composite1.ComponentB.Property1 (say value="ABC")
b: Composite3.ComponentE.Property1 is overridden by
Composite2.ComponentD.Property1 (say value="XYZ")

To represent the fully-configured components, we need to clone the model for
Composite3 for paths a and b so that each copy can hold its own resolved
values. With the flattened structure, we should be able to fully configure the
components at the model level.

Am I on the right track? Feedback is welcome. Once we agree on this, I'll
continue to bring up discussions of the additional runtime behaviors, beyond
the model, that need to be provided by the component implementation or binding
extensions to work with the core invocation framework.
Thanks, Raymond
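As a toy illustration of case 3 above, the sketch below clones a small,
made-up composite model once per composition path so that each copy can hold
its own resolved property value (the "ABC" vs "XYZ" overrides). The classes
are invented for the example and are not the Tuscany assembly model.

// Invented model classes, for illustration only.
import java.util.HashMap;
import java.util.Map;

class ComponentModel {
    final String name;
    final Map<String, String> properties = new HashMap<>();
    ComponentModel(String name) { this.name = name; }
    ComponentModel copy() {
        ComponentModel c = new ComponentModel(name);
        c.properties.putAll(properties);
        return c;
    }
}

class CompositeModel {
    final String name;
    final Map<String, ComponentModel> components = new HashMap<>();
    CompositeModel(String name) { this.name = name; }

    /** Deep copy so each composition path can hold its own resolved configuration. */
    CompositeModel copyForPath() {
        CompositeModel c = new CompositeModel(name);
        components.forEach((k, v) -> c.components.put(k, v.copy()));
        return c;
    }
}

public class CloneExample {
    public static void main(String[] args) {
        CompositeModel composite3 = new CompositeModel("Composite3");
        composite3.components.put("ComponentE", new ComponentModel("ComponentE"));

        // One clone per composition path, each holding its own override.
        CompositeModel viaComponentB = composite3.copyForPath(); // path a
        CompositeModel viaComponentD = composite3.copyForPath(); // path b
        viaComponentB.components.get("ComponentE").properties.put("Property1", "ABC");
        viaComponentD.components.get("ComponentE").properties.put("Property1", "XYZ");

        System.out.println(viaComponentB.components.get("ComponentE").properties); // {Property1=ABC}
        System.out.println(viaComponentD.components.get("ComponentE").properties); // {Property1=XYZ}
    }
}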
Re: Using Tuscany in a webapp, was: [DISCUSS] Next version - What should be in it
On 4/21/07, Jean-Sebastien Delfino <[EMAIL PROTECTED]> wrote:

[snip]

Luciano Resende wrote:
> +1 on focusing on the stability and consumability of the core functions.
> Other than helping to simplify the runtime further and working on a Domain
> concept, I also want to contribute around having better integration with
> App Servers, basically starting by bringing back the WAR plugin and TC
> integration.

Do we still need a special Tuscany WAR Maven plugin?

I was under the impression that the Tuscany WAR plugin was there to automate
the packaging of all the pieces of the Tuscany runtime and their configuration
in a WAR, assuming that it was probably too complicated to do that packaging
by hand.

Before reactivating this machinery, could we take a moment and have a
discussion to understand why this packaging was so complicated that we needed
a special Maven plugin to take care of it, and maybe come up with a simpler
packaging scheme for Web applications and WARs?

To put this discussion in context, I'm going to start with a very simple
question... What do we want to use WARs for? What scenarios do we want to
support in our next release that will require a WAR?

--
Jean-Sebastien

Hi,

My desire for WAR packaging was to allow people to easily deploy Tuscany
applications (samples?) to existing web application installations, i.e. to
allow Tuscany to be plugged into environments people are familiar with. I was
mulling over how to get some more interesting samples into the Java space,
e.g. porting over Andy's alert aggregator sample from the C++ SCA
implementation, and this is why I happened to be thinking about how to make it
work. But we have also had a recent exchange on the user list on this subject
[1].

I actually hadn't considered the mechanism by which this is achieved. I wasn't
aware this was being done in the past by a Maven plugin; I was just aware
that, certainly in M1, you could see Tuscany samples as WAR files that were
used with Tomcat in the release.

It's good that you have started this discussion. Let's get consensus on
whether we should provide it. Also, if there is an easier and more natural way
of providing this integration then we should investigate that, e.g. if this
integration should be host-webapp or host-tomcat, host-apache, etc., then
that's fine. If it's already done then even better.

Regards,

Simon

[1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg00839.html
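For a sense of what a host-webapp style of integration could look like without
any special packaging plugin, here is a sketch using only the standard servlet
API. TuscanyBootstrap is a made-up placeholder for whatever start/stop API the
real integration module would expose, and the composite path is just an
example.

// Sketch only: a servlet-API listener that starts and stops an SCA runtime
// with the webapp. TuscanyBootstrap is a hypothetical placeholder, not a real
// Tuscany class.
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ScaWebappListener implements ServletContextListener {

    // Placeholder for whatever start/stop API a host-webapp module would expose.
    static class TuscanyBootstrap {
        void start(String compositeLocation) { /* start runtime, deploy composite */ }
        void stop() { /* shut the runtime down */ }
    }

    private final TuscanyBootstrap runtime = new TuscanyBootstrap();

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Composite packaged inside the WAR; the path is only an example.
        runtime.start("/WEB-INF/default.composite");
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        runtime.stop();
    }
}

The listener would be registered in the application's web.xml and the runtime
jars placed in WEB-INF/lib, so the WAR itself stays an ordinary web
application that any servlet container can host.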