Re: Classloading code in core contribution processing

2008-05-19 Thread Jean-Sebastien Delfino

Jean-Sebastien Delfino wrote:


I completely agree with you that having a ClassLoader delegate to a
ModelResolver delegating to a ClassLoader etc will cause confusion.

So, how about having ContributionClassLoader implement the ModelResolver
interface?



Yes, that sounds like a good idea - if it can be made to work :-).


OK, I'll do some experiments with that approach in a sandbox, and then you 
and others can look and jump in with any ideas. This is a complex 
area, so the more eyes looking at the code and helping improve it with new 
ideas, the better.




...

I'm proposing the following:

- Have ContributionClassLoader implement the ModelResolver interface,
allowing it and its other ClassLoader friends to be associated with
Contribution, Imports, Exports, everywhere we can store ModelResolvers
right now.

- Merge ClassReferenceModelResolver into ContributionClassLoader, as we
don't need to have a ModelResolver delegating to a ClassLoader, and they
can be a single object.

- Remove get/setClassLoader from Contribution, making it independent of
the support for Java artifacts as it should be.
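To make the shape of the first two steps concrete, here is a minimal sketch,
assuming Tuscany's ModelResolver contract (addModel/removeModel/resolveModel)
and a ClassReference model that carries a class name. The types below are
simplified stand-ins for illustration, not the real SPI classes:

    import java.net.URL;
    import java.net.URLClassLoader;

    // Simplified stand-ins for the SPI types discussed above; the real
    // interfaces live in org.apache.tuscany.sca.contribution.resolver.
    interface ModelResolver {
        void addModel(Object resolved);
        Object removeModel(Object resolved);
        <T> T resolveModel(Class<T> modelClass, T unresolved);
    }

    class ClassReference {
        private final String className;
        private Class<?> javaClass;
        ClassReference(String className) { this.className = className; }
        ClassReference(Class<?> javaClass) {
            this.className = javaClass.getName();
            this.javaClass = javaClass;
        }
        String getClassName() { return className; }
        Class<?> getJavaClass() { return javaClass; }
    }

    // One object playing both roles: a ClassLoader that is also a
    // ModelResolver, folding ClassReferenceModelResolver into it.
    class ContributionClassLoader extends URLClassLoader implements ModelResolver {
        ContributionClassLoader(URL[] contributionUrls, ClassLoader parent) {
            super(contributionUrls, parent);
        }

        public void addModel(Object resolved) {
            // classes are resolved on demand, nothing to register
        }

        public Object removeModel(Object resolved) {
            return resolved;
        }

        public <T> T resolveModel(Class<T> modelClass, T unresolved) {
            if (unresolved instanceof ClassReference) {
                String name = ((ClassReference) unresolved).getClassName();
                try {
                    return modelClass.cast(new ClassReference(loadClass(name)));
                } catch (ClassNotFoundException e) {
                    // model resolvers hand back the unresolved model on failure
                }
            }
            return unresolved;
        }
    }

With something like this in place, Contribution only needs its ModelResolver
slot; that the resolver for Java artifacts happens to also be a ClassLoader
becomes a detail private to the Java support.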




...

When you make the changes, could you watch out for support for OSGi
contributions? Classloading for OSGi contributions is currently a hack 
since
only one model resolver can be associated with each model type in 
Tuscany.


I'm finally finding a little bit of time to work on this. We have more 
OSGi samples and test cases now, I'm hoping that they'll help validate 
the approach.


Also I'm trying to see how to reconcile the way we bootstrap code in the 
domain manager and the node runtime and that will require a common 
approach to classloading (right now the domain manager and the runtime 
are using completely different classloader implementations).


I'll start to hack and experiment around that classloading code in a 
sandbox to not break the trunk.

--
Jean-Sebastien


Re: Classloading code in core contribution processing

2008-03-04 Thread Rajini Sivaram
Sebastien,

Thank you for the clarification. A few comments inline.


On 3/4/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Rajini Sivaram wrote:
  Jean-Sebastien Delfino wrote:
  ...
  I think that the following issues have been raised in this thread:
  a) contribution dependency cycles
  b) partial package contributions
  c) ability to use contributions without providing a ClassLoader
  d) error handling (handling of ClassNotFound and related errors)
  e) function layering (contribution code depending on Java support)
  f) increased complexity of the classloading path
  g) differences between model resolving and classloading semantics
  h) reliance on specific classloader implementations
 
  I initially raised (c) and (e) and was really struggling with (f).
 
 
  (c) has already been fixed (as a response to your first note in the
  thread) - contributions do not require a classloader anymore, only
  ClassReferenceModelResolver which loads classes requires a classloader.

 Yes, Thanks for fixing that.

 
  (e) has also been (almost) fixed. There is still a classloader object
  associated with contributions, but it is set/get only by
 contribution-java.

 That's the next thing that I think we could fix, as it is odd to have
 both a ModelResolver and a ClassLoader in Contribution, and Contribution
 should really be independent of the support for Java artifacts.


  Rajini raised (a), (b), (d), (g) (and maybe (f) too?).
 
 
  I raised (a) and (b) as problems with the model resolver.

 (a) is definitely an issue, demonstrated in the test case that I've
 added to the contribution-multiple module, and there is a simple fix for
 it. I think that we just need to detect cycles in one of the import
 resolvers.
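
For what it's worth, the cycle guard could be as small as a visited set
threaded through the import search; a rough sketch, with illustrative method
names rather than the real contribution API:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative shape of a contribution, reduced to what the fix needs.
    interface Contribution {
        Object resolveLocally(String packageName);            // own artifacts only
        List<Contribution> exportersFor(String packageName);  // matching exporters
    }

    class CycleSafeImportResolver {
        Object resolve(String packageName, Contribution start) {
            return resolve(packageName, start, new HashSet<Contribution>());
        }

        private Object resolve(String pkg, Contribution c, Set<Contribution> visited) {
            if (!visited.add(c)) {
                return null; // contribution already on the search path: a cycle
            }
            Object artifact = c.resolveLocally(pkg);
            if (artifact != null) {
                return artifact;
            }
            // follow the contributions that export the imported package
            for (Contribution exporter : c.exportersFor(pkg)) {
                Object resolved = resolve(pkg, exporter, visited);
                if (resolved != null) {
                    return resolved;
                }
            }
            return null; // unresolved, but without a StackOverflowError
        }
    }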

 Unless I'm mistaken, (b) seems to work though. I'll investigate more
 tomorrow.


I suspected that if contribution A and contribution B provided entries of
the package a.b.c, A and B would both import and export the package a.b.c,
resulting in a cycle. Maybe I was wrong. Anyway, I would expect a solution
for (a) to fix (b) as well.

  These are currently not issues with contribution classloading since the
  contribution classloader does not use model resolvers.

 Right, but I'm hoping that after fixing (a) we can get class loading and
 model resolution to converge a bit more (more on this below).

 
  I raised (g) and I would like the semantics of import/export to be more
  consistent across model resolution and class resolution. I am less keen
  on the solution involving common code.
 
  I did raise (f) as well, but I was looking at complexity from a different
  point of view. I think most classloading is not initiated by Tuscany, and
  classloading errors are inevitable. It doesn't matter how good contribution
  classloading is, I can always create a set of contributions with
  import/export statements which result in class validation errors as a
  result of consistency failures. IMHO, adding the model resolver to the
  classloading path might make it easier for Tuscany developers to understand
  classloading errors, but it will make it much harder for application
  developers to walk through and debug a NoClassDefFoundError, since most
  application developers will have some understanding of classloaders, but
  little or no understanding of Tuscany model resolution code.

 I'd like to actually write code for this... as it's difficult to go
 through code details in email, but I was thinking that a small change
 could help.

 I completely agree with you that having a ClassLoader delegate to a
 ModelResolver delegating to a ClassLoader etc will cause confusion.

 So, how about having ContributionClassLoader implement the ModelResolver
 interface?


Yes, that sounds like a good idea - if it can be made to work :-).


  Looks like Luciano is providing an import.resource, which will allow
  resources to be loaded without depending on import.java.
  Simon has helped organize the discussion with valid questions on the
  actual requirements and scenarios.
  Raymond proposed a classloader extensibility mechanism, which should
  help with (h).
 
  Please, anybody jump in if you disagree with this summary or think I've
  missed or mis-interpreted anything :)
 
 
  I still don't understand the changes that you are proposing for
  contribution classloading.

 I'm proposing the following:

 - Have ContributionClassLoader implement the ModelResolver interface,
 allowing it and its other ClassLoader friends to be associated with
 Contribution, Imports, Exports, everywhere we can store ModelResolvers
 right now.

 - Merge ClassReferenceModelResolver into ContributionClassLoader, as we
 don't need to have a ModelResolver delegating to a ClassLoader, and they
 can be a single object.

 - Remove get/setClassLoader from Contribution, making it independent of
 the support for Java artifacts as it should be.


When you make the changes, could you watch out for support for OSGi
contributions? Classloading for 

Re: Classloading code in core contribution processing

2008-03-04 Thread Jean-Sebastien Delfino

Rajini Sivaram wrote:

Sebastien,

Thank you for the clarification. A few comments inline.


On 3/4/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

Rajini Sivaram wrote:

Jean-Sebastien Delfino wrote:

...
I think that the following issues have been raised in this thread:
a) contribution dependency cycles
b) partial package contributions
c) ability to use contributions without providing a ClassLoader
d) error handling (handling of ClassNotFound and related errors)
e) function layering (contribution code depending on Java support)
f) increased complexity of the classloading path
g) differences between model resolving and classloading semantics
h) reliance on specific classloader implementations

I initially raised (c) and (e) and was really struggling with (f).


(c) has already been fixed (as a response to your first note in the
thread) - contributions do not require a classloader anymore, only
ClassReferenceModelResolver which loads classes requires a classloader.

Yes, Thanks for fixing that.


(e) has also been (almost) fixed. There is still a classloader object
associated with contributions, but it is set/get only by contribution-java.

That's the next thing that I think we could fix, as it is odd to have
both a ModelResolver and a ClassLoader in Contribution, and Contribution
should really be independent of the support for Java artifacts.



Rajini raised (a), (b), (d), (g) (and maybe (f) too?).


I raised (a) and (b) as problems with the model resolver.

(a) is definitely an issue, demonstrated in the test case that I've
added to the contribution-multiple module, and there is a simple fix for
it. I think that we just need to detect cycles in one of the import
resolvers.

Unless I'm mistaken, (b) seems to work though. I'll investigate more
tomorrow.



I suspected that if contribution A and contribution B provided entries of
the package a.b.c, A and B would both import and export the package a.b.c,
resulting in a cycle. Maybe I was wrong. Anyway, I would expect a solution
for (a) to fix (b) as well.


Ah, yes, the configuration you describe requires both (a) and (b) to 
work. I suspect that (b) currently works, so as you say fixing (a) 
should solve everything.



These are currently not issues with contribution classloading since the
contribution classloader does not use model resolvers.

Right, but I'm hoping that after fixing (a) we can get class loading and
model resolution to converge a bit more (more on this below).


I raised (g) and I would like the semantics of import/export to be more
consistent across model resolution and class resolution. I am less keen
on the solution involving common code.

I did raise (f) as well, but I was looking at complexity from a different
point of view. I think most classloading is not initiated by Tuscany, and
classloading errors are inevitable. It doesn't matter how good contribution
classloading is, I can always create a set of contributions with
import/export statements which result in class validation errors as a
result of consistency failures. IMHO, adding the model resolver to the
classloading path might make it easier for Tuscany developers to understand
classloading errors, but it will make it much harder for application
developers to walk through and debug a NoClassDefFoundError, since most
application developers will have some understanding of classloaders, but
little or no understanding of Tuscany model resolution code.

I'd like to actually write code for this... as it's difficult to go
through code details in email, but I was thinking that a small change
could help.

I completely agree with you that having a ClassLoader delegate to a
ModelResolver delegating to a ClassLoader etc will cause confusion.

So, how about having ContributionClassLoader implement the ModelResolver
interface?



Yes, that sounds like a good idea - if it can be made to work :-).


OK, I'll do some experiments with that approach in a sandbox, and then you 
and others can look and jump in with any ideas. This is a complex 
area, so the more eyes looking at the code and helping improve it with new 
ideas, the better.



Looks like Luciano is providing an import.resource, which will allow
resources to be loaded without depending on import.java.
Simon has helped organize the discussion with valid questions on the
actual requirements and scenarios.
Raymond proposed a classloader extensibility mechanism, which should
help with (h).

Please, anybody jump in if you disagree with this summary or think I've
missed or mis-interpreted anything :)


I still don't understand the changes that you are proposing for
contribution classloading.

I'm proposing the following:

- Have ContributionClassLoader implement the ModelResolver interface,
allowing it and its other ClassLoader friends to be associated with
Contribution, Imports, Exports, everywhere we can store ModelResolvers
right now.

- Merge ClassReferenceModelResolver into ContributionClassLoader, as we
don't 

Re: Classloading code in core contribution processing

2008-03-03 Thread Jean-Sebastien Delfino
Jean-Sebastien Delfino wrote:

 ...
 I think that the following issues have been raised in this thread:
 a) contribution dependency cycles
 b) partial package contributions
 c) ability to use contributions without providing a ClassLoader
 d) error handling (handling of ClassNotFound and related errors)
 e) function layering (contribution code depending on Java support)
 f) increased complexity of the classloading path
 g) differences between model resolving and classloading semantics
 h) reliance on specific classloader implementations

 I initially raised (c) and (e) and was struggling with (f).
 Rajini raised (a), (b), (d), (g) (and maybe (f) too?)
 Luciano is providing an import.resource, which will allow resources to
 be loaded without depending on import.java.
 Simon has helped organize the discussion with valid questions on the
 actual requirements and scenarios.
 Raymond proposed a classloader extensibility mechanism, which would help
 with (h).

 Please jump in if you disagree with this summary or think I've missed or
 mis-interpreted anything :)

 I was proposing to help with simple fixes for (a) and (b), and a more
 involved fix for (c), (e) and (f). I'm going to hold off on these fixes
 since you're  asking for it. Instead I will contribute test cases for
 (a) and (b), hoping that it will help people understand what's broken.

 Ant, do you want to help fix any of this?
 --
 Jean-Sebastien


I have checked in a test case showing the issues with contribution import
cycles in SVN r633170:
https://svn.apache.org/repos/asf/incubator/tuscany/java/sca/itest/contribution-multiple/src/test/java/test/ContributionCycleTestCaseFIXME.java

The test case causes a StackOverflow exception, that's why I named it *FIXME
to exclude it from the build until the issue with cycles is resolved.
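
The failure mode is easy to reproduce outside Tuscany. This self-contained
snippet (an illustration, not the actual test case above) uses two plain
classloaders standing in for two contributions whose imports point at each
other; the class name it asks for is hypothetical:

    // Asking either loader for a class that neither contains recurses
    // between the two until the stack overflows, which is what an
    // unguarded import/export cycle amounts to.
    public class CycleDemo {
        static class LoaderA extends ClassLoader {
            LoaderB b;
            protected Class<?> findClass(String name) throws ClassNotFoundException {
                return b.loadClass(name); // "import" the package from B
            }
        }

        static class LoaderB extends ClassLoader {
            LoaderA a;
            protected Class<?> findClass(String name) throws ClassNotFoundException {
                return a.loadClass(name); // "import" it right back from A
            }
        }

        public static void main(String[] args) {
            LoaderA a = new LoaderA();
            LoaderB b = new LoaderB();
            a.b = b;
            b.a = a;
            try {
                a.loadClass("no.such.pkg.Missing"); // hypothetical class name
            } catch (StackOverflowError expected) {
                System.out.println("unguarded import cycle: " + expected);
            }
        }
    }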

I believe that it's possible to provide a simple fix for this issue (and
wanted to work on it...). Ant, could you please let me know when you don't
have any objections to me fixing it? Thanks.

I'll work on finding a good test case for showing the issues with split
namespaces in general in the next two days.
-- 
Jean-Sebastien


Re: Classloading code in core contribution processing

2008-03-03 Thread Rajini Sivaram
On 2/29/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 ant elder wrote:
  On Thu, Feb 28, 2008 at 9:30 AM, Jean-Sebastien Delfino 
  [EMAIL PROTECTED] wrote:
 
  Rajini Sivaram wrote:
  On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
  Jean-Sebastien Delfino wrote:
  Great to see a *test* case for cycles, but my question was: Do you
  have a *use* case for cycles and partial packages right now or can
  it  be fixed later?
 
  Rajini Sivaram wrote:
  No, I don't have a use case, at least not an SCA one. But there are
  plenty of them in OSGi - e.g. Tuscany modules cannot run in OSGi
  without support for split-packages. Of course you can fix it later.
  I'm not arguing for or against fixing it now or later, I'm trying to
  get the real use case to make a decision based on concrete grounds. Can
  you point me to your OSGi use cases, or help me understand why Tuscany
  modules cannot run in OSGi without support for split packages?
 
   Tuscany node and domain code are split into three modules each for API,
  SPI and Implementation, e.g. tuscany-node-api, tuscany-node and
  tuscany-node-impl. The API module defines a set of classes in
  org.apache.tuscany.sca.node and the SPI module extends this package with
  more classes. So the package org.apache.tuscany.sca.node is split across
  tuscany-node-api and tuscany-node. If we used maven-bundle-plugin to
  generate OSGi manifest entries for Tuscany modules, we would get three
  OSGi bundles corresponding to the node modules. And the API and SPI
  bundles have to specify that they use split-packages. It would obviously
  have been better if API and SPI used different packages, but the point I
  am trying to make is that splitting packages across modules is not as
  crazy as it sounds, and split packages do appear in code written by
  experienced programmers.
 
  IMO, supporting overlapping package import/exports is more important
  with SCA contributions than with OSGi bundles since SCA contributions
  can specify wildcards in import.java/export.java. E.g. if you look at
  packaging tuscany-contribution and tuscany-contribution-impl, where
  tuscany-contribution-impl depends on tuscany-contribution, there is no
  clear naming convention to separate the two modules using a single
  import/export statement pair. So if I could use wildcards, the simplest
  option that would avoid separate import/export statements for each
  subpackage (as required in OSGi) would be to export
  org.apache.tuscany.sca.contribution* from tuscany-contribution and
  import org.apache.tuscany.sca.contribution* in tuscany-contribution-impl.
  The sub-packages themselves are not shared but the import/export
  namespaces are. We need to avoid cycles in these cases. Again, there is
  a way to avoid sharing package spaces, but it is simpler to share, and
  there is nothing in the SCA spec which stops you sharing packages
  across contributions.
 
  I don't think the current model resolver code which recursively searches
  exporting contributions for artifacts is correct anyway - even for
  artifacts other than classes. IMO, when an exporting contribution is
  searched for an artifact, it should only search the exporting
  contribution itself, not its imports. And that would avoid cycles in
  classloading. I would still prefer not to intertwine classloading and
  model resolution because that would unnecessarily make classloading
  stack traces, which are complex anyway, even more complex than they need
  to be. But at least if it works all the time, rather than run into stack
  overflows, I might not have to look at those stack traces
 
 
 
  and this will convince me to help fix it now :) Thanks.
 
 
  It is not broken now - you have to break it first and then fix it :-).
 
  I have reviewed the model resolution and classloading code and found
 the
  following:
 
  - Split namespaces are currently supported (for example by the WSDL and
  XSD resolvers). The model resolver mechanism does not have an issue
 with
  split namespaces.
 
  - The Java import/export resolvers do not seem to support split
 packages
  (if I understood that code which was quite tricky), but that's an issue
  in that Java import/export specific code, which just needs to be fixed.
  I'll work on it.
 
  - The interactions between the Java import/export listener, the model
  resolvers and the ContributionClassLoader are way too complicated IMHO.
  That complexity is mostly caused by ContributionClassLoader, I'll try
 to
  show a simpler implementation in a few days.
 
  - Dependency cycles are an exception in Java as build tools like Maven
  don't support them, but can exist in XSD for example. Supporting cycles
  just requires a simple fix to the import model resolvers, I'll help fix
  that too.
 
  Hope this helps.
  --
  Jean-Sebastien
 
 
  It doesn't feel like there is agreement on the approach yet so would you
  hold off committing changes to see if 

Re: Classloading code in core contribution processing

2008-03-03 Thread Jean-Sebastien Delfino

Rajini Sivaram wrote:
 Jean-Sebastien Delfino wrote:
 ...

I think that the following issues have been raised in this thread:
a) contribution dependency cycles
b) partial package contributions
c) ability to use contributions without providing a ClassLoader
d) error handling (handling of ClassNotFound and related errors)
e) function layering (contribution code depending on Java support)
f) increased complexity of the classloading path
g) differences between model resolving and classloading semantics
h) reliance on specific classloader implementations

I initially raised (c) and (e) and was really struggling with (f).



(c) has already been fixed (as a response to your first note in the thread)
- contributions do not require a classloader anymore, only
ClassReferenceModelResolver which loads classes requires a classloader.


Yes, Thanks for fixing that.



(e) has also been (almost) fixed. There is still a classloader object
associated with contributions, but it is set/get only by contribution-java.


That's the next thing that I think we could fix, as it is odd to have 
both a ModelResolver and a ClassLoader in Contribution, and Contribution 
should really be independent of the support for Java artifacts.




Rajini raised (a), (b), (d), (g) (and maybe (f) too?).


I raised (a) and (b) as problems with the model resolver.


(a) is definitely an issue, demonstrated in the test case that I've 
added to the contribution-multiple module, and there is a simple fix for 
it. I think that we just need to detect cycles in one of the import 
resolvers.


Unless I'm mistaken, (b) seems to work though. I'll investigate more 
tomorrow.


These are currently not issues with contribution classloading since the
contribution classloader does not use model resolvers.


Right, but I'm hoping that after fixing (a) we can get class loading and 
model resolution to converge a bit more (more on this below).




I raised (g) and I would like the semantics of import/export to be more
consistent across model resolution and class resolution. I am less keen on
the solution involving common code.

I did raise (f) as well, but I was looking at complexity from a different
point of view.  I think most classloading is not initiated by Tuscany, and
classloading errors are inevitable. It doesn't matter how good contribution
classloading is, I can always create a set of contributions with
import/export statements which result in class validation errors as a result
of consistency failures. IMHO, adding the model resolver to the classloading
path might make it easier for Tuscany developers to understand classloading
errors, but it will make it much harder for application developers to walk
through and debug a NoClassDefFoundError, since most application developers
will have some understanding of classloaders, but little or no understanding
of Tuscany model resolution code.


I'd like to actually write code for this... as it's difficult to go 
through code details in email, but I was thinking that a small change 
could help.


I completely agree with you that having a ClassLoader delegate to a 
ModelResolver delegating to a ClassLoader etc will cause confusion.


So, how about having ContributionClassLoader implement the ModelResolver 
interface?




Looks like Luciano is providing an import.resource, which will allow
resources to be loaded without depending on import.java.
Simon has helped organize the discussion with valid questions on the
actual requirements and scenarios.
Raymond proposed a classloader extensibility mechanism, which should
help with (h).

Please, anybody jump in if you disagree with this summary or think I've
missed or mis-interpreted anything :)



I still don't understand the changes that you are proposing for contribution
classloading.


I'm proposing the following:

- Have ContributionClassLoader implement the ModelResolver interface, 
allowing it and its other ClassLoader friends to be associated with 
Contribution, Imports, Exports, everywhere we can store ModelResolvers 
right now.


- Merge ClassReferenceModelResolver into ContributionClassLoader, as we 
don't need to have a ModelResolver delegating to a ClassLoader, and they 
can be a single object.


- Remove get/setClassLoader from Contribution, making it independent of 
the support for Java artifacts as it should be.


I do completely agree that model resolution needs to be fixed.

I'm glad to see that we agree :). I think that we should fix (a), and 
maybe (b) when we understand whether it's actually an issue or if it 
just already works :)




I was proposing to help with simple fixes for (a) and (b), and a more
involved fix for (c), (e) and (f). I'm going to hold off on these fixes
since you're asking for it. Instead I will contribute test cases for
(a) and (b), hoping that it will help people understand what's broken.

Ant, do you want to help fix any of this?
--
Jean-Sebastien



--
Jean-Sebastien


Re: Classloading code in core contribution processing

2008-03-02 Thread ant elder
On Fri, Feb 29, 2008 at 7:10 PM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

snip

Ant, do you want to help fix any of this?


 Sebastien, the intervention was in my role as chair-to-be to make sure
everyone is being given the space they need.

   ...ant


Re: Classloading code in core contribution processing

2008-02-29 Thread ant elder
On Thu, Feb 28, 2008 at 9:30 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:

 Rajini Sivaram wrote:
  On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
  Jean-Sebastien Delfino wrote:
  Great to see a *test* case for cycles, but my question was: Do you
  have a *use* case for cycles and partial packages right now or can
  it  be fixed later?
 
  Rajini Sivaram wrote:
  No, I don't have a use case, at least not an SCA one. But there are
  plenty of them in OSGi - e.g. Tuscany modules cannot run in OSGi
  without support for split-packages. Of course you can fix it later.
  I'm not arguing for or against fixing it now or later, I'm trying to
  get the real use case to make a decision based on concrete grounds. Can you
  point me to your OSGi use cases, or help me understand why Tuscany modules
  cannot run in OSGi without support for split packages?
 
 
   Tuscany node and domain code are split into three modules each for API,
 SPI
  and Implementation eg. tuscany-node-api, tuscany-node and
 tuscany-node-impl.
  The API module defines a set of classes in org.apache.tuscany.sca.node and
  the SPI module extends this package with more classes. So the package
  org.apache.tuscany.sca.node is split across tuscany-node-api and
  tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
  entries for Tuscany modules, we would get three OSGi bundles
 corresponding
  to the node modules. And the API and SPI bundles have to specify that
 they
  use split-packages. It would obviously have been better if API and SPI
 used
  different packages, but the point I am trying to make is that splitting
  packages across modules is not as crazy as it sounds, and split packages
 do
  appear in code written by experienced programmers.
 
  IMO, supporting overlapping package import/exports is more important
 with
  SCA contributions than with OSGi bundles since SCA contributions can
 specify
  wildcards in import.java/export.java. eg. If you look at packaging
  tuscany-contribution and tuscany-contribution-impl where
  tuscany-contribution-impl depends on tuscany-contribution, there is no
 clear
  naming convention to separate the two modules using a single
 import/export
  statement pair. So if I could use wildcards, the simplest option that
 would
  avoid separate import/export statements for each subpackage (as required
 in
  OSGi) would be to export org.apache.tuscany.sca.contribution* from
  tuscany-contribution and import org.apache.tuscany.sca.contribution* in
  tuscany-contribution-impl. The sub-packages themselves are not shared
 but
  the import/export namespaces are. We need to avoid cycles in these
 cases.
  Again, there is a way to avoid sharing package spaces, but it is simpler
 to
  share, and there is nothing in the SCA spec which stops you sharing
 packages
  across contributions.
 
  I don't think the current model resolver code which recursively searches
  exporting contributions for artifacts is correct anyway - even for
  artifacts other than classes. IMO, when an exporting contribution is
  searched for an artifact, it should only search the exporting
  contribution itself, not its imports. And that would avoid cycles in
  classloading. I would still prefer not to intertwine classloading and
  model resolution because that would unnecessarily make classloading
  stack traces, which are complex anyway, even more complex than they need
  to be. But at least if it works all the time, rather than run into stack
  overflows, I might not have to look at those stack traces
 
 
 
  and this will convince me to help fix it now :) Thanks.
 
 
  It is not broken now - you have to break it first and then fix it :-).
 

 I have reviewed the model resolution and classloading code and found the
 following:

 - Split namespaces are currently supported (for example by the WSDL and
 XSD resolvers). The model resolver mechanism does not have an issue with
 split namespaces.

 - The Java import/export resolvers do not seem to support split packages
 (if I understood that code which was quite tricky), but that's an issue
 in that Java import/export specific code, which just needs to be fixed.
 I'll work on it.

 - The interactions between the Java import/export listener, the model
 resolvers and the ContributionClassLoader are way too complicated IMHO.
 That complexity is mostly caused by ContributionClassLoader, I'll try to
 show a simpler implementation in a few days.

 - Dependency cycles are an exception in Java as build tools like Maven
 don't support them, but can exist in XSD for example. Supporting cycles
 just requires a simple fix to the import model resolvers, I'll help fix
 that too.

 Hope this helps.
 --
 Jean-Sebastien


It doesn't feel like there is agreement on the approach yet so would you
hold off committing changes to see if we can get better consensus?

Reading through the thread I'm not sure that I properly understand exactly
what it is that's broken with the code as it 

Re: Classloading code in core contribution processing

2008-02-29 Thread Jean-Sebastien Delfino

ant elder wrote:

On Thu, Feb 28, 2008 at 9:30 AM, Jean-Sebastien Delfino 
[EMAIL PROTECTED] wrote:


Rajini Sivaram wrote:

On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

Jean-Sebastien Delfino wrote:
Great to see a *test* case for cycles, but my question was: Do you
have a *use* case for cycles and partial packages right now or can
it be fixed later?


Rajini Sivaram wrote:
No, I don't have a use case, at least not an SCA one. But there are
plenty of them in OSGi - e.g. Tuscany modules cannot run in OSGi without
support for split-packages. Of course you can fix it later.

I'm not arguing for or against fixing it now or later, I'm trying to get
the real use case to make a decision based on concrete grounds. Can you
point me to your OSGi use cases, or help me understand why Tuscany modules
cannot run in OSGi without support for split packages?


 Tuscany node and domain code are split into three modules each for API,
SPI and Implementation, e.g. tuscany-node-api, tuscany-node and
tuscany-node-impl. The API module defines a set of classes in
org.apache.tuscany.sca.node and the SPI module extends this package with
more classes. So the package org.apache.tuscany.sca.node is split across
tuscany-node-api and tuscany-node. If we used maven-bundle-plugin to
generate OSGi manifest entries for Tuscany modules, we would get three
OSGi bundles corresponding to the node modules. And the API and SPI
bundles have to specify that they use split-packages. It would obviously
have been better if API and SPI used different packages, but the point I
am trying to make is that splitting packages across modules is not as
crazy as it sounds, and split packages do appear in code written by
experienced programmers.

IMO, supporting overlapping package import/exports is more important with
SCA contributions than with OSGi bundles since SCA contributions can
specify wildcards in import.java/export.java. E.g. if you look at packaging
tuscany-contribution and tuscany-contribution-impl, where
tuscany-contribution-impl depends on tuscany-contribution, there is no
clear naming convention to separate the two modules using a single
import/export statement pair. So if I could use wildcards, the simplest
option that would avoid separate import/export statements for each
subpackage (as required in OSGi) would be to export
org.apache.tuscany.sca.contribution* from tuscany-contribution and import
org.apache.tuscany.sca.contribution* in tuscany-contribution-impl. The
sub-packages themselves are not shared but the import/export namespaces
are. We need to avoid cycles in these cases. Again, there is a way to
avoid sharing package spaces, but it is simpler to share, and there is
nothing in the SCA spec which stops you sharing packages across
contributions.

I don't think the current model resolver code which recursively searches
exporting contributions for artifacts is correct anyway - even for
artifacts other than classes. IMO, when an exporting contribution is
searched for an artifact, it should only search the exporting contribution
itself, not its imports. And that would avoid cycles in classloading. I
would still prefer not to intertwine classloading and model resolution
because that would unnecessarily make classloading stack traces, which are
complex anyway, even more complex than they need to be. But at least if it
works all the time, rather than run into stack overflows, I might not have
to look at those stack traces



and this will convince me to help fix it now :) Thanks.


It is not broken now - you have to break it first and then fix it :-).


I have reviewed the model resolution and classloading code and found the
following:

- Split namespaces are currently supported (for example by the WSDL and
XSD resolvers). The model resolver mechanism does not have an issue with
split namespaces.

- The Java import/export resolvers do not seem to support split packages
(if I understood that code which was quite tricky), but that's an issue
in that Java import/export specific code, which just needs to be fixed.
I'll work on it.

- The interactions between the Java import/export listener, the model
resolvers and the ContributionClassLoader are way too complicated IMHO.
That complexity is mostly caused by ContributionClassLoader, I'll try to
show a simpler implementation in a few days.

- Dependency cycles are an exception in Java as build tools like Maven
don't support them, but can exist in XSD for example. Supporting cycles
just requires a simple fix to the import model resolvers, I'll help fix
that too.

Hope this helps.
--
Jean-Sebastien



It doesn't feel like there is agreement on the approach yet so would you
hold off committing changes to see if we can get better consensus?

Reading through the thread I'm not sure that I properly understand exactly
what it is that's broken with the code as it is, would you be able to create
a testcase that shows what it is that is broken to help us 

Re: Classloading code in core contribution processing

2008-02-29 Thread Luciano Resende
I just finished committing Import/Export resource support, more
details on following thread [1].

[1] http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg28457.html

On Fri, Feb 29, 2008 at 11:10 AM, Jean-Sebastien Delfino
[EMAIL PROTECTED] wrote:

 ant elder wrote:
   On Thu, Feb 28, 2008 at 9:30 AM, Jean-Sebastien Delfino 
   [EMAIL PROTECTED] wrote:
  
   Rajini Sivaram wrote:
   On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
   Jean-Sebastien Delfino wrote:
   Great to see a *test* case for cycles, but my question was: Do you
   have a *use* case for cycles and partial packages right now or can
   it  be fixed later?
  
   Rajini Sivaram wrote:
   No, I don't have a use case, at least not an SCA one. But there are
   plenty
   of them in OSGi - eg. Tuscany modules cannot run in OSGi without
   support
   for
   split-packages.  Of course you can fix it later.
   I'm not arguing for or against fixing it now or later, I'm trying to
   get
   the real use case to make a decision based on concrete grounds. Can you
   point me to your OSGi use cases, or help me understand why Tuscany modules
   cannot run in OSGi without support for split packages?
  
Tuscany node and domain code are split into three modules each for API,
   SPI
   and Implementation eg. tuscany-node-api, tuscany-node and
   tuscany-node-impl.
   The API module defines a set of classes in org.apache.tuscany.sca.node and
   the SPI module extends this package with more classes. So the package
   org.apache.tuscany.sca.node is split across tuscany-node-api and
   tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
   entries for Tuscany modules, we would get three OSGi bundles
   corresponding
   to the node modules. And the API and SPI bundles have to specify that
   they
   use split-packages. It would obviously have been better if API and SPI
   used
   different packages, but the point I am trying to make is that splitting
   packages across modules is not as crazy as it sounds, and split packages
   do
   appear in code written by experienced programmers.
  
   IMO, supporting overlapping package import/exports is more important
   with
   SCA contributions than with OSGi bundles since SCA contributions can
   specify
   wildcards in import.java/export.java. eg. If you look at packaging
   tuscany-contribution and tuscany-contribution-impl where
   tuscany-contribution-impl depends on tuscany-contribution, there is no
   clear
   naming convention to separate the two modules using a single
   import/export
   statement pair. So if I could use wildcards, the simplest option that
   would
   avoid separate import/export statements for each subpackage (as required
   in
   OSGi) would be to export org.apache.tuscany.sca.contribution* from
   tuscany-contribution and import org.apache.tuscany.sca.contribution* in
   tuscany-contribution-impl. The sub-packages themselves are not shared
   but
   the import/export namespaces are. We need to avoid cycles in these
   cases.
   Again, there is a way to avoid sharing package spaces, but it is simpler
   to
   share, and there is nothing in the SCA spec which stops you sharing
   packages
   across contributions.
  
   I don't think the current model resolver code which recursively searches
   exporting contributions for artifacts is correct anyway - even for
   artifacts
   other than classes. IMO, when an exporting contribution is searched for
   an
   artifact, it should only search the exporting contribution itself, not
   its
   imports. And that would avoid cycles in classloading. I would still
   prefer
   not to intertwine classloading and model resolution because that would
   unnecessarily make classloading stack traces which are complex anyway,
   even
   more complex than it needs to be. But at least if it works all the time,
   rather than run into stack overflows, I might not have to look at those
   stack traces
  
  
  
   and this will convince me to help fix it now :) Thanks.
  
  
   It is not broken now - you have to break it first and then fix it :-).
  
   I have reviewed the model resolution and classloading code and found the
   following:
  
   - Split namespaces are currently supported (for example by the WSDL and
   XSD resolvers). The model resolver mechanism does not have an issue with
   split namespaces.
  
   - The Java import/export resolvers do not seem to support split packages
   (if I understood that code which was quite tricky), but that's an issue
   in that Java import/export specific code, which just needs to be fixed.
   I'll work on it.
  
   - The interactions between the Java import/export listener, the model
   resolvers and the ContributionClassLoader are way too complicated IMHO.
   That complexity is mostly caused by ContributionClassLoader, I'll try to
   show a simpler implementation in a few days.
  
   - Dependency cycles are an exception in Java as build tools like Maven
   don't support them, but can 

Re: Classloading code in core contribution processing

2008-02-28 Thread Jean-Sebastien Delfino

Rajini Sivaram wrote:

On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

Jean-Sebastien Delfino wrote:
Great to see a *test* case for cycles, but my question was: Do you
have a *use* case for cycles and partial packages right now or can
it be fixed later?


Rajini Sivaram wrote:
No, I don't have a use case, at least not an SCA one. But there are
plenty of them in OSGi - e.g. Tuscany modules cannot run in OSGi without
support for split-packages. Of course you can fix it later.

I'm not arguing for or against fixing it now or later, I'm trying to get
the real use case to make a decision based on concrete grounds. Can you
point me to your OSGi use cases, or help me understand why Tuscany modules
cannot run in OSGi without support for split packages?



 Tuscany node and domain code are split into three modules each for API, SPI
and Implementation, e.g. tuscany-node-api, tuscany-node and tuscany-node-impl.
The API module defines a set of classes in org.apache.tuscany.sca.node and
the SPI module extends this package with more classes. So the package
org.apache.tuscany.sca.node is split across tuscany-node-api and
tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
entries for Tuscany modules, we would get three OSGi bundles corresponding
to the node modules. And the API and SPI bundles have to specify that they
use split-packages. It would obviously have been better if API and SPI used
different packages, but the point I am trying to make is that splitting
packages across modules is not as crazy as it sounds, and split packages do
appear in code written by experienced programmers.

IMO, supporting overlapping package import/exports is more important with
SCA contributions than with OSGi bundles since SCA contributions can specify
wildcards in import.java/export.java. E.g. if you look at packaging
tuscany-contribution and tuscany-contribution-impl where
tuscany-contribution-impl depends on tuscany-contribution, there is no clear
naming convention to separate the two modules using a single import/export
statement pair. So if I could use wildcards, the simplest option that would
avoid separate import/export statements for each subpackage (as required in
OSGi) would be to export org.apache.tuscany.sca.contribution* from
tuscany-contribution and import org.apache.tuscany.sca.contribution* in
tuscany-contribution-impl. The sub-packages themselves are not shared but
the import/export namespaces are. We need to avoid cycles in these cases.
Again, there is a way to avoid sharing package spaces, but it is simpler to
share, and there is nothing in the SCA spec which stops you sharing packages
across contributions.
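
The matching rule implied by a trailing wildcard is tiny; a sketch of it,
with an illustrative helper name rather than Tuscany's actual matcher:

    class PackageWildcard {
        // A trailing '*' matches the named prefix and everything under it;
        // an exact package name matches only itself.
        static boolean matches(String declared, String packageName) {
            if (declared.endsWith("*")) {
                String prefix = declared.substring(0, declared.length() - 1);
                return packageName.startsWith(prefix);
            }
            return packageName.equals(declared);
        }

        public static void main(String[] args) {
            // org.apache.tuscany.sca.contribution* covers the package itself
            // and subpackages such as ...contribution.resolver
            System.out.println(matches("org.apache.tuscany.sca.contribution*",
                    "org.apache.tuscany.sca.contribution.resolver"));
        }
    }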

I don't think the current model resolver code which recursively searches
exporting contributions for artifacts is correct anyway - even for artifacts
other than classes. IMO, when an exporting contribution is searched for an
artifact, it should only search the exporting contribution itself, not its
imports. And that would avoid cycles in classloading. I would still prefer
not to intertwine classloading and model resolution because that would
unnecessarily make classloading stack traces, which are complex anyway, even
more complex than they need to be. But at least if it works all the time,
rather than run into stack overflows, I might not have to look at those
stack traces
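
Expressed as code, the non-recursive rule being proposed looks roughly like
this (the contribution shape is an illustrative stand-in, not the real
interface):

    import java.util.List;

    // When an import is resolved against exporting contributions, look only
    // at each exporter's own artifacts and never follow the exporter's own
    // imports, so no search cycle can form.
    interface ExportingContribution {
        Object resolveLocally(String packageName);
    }

    class NonRecursiveImportResolver {
        Object resolve(String packageName, List<ExportingContribution> exporters) {
            for (ExportingContribution exporter : exporters) {
                Object artifact = exporter.resolveLocally(packageName);
                if (artifact != null) {
                    return artifact;
                }
                // deliberately do NOT recurse into the exporter's imports
            }
            return null;
        }
    }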



and this will convince me to help fix it now :) Thanks.


It is not broken now - you have to break it first and then fix it :-).



I have reviewed the model resolution and classloading code and found the 
following:


- Split namespaces are currently supported (for example by the WSDL and 
XSD resolvers). The model resolver mechanism does not have an issue with 
split namespaces.


- The Java import/export resolvers do not seem to support split packages 
(if I understood that code which was quite tricky), but that's an issue 
in that Java import/export specific code, which just needs to be fixed. 
I'll work on it.


- The interactions between the Java import/export listener, the model 
resolvers and the ContributionClassLoader are way too complicated IMHO. 
That complexity is mostly caused by ContributionClassLoader, I'll try to 
show a simpler implementation in a few days.


- Dependency cycles are an exception in Java as build tools like Maven 
don't support them, but can exist in XSD for example. Supporting cycles 
just requires a simple fix to the import model resolvers, I'll help fix 
that too.


Hope this helps.
--
Jean-Sebastien




Re: Classloading code in core contribution processing

2008-02-26 Thread Rajini Sivaram
Simon,

Comments inline.


On 2/25/08, Simon Laws [EMAIL PROTECTED] wrote:

 Hi Rajini

 I'm covering old ground here but trying to make sure I'm looking at this
 in the right way.

 A - How closely class loading should be related to model resolution, i.e.
 options 1 and 2 from previously in this thread
   A1 (classloader uses model resolver) - standardizes the artifact
 resolution process but makes classloading more complex


 I am not sure A1 really standardizes artifact resolution. In both A1 and
A2, import/export matching is done in contribution-java, in exactly the same
import.java/export.java-related classes. The difference between A1 and A2 is
purely in who iterates through the import/export list. In both cases, the
code is in contribution-java. In A1, the iteration is in
ClassReferenceModelResolver, and in A2, the iteration code is in
ContributionClassLoader. I don't see any reason why
ClassReferenceModelResolver should look similar to any other model resolver
- after all it uses a classloader, unlike other model resolvers. I am not
convinced A1 actually adds any real value, apart from removing the
get/setClassLoader method from Contribution.java.




   A2 (classloader doesn't use model resolver) - simplifies the
 classloading process but leads to multiple mechanisms for artifact
 resolution


A2 has the advantage that classloading is done by classloaders, and
classloaders alone. You might have got to the first classloader through a
model resolver in the first place (when Tuscany is resolving a class whose
name was found in a composite/componentType file), but in all other cases
(and most application classloading is not triggered by Tuscany),
classloading is (and IMHO should be) the job of a classloader.

For loading of an imported class, A1 uses the call stack:

   1.1 ExtensibleModelResolverA -> ClassReferenceModelResolverA -> ClassLoaderA
       -> ExtensibleModelResolverB -> ClassReferenceModelResolverB -> ClassLoaderB
       (for classloading triggered by Tuscany)
OR
   1.2 ClassLoaderA -> ExtensibleModelResolverB -> ClassReferenceModelResolverB
       -> ClassLoaderB (for application classloading)

A2 uses the call stack:

   2.1 ExtensibleModelResolverA -> ClassReferenceModelResolverA -> ClassLoaderA
       -> ClassLoaderB (for classloading initiated by Tuscany)
OR
   2.2 ClassLoaderA -> ClassLoaderB (for application classloading)

While I think 1.1 is fine, even with a deeper stack compared to 2.1, I think
1.2 is unnecessary, and I would very much prefer to use 2.2, unless there
was a very good reason why it couldn't be.
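
A minimal sketch of the 2.2 shape, where the contribution classloader
consults the exporting contributions' classloaders directly (names here are
illustrative, not the actual ContributionClassLoader):

    import java.util.List;

    // The A2 path: ClassLoaderA delegates straight to the classloaders of
    // the contributions whose exports match its imports, with no model
    // resolver on the call stack.
    class DirectDelegatingClassLoader extends ClassLoader {
        private final List<ClassLoader> exporterLoaders;

        DirectDelegatingClassLoader(List<ClassLoader> exporterLoaders,
                                    ClassLoader parent) {
            super(parent);
            this.exporterLoaders = exporterLoaders;
        }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            for (ClassLoader exporter : exporterLoaders) {
                try {
                    return exporter.loadClass(name); // ClassLoaderA -> ClassLoaderB
                } catch (ClassNotFoundException e) {
                    // not exported by this contribution, try the next one
                }
            }
            throw new ClassNotFoundException(name);
        }
    }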



 B - Support for split namespaces/shared packages
   Supporting this helps when consuming Java artifacts in the case where
 there is legacy code and for some Java patterns such as localization. I
 expect this could apply to other types of artifacts also, for example,
 XML schema that use library schema for common types.
 C - Recursive searching of contributions
   It's not clear that we have established that this is a requirement.
 D - Handling non-existent resources, e.g. by spotting cycles in
 imports/exports.
   It would seem to me to be sensible to guard against this generally. It
 is a specific requirement if we have C.

 It seems to me that we are talking about two orthogonal pieces of
 work. Firstly, B, C and D describe behaviour of artifact resolution in
 general.


Yes, we do need to agree on the semantics of import/export statements, and
ideally they should be the same regardless of whether they refer to
import.java/export.java or some other types of import/export statements.

Then, given the artifact resolution framework, how does Java classloading
 fit in, i.e. A1 or A2?

 Can we agree the general behaviour first and then agree Java classloading
 as a special case of this?


I don't think Java classloading should be seen as a special case of model
resolution, at least I don't think Java classloading should be dependent on
model resolution (model resolution involving classes is dependent on Java
classloading, I think the reverse dependency should be avoided if possible).

Loading of classes differs from artifact resolution in many ways:

   1. Model resolution is used to resolve artifacts from SCA
   contributions, while classloading is used to resolve classes from
   contributions, Tuscany runtime, or indeed from anywhere on the CLASSPATH.
   2. Model resolution is explicitly triggered through Tuscany. Most
   classloading is implicitly triggered by the VM. Application classloading may
   be explicitly triggered, again without going through Tuscany.
   3. Artifacts loaded by Tuscany using model resolution can be loaded by
   the application in some other way (classLoader.getResource or
   file.read). Applications loaded by Tuscany are forced to use the
   Tuscany contribution classloader to load all application classes, whether
   they like it or not. Classloading is far more pervasive than other Tuscany
   model resolution, and its implications extend beyond the Tuscany runtime.
   4. Multiple artifacts of the same name can 

Re: Classloading code in core contribution processing

2008-02-25 Thread Rajini Sivaram
On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

  Jean-Sebastien Delfino wrote:
  Great to see a *test* case for cycles, but my question was: Do you
  have a *use* case for cycles and partial packages right now or can
 it  be fixed later?

  Rajini Sivaram wrote:
  No, I don't have a use case, at least not an SCA one. But there are
  plenty of them in OSGi - e.g. Tuscany modules cannot run in OSGi without
  support for split-packages. Of course you can fix it later.

 I'm not arguing for or against fixing it now or later, I'm trying to get
 the real use case to make a decision based on concrete grounds. Can you
 point me to your OSGi use cases, or help me understand why Tuscany modules
 cannot run in OSGi without support for split packages?


 Tuscany node and domain code are split into three modules each for API, SPI
and Implementation, e.g. tuscany-node-api, tuscany-node and tuscany-node-impl.
The API module defines a set of classes in org.apache.tuscany.sca.node and
the SPI module extends this package with more classes. So the package
org.apache.tuscany.sca.node is split across tuscany-node-api and
tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
entries for Tuscany modules, we would get three OSGi bundles corresponding
to the node modules. And the API and SPI bundles have to specify that they
use split-packages. It would obviously have been better if API and SPI used
different packages, but the point I am trying to make is that splitting
packages across modules is not as crazy as it sounds, and split packages do
appear in code written by experienced programmers.

IMO, supporting overlapping package import/exports is more important with
SCA contributions than with OSGi bundles since SCA contributions can specify
wildcards in import.java/export.java. E.g. if you look at packaging
tuscany-contribution and tuscany-contribution-impl where
tuscany-contribution-impl depends on tuscany-contribution, there is no clear
naming convention to separate the two modules using a single import/export
statement pair. So if I could use wildcards, the simplest option that would
avoid separate import/export statements for each subpackage (as required in
OSGi) would be to export org.apache.tuscany.sca.contribution* from
tuscany-contribution and import org.apache.tuscany.sca.contribution* in
tuscany-contribution-impl. The sub-packages themselves are not shared but
the import/export namespaces are. We need to avoid cycles in these cases.
Again, there is a way to avoid sharing package spaces, but it is simpler to
share, and there is nothing in the SCA spec which stops you sharing packages
across contributions.

I don't think the current model resolver code which recursively searches
exporting contributions for artifacts is correct anyway - even for artifacts
other than classes. IMO, when an exporting contribution is searched for an
artifact, it should only search the exporting contribution itself, not its
imports. And that would avoid cycles in classloading. I would still prefer
not to intertwine classloading and model resolution because that would
unnecessarily make classloading stack traces, which are complex anyway, even
more complex than they need to be. But at least if it works all the time,
rather than run into stack overflows, I might not have to look at those
stack traces



and this will convince me to help fix it now :) Thanks.


It is not broken now - you have to break it first and then fix it :-).


 --
 Jean-Sebastien





-- 
Thank you...

Regards,

Rajini


Re: Classloading code in core contribution processing

2008-02-25 Thread Simon Laws
Hi Rajini

just back in from vacation and catching up. I've put some comments in line
but the text seems to be circling around a few hot issues:

- How closely class loading should be related to model resolution, i.e.
options 1 and 2 from previously in this thread
- Support for split namespaces/shared packages
- Recursive searching of contributions
- Handling non-existent resources, e.g by spotting cycles in
imports/exports.

These are of course related but it may be easier if we address them
independently.

Simon




  Tuscany node and domain code are split into three modules each for API,
 SPI
 and Implementation eg. tuscany-node-api, tuscany-node and
 tuscany-node-impl.
 The API module defines a set of classes in org.apache.tuscany.sca.node and
 the SPI module extends this package with more classes. So the package
 org.apache.tuscany.sca.node is split across tuscany-node-api and
 tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
 entries for Tuscany modules, we would get three OSGi bundles corresponding
 to the node modules. And the API and SPI bundles have to specify that they
 use split-packages. It would obviously have been better if API and SPI
 used
 different packages, but the point I am trying to make is that splitting
 packages across modules is not as crazy as it sounds, and split packages
 do
 appear in code written by experienced programmers.


The split packages across the various node/domain modules were not by design.
The code moved around and that was the result. We could go ahead and fix
this. Are there any other explicit examples of split packages that you
happen to know about?


 IMO, supporting overlapping package import/exports is more important with
 SCA contributions than with OSGi bundles since SCA contributions can
 specify
 wildcards in import.java/export.java. eg. If you look at packaging
 tuscany-contribution and tuscany-contribution-impl where
 tuscany-contribution-impl depends on tuscany-contribution, there is no
 clear
 naming convention to separate the two modules using a single import/export
 statement pair. So if I could use wildcards, the simplest option that
 would
 avoid separate import/export statements for each subpackage (as required
 in
 OSGi) would be to export org.apache.tuscany.sca.contribution* from
 tuscany-contribution and import org.apache.tuscany.sca.contribution* in
 tuscany-contribution-impl. The sub-packages themselves are not shared but
 the import/export namespaces are. We need to avoid cycles in these cases.
 Again, there is a way to avoid sharing package spaces, but it is simpler
 to
 share, and there is nothing in the SCA spec which stops you sharing
 packages
 across contributions.


I'm not sure if you are suggesting that we implement a wildcard mechanism or
that we impose some restrictions, for example, to mandate that import.java
should use fully qualified package names (as it says in line 2929 of the
assembly spec). Are wildcards already supported?

The assembly spec seems to recognize that artifacts from the same namespace
may appear in several places (line 2946) but it is suggesting that these
multiple namespace references are included explicitly as distinct import
declarations.



 I don't think the current model resolver code which recursively searches
 exporting contributions for artifacts is correct anyway - even for
 artifacts
 other than classes. IMO, when an exporting contribution is searched for an
 artifact, it should only search the exporting contribution itself, not its
 imports. And that would avoid cycles in classloading. I would still prefer
 not to intertwine classloading and model resolution because that would
 unnecessarily make classloading stack traces which are complex anyway,
 even
 more complex than it needs to be. But at least if it works all the time,
 rather than run into stack overflows, I might not have to look at those
 stack traces


Looking at the assembly spec there is not much discussion of recursive
inclusion. I did find line 3022, which describes the behaviour w.r.t.
indirect dependent contributions and to me implies that contributions
providing exports should be recursively searched.
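
If recursive searching of exporting contributions is the intended
behaviour, the cycles discussed earlier can at least be made harmless with
a visited set. Here is a minimal sketch - the Contribution interface below
is a stand-in invented for illustration, not the Tuscany API:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CycleSafeResolver {

    // Assumed minimal model, just enough to make the sketch self-contained
    public interface Contribution {
        Object resolveLocalArtifact(String uri);
        List<Contribution> getExportingContributions(String uri);
    }

    public Object resolve(Contribution start, String uri) {
        return resolve(start, uri, new HashSet<Contribution>());
    }

    private Object resolve(Contribution c, String uri, Set<Contribution> visited) {
        if (!visited.add(c)) {
            return null; // already searched: an import/export cycle, stop here
        }
        Object artifact = c.resolveLocalArtifact(uri);
        if (artifact != null) {
            return artifact;
        }
        // recursively search the contributions that export a matching namespace
        for (Contribution exporter : c.getExportingContributions(uri)) {
            Object resolved = resolve(exporter, uri, visited);
            if (resolved != null) {
                return resolved;
            }
        }
        return null; // a miss reports cleanly instead of overflowing the stack
    }
}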




 and this will convince me to help fix it now :) Thanks.


 It is not broken now - you have to break it first and then fix it :-).


  --
  Jean-Sebastien
 
 
 


 --
 Thank you...

 Regards,

 Rajini



Re: Classloading code in core contribution processing

2008-02-25 Thread Rajini Sivaram
Simon,

A few comments inline.


On 2/25/08, Simon Laws [EMAIL PROTECTED] wrote:

 Hi Rajini

 just back in from vacation and catching up. I've put some comments in line
 but the text seems to be circling around a few hot issues:

 - How closely class loading should be related to model resolution, i.e.
 options 1 and 2 from previously in this thread
 - Support for split namespaces/shared packages
 - Recursive searching of contributions
 - Handling non-existent resources, e.g. by spotting cycles in
 imports/exports.

 These are of course related but it may be easier if we address them
 independently.

 Simon


 
 
   Tuscany node and domain code are split into three modules each for API,
  SPI
  and Implementation eg. tuscany-node-api, tuscany-node and
  tuscany-node-impl.
  The API module defines a set of classes in org.apache.tuscany.sca.node and
  the SPI module extends this package with more classes. So the package
  org.apache.tuscany.sca.node is split across tuscany-node-api and
  tuscany-node. If we used maven-bundle-plugin to generate OSGi manifest
  entries for Tuscany modules, we would get three OSGi bundles
 corresponding
  to the node modules. And the API and SPI bundles have to specify that
 they
  use split-packages. It would obviously have been better if API and SPI
  used
  different packages, but the point I am trying to make is that splitting
  packages across modules is not as crazy as it sounds, and split packages
  do
  appear in code written by experienced programmers.


 The split packages across the various node/domain module was not by
 design.
 The code moved around and that was the result. We could go ahead and fix
 this. Are there any other explicit examples of split packages that you
 happen to know about


No, as far as I know, in Tuscany modules, the only packages that are split
across multiple modules are o.a.t.s.node and o.a.t.s.domain. I was just
using it as an example to show that there may be existing code which uses
split-packages, and the test case for classloading in the presence of
split-packages is not just a fabricated test case. For Tuscany, I agree that
it would be easy to fix domain and node to use different package names, but
that may not always be the case with 3rd party code already packaged as jars
which need to be imported as contributions.

Split-packages are not good practice (according to OSGi), but there are
valid use-cases for them. The most commonly cited example in OSGi is Java
localization classes.
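
To make the split-package point concrete, here is a small sketch (the
types are invented for illustration, not Tuscany or OSGi API): once a
package can be exported by several contributions, the package-to-exporter
mapping has to hold a list, and a lookup has to tolerate a miss in one
slice of the package and move on to the next:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SplitPackageRegistry {
    private final Map<String, List<ClassLoader>> exporters =
        new HashMap<String, List<ClassLoader>>();

    // Several contributions may legitimately export the same package name
    void addExporter(String pkg, ClassLoader loader) {
        List<ClassLoader> list = exporters.get(pkg);
        if (list == null) {
            list = new ArrayList<ClassLoader>();
            exporters.put(pkg, list);
        }
        list.add(loader);
    }

    Class<?> load(String pkg, String className) throws ClassNotFoundException {
        List<ClassLoader> list = exporters.get(pkg);
        if (list != null) {
            for (ClassLoader loader : list) {
                try {
                    return loader.loadClass(pkg + "." + className);
                } catch (ClassNotFoundException e) {
                    // the class may live in another slice of the split package
                }
            }
        }
        throw new ClassNotFoundException(pkg + "." + className);
    }
}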



  IMO, supporting overlapping package import/exports is more important
 with
  SCA contributions than with OSGi bundles since SCA contributions can
  specify
  wildcards in import.java/export.java. eg. If you look at packaging
  tuscany-contribution and tuscany-contribution-impl where
  tuscany-contribution-impl depends on tuscany-contribution, there is no
  clear
  naming convention to separate the two modules using a single
 import/export
  statement pair. So if I could use wildcards, the simplest option that
  would
  avoid separate import/export statements for each subpackage (as required
  in
  OSGi) would be to export org.apache.tuscany.sca.contribution* from
  tuscany-contribution and import org.apache.tuscany.sca.contribution* in
  tuscany-contribution-impl. The sub-packages themselves are not shared
 but
  the import/export namespaces are. We need to avoid cycles in these
 cases.
  Again, there is a way to avoid sharing package spaces, but it is simpler
  to
  share, and there is nothing in the SCA spec which stops you sharing
  packages
  across contributions.
 

 I'm not sure if you are suggesting that we implement a wildcard mechanism
 or
 that we impose some restrictions, for example, to mandate that
 import.java should use fully qualified package names (as it says in line
 2929 of the assembly spec). Are wildcards already supported?


I thought Sebastien added support for wildcards in import.java since I
remember seeing .* in the tutorials (maybe I am wrong).
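
If wildcards are (or become) supported, the matching rule under discussion
is easy to state in code. Here is a hedged sketch of the assumed semantics
- a trailing * matches the package itself and any subpackage - not a
statement of what Tuscany implements today:

class PackagePattern {

    static boolean matches(String pattern, String packageName) {
        if (pattern.endsWith("*")) {
            // trailing wildcard: prefix match covers the package and subpackages
            return packageName.startsWith(pattern.substring(0, pattern.length() - 1));
        }
        return packageName.equals(pattern); // otherwise an exact match is required
    }

    public static void main(String[] args) {
        // true: a subpackage is matched by the wildcard
        System.out.println(matches("org.apache.tuscany.sca.contribution*",
                                   "org.apache.tuscany.sca.contribution.java"));
        // false: a fully qualified import does not cover subpackages
        System.out.println(matches("org.apache.tuscany.sca.contribution",
                                   "org.apache.tuscany.sca.contribution.java"));
    }
}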

The assembly spec seems to recognize that artifacts from the same namespace
 may appear in several places (line 2946) but it is suggesting that these
 multiple namespace references are included explicitly as distinct import
 declarations.


If import statements specify location, I would expect distinct import
statements, but I am not sure I would expect to find two separate import
declarations when importing a split-package where location is not specified.



 
  I don't think the current model resolver code which recursively searches
  exporting contributions for artifacts is correct anyway - even for
  artifacts
  other than classes. IMO, when an exporting contribution is searched for
 an
  artifact, it should only search the exporting contribution itself, not
 its
  imports. And that would avoid cycles in classloading. I would still
 prefer
  not to intertwine classloading and model resolution because that would
  unnecessarily make classloading stack traces which are complex anyway,

Re: Contribution classloading pluggability: was: Re: Classloading code in core contribution processing

2008-02-25 Thread Jean-Sebastien Delfino

Raymond Feng wrote:

Hi,

I don't want to intercept the discussion but I'm wondering if we should 
define the pluggability of the classloading scheme for SCA contributions.


Typically we have the following information for a ready-to-deploy unit:

* The URL of the deployment composite (deployable composite)
* A collection of URLs for the required contributions to support the SCA 
composite


There are some class relationships defined using import.java and
export.java. In different environments, we may need to have different
classloaders to deal with java classes in the collection of
contributions. Should we define an SPI as follows to provide the
pluggability?


public interface ClassLoaderProvider {
   // Start the classloader provider for a collection of contributions
   // (deployment unit)
   void start(List<Contribution> contributions);

   // Get the classloader for a given contribution in the deployment unit
   ClassLoader getClassLoaders(Contribution contribution);

   // Remove the contributions from the provider
   void stop(List<Contribution> contributions);
}

Thanks,
Raymond



This is an interesting proposal but I think it's orthogonal to the 
discussion we've been having on contribution import cycles and support 
for partial packages.


Import cycles and partial namespaces are not specific to Java and can 
occur too with WSDL/XSD. I think we should handle them in a Java (and 
ClassLoader) independent way.

--
Jean-Sebastien




Re: Contribution classloading pluggability: was: Re: Classloading code in core contribution processing

2008-02-25 Thread Raymond Feng


- Original Message - 
From: Jean-Sebastien Delfino [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Monday, February 25, 2008 8:23 AM
Subject: Re: Contribution classloading pluggability: was: Re: Classloading 
code in core contribution processing




Raymond Feng wrote:

Hi,

I don't want to intercept the discussion but I'm wondering if we should 
define the pluggability of the classloading scheme for SCA contributions.


Typically we have the following information for a ready-to-deploy unit:

* The URL of the deployment composite (deployable composite)
* A collection of URLs for the required contributions to support the SCA 
composite


There are some class relationships defined using import.java and
export.java. In different environments, we may need to have different
classloaders to deal with java classes in the collection of
contributions. Should we define an SPI as follows to provide the
pluggability?


public interface ClassLoaderProvider {
   // Start the classloader provider for a collection of contributions
   // (deployment unit)
   void start(List<Contribution> contributions);

   // Get the classloader for a given contribution in the deployment unit
   ClassLoader getClassLoaders(Contribution contribution);

   // Remove the contributions from the provider
   void stop(List<Contribution> contributions);
}

Thanks,
Raymond



This is an interesting proposal but I think it's orthogonal to the 
discussion we've been having on contribution import cycles and support for 
partial packages.


My proposal is for the java classloading strategy over related 
contributions. That's why I started it in a different thread. The general 
discussion on import/export should stay independent of java.




Import cycles and partial namespaces are not specific to Java and can 
occur too with WSDL/XSD. I think we should handle them in a Java (and 
ClassLoader) independent way.


+1. My understanding is that the contribution service will figure out the 
import/export for various artifacts across contributions in a general way. 
With such metadata in place, the java class loader provider can be plugged
in to implement a classloading scheme which honors the import/export
statements.



--
Jean-Sebastien








Re: Classloading code in core contribution processing

2008-02-25 Thread Simon Laws
Hi Rajini

I'm covering old ground here but trying to make sure I'm looking at this in
the right way.

A - How closely class loading should be related to model resolution, i.e.
options 1 and 2 from previously in this thread
   A1 (classloader uses model resolver) - standardizes the artifact
resolution process but makes classloading more complex
   A2 (classloader doesn't use model resolver) - simplifies the classloading
process but leads to multiple mechanisms for artifact resolution
B - Support for split namespaces/shared packages
   Supporting this helps when consuming Java artifacts in the case where
there is legacy code and for some java patterns such as localization. I
expect this
   could apply to other types of artifacts also, for example, XML schemas
that use library schemas for common types.
C - Recursive searching of contributions
   It's not clear that we have established that this is a requirement
D - Handling non-existent resources, e.g. by spotting cycles in
imports/exports.
  It would seem to me to be sensible to guard against this generally. It is a
specific requirement if we have C.

It seems to me that we are talking about two orthogonal pieces of work.
Firstly, B, C & D describe the behaviour of artifact resolution in general.
Then, given the artifact resolution framework, how does Java classloading
fit in, i.e. A1 or A2.

Can we agree the general behaviour first and then agree Java classloading
as a special case of this?

Regards

Simon


Re: Classloading code in core contribution processing

2008-02-22 Thread Jean-Sebastien Delfino

Cut some sections and reordered for readability.
...
 Jean-Sebastien Delfino wrote:
 - we can use ModelResolvers (1) or bypass them (2)
 - ModelResolvers don't handle import cycles and partial packages

 I think that (1) is better. Do you have a use case for cycles and
 partial packages right now or can it be fixed later?
...
Rajini Sivaram wrote:

ContributionTestCase in itest/contribution-classloader contains a test which
runs into stack overflow if classes are resolved using ModelResolver. I have
added another test in there for testing for ClassNotFoundException in the
same scenario. To trigger the failure, you need to modify
ContributionClassLoader.findClass to use the model resolver of exporting
contributions to resolve classes instead of their classloader.


Great to see a *test* case for cycles, but my question was: Do you have 
a *use* case for cycles and partial packages right now or can it be 
fixed later?


--
Jean-Sebastien




Re: Classloading code in core contribution processing

2008-02-22 Thread Rajini Sivaram
Sebastien,

On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Cut some sections and reordered for readability.
 ...
  Jean-Sebastien Delfino wrote:
  - we can use ModelResolvers (1) or bypass them (2)
  - ModelResolvers don't handle import cycles and partial packages
 
  I think that (1) is better. Do you have a use case for cycles and
  partial packages right now or can it be fixed later?
 ...
 Rajini Sivaram wrote:
  ContributionTestCase in itest/contribution-classloader contains a test
 which
  runs into stack overflow if classes are resolved using ModelResolver. I
 have
  added another test in there for testing for ClassNotFoundException in
 the
  same scenario. To trigger the failure, you need to modify
  ContributionClassLoader.findClass to use the model resolver of exporting
  contributions to resolve classes instead of their classloader.

 Great to see a *test* case for cycles, but my question was: Do you have
 a *use* case for cycles and partial packages right now or can it be
 fixed later?


No, I don't have a use case, at least not an SCA one. But there are plenty
of them in OSGi - e.g. Tuscany modules cannot run in OSGi without support for
split-packages. Of course you can fix it later. But IMHO, breaking
classloading to improve modularity is hardly worthwhile (all the
classloading related implementation code is now contained in
contribution-java, so the improvement will be very marginal). Classloading
errors tend to be hard to fix because classloading is often triggered by the
VM and not explicitly by the application. If potential stack overflows are
introduced into classloading, it won't be long before someone else complains
"All this complexity related to classloading makes my head spin". And
chances are we will be back to a single CLASSPATH-based classloader. That is
just my opinion.



 --
 Jean-Sebastien


 Thank you...

Regards,

Rajini


Re: Classloading code in core contribution processing

2008-02-22 Thread Jean-Sebastien Delfino

 Jean-Sebastien Delfino wrote:
 Great to see a *test* case for cycles, but my question was: Do you
 have a *use* case for cycles and partial packages right now or can 
it  be fixed later?


 Rajini Sivaram wrote:

No, I don't have a use case, at least not an SCA one. But there are plenty
of them in OSGi - e.g. Tuscany modules cannot run in OSGi without support for
split-packages. Of course you can fix it later.


I'm not arguing for or against fixing it now or later, I'm trying to get 
the real use case to make a decision based on concrete grounds. Can you 
point me to your OSGi use cases, or help me understand why Tuscany modules
cannot run in OSGi without support for split packages?


and this will convince me to help fix it now :) Thanks.
--
Jean-Sebastien




Contribution classloading pluggability: was: Re: Classloading code in core contribution processing

2008-02-22 Thread Raymond Feng

Hi,

I don't want to intercept the discussion but I'm wondering if we should 
define the pluggability of the classloading scheme for SCA contributions.


Typically we have the following information for a ready-to-deploy unit:

* The URL of the deployment composite (deployable composite)
* A collection of URLs for the required contributions to support the SCA 
composite


There are some class relationships defined using import.java and
export.java. In different environments, we may need to have different
classloaders to deal with java classes in the collection of contributions.
Should we define an SPI as follows to provide the pluggability?


public interface ClassLoaderProvider {
   // Start the classloader provider for a collection of contributions
   // (deployment unit)
   void start(List<Contribution> contributions);

   // Get the classloader for a given contribution in the deployment unit
   ClassLoader getClassLoaders(Contribution contribution);

   // Remove the contributions from the provider
   void stop(List<Contribution> contributions);
}
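
To show the shape such a provider could take, here is a minimal sketch -
purely illustrative, assuming Contribution exposes a getLocation() that
returns a URL string, and ignoring import.java/export.java wiring
entirely - with one URLClassLoader per contribution, delegating to a
common parent:

import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SimpleClassLoaderProvider implements ClassLoaderProvider {
    private final Map<Contribution, ClassLoader> loaders =
        new HashMap<Contribution, ClassLoader>();

    public void start(List<Contribution> contributions) {
        for (Contribution c : contributions) {
            try {
                // one isolated loader per contribution, sharing a common parent
                URL location = new URL(c.getLocation()); // getLocation() is assumed
                loaders.put(c, new URLClassLoader(new URL[] {location},
                                                  getClass().getClassLoader()));
            } catch (MalformedURLException e) {
                throw new IllegalArgumentException("Bad contribution location", e);
            }
        }
    }

    public ClassLoader getClassLoaders(Contribution contribution) {
        return loaders.get(contribution);
    }

    public void stop(List<Contribution> contributions) {
        for (Contribution c : contributions) {
            loaders.remove(c);
        }
    }
}

A runtime could then swap in, say, an OSGi-backed provider without touching
the rest of the contribution code.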

Thanks,
Raymond

- Original Message - 
From: Rajini Sivaram [EMAIL PROTECTED]

To: tuscany-dev@ws.apache.org
Sent: Friday, February 22, 2008 12:38 PM
Subject: Re: Classloading code in core contribution processing



Sebastien,

On 2/22/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:


Cut some sections and reordered for readability.
...
 Jean-Sebastien Delfino wrote:
 - we can use ModelResolvers (1) or bypass them (2)
 - ModelResolvers don't handle import cycles and partial packages

 I think that (1) is better. Do you have a use case for cycles and
 partial packages right now or can it be fixed later?
...
Rajini Sivaram wrote:
 ContributionTestCase in itest/contribution-classloader contains a test
which
 runs into stack overflow if classes are resolved using ModelResolver. I
have
 added another test in there for testing for ClassNotFoundException in
the
 same scenario. To trigger the failure, you need to modify
 ContributionClassLoader.findClass to use the model resolver of 
 exporting

 contributions to resolve classes instead of their classloader.

Great to see a *test* case for cycles, but my question was: Do you have
a *use* case for cycles and partial packages right now or can it be
fixed later?



No, I don't have a use case, at least not an SCA one. But there are plenty
of them in OSGi - e.g. Tuscany modules cannot run in OSGi without support for
split-packages. Of course you can fix it later. But IMHO, breaking
classloading to improve modularity is hardly worthwhile (all the
classloading related implementation code is now contained in
contribution-java, so the improvement will be very marginal). Classloading
errors tend to be hard to fix because classloading is often triggered by the
VM and not explicitly by the application. If potential stack overflows are
introduced into classloading, it won't be long before someone else complains
"All this complexity related to classloading makes my head spin". And
chances are we will be back to a single CLASSPATH-based classloader. That is
just my opinion.




--
Jean-Sebastien


Thank you...


Regards,

Rajini







Re: Classloading code in core contribution processing

2008-02-21 Thread Jean-Sebastien Delfino

Rajini Sivaram wrote:
...

I will commit some changes now which move ContributionClassLoader into
contribution-java.


Great, Thanks.

...

We have two choices for classloading:

   1. ContributionClassLoader loads classes from its own contribution,
   and if it cannot find it, it can use the standard model resolver code to
   search other contributions which export the package. This essentially means
   that classloading will always go through ExtensibleModelResolver (since
   there is no way to get to the contribution's ClassReferenceModelResolver
   directly). And this affects not just classes loaded by Tuscany, but all
   classloading inside applications loaded as contributions.
   2. ContributionClassLoader loads classes from its own contribution and
   also searches other exporting contributions using their contribution
   classloader, bypassing the model resolver. This relies on contribution
   classloader being able to get to the classloaders of exporting contributions
   (currently Contribution.getClassLoader()).

At the moment, we use 2) rather than 1). Apart from the performance impact
of using extensible model resolver for all classloading, I don't believe
that the Tuscany model resolvers handle cycles in import/exports properly.
This may not be an issue with other artifact resolution where the artifacts
are always expected to be found, and all resolution is explicitly triggered
by an application or Tuscany. For classloading, searching for non-existent
classes, expecting to obtain a ClassNotFoundException is used routinely in
Java code, including inside Tuscany. And classloading is implicitly
triggered by the VM. IMO, running into stack overflows in classloading is
unacceptable regardless of whether the import/export statements contained
unexpected cycles and regardless of the fact that an application was trying
to load a non-existent class. The current model resolution code cannot
handle packages that are split across multiple contributions (and before you
say why would anyone want to split packages across contributions, this was
one of the problems I first ran into when trying to run Tuscany under OSGi
since Tuscany modules do use split-packages).

In order to use 2), contribution-java needs some way of obtaining the
classloader of exporting contributions. Since there is currently no way for
contribution-java to get directly to the ClassReferenceModelResolver from a
contribution, the classloader is still associated with the contribution. So
Contribution.java still contains getClassLoader and setClassLoader, but
these are only used by contribution-java. Can you suggest a better way to
get/set classloaders for contributions which can be contained inside
contribution-java?



If I parsed all this correctly:
- we can use ModelResolvers (1) or bypass them (2)
- ModelResolvers don't handle import cycles and partial packages

I think that (1) is better. Do you have a use case for cycles and 
partial packages right now or can it be fixed later?


...

the new code will allow applications to load resources using their
classloader only if there are explicit <import.java/> statements for the
directory containing the resource.


Sounds good.

--
Jean-Sebastien




Re: Classloading code in core contribution processing

2008-02-20 Thread Rajini Sivaram
Sebastien,

Comments inline.


On 2/19/08, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:

 Rajini Sivaram wrote:
  Sebastien,
 
  Contribution classloader was introduced to force isolation of
 contributions.
  Prior to this, all classes were loaded using a single CLASSPATH-based
  classloader, which meant that Java classes had visibility of all classes
 and
  resources that could be loaded using CLASSPATH, regardless of whether
 they
  were imported explicitly from other contributions.

 That's all good, I'm happy with that isolation support, but would like
 to see it in the module implementing support for import/export.java.


I will commit some changes now which move ContributionClassLoader into
contribution-java. The classloader will be created when the first class from
the contribution is resolved by either its ClassReferenceModelResolver or
other ContributionClassLoaders. But I haven't removed the association of
classloaders with contributions (explained below).


  For Java classes in contributions, it is essential for classloading to
 be
  tied with ModelResolver to ensure that classes inside contributions can
 see
  classes imported from other contributions, and use the exporting
  contribution's classloader to load imported classes. This is absolutely
  necessary to avoid ClassCastExceptions and NoClassDefFoundErrors.

 Agreed that classloading should go through a ModelResolver but that does
 not mean tied with a single ModelResolver in contribution-impl, IMO
 contribution-java should provide an implementation of a ModelResolver
 that handles classloading as specified in the import.java export.java
 statements.


We need one (and only one) classloader per contribution. And this
classloader needs to be able to access classloaders of all contributions
that it imports from, in order to resolve classes. We also have model
resolvers - one ClassReferenceModelResolver per contribution is created when
the first class from the contribution is resolved.

We have two choices for classloading:

   1. ContributionClassLoader loads classes from its own contribution,
   and if it cannot find it, it can use the standard model resolver code to
   search other contributions which export the package. This essentially means
   that classloading will always go through ExtensibleModelResolver (since
   there is no way to get to the contribution's ClassReferenceModelResolver
   directly). And this affects not just classes loaded by Tuscany, but all
   classloading inside applications loaded as contributions.
   2. ContributionClassLoader loads classes from its own contribution and
   also searches other exporting contributions using their contribution
   classloader, bypassing the model resolver. This relies on contribution
   classloader being able to get to the classloaders of exporting contributions
   (currently Contribution.getClassLoader()).

At the moment, we use 2) rather than 1). Apart from the performance impact
of using extensible model resolver for all classloading, I don't believe
that the Tuscany model resolvers handle cycles in import/exports properly.
This may not be an issue with other artifact resolution where the artifacts
are always expected to be found, and all resolution is explicitly triggered
by an application or Tuscany. For classloading, searching for non-existent
classes, expecting to obtain a ClassNotFoundException is used routinely in
Java code, including inside Tuscany. And classloading is implicitly
triggered by the VM. IMO, running into stack overflows in classloading is
unacceptable regardless of whether the import/export statements contained
unexpected cycles and regardless of the fact that an application was trying
to load a non-existent class. The current model resolution code cannot
handle packages that are split across multiple contributions (and before you
say why would anyone want to split packages across contributions, this was
one of the problems I first ran into when trying to run Tuscany under OSGi
since Tuscany modules do use split-packages).
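
The point about expected ClassNotFoundExceptions deserves emphasis:
probing for an optional class is a routine pattern, and it only works if a
miss fails fast. A sketch (the probed class is just a typical example):

class OptionalDependencyProbe {
    // Returns true if JPA is visible to the caller's loader; a miss is the
    // normal, expected outcome when the API simply isn't on the path.
    static boolean isJpaPresent() {
        try {
            Class.forName("javax.persistence.EntityManager"); // probe only
            return true;
        } catch (ClassNotFoundException e) {
            return false; // must fail fast, not recurse through resolvers
        }
    }
}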

In order to use 2), contribution-java needs some way of obtaining the
classloader of exporting contributions. Since there is currently no way for
contribution-java to get directly to the ClassReferenceModelResolver from a
contribution, the classloader is still associated with the contribution. So
Contribution.java still contains getClassLoader and setClassLoader, but
these are only used by contribution-java. Can you suggest a better way to
get/set classloaders for contributions which can be contained inside
contribution-java?
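
To make choice 2 concrete, here is a rough sketch of the lookup it implies.
This is not the actual ContributionClassLoader source; JavaImport,
matches() and getExportingLoader() are stand-ins invented for illustration:

import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

class SketchContributionClassLoader extends URLClassLoader {

    // Stand-in for the parsed import.java model
    interface JavaImport {
        boolean matches(String packageName);
        ClassLoader getExportingLoader();
    }

    private final List<JavaImport> imports;

    SketchContributionClassLoader(URL[] urls, List<JavaImport> imports) {
        super(urls, null); // no parent: keep contributions isolated
        this.imports = imports;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            return super.findClass(name); // a class packaged in this contribution
        } catch (ClassNotFoundException e) {
            // not local: fall through to the imported packages
        }
        int dot = name.lastIndexOf('.');
        String pkg = dot > 0 ? name.substring(0, dot) : "";
        for (JavaImport imp : imports) {
            if (imp.matches(pkg)) {
                try {
                    // ask the exporting contribution's loader directly,
                    // bypassing the model resolvers (this is choice 2)
                    return imp.getExportingLoader().loadClass(name);
                } catch (ClassNotFoundException e) {
                    // try the next matching import
                }
            }
        }
        throw new ClassNotFoundException(name);
    }
}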


 
  I assumed (probably wrongly) when I wrote the contribution classloader
 that
  Java classes inside contributions also have visibility of resources that
 are
  imported using import statements other than import.java.
  For example, if I have
 
  Contribution A:
    <import.java package="x.y.z"/>
    <import.resource uri="a.b.c"/>
  Contribution B:
    <export.java package="x.y.z"/>
  Contribution C:
 

Re: Classloading code in core contribution processing

2008-02-19 Thread Luciano Resende
On Feb 19, 2008 11:19 AM, Jean-Sebastien Delfino [EMAIL PROTECTED] wrote:
 Rajini Sivaram wrote:
  Sebastien,
 
  Contribution classloader was introduced to force isolation of contributions.
  Prior to this, all classes were loaded using a single CLASSPATH-based
  classloader, which meant that Java classes had visibility of all classes and
  resources that could be loaded using CLASSPATH, regardless of whether they
  were imported explicitly from other contributions.

 That's all good, I'm happy with that isolation support, but would like
 to see it in the module implementing support for import/export.java.

 
  For Java classes in contributions, it is essential for classloading to be
  tied with ModelResolver to ensure that classes inside contributions can see
  classes imported from other contributions, and use the exporting
  contribution's classloader to load imported classes. This is absolutely
  necessary to avoid ClassCastExceptions and NoClassDefFoundErrors.

 Agreed that classloading should go through a ModelResolver but that does
 not mean tied with a single ModelResolver in contribution-impl, IMO
 contribution-java should provide an implementation of a ModelResolver
 that handles classloading as specified in the import.java export.java
 statements.


+1, We had ClassReferenceModelResolver in contribution-java, but it
looks like it now delegates to an OSGIClassReferenceModelResolver
outside the import/export java module

 
  I assumed (probably wrongly) when I wrote the contribution classloader that
  Java classes inside contributions also have visibility of resources that are
  imported using import statements other than import.java.
  For example, if I have
 
  Contribution A:
    <import.java package="x.y.z"/>
    <import.resource uri="a.b.c"/>
  Contribution B:
    <export.java package="x.y.z"/>
  Contribution C:
    <export.resource uri="a.b.c"/>
 
 
  Is there a difference between what is visible to Contribution A (everything
  from A, package x.y.z from B and resource a.b.c from C) and what is visible
  to classes from Contribution A? I assumed that they should be the same. If
  classes from Contribution A should not be allowed to load the resource
  a.b.c since there is no <import.java/> statement for the package containing
  the resource, classloading code can be moved to contribution-java.
 

 Sorry, I may be missing something, but I'm a little lost here:

 - import.resource is not implemented yet, Luciano is just starting to
 implement it.

 - its syntax should not be uri="a.b.c" as this is a Java package syntax;
 I'd expect to see something like uri="a/b/c.html" instead.


Yes, this is how I have it locally for now.

 - when it gets implemented I fail to see why it should be tied to or
 require a class loader.

 - what can be loaded by a Java classloader, .class files, .gif files or
 whatever should be controlled by import/export.java.

 - finally, import.resource should be in a contribution-resource
 extension like the other extensions... not in contribution-impl.


+1, this is what I have locally...

BTW, if people are looking for contribution-resource I could commit
the pieces that I have and not add them to the build, in case that makes it
easier for people to look at it and comment.

 --

 Jean-Sebastien






-- 
Luciano Resende
Apache Tuscany Committer
http://people.apache.org/~lresende
http://lresende.blogspot.com/




Re: Classloading code in core contribution processing

2008-02-18 Thread Jean-Sebastien Delfino

ant elder wrote:

On Feb 16, 2008 12:56 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

snip

For now I have to make a copy of the contribution code without this

stuff to make progress. Could the people who worked on this classloading
code please help clean it up and move it out of the core contribution
modules?



This doesn't feel good to me, we've had so much trouble in the past with
code forks, isn't there some way to work with what's there instead of just
starting afresh with your own version?


We should not confuse nonlinear development with forking, or confuse
starting afresh with copying existing code and refactoring it to reuse it
without some of its dependencies or its coupling with classloading.



What do you intend to do with the
copy of the contribution code?  Is the copy going to be functionally
equivalent to what's there at the moment? What is it that you're trying to
do, is there some specific functionality you're trying to add or enhance?

   ...ant



I'm trying to implement the contribution workspace described in [1]. I 
need to reuse some of the logic in contribution-impl but that code couples
reading/processing/classloading/resolving, which I need to
decouple and provide as separate functions.


If there's no objection I'm planning to commit an implementation of the 
contribution workspace described in [1] to trunk in the next two days, 
implemented with a mix of code copied from contribution-impl and new code.


If you have objections I'll do it outside of trunk and ask the community 
to review it later. However, I'd prefer to do this work in trunk as it 
has already been discussed on this list and trunk is where new 
development happens.


[1] http://marc.info/?l=tuscany-dev&m=120202180130866
--
Jean-Sebastien

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Classloading code in core contribution processing

2008-02-16 Thread ant elder
On Feb 16, 2008 12:56 AM, Jean-Sebastien Delfino [EMAIL PROTECTED]
wrote:

snip

For now I have to make a copy of the contribution code without this
 stuff to make progress. Could the people who worked on this classloading
 code please help clean it up and move it out of the core contribution
 modules?


This doesn't feel good to me, we've had so much trouble in the past with
code forks, isn't there some way to work with what's there instead of just
starting afresh with your own version? What do you intend to do with the
copy of the contribution code?  Is the copy going to be functionally
equivalent to what's there at the moment? What is it that you're trying to
do, is there some specific functionality you're trying to add or enhance?

   ...ant


Classloading code in core contribution processing

2008-02-15 Thread Jean-Sebastien Delfino
The last 2 weeks I've been working with the contribution processing code 
and am bumping into the following issues:


- ContributionImpl is tied to a ClassLoader

- ModelResolver as well, and it seems to be used to resolve classes in 
some cases.


- We're now using a special ContributionClassLoader implementation.

- The ContributionService depends on it and assumes that it should be 
used on all contributions.


- ContributionClassLoader contains code to navigate the imports/exports, 
assumes that all contributions are using such a ContributionClassLoader, 
calls implementation methods on it to match imports and exports and 
resolve classes, going around the regular model resolver based scheme 
used for everything else.


- contribution-impl depends on contribution-java, this is going 
backwards IMO and breaks modularity and pluggability, as a core module 
should not have dependencies on extensions.


- I don't fully understand what JavaImportExportListener does but it 
looks like an attempt to implement a fancy domain update scheme, 
bringing another way to match Java imports/exports. Unfortunately it's 
only half implemented.


All this complexity related to classloading makes my head spin, prevents 
me from using the contribution service outside of a running runtime 
(where I don't have a classloader at all) and should not be in the core 
contribution code in the first place, as processing of Java classes 
should be handled in a Java specific extension and not in the core.


For now I have to make a copy of the contribution code without this 
stuff to make progress. Could the people who worked on this classloading 
code please help clean it up and move it out of the core contribution 
modules?


Thanks.
--
Jean-Sebastien
