On Aug 26, 2012, at 7:23 PM, Gregg Wonderly wrote:
>
> On Aug 26, 2012, at 5:12 PM, Dennis Reedy <dennis.re...@gmail.com> wrote:
>
>> Gregg,
>>
>> If you want to use Netbeans RCP, then why not consider making everything
>> OSGi-able? We are using a Netbeans RCP front end on a project I'm working
>> on now (with a Rio backend that uses a custom RMIClassLoaderSpi that does
>> artifact resolution for an artifact URL, goodbye http codebases and good
>> riddance :) ), and from what I see, you either need to wrap everything up
>> and turn it into an OSGi module, or go whole-hog and make everything OSGi
>> (it's all or nothing).
>
> Hi Dennis!

Hi Gregg :)

> What I'd like to not have to do is all kinds of packaging and versioning
> control work. I'd like to be able to just "run watcha brung". Versioning
> happens, and I can appreciate dealing with it. However, I think it's also a
> good point of abuse which can lead to stuff just not working well, because
> versioning leads to breakage when testing escalates by orders of magnitude
> and some tests stop getting run. That's what I don't like about the world
> of OSGi. It seems too rigid and too tool intensive for small applications.

I agree with your issues wrt OSGi; the only reason I brought it up is that
the NB RCP module system is OSGi based. I'm working with one of the NB dream
team guys, I think he's on this list; if not, maybe I can get him to comment
and advise.

> I would like to have something that enables all the dynamic code
> flexibility, but which has a much better depends-on graph resolution
> strategy, so that one could build all the various "code source" bits and
> fully believe that there weren't missing classes.

I'm not sure if this helps (or is of interest to) you, but what I've been
doing wrt codebase support is to use the dependency resolution you get with
maven-based artifacts (note that you don't have to have a maven project, you
just deploy your jars to a maven repository). We've been finding it much
easier to configure services in a versioned and easy-to-deploy way. What you
end up with is the runtime dependencies of a particular artifact, direct and
transitive, resolved as the codebase for a service or a service-ui. So your
'depends on graph' is complete, in as much as the dependency graph is
correctly constructed for your artifact. This comes naturally for
maven/gradle projects; you can't produce your artifact unless the
dependencies have been declared correctly. Note that this becomes especially
important for a client that uses a service.

With the artifact URL scheme, instead of annotating a service's codebase
with http:// based jars, the service's codebase contains the artifact URL,
which (when resolved at the client) resolves the dependencies for the
service's codebase.
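To make that concrete, the codebase annotation goes from a list of http
jars, something like (names and addresses made up here):

    http://10.0.1.5:9010/service-dl.jar http://10.0.1.5:9010/serviceui-dl.jar

to a single artifact URL, roughly of the form
artifact:groupId/artifactId/version, e.g.:

    artifact:org.acme/service-api/2.1

and the client resolves that artifact, plus its transitive dependencies,
against the repositories it has been configured to use.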
Resolving locally not only gives a performance boost (why load classes over
http if they can be loaded locally?), it also addresses lost-codebase
issues. Add to that secure repository connections that require a
uid/password, and you can confirm that the artifact a service requires you
to download in order to use it comes from a site you trust. Of course this
is all versioned, and once loaded over the wire it never needs to be loaded
again. If the service's version changes, its artifact changes, and any jars
that have not yet been resolved get resolved. The dynamic proxy capabilities
remain; no change there.

Just one more thing. One of my primary motivations for even looking into
this was to address permanent generation (PermGen) OOMEs. A main contributor
there was the RMIClassLoader keeping class loader references around because
of an HTTP keep-alive thread. Addressing that issue led me to the idea of
using the artifact URL scheme and avoiding the http URL altogether.

HTH

Regards

Dennis
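P.S. In case it's useful to anyone poking at this: the SPI side is small.
Below is a minimal sketch, with the actual artifact resolution left as a
placeholder (resolveClassLoader() stands in for whatever resolves the
artifact and its transitive dependencies to local jars and wraps them in a
class loader). Anything that isn't an artifact: codebase is delegated to the
default provider, so http codebases still work.

    import java.net.MalformedURLException;
    import java.rmi.server.RMIClassLoader;
    import java.rmi.server.RMIClassLoaderSpi;

    public class ArtifactClassLoaderSpi extends RMIClassLoaderSpi {

        // The stock provider handles anything we don't recognize.
        private final RMIClassLoaderSpi defaultSpi =
            RMIClassLoader.getDefaultProviderInstance();

        @Override
        public Class<?> loadClass(String codebase, String name,
                                  ClassLoader defaultLoader)
            throws MalformedURLException, ClassNotFoundException {
            if (isArtifact(codebase))
                return Class.forName(name, false, resolveClassLoader(codebase));
            return defaultSpi.loadClass(codebase, name, defaultLoader);
        }

        @Override
        public Class<?> loadProxyClass(String codebase, String[] interfaces,
                                       ClassLoader defaultLoader)
            throws MalformedURLException, ClassNotFoundException {
            if (isArtifact(codebase)) {
                ClassLoader cl = resolveClassLoader(codebase);
                Class<?>[] ifaces = new Class<?>[interfaces.length];
                for (int i = 0; i < interfaces.length; i++)
                    ifaces[i] = Class.forName(interfaces[i], false, cl);
                return java.lang.reflect.Proxy.getProxyClass(cl, ifaces);
            }
            return defaultSpi.loadProxyClass(codebase, interfaces, defaultLoader);
        }

        @Override
        public ClassLoader getClassLoader(String codebase)
            throws MalformedURLException {
            return isArtifact(codebase) ? resolveClassLoader(codebase)
                                        : defaultSpi.getClassLoader(codebase);
        }

        @Override
        public String getClassAnnotation(Class<?> aClass) {
            // For classes loaded from a resolved artifact you'd return the
            // artifact URL; everything else gets the default annotation.
            return defaultSpi.getClassAnnotation(aClass);
        }

        private boolean isArtifact(String codebase) {
            return codebase != null && codebase.startsWith("artifact:");
        }

        // Placeholder: resolve the artifact's jars (caching per codebase)
        // and return a class loader over the local files.
        private ClassLoader resolveClassLoader(String codebase) {
            throw new UnsupportedOperationException("resolver goes here");
        }
    }

The JVM picks it up via the java.rmi.server.RMIClassLoaderSpi system
property set to the fully qualified class name. Since artifact codebases
never touch the default http loader, the keep-alive thread never gets a
chance to pin those class loaders.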