Re: OSGi NP Complete Was: OSGi - deserialization remote invocation strategy

2017-02-13 Thread Michał Kłeczek (XPro Sp. z o. o.)
Nope - not at all. I am only trying to convince you that there is no 
reason to involve ServiceRegistrar or SDM for code downloading.


HOW the class resolution is done - is another story.
I actually tend to think in a similar way to what Niclas said:
Do not use OSGi to load the proxy class - create a separate ClassLoader - 
but make sure it delegates to the client bundle's ClassLoader for 
non-preferred classes.
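A minimal sketch of that delegating loader (hypothetical class and names, not River code): non-preferred classes are answered from the client bundle's loader, while preferred ones would be defined from the proxy's own codebase.

```java
// Hypothetical sketch of Niclas's suggestion: a separate ClassLoader for the
// proxy that delegates non-preferred classes to the client bundle's loader.
class ProxyClassLoader extends ClassLoader {
    private final ClassLoader clientBundleLoader;
    private final java.util.Set<String> preferred;

    ProxyClassLoader(ClassLoader clientBundleLoader, java.util.Set<String> preferred) {
        super(null); // no automatic parent delegation
        this.clientBundleLoader = clientBundleLoader;
        this.preferred = preferred;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (!preferred.contains(name)) {
            // Non-preferred: share the client's view of the class, so types
            // exchanged with the client remain assignment compatible.
            return clientBundleLoader.loadClass(name);
        }
        // Preferred: a real implementation would define the class from the
        // proxy's own codebase here.
        throw new ClassNotFoundException(name);
    }
}
```

With an empty preferred set, every class resolves through the client's loader.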

Thanks,
Michal

Peter wrote:

I changed it to highlight Nic's point that it's not feasible to resolve and 
provision OSGi bundle transitive dependencies during deserialization, because 
the time taken to do that can be excessive due to the NP-complete nature of 
resolution.

It is incompatible with stream codebase annotations.

I think Mic's currently arguing for a solution that relies on resolution and 
provisioning to occur during deserialization and I'm arguing against it.

I'm arguing for a background task that precedes deserialization of the proxy.

Regards,

Peter.

Sent from my Samsung device.
  

 Original message 
From: Patricia Shanahan
Sent: 13/02/2017 11:27:27 pm
To: dev@river.apache.org
Subject: Re: OSGi NP Complete Was: OSGi - deserialization remote invocation 
strategy

Sorry, I'm trying to find out the meaning of the current subject line. 
I'm not sure when it changed to "OSGi NP Complete".


On 2/12/2017 10:50 PM, Michał Kłeczek wrote:

  Sorry, NP completeness of what?
  I have been the first to mention NP hardness of constraint satisfaction
  problem
  but I am not sure if this is what you are asking about.

  Thanks,
  Michal

  Patricia Shanahan wrote:

  Are you literally claiming NP Completeness, or just using that as an
  analogy for really, really difficult?








Re: OSGi NP Complete Was: OSGi - deserialization remote invocation strategy

2017-02-13 Thread Michał Kłeczek (XPro Sp. z o. o.)

KerberosEndpoint?
HttpsEndpoint?

Thanks,
Michal

Peter wrote:

How do you establish the secure jeri connection?

Regards,

Peter.

Sent from my Samsung device.
  

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 13/02/2017 11:34:45 pm
To: dev@river.apache.org
Subject: Re: OSGi NP Complete Was: OSGi - deserialization remote invocation 
strategy

1. The connection can be done using normal (secure) Jeri.
We do not have to verify the installer object since its classes were loaded 
locally and (by definition) are trusted.

2. The attacker cannot instantiate any non-local class. That is the whole point.
Since the "installer" classes must be local, then we can trust the installer to
honor any invocation constraints we place on it. So any code downloads
are secure - in the sense that the client can require 
authentication/integrity/confidentiality etc.

Note that (if necessary) we can apply the same logic recursively - we can provide an 
"installer of an installer"
and still be sure any code download is going to honor the security constraints 
we require.

Thanks,
Michal

Peter wrote:
So this object that you have with a locally installed class is tasked with 
authenticating the remote service, provisioning and resolving a bundle, 
deserializing the smart proxy and registering it with the OSGi service 
registrar in a readResolve or readObject method?

How do you propose the connection from the client to the service is established 
in order to enable this to occur?

How do you prevent an attacker from choosing a different class to deserialize?

Regards,

Peter.

Sent from my Samsung device
  

 Original message 
From: Michał Kłeczek<michal@kleczekorg>
Sent: 13/02/2017 10:07:28 pm
To: dev@river.apache.org
Subject: Re: OSGi NP Complete Was: OSGi - deserialization remote invocation 
strategy

Comments inline.

Peter wrote:
  Mic,

  I'm attempting to get my head around your proposal:

  In the case of JERI, the InvocationHandler is part of the smart 
  proxy's serialized state.  A number of smart proxy classes will need 
  to be unmarshalled before the UnmarshallingInvocationHandler is 
  deserialized.


  The smart proxy contains a reference to a dynamic proxy (which Sun 
  called the bootstrap proxy) and the dynamic proxy contains a reference 
  to your UnmarshallingInvocationHandler. This means the smart proxy 
  must be unmarshalled first.


  How do you get access to UnmarshallingInvocationHandler without 
  unmarshalling the smart proxy first?


No no - I am talking about wrapping the smart proxy inside another 
object. It can be either a dynamic proxy, or simply an object that 
implements "readResolve" returning the unmarshalled smart proxy.
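A sketch of that wrapper idea (hypothetical names; a real installer would resolve the proxy's codebase before unmarshalling, rather than just buffering bytes):

```java
import java.io.*;

// Sketch of a locally-installed wrapper whose readResolve returns the
// unmarshalled smart proxy; the proxy bytes are carried opaquely, so the
// outer stream never resolves the proxy class directly.
class ProxyWrapper implements Serializable {
    private static final long serialVersionUID = 1L;
    private final byte[] marshalledProxy;

    ProxyWrapper(Serializable proxy) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(proxy);
        }
        marshalledProxy = bos.toByteArray();
    }

    private Object readResolve() throws ObjectStreamException {
        // A real implementation would install/resolve the proxy's bundle here
        // and unmarshal using the resulting ClassLoader.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(marshalledProxy))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new InvalidObjectException("proxy unmarshalling failed: " + e);
        }
    }
}
```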


  More comments inline below.

  On 13/02/2017 6:11 PM, Michał Kłeczek wrote:
  We are talking about the same thing.

  We are turning circles, Peter - all of this has been already discussed.

  1. Yes - you need to resolve bundles in advance (in OSGi it is not 
  possible to do otherwise anyway)

  Agree.
  2. You cannot decide upon the bundle chosen by the container to load 
  the proxy class (the container does the resolution)
  Disagree, nothing in the client depends on the proxy bundle, there's 
  no reason to provision a different version.
  3. The runtime graph of objects places additional constraints on the 
  bundle resolution process (beyond what is specified in bundles' manifests).
  Since you do not have any way to pass these additional constraints to 
  the container - the case is lost.
  Disagree.  The proxy bundle contains a manifest with requirements.  
  The stream has no knowledge of versioning, nor does it need to, there 
  are no additional constraints.  If the service proxy dependencies 
  cannot be resolved, or it doesn't unmarshal, then it will not be 
  registered with the OSGi registry in the client, client code will not 
  discover it and the client will have no knowledge of its existence 
  except for some logging.


This is totally backwards.
That way no client is able to find any service because there is a 
chicken and egg problem - we do not know the proxy interfaces until the 
proxy's bundle is resolved.


Understand that when you place a bundle identifier in the stream - it is 
equivalent to specifying a Require-Bundle constraint - nothing more 
nothing less.


  Additionally - to explain what I've said before about wrong level of 
  abstraction:


  Your general idea is very similar to mine: have a special object 
  (let's call it installer) that will install software prior to proxy 
  unmarshalling.


  1. For some reason unclear to me you want to constrain the way 
  this "installer object" is passed only via the route of 
  ServiceRegistrar (as attributes)
  Disagree, I'm not proposing the service have any control over 
  installation at the client, other than

Re: OSGi NP Complete Was: OSGi - deserialization remote invocation strategy

2017-02-13 Thread Michał Kłeczek (XPro Sp. z o. o.)

Comments inline.

Peter wrote:
N.B. Can't see any chicken-and-egg problem.  


If the service doesn't resolve to the same service API as the client, then it 
isn't compatible.  The client isn't interested in incompatible services, only 
those that are compatible.  This is just an artifact of the dependency 
resolution process.

But when do you perform resolution?

Let's say you have two client bundles:
Client1 is resolved to depend on package a.b.c ver 1.1.3 from bundle 
a.b.c 1.1.3
Client2 is resolved to depend on package a.b.c ver 1.2.0 from bundle 
a.b.c 1.2.0


Your service supports both clients. How do you resolve it so that it can 
be linked with both clients?


The answer is - you cannot. You must create two different instances of 
the service proxy, each one instantiated from a class loaded in a 
different context.
But you cannot decide upon this context in advance!!! You have to know 
which client you are serving.
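The point can be demonstrated in plain Java with dynamic proxies (a sketch using java.lang.reflect.Proxy purely for illustration): the same interface seen through two different loaders yields two distinct, cast-incompatible runtime types.

```java
import java.lang.reflect.Proxy;

// Sketch: a proxy class for the same interface, generated against two
// different ClassLoaders, is two distinct classes.
class TwoLoadersDemo {
    static boolean sameType(ClassLoader l1, ClassLoader l2) {
        Class<?> c1 = Proxy.getProxyClass(l1, Runnable.class);
        Class<?> c2 = Proxy.getProxyClass(l2, Runnable.class);
        return c1 == c2; // true only when both contexts share a defining loader
    }
}
```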


No bundle identifiers are necessary in the stream, the smart proxy ClassLoader 
decides visibility and delegation, not RMIClassLoader.

But what ClassLoader is used to load the smart proxy???


You're attempting to do too many things with one class / object, there's a risk 
that this very powerful class could be leveraged by an attacker, best to break 
up the functionality.

Why do you think that it is handled by one class/object???


Also ServiceDiscoveryManager handles a lot of scenarios that occur with remote 
services; your class won't, there are thousands of lines of code.

Have a look at SDM in JGDMS.

Sorry for the sarcasm, but... This is one class/object doing so many things...

Thanks,
Michal


Re: OSGi NP Complete Was: OSGi - deserialization remote invocation strategy

2017-02-13 Thread Michał Kłeczek (XPro Sp. z o. o.)

1. The connection can be done using normal (secure) Jeri.
We do not have to verify the installer object since its classes were 
loaded locally and (by definition) are trusted.


2. The attacker cannot instantiate any non-local class. That is the 
whole point.
Since the "installer" classes must be local, then we can trust the 
installer to

honor any invocation constraints we place on it. So any code downloads
are secure - in the sense that the client can require 
authentication/integrity/confidentiality etc.


Note that (if necessary) we can apply the same logic recursively - we 
can provide an "installer of an installer"
and still be sure any code download is going to honor the security 
constraints we require.


Thanks,
Michal

Peter wrote:

So this object that you have with a locally installed class is tasked with 
authenticating the remote service, provisioning and resolving a bundle, 
deserializing the smart proxy and registering it with the OSGi service 
registrar in a readResolve or readObject method?

How do you propose the connection from the client to the service is established 
in order to enable this to occur?

How do you prevent an attacker from choosing a different class to deserialize?

Regards,

Peter.

Sent from my Samsung device
  

 Original message 
From: Michał Kłeczek
Sent: 13/02/2017 10:07:28 pm
To: dev@river.apache.org
Subject: Re: OSGi NP Complete Was: OSGi - deserialization remote invocation 
strategy

Comments inline.

Peter wrote:

  Mic,

  I'm attempting to get my head around your proposal:

  In the case of JERI, the InvocationHandler is part of the smart 
  proxy's serialized state.  A number of smart proxy classes will need 
  to be unmarshalled before the UnmarshallingInvocationHandler is 
  deserialized.


  The smart proxy contains a reference to a dynamic proxy (which Sun 
  called the bootstrap proxy) and the dynamic proxy contains a reference 
  to your UnmarshallingInvocationHandler. This means the smart proxy 
  must be unmarshalled first.


  How do you get access to UnmarshallingInvocationHandler without 
  unmarshalling the smart proxy first?


No no - I am talking about wrapping the smart proxy inside another 
object. It can be either a dynamic proxy, or simply an object that 
implements "readResolve" returning the unmarshalled smart proxy.



  More comments inline below.

  On 13/02/2017 6:11 PM, Michał Kłeczek wrote:

  We are talking about the same thing.

  We are turning circles, Peter - all of this has been already discussed.

  1. Yes - you need to resolve bundles in advance (in OSGi it is not 
  possible to do otherwise anyway)

  Agree.
  2. You cannot decide upon the bundle chosen by the container to load 
  the proxy class (the container does the resolution)
  Disagree, nothing in the client depends on the proxy bundle, there's 
  no reason to provision a different version.
  3. The runtime graph of objects places additional constraints on the 
  bundle resolution process (beyond what is specified in bundles' manifests).
  Since you do not have any way to pass these additional constraints to 
  the container - the case is lost.
  Disagree.  The proxy bundle contains a manifest with requirements.  
  The stream has no knowledge of versioning, nor does it need to, there 
  are no additional constraints.  If the service proxy dependencies 
  cannot be resolved, or it doesn't unmarshal, then it will not be 
  registered with the OSGi registry in the client, client code will not 
  discover it and the client will have no knowledge of its existence 
  except for some logging.



This is totally backwards.
That way no client is able to find any service because there is a 
chicken and egg problem - we do not know the proxy interfaces until the 
proxy's bundle is resolved.


Understand that when you place a bundle identifier in the stream - it is 
equivalent to specifying a Require-Bundle constraint - nothing more 
nothing less.


  Additionally - to explain what I've said before about wrong level of 
  abstraction:


  Your general idea is very similar to mine: have a special object 
  (let's call it installer) that will install software prior to proxy 
  unmarshalling.


  1. For some reason unclear to me you want to constrain the way 
  this "installer object" is passed only via the route of 
  ServiceRegistrar (as attributes)
  Disagree, I'm not proposing the service have any control over 
  installation at the client, other than the manifest in the proxy 
  bundle, nor am I proposing using service attributes, or the use of any 
  existing ServiceRegistrar methods (see SafeServiceRegistrar link posted 
  earlier).
If you think about it from the higher architectural view - there is no 
difference. It does not really matter what steps are taken - the 
important thing is that:
a) you have a special object used to download code - this object is 
supposed to be of a class installed locally in advance
b) the 

Re: OSGi - deserialization remote invocation strategy

2017-02-07 Thread Michał Kłeczek (XPro Sp. z o. o.)
So I must have misunderstood the part about smart proxies being obtained 
via "reflection proxies" or MarshalledInstances.


What are these "reflection proxies"?

Thanks,
Michal

Peter wrote:

No, no bootstrap objects.

Cheers,

Peter.



Sent from my Samsung device.
  

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 08/02/2017 12:28:50 am
To: dev@river.apache.org
Subject: Re: OSGi - deserialization remote invocation strategy

Are you proposing to provide a bootstrap object that will download some 
meta information prior to class resolution?


How does it differ from simply changing annotations to be those 
"bootstrap objects" instead of Strings?


Thanks,
Michal

Peter wrote:

  Proposed JERI OSGi class loading strategy during deserialization.

  Record caller context - this is the default bundle at the beginning of 
  the stack.  It is obtained by the InvocationHandler on the
  client side.  The InvocationDispatcher on the server side has the 
  calling context of the Remote
  implementation.  The reflection dynamic proxy must be installed in the 
  client's class loader, so the
  InvocationHandler knows exactly what it is, it will be passed to the 
  MarshalInputStream.  Any
  interfaces not found in the client's bundle can be safely shed.  For a 
  smart proxy the reflection proxy will
  be installed in the smart proxy loader.  The smart proxy is obtained 
  either via a reflection proxy or a MarshalledInstance.
  MarshalledInstance also passes in the callers loader to the 
  MarshalInputStream.


  The smart proxy classloader is not a child loader of the clients 
  loader, instead it's a bundle that imports
  service api packages, with a version range that overlaps those already 
  imported by the client.


  Both InvocationHandler and InvocationDispatcher utilise 
  MarshalInputStream and MarshalOutputStream, for marshalling parameters 
  and return values.


  The codebase annotation bundle's manifest contains a list of package 
  imports.


  Do we need to make a list of package imports for every new bundle that 
  we load?
  Do we need to record the wiring and packages and their imports from 
  the remote end?


  I don't think so, the bundles themselves contain this information, I 
  think we just need to keep the view of available classes relevant to 
  the current object being deserialized.


  Codebase Annotations are exact versions!  They need to be to allow the 
  service to ensure the correct proxy codebase is used.  Other proxy 
  codebases will be installed in the client, possibly different 
  versions, but these won't be visible through the resolved 
  dependencies, because the proxy codebases only import packages at the 
  client and OSGi restricts visibility to the current bundle's own 
  classes and any imported packages.
  Instead of appending dependencies to the codebase annotation they'll 
  need to be defined in the proxy's bundle manifest.  Of course if an 
  identical version of a proxy codebase bundle is already installed at 
  the client, this will be used again.


  Because a bundle generally imports packages (importing entire bundles 
  is discouraged in OSGi), there may be classes
  that aren't visible from those bundles, such as transitive imports, but 
  also including private packages that aren't exported; private
  implementations need to be deserialized, but is it possible to do so 
  safely, without causing package
  conflicts?   Private implementation classes can be used as fields 
  within an exported public object, but cannot and should not
  escape their private scope; doing so risks them being resolved to a 
  bundle with the version of the remote end, instead of the locally 
  resolved / wired package, causing ClassCastExceptions.


  Initial (naive) first pass strategy of class resolution (for each 
  branch in the serialized object graph)?:
  1. Try current bundle on the stack (which will be the caller's 
  bundle if we haven't loaded any new bundles yet).
  2. Then use the package name of a class to determine if the package 
  is loaded by any of the bundles referenced by the caller's bundle 
  imports (to handle any private implementation packages that aren't 
  in the current imports).  Is this a good idea? Or should we go 
  straight to step 3 and let the framework resolve common classes; 
  what if we use a different version to the client's imported bundle?  
  Should we first compare our bundle annotation to the currently 
  imported bundles and select one of those if it's a compatible 
  version?  Yes, this could be an application bundle, otherwise go to 3.
  3. Load bundle from annotation (if already loaded, it will be an 
  exact version match).  Place the new bundle on top of the bundle 
  stack, remove this bundle from the stack once all fields of this 
  object have been deserialized, returning to the previous bundle 
  context.

Re: OSGi - deserialization remote invocation strategy

2017-02-07 Thread Michał Kłeczek (XPro Sp. z o. o.)

Comments inline

Niclas Hedhman wrote:

4. For Server(osgi)+Client(osgi), number of options goes up. In this space,
Paremus has a lot of experience, and perhaps willing to share a bit,
without compromising the secret sauce? Either way, Michal's talk about
"wiring" becomes important and that wiring should possibly be
re-established on the client side. The insistence on "must be exactly the
same version" is to me a reflection of "we haven't cared about version
management before", and I think it may not be in the best interest to load
many nearly identical bundles just because they are a little off, say stuff
like guava, commons-xyz, slf4j and many more common dependencies.
This problem is generally unsolvable because there are contradictory 
requirements here:

1. The need to transfer object graphs of (unknown) classes
2. The need to minimize the number of class versions (and the number of 
ClassLoaders) in the JVM


It might be tempting to do the resolution on the client but it is 
(AFAIR) NP-hard
- the object graph is a set of constraints on possible module (bundle) 
versions. Plus there is a whole
set of constraints originating from the modules installed in the 
container prior to graph deserialization.


So the only good solution for a library is to provide a client with an 
interface to implement:

Module resolve(Module candidate) (or Module[] resolve(Module[] candidates))
and let it decide what to do.
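That callback can be sketched like this (the Module type here is a hypothetical descriptor invented for illustration, not the OSGi or JPMS class):

```java
// Hypothetical module descriptor used only for illustration.
final class Module {
    final String name;
    final String version;
    Module(String name, String version) { this.name = name; this.version = version; }
}

// The callback the library would expose: the client decides which module
// actually satisfies a candidate coming off the wire.
interface ModuleResolver {
    Module resolve(Module candidate);
}
```

A client policy could, for instance, prefer an already-installed module of the same name over provisioning a nearly identical version.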



Peter wrote;

This is why the bundle must be given first attempt to resolve an 
object's class and rely on the bundle dependency resolution process.

OSGi must be allowed to wire up dependencies; we must avoid attempting 
to make decisions about compatibility and use the current bundle wires 
instead (our stack).


Well, not totally sure about that. The 'root object classloader' doesn't
have visibility to serialized objects, and will fail if left to do it all
by itself. And as soon as you delegate to another BundleClassLoader, you
have made the resolution decision, not the framework. Michal's proposal to
transfer the BundleWiring (available in runtime) from the server to the
client, makes it somewhat possible to do the delegation. And to make
matters worse, it is quite common that packages are exported from more than
one bundle, so the question is what is included in the bundleWiring coming
across the wire.
The whole issue with proposals based on the stream itself is the fact 
that to resolve properly one has to walk the whole graph first to 
gather all modules and their dependencies.


It is much better to simply provide the module graph (wiring) first (at 
the beginning of the stream)

and only after resolution of all the modules - deserialize the objects.
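That "wiring first" layout can be sketched as follows (hypothetical helper; module ids are plain strings here): the header carries the complete module list, so a receiver can resolve everything before touching the object graph.

```java
import java.io.*;
import java.util.*;

// Sketch of "wiring first": the stream starts with the full list of required
// modules; the receiver resolves them all before any object is deserialized.
class WiringFirstStream {
    static byte[] write(List<String> requiredModules, Serializable graph) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new ArrayList<>(requiredModules)); // header: module graph
            oos.writeObject(graph);                            // body: object graph
        }
        return bos.toByteArray();
    }

    static Object read(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            @SuppressWarnings("unchecked")
            List<String> modules = (List<String>) ois.readObject();
            // A real implementation would resolve/install every module in
            // `modules` here, before reading a single object from the body.
            return ois.readObject();
        }
    }
}
```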

Thanks,
Michal


Re: Changing TCCL during deserialization

2017-02-07 Thread Michał Kłeczek (XPro Sp. z o. o.)
This is fine for me. I am not asking about one interaction where 
multiple instances of MarshalInputStream are used (each with its own 
TCCL).
I am asking about the situation described in another email - that during 
a deserialization using a single instance of the stream the TCCL is changed.
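The per-call locking being asked about can be sketched like this (hypothetical helper, not a River API): the loader is set once around the whole unmarshalling call and always restored, rather than switched mid-stream.

```java
import java.util.concurrent.Callable;

// Sketch: lock the context loader for the duration of one unmarshalling
// call; set it before the action and restore the previous value after.
final class Tccl {
    static <T> T callWith(ClassLoader loader, Callable<T> action) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(loader);
        try {
            return action.call();
        } finally {
            current.setContextClassLoader(previous); // restored even on failure
        }
    }
}
```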


Thanks,
Michal

Gregg Wonderly wrote:

I am not sure about “locked”.  In my example about ServiceUI, imagine that 
there is a common behavior that your ServiceUI hosting environment provides to 
all ServiceUI components.  It can be that there is a button press or something 
else where an AWTEvent thread is going to take action.  It’s that specific 
thread whose TCCL must be changed, each time, to the codebase of the service 
you are interacting with.  If it calls out to the service proxy and that is a 
smart proxy, imagine that the smart proxy might use a different service each 
time, and that's where the TCCL must be set appropriately so that any newly 
created classes are parented by the correct environment in your ServiceUI 
hosting platform.

Gregg






Re: OSGi - deserialization remote invocation strategy

2017-02-07 Thread Michał Kłeczek (XPro Sp. z o. o.)
Are you proposing to provide a bootstrap object that will download some 
meta information prior to class resolution?


How does it differ from simply changing annotations to be those 
"bootstrap objects" instead of Strings?


Thanks,
Michal

Peter wrote:

Proposed JERI OSGi class loading strategy during deserialization.

Record caller context - this is the default bundle at the beginning of 
the stack.  It is obtained by the InvocationHandler on the
client side.  The InvocationDispatcher on the server side has the 
calling context of the Remote
implementation.  The reflection dynamic proxy must be installed in the 
client's class loader, so the
InvocationHandler knows exactly what it is, it will be passed to the 
MarshalInputStream.  Any
interfaces not found in the client's bundle can be safely shed.  For a 
smart proxy the reflection proxy will
be installed in the smart proxy loader.  The smart proxy is obtained 
either via a reflection proxy or a MarshalledInstance.
MarshalledInstance also passes in the callers loader to the 
MarshalInputStream.


The smart proxy classloader is not a child loader of the clients 
loader, instead it's a bundle that imports
service api packages, with a version range that overlaps those already 
imported by the client.


Both InvocationHandler and InvocationDispatcher utilise 
MarshalInputStream and MarshalOutputStream, for marshalling parameters 
and return values.


The codebase annotation bundle's manifest contains a list of package 
imports.


Do we need to make a list of package imports for every new bundle that 
we load?
Do we need to record the wiring and packages and their imports from 
the remote end?


I don't think so, the bundles themselves contain this information, I 
think we just need to keep the view of available classes relevant to 
the current object being deserialized.


Codebase Annotations are exact versions!  They need to be to allow the 
service to ensure the correct proxy codebase is used.  Other proxy 
codebases will be installed in the client, possibly different 
versions, but these won't be visible through the resolved 
dependencies, because the proxy codebases only import packages at the 
client and OSGi restricts visibility to the current bundle's own 
classes and any imported packages.
Instead of appending dependencies to the codebase annotation they'll 
need to be defined in the proxy's bundle manifest.  Of course if an 
identical version of a proxy codebase bundle is already installed at 
the client, this will be used again.


Because a bundle generally imports packages (importing entire bundles 
is discouraged in OSGi), there may be classes
that aren't visible from those bundles, such as transitive imports, but 
also including private packages that aren't exported; private
implementations need to be deserialized, but is it possible to do so 
safely, without causing package
conflicts?   Private implementation classes can be used as fields 
within an exported public object, but cannot and should not
escape their private scope; doing so risks them being resolved to a 
bundle with the version of the remote end, instead of the locally 
resolved / wired package, causing ClassCastExceptions.


Initial (naive) first pass strategy of class resolution (for each 
branch in the serialized object graph)?:
1. Try current bundle on the stack (which will be the caller's bundle 
if we haven't loaded any new bundles yet).
2. Then use the package name of a class to determine if the package is 
loaded by any of the bundles referenced by the caller's bundle imports 
(to handle any private implementation packages that aren't in the 
current imports).  Is this a good idea? Or should we go straight to 
step 3 and let the framework resolve common classes; what if we use a 
different version to the client's imported bundle?  Should we first 
compare our bundle annotation to the currently imported bundles and 
select one of those if it's a compatible version?  Yes, this could be 
an application bundle, otherwise go to 3.
3. Load bundle from annotation (if already loaded, it will be an exact 
version match).  Place the new bundle on top of the bundle stack, 
remove this bundle from the stack once all fields of this object have 
been deserialized, returning to the previous bundle context.  We are 
relying on the current bundle to wire itself up to the same package 
versions of the client's bundle imports, for shared classes.  Classes 
that use different bundles will not be visible to the client, but will 
need to be visible to the current object's bundle.
4. Place a bundle reference on the stack when a new object is 
deserialized from the stream and remove it once all fields have been 
deserialized (we might need to remember stack depth).
5. Don't place non-bundle references on the stack.  For example the 
system class loader or any other class loader; we want resolution to 
occur via the OSGi resolution process.
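The stack idea in the steps above can be sketched with a stream that consults a stack of loaders (a simplification: plain ClassLoaders stand in for bundles, and push/pop is driven externally rather than by codebase annotations):

```java
import java.io.*;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "bundle stack": resolveClass consults a stack of loaders,
// the top being the loader of the object currently being deserialized.
class StackedLoaderInputStream extends ObjectInputStream {
    private final Deque<ClassLoader> stack = new ArrayDeque<>();

    StackedLoaderInputStream(InputStream in, ClassLoader caller) throws IOException {
        super(in);
        stack.push(caller); // caller's loader starts at the bottom of the stack
    }

    // Steps 3-4 of the strategy: push a loader for one object's fields,
    // pop it once that object has been deserialized.
    void push(ClassLoader loader) { stack.push(loader); }
    void pop() { stack.pop(); }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
        for (ClassLoader loader : stack) { // top of stack first
            try {
                return Class.forName(desc.getName(), false, loader);
            } catch (ClassNotFoundException ignore) {
                // fall through to the next loader on the stack
            }
        }
        return super.resolveClass(desc); // fall back to the default strategy
    }
}
```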


What about a simpler strategy (again naive), where 

Re: Changing TCCL during deserialization

2017-02-06 Thread Michał Kłeczek (XPro Sp. z o. o.)
I am not sure how OSGi relates to this question. But I can imagine a 
situation like this:


class MySmartAssWrappingObject implements Serializable {

  Object myMember;
  ...

  private void readObject(ObjectInputStream ois)
      throws IOException, ClassNotFoundException {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    myMember = ois.readObject();
  }
}

That would allow you to do something similar to what you wanted to do 
with class resolution by remembering the stack of class loaders.


So my question is:
is it something that people do?

Thanks,
Michal

Peter wrote:
  
In PreferredClassProvider, no - the caller's ClassLoader (context) is the parent ClassLoader of the codebase loader.


It depends on the ClassLoader hierarchy and chosen strategy used to resolve 
annotations.

But the index key for PreferredClassProvider is URI[] and parent loader 
(caller's loader).

This strategy allows codebases to be duplicated for different calling contexts.
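That index can be pictured as a cache key combining the codebase URIs with the caller's loader (a sketch of the idea only, not the actual PreferredClassProvider internals):

```java
import java.net.URI;
import java.util.Arrays;

// Sketch: codebase loaders keyed on annotation URIs plus the caller's
// (parent) loader, so one codebase gets a distinct loader per calling context.
final class LoaderKey {
    private final URI[] codebase;
    private final ClassLoader parent;

    LoaderKey(URI[] codebase, ClassLoader parent) {
        this.codebase = codebase.clone();
        this.parent = parent;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof LoaderKey)) return false;
        LoaderKey k = (LoaderKey) o;
        // Same annotation URIs AND the same caller loader (identity).
        return parent == k.parent && Arrays.equals(codebase, k.codebase);
    }

    @Override public int hashCode() {
        return 31 * Arrays.hashCode(codebase) + System.identityHashCode(parent);
    }
}
```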

OSGi however, only loads one Bundle per URL, but as Bharath has demonstrated, 
the codebase loader doesn't have to be a BundleReference.

There are some caveats if the proxy codebase loader isn't a BundleReference, 
one is your dependencies aren't version managed for you, and you can only see 
public classes imported by the parent BundleReference.

The strategy of switching context wouldn't work with PreferredClassProvider.

Regards,

Peter.

Sent from my Samsung device.
  

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 07/02/2017 07:20:59 am
To: dev@river.apache.org
Subject: Re: Changing TCCL during deserialization

This still does not answer my question - maybe I am not clear enough.
Do you have a need to set a TCCL DURING a remote call that is in progress?
I.e. you execute a remote call and DURING deserialization of the return value 
you change the TCCL (so one class is resolved using one context loader and 
another using a different one when reading THE SAME stream).

Thanks,
Michal

Gregg Wonderly wrote:
Anytime that a thread might end up being the one to download code, you need 
that thread's CCL to be set.   The AWTEvent thread(s) in particular are a 
sticking point.  I have a class which I use for managing threading in 
AWT/Swing.  It’s called ComponentUpdateThread.  It works as follows.

new ComponentUpdateThread<List>( itemList, actionButton1, actionButton2,
        checkbox1 ) {
    public void setup() {
        // In event thread
        setBusyCursorOn( itemList );
    }
    public List construct() {
        try {
            return service.getListOfItems( filterParm1 );
        } catch( Exception ex ) {
            reportException(ex);
        }
        return null;
    }
    public void finished() {
        List lst;
        if( (lst = get()) != null ) {
            itemList.getModel().setContents( lst );
        }
    }
}.start();

This class will make the passed components disabled to keep them from being 
clicked on again, set up the processing to use a non-AWTEvent thread for 
getting data with other components of the UI still working, and finally mark 
the disabled components back to enabled and load the list with the returned 
items, if there were any returned.

There is the opportunity for 3 or more threads to be involved here.  First, 
there is the calling thread.  It doesn’t do anything but start the work.  Next, 
there is an AWTEvent thread which will invoke setup().  Next there is a worker 
thread which will invoke construct().  Finally, there is (possibly another) 
AWTEvent thread which will invoke finished().

In total there could be up to four different threads involved, all of which 
must have the TCCL set to the correct class loader.  My convention in the 
implementation is that it will be this.getClass().getClassLoader().

This is all managed inside the implementation of ComponentUpdateThread so 
that I don't have to worry about it any more.  But it's important to 
understand that if you don't do that, the classes that the calling thread 
resolves (Item, in this specific case) can come from a different class loader 
than you intended (the service's class loader with "null" as the parent), and 
this will result in either a CNFE or a CCE.
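
The TCCL bookkeeping described above can be sketched in plain Java. This is a 
minimal illustration, not Gregg's actual ComponentUpdateThread; the class and 
helper names below are assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: a calling thread's TCCL does not automatically propagate to the
// worker and event threads that may end up resolving downloaded classes,
// so wrap each task to set (and restore) the intended loader.
public class LoaderPropagation {
    static Runnable withLoader(ClassLoader cl, Runnable task) {
        return () -> {
            Thread t = Thread.currentThread();
            ClassLoader saved = t.getContextClassLoader();
            t.setContextClassLoader(cl);
            try {
                task.run(); // e.g. a construct() fetching data, or a finished() updating the UI
            } finally {
                t.setContextClassLoader(saved); // restore the thread's original context
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = LoaderPropagation.class.getClassLoader();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(withLoader(cl, () ->
                System.out.println(Thread.currentThread().getContextClassLoader() == cl)));
        f.get();
        pool.shutdown();
    }
}
```

Wrapping every task this way is what lets all four threads in the scenario 
above resolve Item against the same loader.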

Gregg

On Feb 6, 2017, at 11:28 AM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  wrote:

What I was specifically asking for is whether this is needed during 
deserialization or after deserialization.

In other words - if I can lock the TCCL to an instance of MarshalInputStream 
existing for the duration of a single remote call.

Thanks,
Michal

Gregg Wonderly wrote:
The predominant place where it is needed is when you download a serviceUI 
component from a proxy service which just advertises some kind of “browsing”

Re: Changing TCCL during deserialization

2017-02-06 Thread Michał Kłeczek (XPro Sp. z o. o.)

This still does not answer my question - maybe I am not clear enough.
Do you have a need to set a TCCL DURING a remote call that is in progress?
Ie. you execute a remote call and DURING deserialization of the return 
value you change the TCCL (so one class is resolved using one context 
loader and another using a different one when reading THE SAME stream)


Thanks,
Michal

Gregg Wonderly wrote:



On Feb 6, 2017, at 11:28 AM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  wrote:

What I was specifically asking for is whether this is needed during 
deserialization or after deserialization.

In other words - if I can lock the TCCL to an instance of MarshalInputStream 
existing for the duration of a single remote call.

Thanks,
Michal

Gregg Wonderly wrote:

The predominant place where it is needed is when you download a serviceUI 
component from a proxy service which just advertises some kind of “browsing” 
interface to find specific services and interact with them, and that serviceUI 
is embedded in another application with its own codebase

appl->serviceUI-for-browsing->Service-to-use->That-Services-ServiceUI

In this case, TCCL must be set to the serviceUI class's classloader so that the 
"serviceui-for-browsing" will have a proper parent class loader.

Anytime that downloaded code might download more code, it should always set 
TCCL to its own class loader so that the classes it downloads reflect against 
the existing class definitions.
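
A minimal sketch of that rule: pin the TCCL to the downloading code's own 
loader for the duration of the work, and restore it afterwards. The class and 
method names here are assumptions, not River API:

```java
import java.util.concurrent.Callable;

// Sketch: downloaded code that will itself trigger further code downloads
// pins the TCCL to its own loader first, so the classes it downloads
// reflect against its own class definitions.
public class TcclPinning {
    static <T> T withOwnLoader(Callable<T> action) throws Exception {
        Thread t = Thread.currentThread();
        ClassLoader saved = t.getContextClassLoader();
        t.setContextClassLoader(TcclPinning.class.getClassLoader());
        try {
            return action.call(); // e.g. unmarshalling a nested serviceUI
        } finally {
            t.setContextClassLoader(saved); // always restore the caller's context
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader seen = withOwnLoader(() -> Thread.currentThread().getContextClassLoader());
        System.out.println(seen == TcclPinning.class.getClassLoader());
    }
}
```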

Gregg


On Feb 6, 2017, at 12:03 AM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  <mailto:michal.klec...@xpro.biz>  wrote:

Hi,

During my work on object based annotations I realized it would be more efficient not to 
look up the TCCL upon every call to "load class" (when the default loader does not 
match the annotation).
It might be more effective to look it up once upon stream creation and use it 
subsequently for class loader selection.

But this might change semantics of deserialization a little bit - it would not 
be possible to change the context loader during deserialization.
My question is - are there any scenarios that require that?
I cannot think of any but...
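
A minimal sketch of the capture-once idea, using a plain ObjectInputStream 
subclass as a simplified stand-in for MarshalInputStream (the real API 
differs, and primitive type names are not handled here):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

// Sketch: snapshot the context class loader once, at stream creation, and
// resolve every class in the stream against that snapshot, so changing the
// TCCL mid-deserialization has no effect.
public class CapturingObjectInputStream extends ObjectInputStream {
    private final ClassLoader contextAtCreation;

    public CapturingObjectInputStream(InputStream in) throws IOException {
        super(in);
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        this.contextAtCreation = (cl != null) ? cl : getClass().getClassLoader();
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        // Deliberately ignores any later TCCL changes on this thread.
        return Class.forName(desc.getName(), false, contextAtCreation);
    }
}
```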

Thanks,
Michal







Re: Changing TCCL during deserialization

2017-02-06 Thread Michał Kłeczek (XPro Sp. z o. o.)
What I was specifically asking for is whether this is needed during 
deserialization or after deserialization.


In other words - if I can lock the TCCL to an instance of 
MarshalInputStream existing for the duration of a single remote call.


Thanks,
Michal

Gregg Wonderly wrote:

The predominant place where it is needed is when you download a serviceUI 
component from a proxy service which just advertises some kind of “browsing” 
interface to find specific services and interact with them, and that serviceUI 
is embedded in another application with its own codebase

appl->serviceUI-for-browsing->Service-to-use->That-Services-ServiceUI

In this case, TCCL must be set to the serviceUI class's classloader so that the 
"serviceui-for-browsing" will have a proper parent class loader.

Anytime that downloaded code might download more code, it should always set 
TCCL to its own class loader so that the classes it downloads reflect against 
the existing class definitions.

Gregg


On Feb 6, 2017, at 12:03 AM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  wrote:

Hi,

During my work on object based annotations I realized it would be more efficient not to 
look up the TCCL upon every call to "load class" (when the default loader does not 
match the annotation).
It might be more effective to look it up once upon stream creation and use it 
subsequently for class loader selection.

But this might change semantics of deserialization a little bit - it would not 
be possible to change the context loader during deserialization.
My question is - are there any scenarios that require that?
I cannot think of any but...

Thanks,
Michal






Re: AbstractILFactory bug?

2017-02-06 Thread Michał Kłeczek (XPro Sp. z o. o.)

I'm talking about this:
Util.checkPackageAccess(interfaces[i].getClass()); //NOTE the getClass() 
here!!!


It should be:
Util.checkPackageAccess(interfaces[i]);

Michal

Michał Kłeczek (XPro Sp. z o. o.) wrote:

I understand the check is needed.

It is that we are not checking the right package but "java.lang"

Thanks,
Michal

Peter wrote:
Ok, worked out why, java.lang.reflect.Proxy's newProxyInstance 
permission check  is caller sensitive.  In this case 
AbstractILFactory is the caller, so not checking it would allow an 
attacker to bypass the check using AbstractILFactory.

Cheers,

Peter.

Sent from my Samsung device.
 Include original message
 Original message ----
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michalklec...@xpro.biz>
Sent: 06/02/2017 05:06:32 pm
To: dev@river.apache.org
Subject: AbstractILFactory bug?

I have just found this piece of code in AbstractILFactory:

Class[] interfaces = getProxyInterfaces(impl);
...
for (int i = 0; i < interfaces.length; i++) {
    Util.checkPackageAccess(interfaces[i].getClass());
}

So we check "java.lang" package access.

A bug?

Thanks,
Michal








Re: AbstractILFactory bug?

2017-02-06 Thread Michał Kłeczek (XPro Sp. z o. o.)

I understand the check is needed.

It is that we are not checking the right package but "java.lang"

Thanks,
Michal

Peter wrote:
Ok, worked out why, java.lang.reflect.Proxy's newProxyInstance permission check  is caller sensitive.  In this case AbstractILFactory is the caller, so not checking it would allow an attacker to bypass the check using AbstractILFactory. 


Cheers,

Peter.

Sent from my Samsung device.
  
   Include original message

 Original message ----
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michalklec...@xpro.biz>
Sent: 06/02/2017 05:06:32 pm
To: dev@river.apache.org
Subject: AbstractILFactory bug?

I have just found this piece of code in AbstractILFactory:

Class[] interfaces = getProxyInterfaces(impl);
...
for (int i = 0; i < interfaces.length; i++) {
    Util.checkPackageAccess(interfaces[i].getClass());
}

So we check "java.lang" package access.

A bug?

Thanks,
Michal






Re: OSGi

2017-02-06 Thread Michał Kłeczek (XPro Sp. z o. o.)

The upside is that it simplifies the overall architecture.
For example it makes the whole part of River related to proxy trust 
verification obsolete.
All these ProxyTrustIterators executed in an untrusted security context, 
Verifier implementations loaded using a proper ClassLoader etc. - this 
is not needed anymore.


Thanks,
Michal

Michał Kłeczek (XPro Sp. z o. o.) wrote:

Well - times changed since original Jini has been developed.
There is a whole lot of amazing libraries out there - so the 
undertaking is much easier than doing it without them.

I am specifically talking about Google Guava, JBoss Modules and RxJava.

As River is concerned - once you get past the assumption that codebase 
annotations are Strings - it has all the necessary extension points 
available.


I've already started writing the test suite for the thing and hope to 
present it soon.


Thanks,
Michal

Peter wrote:
For the sake of simplicity it's probably best if OSGi and non-OSGi environments interact only using reflection proxies and have their own djinn groups, so code downloading is unnecessary between them. 


At least that's how I'd consider introducing it into an existing djinn.

A jvm that doesn't have version management of some sort may have a lot of 
difficulty interacting with services from a framework that can use incompatible 
library versions (and that includes service api) side by side.

My concern is that interacting with non-versioned environments will probably cause the 
developer to have to continue dealing with the problems the modular framework 
they selected was intended to solve.

Maven and OSGi can probably interact using mvn: codebase annotations, provided 
all modules have bundle manifests.

I still support what you're doing and find it interesting and don't wish to 
discourage you. I think you're likely to admit it will be a difficult 
undertaking, but that's probably an attraction, right?  Maybe River could 
provide some interfaces for extensibility where you could plug in?

Regards,

Peter.

Sent from my Samsung device.
  
   Include original message

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 06/02/2017 03:34:54 pm
To:dev@river.apache.org
Subject: Re: OSGi

Once you realize you need some codebase metadata different than a mere 
list of URLs, 
the next conclusion is that annotations should be something different 
than... a String :)


The next thing to ask is: "what about mixed OSGI and non-OSGI environments"
Then you start to realize you need to abstract over the class loading 
environment itself.


Then you start to realize that to support all the scenarios you need to 
provide a class loading environment that is "pluggable"
- ie allows using it with other class loading environments and allow the 
user to decide which classes should be loaded

by which environment.

This is what I am working on right now :)

Thanks,
Michal

Peter wrote:

  My phone sent the previous email before I completed editing.

  ...If api classes are already loaded locally by client code, then a smart 
proxy codebase bundle will resolve imports to those packages (if they're within 
the imported version range), when the proxy bundle is downloaded, resolved and 
loaded.

  The strategy should be, deserialize using the callers context until a class 
is not found, then switch to the object containing the current field being 
deserialized (which may be a package private implementation class in the 
service api bundle) and if that fails use the codebase annotation (the smart 
proxy).  This is similar in some ways to never preferred, where locally visible 
classes will be selected first.

  The strategy is to let OSGi do all the dependency wiring from bundle 
manifests.  Classes not visible will be visible from a common package import 
class, except for poorly designed services, which is outside of scope.

  Only match api version compatible services.

  No allowances made for split packages or other complexities.

  If deserialization doesn't succeed, look up another service.
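
The fallback resolution order described above can be sketched as follows. This 
is a simplified illustration with assumed names; real codebase resolution and 
OSGi wiring involve much more:

```java
// Sketch of the fallback order: 1. the caller's context loader,
// 2. the loader of the object containing the field being deserialized,
// 3. the codebase annotation's loader (the smart proxy bundle).
public class FallbackResolver {
    static Class<?> resolve(String name, ClassLoader caller,
                            ClassLoader enclosing, ClassLoader codebase)
            throws ClassNotFoundException {
        for (ClassLoader cl : new ClassLoader[] { caller, enclosing, codebase }) {
            if (cl == null) continue; // skip loaders we don't have
            try {
                return Class.forName(name, false, cl);
            } catch (ClassNotFoundException tryNext) {
                // fall through to the next loader in the chain
            }
        }
        throw new ClassNotFoundException(name);
    }
}
```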

  Cheers,

  Peter.

  Sent from my Samsung device.

 Include original message

   Original message 
  From: Peter<j...@zeus.net.au>
  Sent: 06/02/2017 02:59:09 pm
  To:dev@river.apache.org<dev@river.apache.org>
  Subject: Re: OSGi


  Thanks Nic,


  If annot

  You've identified the reason we need an OSGi specific RMIClassLoaderSpi 
implementation; so we can capture and provide Bundle specific annotation 
information.

  Rmiclassloaderspi's loadClass method expects a ClassLoader to be passed in, 
the context ClassLoader is used by PreferredClassProvider when the ClassLoader 
argument is null.

  Standard Java serialization's OIS walks the call stack and selects the first 
non system classloader (it's looking for the application class loader), it 
deserializes into the application ClassLoader's context.  This doesn't  work in 
OSGi because the application classes are loaded 

AbstractILFactory bug?

2017-02-05 Thread Michał Kłeczek (XPro Sp. z o. o.)

I have just found this piece of code in AbstractILFactory:

Class[] interfaces = getProxyInterfaces(impl);
...
for (int i = 0; i < interfaces.length; i++) {
Util.checkPackageAccess(interfaces[i].getClass());
}

So we check "java.lang" package access.

A bug?
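
The effect of the extra getClass() call is easy to demonstrate: calling 
getClass() on a Class object always yields java.lang.Class, whose package is 
java.lang, regardless of the interface's own package:

```java
// Demonstration of the bug above: interfaces[i] is already a Class object,
// so interfaces[i].getClass() is java.lang.Class, and the package being
// checked is always "java.lang".
public class PackageCheckDemo {
    public static void main(String[] args) {
        Class<?> iface = java.util.List.class;
        System.out.println(iface.getClass().getName());              // java.lang.Class
        System.out.println(iface.getClass().getPackage().getName()); // java.lang
        System.out.println(iface.getPackage().getName());            // java.util (the package that should be checked)
    }
}
```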

Thanks,
Michal


Re: OSGi

2017-02-05 Thread Michał Kłeczek (XPro Sp. z o. o.)

Well - times changed since original Jini has been developed.
There is a whole lot of amazing libraries out there - so the undertaking 
is much easier than doing it without them.

I am specifically talking about Google Guava, JBoss Modules and RxJava.

As River is concerned - once you get past the assumption that codebase 
annotations are Strings - it has all the necessary extension points 
available.


I've already started writing the test suite for the thing and hope to 
present it soon.


Thanks,
Michal

Peter wrote:
For the sake of simplicity it's probably best if OSGi and non-OSGi environments interact only using reflection proxies and have their own djinn groups, so code downloading is unnecessary between them. 


At least that's how I'd consider introducing it into an existing djinn.

A jvm that doesn't have version management of some sort may have a lot of 
difficulty interacting with services from a framework that can use incompatible 
library versions (and that includes service api) side by side.

My concern is that interacting with non-versioned environments will probably cause the 
developer to have to continue dealing with the problems the modular framework 
they selected was intended to solve.

Maven and OSGi can probably interact using mvn: codebase annotations, provided 
all modules have bundle manifests.

I still support what you're doing and find it interesting and don't wish to 
discourage you. I think you're likely to admit it will be a difficult 
undertaking, but that's probably an attraction, right?  Maybe River could 
provide some interfaces for extensibility where you could plug in?

Regards,

Peter.

Sent from my Samsung device.
  
   Include original message

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 06/02/2017 03:34:54 pm
To: dev@river.apache.org
Subject: Re: OSGi

Once you realize you need some codebase metadata different than a mere 
list of URLs, 
the next conclusion is that annotations should be something different 
than... a String :)


The next thing to ask is: "what about mixed OSGI and non-OSGI environments"
Then you start to realize you need to abstract over the class loading 
environment itself.


Then you start to realize that to support all the scenarios you need to 
provide a class loading environment that is "pluggable"
- ie allows using it with other class loading environments and allow the 
user to decide which classes should be loaded

by which environment.

This is what I am working on right now :)

Thanks,
Michal

Peter wrote:

  My phone sent the previous email before I completed editing.

  ...If api classes are already loaded locally by client code, then a smart 
proxy codebase bundle will resolve imports to those packages (if they're within 
the imported version range), when the proxy bundle is downloaded, resolved and 
loaded.

  The strategy should be, deserialize using the callers context until a class 
is not found, then switch to the object containing the current field being 
deserialized (which may be a package private implementation class in the 
service api bundle) and if that fails use the codebase annotation (the smart 
proxy).  This is similar in some ways to never preferred, where locally visible 
classes will be selected first.

  The strategy is to let OSGi do all the dependency wiring from bundle 
manifests.  Classes not visible will be visible from a common package import 
class, except for poorly designed services, which is outside of scope.

  Only match api version compatible services.

  No allowances made for split packages or other complexities.

  If deserialization doesn't succeed, look up another service.

  Cheers,

  Peter.

  Sent from my Samsung device.

 Include original message

   Original message 
  From: Peter<j...@zeus.net.au>
  Sent: 06/02/2017 02:59:09 pm
  To: dev@river.apache.org<dev@river.apache.org>
  Subject: Re: OSGi


  Thanks Nic,


  If annot

  You've identified the reason we need an OSGi specific RMIClassLoaderSpi 
implementation; so we can capture and provide Bundle specific annotation 
information.

  Rmiclassloaderspi's loadClass method expects a ClassLoader to be passed in, 
the context ClassLoader is used by PreferredClassProvider when the ClassLoader 
argument is null.

  Standard Java serialization's OIS walks the call stack and selects the first 
non system classloader (it's looking for the application class loader), it 
deserializes into the application ClassLoader's context.  This doesn't  work in 
OSGi because the application classes are loaded by a multitude of ClassLoaders.

  It also looks like we'll need an OSGi specific InvocationLayerFactory to 
capture ClassLoader information to pass to our MarshalInputStream then to our 
RMIClassLoaderSpi during deserialization at both endpoints.

  We also need to know the bundle (ClassLoader) of the class that calls a 
java.lang.reflect.Proxy on the client side, this is ac

Changing TCCL during deserialization

2017-02-05 Thread Michał Kłeczek (XPro Sp. z o. o.)

Hi,

During my work on object based annotations I realized it would be more 
efficient not to look up the TCCL upon every call to "load class" (when 
the default loader does not match the annotation).
It might be more effective to look it up once upon stream creation and use 
it subsequently for class loader selection.


But this might change semantics of deserialization a little bit - it 
would not be possible to change the context loader during deserialization.

My question is - are there any scenarios that require that?
I cannot think of any but...

Thanks,
Michal


Re: OSGi

2017-02-05 Thread Michał Kłeczek (XPro Sp. z o. o.)
Once you realize you need some codebase metadata different than a mere 
list of URLs, 
the next conclusion is that annotations should be something different 
than... a String :)


The next thing to ask is: "what about mixed OSGI and non-OSGI environments"
Then you start to realize you need to abstract over the class loading 
environment itself.


Then you start to realize that to support all the scenarios you need to 
provide a class loading environment that is "pluggable"
- ie allows using it with other class loading environments and allow the 
user to decide which classes should be loaded

by which environment.

This is what I am working on right now :)

Thanks,
Michal

Peter wrote:

My phone sent the previous email before I completed editing.

...If api classes are already loaded locally by client code, then a smart proxy 
codebase bundle will resolve imports to those packages (if they're within the 
imported version range), when the proxy bundle is downloaded, resolved and 
loaded.

The strategy should be, deserialize using the callers context until a class is 
not found, then switch to the object containing the current field being 
deserialized (which may be a package private implementation class in the 
service api bundle) and if that fails use the codebase annotation (the smart 
proxy).  This is similar in some ways to never preferred, where locally visible 
classes will be selected first.

The strategy is to let OSGi do all the dependency wiring from bundle manifests. 
 Classes not visible will be visible from a common package import class, except 
for poorly designed services, which is outside of scope.

Only match api version compatible services.

No allowances made for split packages or other complexities.

If deserialization doesn't succeed, look up another service.

Cheers,

Peter.

Sent from my Samsung device.
  
   Include original message

 Original message 
From: Peter
Sent: 06/02/2017 02:59:09 pm
To: dev@river.apache.org
Subject: Re: OSGi

  
Thanks Nic,


If annot

You've identified the reason we need an OSGi specific RMIClassLoaderSpi 
implementation; so we can capture and provide Bundle specific annotation 
information.

Rmiclassloaderspi's loadClass method expects a ClassLoader to be passed in, the 
context ClassLoader is used by PreferredClassProvider when the ClassLoader 
argument is null.

Standard Java serialization's OIS walks the call stack and selects the first 
non system classloader (it's looking for the application class loader), it 
deserializes into the application ClassLoader's context.  This doesn't  work in 
OSGi because the application classes are loaded by a multitude of ClassLoaders.

It also looks like we'll need an OSGi specific InvocationLayerFactory to 
capture ClassLoader information to pass to our MarshalInputStream then to our 
RMIClassLoaderSpi during deserialization at both endpoints.

We also need to know the bundle (ClassLoader) of the class that calls a 
java.lang.reflect.Proxy on the client side, this is actually quite easy to 
find, walk the stack, find the Proxy class and obtain the BundleReference / 
ClassLoader of the caller.

Currently the java.lang.reflect.Proxy dynamically generated subclass instance 
proxy's ClassLoader is used; this is acceptable when the proxy bytecode is 
loaded by the Client's ClassLoader, or by the smart proxy ClassLoader in the 
case where a smart proxy is utilised.



If the caller changes, so does the calling context.


Each bundle provides access to all classes within that bundle, including any 
public classes from imported packages.





Sent from my Samsung device.
  
   Include original message

 Original message 
From: Niclas Hedhman
Sent: 04/02/2017 12:43:28 pm
To: dev@river.apache.org
Subject: Re: OSGi



Further, I think the only "sane" approach in an OSGi environment is to 
create a new bundle for the Remote environment; all codebases not part of 
the API go into that bundle, and the API is required to be present in 
the OSGi environment a priori. I.e. treat the Remote objects in OSGi as they 
are treated in plain Java: one classloader, one chunk, sort out its own 
serialization woes. Likewise for the server: treat it as ordinary RMI, 
without any mumbo-jumbo OSGi stuff to be figured out at a non-OSGi-running 
JVM. An important difference is that in OSGi, the BundleClassLoader is not 
(required to be) a URLClassLoader, so Java serialization's auto-annotation 
of globally reachable URLs won't work, and one needs to rely on the 
java.rmi.server.codebase property; but a bundle could watch for loaded 
bundles and build that up for URLs that can be resolved globally. 



Cheers 
--  
Niclas Hedhman, Software Developer 
http://polygene.apache.org  - New Energy for Java









Re: Serialization issues

2017-02-05 Thread Michał Kłeczek (XPro Sp. z o. o.)

It is performant - no doubt about it.
But it is not scalable because your scalability is limited not by 
network speed but the maximum number of threads.


Thanks,
Michal

Peter wrote:

The other catch is that shared mutable state also needs to be synchronized.

Still, River 3.0 should be running at close to raw socket speed; it has the 
world's most scalable security manager and fastest URLClassLoader.  The 
multiplexer in jeri allows 127 different remote objects to share the same 
connection between two endpoints.

Cheers,

Peter.

Sent from my Samsung device.
  
   Include original message

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 05/02/2017 04:04:03 am
To: dev@river.apache.org
Subject: Re: Serialization issues

You do not have to do any IO in readObject/writeObject.

The fact that you have readObject/writeObject methods means that you are forced 
to do blocking IO.
It is simple:

readObject(...) {
   ois.defaultReadObject();
   //the above line MUST be blocking because
   verifyMyState();
   //this line expects the data to be read
}

Similarly:

writeObject(ObjectOutputStream oos) {
   oos.writeInt(whateverField);
   //buffers full? You need to block, sorry
   oos.writeObject(...)
}

Thanks,
Michal

Niclas Hedhman wrote:
I am asking what Network I/O you are doing in the readObject/writeObject
methods. Because to me I can't figure out any use-case where that is a
problem...

On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:

Don't know about other serialization uses but my issue with it is that it
precludes using it in non-blocking IO.
Sorry if I haven't been clear enough.


Thanks,
Michal

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  wrote:


It is not possible to do non-blocking as in "non blocking IO" - meaning -
threads do not block on IO operations.
Just google "C10K problem"

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What does readObject/writeObject got to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  <michal.klec...@xpro.biz>
  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder i didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter<j...@zeus.net.au>  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.





















Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

1. Yes, you can buffer the whole object graph, as long as it is small enough.
2. In the end, threads are just an abstraction on top of an inherently serial 
machine that produces asynchronous events (a CPU with interrupts), 
providing a nice programming model in languages that do not support monads.
It might be worth investigating http://www.paralleluniverse.co/quasar/ 
for River.
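
Point 1 above, buffering the whole graph, can be sketched as follows: do the 
blocking serialization against an in-memory buffer, then hand the bytes to a 
non-blocking channel. This is a simplified illustration; message framing and 
size limits are ignored:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.ByteBuffer;

// Sketch: decouple (blocking) Java serialization from network I/O by
// buffering the whole object graph in memory. Workable only for graphs
// small enough to hold in memory, as noted above.
public class BufferedSerialization {
    static ByteBuffer serializeToBuffer(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj); // "blocks" only against the in-memory buffer
        }
        return ByteBuffer.wrap(bos.toByteArray()); // ready for a non-blocking SocketChannel.write()
    }

    static Object deserializeFromBuffer(ByteBuffer buf)
            throws IOException, ClassNotFoundException {
        // Assumes the full graph has already arrived (e.g. via length-prefixed frames).
        ByteArrayInputStream bis =
                new ByteArrayInputStream(buf.array(), buf.position(), buf.remaining());
        try (ObjectInputStream ois = new ObjectInputStream(bis)) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ByteBuffer buf = serializeToBuffer("hello");
        System.out.println(deserializeFromBuffer(buf));
    }
}
```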


OTOH I have also read papers about handling the C10M problem. These guys are 
serious :) .
The general conclusion is that any "general" abstraction (such as 
threads) breaks down. Any context switches are a no-no.
So you implement a fully event-driven (interrupt-driven) network stack in user 
space; the architecture is somewhat similar to an exokernel.

See https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4

But we are diverging...
Cheers,
Michal

Niclas Hedhman wrote:

Ok, but assuming that you are not talking about GB-sized object graphs, it
is more an issue with RMI than Serialization, because you can create
non-blocking I/O "on top", just like Jetty has non-blocking I/O "on top" of
the equally blocking Servlet API. Right? I assume that there is a similar
thing in Tomcat, because AFAIK Google AppEngine runs on Tomcat.
It is not required (even if it is) that the ObjectOutputStream is directly
connected to the underlying OS file descriptor. I am pretty sure that it
would be a mistake trying to re-design all software that writes to a stream
to have a non-blocking design.

Additionally, while looking into this, I came across
https://www.usenix.org/legacy/events/hotos03/tech/full_papers/vonbehren/vonbehren_html/index.html,
which might be to your interest. Not totally relevant, but still an
interesting read.

Cheers

On Sun, Feb 5, 2017 at 2:04 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


You do not have to do any IO in readObject/writeObject.

The fact that you have readObject/writeObject methods means that you are
forced to do blocking IO.
It is simple:

readObject(...) {
   ois.defaultReadObject();
   //the above line MUST be blocking because
   verifyMyState();
   //this line expects the data to be read
}

Similarly:

writeObject(ObjectOutputStream oos) {
   oos.writeInt(whateverField);
   //buffers full? You need to block, sorry
   oos.writeObject(...)

}

Thanks,
Michal

Niclas Hedhman wrote:

I am asking what Network I/O you are doing in the readObject/writeObject
methods. Because to me I can't figure out any use-case where that is a
problem...

On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  wrote:


Don't know about other serialization uses but my issue with it is that it
precludes using it in non-blocking IO.
Sorry if I haven't been clear enough.


Thanks,
Michal

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  <michal.klec...@xpro.biz>
  wrote:


It is not possible to do non-blocking as in "non blocking IO" - meaning -
threads do not block on IO operations.
Just google "C10K problem"

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What does readObject/writeObject got to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder i didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter<j...@zeus.net.au>  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.


Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

You do not have to do any IO in readObject/writeObject.

The fact that you have readObject/writeObject methods means that you are 
forced to do blocking IO.

It is simple:

readObject(...) {
  ois.defaultReadObject();
  //the above line MUST be blocking because
  verifyMyState();
  //this line expects the data to be read
}

Similarly:

writeObject(ObjectOutputStream oos) {
  oos.writeInt(whateverField);
  //buffers full? You need to block, sorry
  oos.writeObject(...)
}

Thanks,
Michal

Niclas Hedhman wrote:

I am asking what Network I/O you are doing in the readObject/writeObject
methods. Because to me I can't figure out any use-case where that is a
problem...

On Sun, Feb 5, 2017 at 1:14 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


Don't know about other serialization uses but my issue with it is that it
precludes using it in non-blocking IO.
Sorry if I haven't been clear enough.


Thanks,
Michal

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  wrote:


It is not possible to do non-blocking as in "non blocking IO" - meaning -
threads do not block on IO operations.
Just google "C10K problem"

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What do readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder i didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter<j...@zeus.net.au>  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.



Re: Serialization Formats, Previously: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

I cannot disagree with rants about software industry state.

Let's get back to technical solutions to non-technical problems. I am 
interested in providing tools - whether will be used... is a different 
story.


That said...
IMHO Jini - in all its greatness - DID NOT solve the problem of Java 
code mobility in any way.
As has been discussed on this list several times, the way it "solved" 
it is:
- inherently insecure (because object validation is done _after_ code 
execution)
- is not capable of transferring complicated object graphs - hence it 
cannot be used in many different interesting scenarios.


Partial solutions are worse than lack of solutions - they confuse users 
(in our case programmers) and in the end people lose interest.


I am not a big fan of Java containers - be it JEE or any other (OSGI 
included).
The industry seems to understand they are a dead end - especially in 
the age of Docker etc - and is moving away from them (not that in a very 
meaningful direction :) ).


I have worked with OSGI for several years and it was a difficult 
relationship :)
Today I prefer simpler solutions: "java -jar 
my-very-complicated-and-important-service.jar" is the way to go.


Thanks,
Michal


Niclas Hedhman wrote:

(I think wrong thread, so to please Peter, I copied it all into here)

Correct, it is not different. But you are missing the point; CONSTRAINTS.
And without constraints, developers are capable of doing these good deeds
(such as your example) and many very poor ones. The knife cuts your meat or
butchers your neighbor... It is all about the constraints, something that
few developers are willing to admit makes our work better.

As for the "leasable and you have..."; The problem is that you are probably
wrong on that front too, like the OSGi community have learned the hard way.
There are too many ways software entangle classloading. All kinds of shit
"registers itself" in the bowels of the runtime, such as with the
DriverManager, Loggers, URLHandlers or PermGenSpace (might be gone in Java
8). Then add 100s of common libraries that also do a poor job in
releasing "permanent" resources/instances/classes... The stain sticks, but
the smell is weak, so often we can't tell other than memory leak during
class updates.
And why do we have all this mess? IMHO; Lack of constraints, lack of
lifecycle management in "everything Java" (and most languages) and lack of
discipline (something Gregg has, and I plus 5 million other Java devs don't
have). OSGi is not as successful as it "should" (SpringSource gave up)
because it makes the "stain" stink really badly. OSGi introduces
constraints and fails spectacular when we try to break or circumvent them.

River as it currently stands has "solved" Java code mobility, Java leases,
dynamic service registry with query capabilities and much more. But as with
a lot of good technology, the masses don't buy it. The ignorant masses are
now in Peter Deutsch's Fallacies of Distributed Computing territory,
thinking that microservices on JAX-RS is going to save the day (it isn't, I
am rescuing a project out of it now).
Distributed OSGi tried to solve this problem, and AFAICT has serious
problems to work reliably in production environments. What do I learn? This
is hard, but every 5 years we double in numbers, so half the developer
population is inexperienced and repeat the same mistakes again and again.

Sorry for highlighting problems, mostly psychological/philosophical rather
than technological. I don't have the answers, other than; Without
Constraints Technology Fails. And the better the constraints are defined,
the better likelihood that it can succeed.




On Sat, Feb 4, 2017 at 8:59 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


Comments below.

Niclas Hedhman wrote:

see below

On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  wrote:

Once you transfer the code with your data - the issue of code version

synchronization disappears, doesn't it?

It also makes the wire data format irrelevant. At least for "short lived

serialized states".

Only works if you have no exchange with the environment it is executing.
And this is where "sandboxing" concern kicks in. What is the sandbox? In a
web browser they try to define it to DOM + handful of other well-defined
objects. In case of Java Serialization, it is all classes reachable from
the loading classloader. And I think Gregg is trying to argue that if one
is very prudent, one needs to manage this well.


But how is "exchange with the environment it is executing"
actually different when installing code on demand from installing it in
advance???

The whole point IMHO is to shift thinking from "moving data" to "exchange
configured software" -
think Java specific Docker on steroids.

Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
Don't know about other serialization uses but my issue with it is that 
it precludes using it in non-blocking IO.

Sorry if I haven't been clear enough.

Thanks,
Michal

Niclas Hedhman wrote:

And what I/O (network I/O I presume) are you doing during the serialization
(without RMI)?

On Sun, Feb 5, 2017 at 12:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


It is not possible to do non-blocking as in "non blocking IO" - meaning -
threads do not block on IO operations.
Just google "C10K problem"

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What do readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. 
o.)"<michal.klec...@xpro.biz>  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder i didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter<j...@zeus.net.au>  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.


Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

For those not following my emails on this list :) :

My "codebase annotations" are actually objects of (sub)classes of:

abstract class CodeBase implements Serializable {
...
  abstract Class loadClass(String name, ClassLoader defaultLoader) 
throws IOException, ClassNotFoundException;

...
}

The interface is actually different for several reasons but the idea is 
the same.


So my AnnotatedInputStream is something like:

class AnnotatedInputStream extends ObjectInputStream {

  protected Class resolveClass(...) {
    return ((CodeBase) readObject()).loadClass(...);
  }

}
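To make the idea concrete, here is a hypothetical CodeBase-style class that resolves names from a list of URLs (the name URLCodeBase, the exact signature, and the first-try-the-default-loader policy are my assumptions for illustration - Michal notes his actual interface differs):

```java
import java.io.Serializable;
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical codebase object carried inside the stream: it knows how
// to load the classes of the object graph that follows it.
public class URLCodeBase implements Serializable {
    private static final long serialVersionUID = 1L;
    private final URL[] urls;

    public URLCodeBase(URL... urls) { this.urls = urls; }

    public Class<?> loadClass(String name, ClassLoader defaultLoader)
            throws ClassNotFoundException {
        try {
            // Prefer the client's own loader for shared API classes...
            return Class.forName(name, false, defaultLoader);
        } catch (ClassNotFoundException e) {
            // ...and fall back to a loader built from the codebase URLs
            // for implementation classes the client does not have.
            return Class.forName(name, false,
                    new URLClassLoader(urls, defaultLoader));
        }
    }

    public static void main(String[] args) throws Exception {
        URLCodeBase cb = new URLCodeBase(); // no extra URLs needed here
        Class<?> c = cb.loadClass("java.lang.String",
                URLCodeBase.class.getClassLoader());
        System.out.println(c.getName()); // prints java.lang.String
    }
}
```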

Simply speaking I allow the _service_ to provide an object that can 
download the code.


Peter proposed to provide serialized CodeBase instances as Base64 
encoded strings (or something similar) - to maintain the assumption that 
codebase annotation is String.
But I do not see it as important at this moment - if needed - might be 
implemented.


Thanks,
Michal

Gregg Wonderly wrote:

Okay, then I think you should investigate my replacement of the 
RMIClassLoaderSPI implementation with a pluggable mechanism.

public interface CodebaseClassAccess {
    public Class loadClass(String codebase,
                           String name) throws IOException,
                                               ClassNotFoundException;
    public Class loadClass(String codebase,
                           String name,
                           ClassLoader defaultLoader) throws IOException,
                                                             ClassNotFoundException;
    public Class loadProxyClass(String codebase,
                                String[] interfaceNames,
                                ClassLoader defaultLoader) throws IOException,
                                                                  ClassNotFoundException;
    public String getClassAnnotation(Class cls);
    public ClassLoader getClassLoader(String codebase) throws IOException;
    public ClassLoader createClassLoader(URL[] urls,
                                         ClassLoader parent,
                                         boolean requireDlPerm,
                                         AccessControlContext ctx);
    /**
     * This should return the class loader that represents the system
     * environment.  This might often be the same as
     * {@link #getSystemContextClassLoader()} but may not be in certain
     * circumstances where container mechanisms isolate certain parts of
     * the classpath between various contexts.
     */
    public ClassLoader getParentContextClassLoader();
    /**
     * This should return the class loader that represents the local system
     * environment that is associated with never-preferred classes.
     */
    public ClassLoader getSystemContextClassLoader(ClassLoader defaultLoader);
}
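A minimal sketch of how the two core methods of such an interface might delegate to the stock java.rmi.server.RMIClassLoader (my guess at what a baseline implementation would do - not the actual River code, whose name Gregg says Peter changed):

```java
import java.io.IOException;
import java.rmi.server.RMIClassLoader;

// Sketch: a default CodebaseClassAccess-style implementation that simply
// delegates to RMIClassLoader, which is what plain Jini behavior amounts to.
public class DefaultCodebaseAccess {
    public Class<?> loadClass(String codebase, String name,
                              ClassLoader defaultLoader)
            throws IOException, ClassNotFoundException {
        return RMIClassLoader.loadClass(codebase, name, defaultLoader);
    }

    public String getClassAnnotation(Class<?> cls) {
        return RMIClassLoader.getClassAnnotation(cls);
    }

    public static void main(String[] args) throws Exception {
        DefaultCodebaseAccess access = new DefaultCodebaseAccess();
        // With a null codebase annotation this falls back to the
        // default loader, as RMIClassLoader specifies.
        Class<?> c = access.loadClass(null, "java.lang.String",
                DefaultCodebaseAccess.class.getClassLoader());
        System.out.println(c.getName()); // prints java.lang.String
    }
}
```

A pluggable replacement would substitute a different resolution strategy (maven coordinates, OSGi, or Michal's codebase objects) behind the same methods.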

I have forgotten what Peter renamed it to.  But this base interface is what all 
of the Jini codebase uses to load classes.  The annotation is in the “codebase” 
parameter.  From this you can explore how the annotation can move from being a 
URL, which you could recognize and still use, but substitute your own indicator 
for another platform such as a maven or OSGi targeted codebase.

Thus, you can still use the annotation, but use it to specify the type of 
stream instead of what to download via HTTP.

Gregg



On Feb 4, 2017, at 2:02 AM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  wrote:

My annotated streams replace codebase resolution with object based one (ie - 
not using RMIClassLoader).

Michal

Gregg Wonderly wrote:

What specific things do you want your AnnotatedStream to provide?

Gregg









Re: Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
It is not possible to do non-blocking as in "non blocking IO" - meaning 
- threads do not block on IO operations.

Just google "C10K problem"

Thanks,
Michal

Niclas Hedhman wrote:

I don't follow. What do readObject/writeObject have to do with blocking or
not? You could spin up executors to do the work in parallel if you so wish.
And why is "something else" less blocking? And what are you doing that is
"blocking" since the "work" is (or should be) CPU only, there is limited
(still) opportunity to do that non-blocking (whatever that would mean in
CPU-only circumstance). Feel free to elaborate... I am curious.



On Sat, Feb 4, 2017 at 8:38 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


Unfortunately due to "writeObject" and "readObject" methods that have to
be handled (to comply with the spec) - it is not possible to
serialize/deserialize in a non-blocking fashion.
So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder i didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter<j...@zeus.net.au>  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.

I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.












Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)

Comments below.

Niclas Hedhman wrote:

see below

On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:

Once you transfer the code with your data - the issue of code version

synchronization disappears, doesn't it?

It also makes the wire data format irrelevant. At least for "short lived

serialized states".

Only works if you have no exchange with the environment it is executing.
And this is where "sandboxing" concern kicks in. What is the sandbox? In a
web browser they try to define it to DOM + handful of other well-defined
objects. In case of Java Serialization, it is all classes reachable from
the loading classloader. And I think Gregg is trying to argue that if one
is very prudent, one needs to manage this well.


But how is "exchange with the environment it is executing"
actually different when installing code on demand from installing it in 
advance???


The whole point IMHO is to shift thinking from "moving data" to 
"exchange configured software" -

think Java specific Docker on steroids.

Transferable objects allow you for example to do things like
downloading your JDBC driver automagically without the fuss of 
installing it and managing upgrades.

Just publish a DataSource object in your ServiceRegistrar and you are done.
Make it leasable and you have automatic upgrades and/or reconfiguration.

Thanks,
Michal


Serialization issues

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
Unfortunately due to "writeObject" and "readObject" methods that have to 
be handled (to comply with the spec) - it is not possible to 
serialize/deserialize in a non-blocking fashion.

So yes... - it is serialization per se.

Thanks,
Michal

Niclas Hedhman wrote:

Oh, well that is not "Serialization" per se... No wonder i didn't get it.

On Sat, Feb 4, 2017 at 7:20 PM, Peter  wrote:


On 4/02/2017 9:09 PM, Niclas Hedhman wrote:


but rather with the APIs - it is inherently blocking by design.
I am not sure I understand what you mean by that.



He means the client thread that makes the remote call blocks waiting for
the remote end to process the request and respond.

Cheers,

Peter.









Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
Once you transfer the code with your data - the issue of code version 
synchronization disappears, doesn't it?
It also makes the wire data format irrelevant. At least for "short lived 
serialized states".


I fail to understand how JSON or XML changes anything here.

In the end all of the arguments against Java Object Serialization boil 
down to:
"It is easy to use but if not used carefully it will bite you - so it is 
too easy to use"


What I do not like about Java Object Serialization has nothing to do 
with the format of persistent data

but rather with the APIs - it is inherently blocking by design.

Thanks,
Michal

Niclas Hedhman wrote:

Gregg,
I know that you can manage to "evolve" the binary format if you are
incredibly careful and do not make mistakes. BUT, that seems really hard,
since EVEN Sun/Oracle state that using Serialization for "long-lived objects"
is highly discouraged. THAT is a sign that it is not nearly as easy as you
make it sound, and it is definitely different from XML/JSON: once
the working codebase is lost (i.e. either literally lost (yes, I have been
involved in trying to restore that), or modified so much that compatibility
broke, which happens when serialization is not the primary focus of a
project) then you are pretty much screwed forever, unlike XML/JSON.

Now, you may say, that is for "long lived serialized states" but we are
dealing with "short lived" ones. However, in today's architectures and
platforms, almost no organization manages to keep all parts of a system
synchronized when it comes to versioning. Different parts of a system is
upgraded at different rates. And this is essentially the same as "long
lived objects" ---  "uh this was serialized using LibA 1.1, LibB 2.3 and
JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8", do you see the
similarity? If not, then I will not be able to convince you. If you do,
then ask "why did Sun/Oracle state that long-lived objects with Java
Serialization was a bad idea?", or were they also clueless on how to do it
right, which seems to be your actual argument.

And I think (purely speculative) that many people saw exactly this problem
quite early on, whereas myself I was at the time mostly in relatively small
confined and controlled environments, where up-to-date was managed. And
took me much longer to realize the downsides that are inherent.

Cheers
Niclas






Re: OSGi

2017-02-04 Thread Michał Kłeczek (XPro Sp. z o. o.)
My annotated streams replace codebase resolution with object based one 
(ie - not using RMIClassLoader).


Michal

Gregg Wonderly wrote:

What specific things do you want your AnnotatedStream to provide?

Gregg






Re: OSGi

2017-02-03 Thread Michał Kłeczek (XPro Sp. z o. o.)

I know that.
And while it is better than Java RMI for several reasons (extensibility 
being one of them) - it is still not perfect:


1) It is inherently blocking
2) Does not support data streaming (in general you need a separate comm 
channel for this)
3) the invocation layer depends on a particular object serialization 
implementation - Marshal input/output streams (this is my favorite - to 
plug in my new AnnotatedStream implementations I must basically rewrite 
the invocation layer)


Thanks,
Michal


Peter wrote:

FYI.  JERI != Java RMI.

There's no reason these layers couldn't be provided as OSGi services 
and selected from the service registry either.


Cheers,

Peter.


   Protocol Stack

The Jini ERI architecture has a protocol stack with three layers as 
shown in the following table, with interfaces representing the 
abstractions of each layer on the client side and the server side as 
shown:


Layer Client-side abstractions Server-side abstractions
Invocation layer |InvocationHandler| 
 
|InvocationDispatcher|

Object identification layer |ObjectEndpoint| |RequestDispatcher|
Transport layer |Endpoint|, |OutboundRequestIterator|, 
|OutboundRequest| |ServerCapabilities|, |ServerEndpoint|, 
|InboundRequest|


The client-side and server-side implementations of each layer are 
chosen for a particular remote object as part of exporting the remote 
object. The design is intended to allow plugging in different 
implementations of one layer without affecting the implementations of 
the other layers.


The client side abstractions correspond to the structure of the 
client-side proxy for a remote object exported with Jini ERI, with the 
invocation layer implementation containing the object identification 
layer implementation and that, in turn, containing the transport layer 
implementation.


Which invocation constraints are supported for remote invocations to a 
particular remote object exported with Jini ERI is partially dependent 
on the particular implementations of these layers used for the remote 
object (most especially the transport layer implementation).




On 4/02/2017 3:51 PM, Peter wrote:

Thanks Nic,

JERI shouldn't be considered as being limited to or dependant on Java 
Serialization, it's only a transport layer, anything that can write 
to an OutputStream and read from an InputStream will do.


The JSON document could be compressed and sent as bytes, or UTF 
strings sent as bytes.


See the interfaces InboundRequest and OutboundRequest.

Cheers,

Peter.

On 4/02/2017 3:35 PM, Niclas Hedhman wrote:
FYI in case you didn't know; Jackson ObjectMapper takes a POJO 
structure
and creates a (for instance) JSON document, or the other way around. 
It is

not meant for "any object to binary and back".
My point was, Java Serialization (and by extension JERI) has a scope 
that
is possibly wrongly defined in the first place. More constraints 
back then

might have been a good thing...



On Sat, Feb 4, 2017 at 12:36 PM, Peter  wrote:


On 4/02/2017 12:43 PM, Niclas Hedhman wrote:


On Fri, Feb 3, 2017 at 12:23 PM, Peter   wrote:

No serialization or Remote method invocation framework currently 
supports
OSGi very well, one that works well and can provide security 
might gain a

lot of new interest from that user base.

What do you mean by this? Jackson's ObjectMapper doesn't have 
problems on

OSGi. You are formulating the problem wrongly, and if formulated
correctly,
perhaps one realizes why Java Serialization fell out of fashion 
rather
quickly 10-12 years ago, when people realized that code mobility 
(as done

in Java serialization/RMI) caused a lot of problems.


Hmm, I didn't know that, sounds like an option for JERI.


IMHO, RMI/Serialization's design is flawed. Mixing too many 
concerns in the

same abstraction; sandboxing w/ integration , code mobility, class
resolution, versioning and deserialization, with very little hooks to
cusomize any or all of these aspects. And these aspects should not 
have

been wrapped into one monolith.

Further, I think the only "sane" approach in an OSGi environment is to
create a new bundle for the Remote environment: all codebases not part of
the API go into that bundle, and the API is required to be present in the
OSGi environment a priori. I.e. treat the Remote objects in OSGi as they
are treated in plain Java; one classloader, one chunk, sort out its own
serialization woes. Likewise for the server; treat it as ordinary RMI,
without any mumbo-jumbo OSGi stuff to be figured out at a non-OSGi-running
JVM. An important difference is that in OSGi, the BundleClassLoader is not
(required to be) a URLClassLoader, so Java serialization's auto annotation
of globally reachable URLs won't work, and one needs to rely on the
java.rmi.server.codebase property, but a bundle could watch for loaded


Re: OSGi

2017-02-03 Thread Michał Kłeczek (XPro Sp. z o. o.)
Are you opposing the whole idea of sending data and code (or 
instructions how to download it) bundled together? (the spec)

Or just the way how it is done in Java today. (the impl)

If it is the first - we are in an absolute disagreement.
If the second - I agree wholeheartedly.

Thanks,
Michal

Niclas Hedhman wrote:

FYI in case you didn't know; Jackson ObjectMapper takes a POJO structure
and creates a (for instance) JSON document, or the other way around. It is
not meant for "any object to binary and back".
My point was, Java Serialization (and by extension JERI) has a scope that
is possibly wrongly defined in the first place. More constraints back then
might have been a good thing...






Re: object based annotations

2017-02-02 Thread Michał Kłeczek (XPro Sp. z o. o.)
Object based annotations allow creating ClassLoaders - they are not only a 
way to download code but also a way to:

1. Make sure only trusted code is executed
2. Resolve any dependencies and create the whole ClassLoader structure 
when deserializing objects


So URL handler is not enough - alternative implementation of 
RMIClassProviderSpi is necessary.


But anyway - if there is a need to encode annotations as Strings - I do 
not see anything wrong with that except that I personally do not see it 
necessary.


Thanks,
Michal

Peter wrote:

Bitwise operations shouldn't have much performance impact.
  
You could also create a url scheme and handler.


When a ClassLoader retrieves a URL, it doesn't care how it's done, just that it 
retrieves the class code.

eg:

obj:[length][bytes]

Then install your URL handler in your client jvms.
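As a rough sketch of this `obj:` scheme (the scheme name is Peter's; carrying the payload Base64-encoded in the URL, and the handler wiring below, are my assumptions to keep the demo printable - a real implementation would decode serialized codebase bytes):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.util.Base64;

// A custom "obj:" URL whose content is embedded in the URL itself, so a
// ClassLoader retrieving it never touches the network (and never does DNS).
public class ObjHandler extends URLStreamHandler {
    @Override
    protected URLConnection openConnection(URL u) {
        return new URLConnection(u) {
            @Override public void connect() { /* nothing to connect to */ }
            @Override public InputStream getInputStream() {
                // Everything after "obj:" is the Base64-encoded payload.
                String encoded = url.toExternalForm()
                        .substring("obj:".length());
                return new ByteArrayInputStream(
                        Base64.getDecoder().decode(encoded));
            }
        };
    }

    public static void main(String[] args) throws Exception {
        String payload = Base64.getEncoder()
                .encodeToString("hello".getBytes("UTF-8"));
        // Passing the handler explicitly; a JVM-wide install would use a
        // URLStreamHandlerFactory instead.
        URL url = new URL(null, "obj:" + payload, new ObjHandler());
        try (InputStream in = url.openStream()) {
            System.out.println(new String(in.readAllBytes(), "UTF-8"));
        }
    }
}
```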

The community agreed to break compatibility with the com.sun namespace change, 
so it can happen, although admittedly consensus is often difficult. If someone 
believes a breaking change is necessary, state the reasons and impacts and request 
a vote.

I've found in the majority of cases with some additional thought and work that 
the api is extensible, so it may not be necessary to cause breakage.

The performance improvement you'll get with your own URL scheme will vastly 
outstrip standard url schemes by avoiding dns calls, which will exceed the cost 
of encoding by orders of magnitude.

Jini/River 2.x.x has a significant number of unnecessary dns calls. 


But it sounds like you may have found an alternative option.

Regards,

Peter.


Sent from my Samsung device.
  
   Include original message

 Original message 
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 02/02/2017 06:29:55 am
To: dev@river.apache.org
Subject: Re: object based annotations

I have actually given up on the idea of object annotations encoded as 
Strings (in whatever form).

Simply speaking it does not make any sense really:
- it would complicate the solution because of additional encoding and 
decoding logic

- it would influence performance because of additional encoding and decoding
- it would complicate maintaining codebase objects identity (since 
encoded annotation would be a separate stream)


- it would be incompatible with existing clients (if there are any :) ) 
anyway - all expect the annotation to be a space separated list of URLs


I have started working on a compatibility layer that would allow 
existing clients to download the class loading infrastructure from a 
codebase URL magically (using existing RMI infrastructure).
TBH - I do not see any benefit in maintaining backwards compatibility - 
Jini/River is out of favor nowadays and existing software needs upgrade 
anyway because of security and concurrency fixes.


Thanks,
Michal

Peter Firmstone wrote:

  Mike, I recall the last time I looked at object based annotations, there was 
a backward compatibility issue because both ends of the Marshal streams expect 
string based annotations as does RMIClassLoader.

  However if you are still keen to investigate object based annotations there's 
no reason you couldn't treat a string like a char array = byte array (beware 
signed byte) and have a RMIClassLoaderSPI deserialize the objects after they 
were sent in string form?

  Regards,

  Peter.

  Sent from my Samsung device.














Re: object based annotations

2017-02-01 Thread Michał Kłeczek (XPro Sp. z o. o.)
I have actually given up on the idea of object annotations encoded as 
Strings (in whatever form).

Simply speaking it does not make any sense really:
- it would complicate the solution because of additional encoding and 
decoding logic

- it would influence performance because of additional encoding and decoding
- it would complicate maintaining codebase objects identity (since 
encoded annotation would be a separate stream)


- it would be incompatible with existing clients (if there are any :) ) 
anyway - all expect the annotation to be a space separated list of URLs


I have started working on a compatibility layer that would allow 
existing clients to download the class loading infrastructure from a 
codebase URL magically (using existing RMI infrastructure).
TBH - I do not see any benefit in maintaining backwards compatibility - 
Jini/River is out of favor nowadays and existing software needs upgrade 
anyway because of security and concurrency fixes.


Thanks,
Michal

Peter Firmstone wrote:

Mike, I recall the last time I looked at object based annotations, there was a 
backward compatibility issue because both ends of the Marshal streams expect 
string based annotations as does RMIClassLoader.

However if you are still keen to investigate object based annotations there's 
no reason you couldn't treat a string like a char array = byte array (beware 
signed byte) and have a RMIClassLoaderSPI deserialize the objects after they 
were sent in string form?

Regards,

Peter.

Sent from my Samsung device.







Re: OSGi

2017-01-31 Thread Michał Kłeczek (XPro Sp. z o. o.)

Rant aside...

This is what I am saying all along... Bundles are not good candidates 
for codebase annotations.
For exactly the reason you describe: bundles represent a template that 
may produce different wirings.


But to recreate an object graph you need the _wiring_ - not the template.

And this is also why any kind of static (ie. at bundle build time) 
locking of the dependencies is not going to work either.
The ImplementationDetailServiceProxy runtime dependencies are not yet 
known when creating the bundle manifest
since its interface to be useful must be resolved in each client 
environment differently (or all parties in the distributed system
must share the same versions of the code - which makes the whole point 
of OSGI moot)


The only way I can see around this is not to treat dynamically 
downloaded jar files as bundles,
but rather to create fake bundles from them and generate artificial 
manifests based on information in the stream.

Kind of an ugly hack really...
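As a rough illustration of that hack (all names and values below are made up), the receiving end could fabricate a bundle manifest at runtime from wiring information carried in the stream, pinning exact versions so the local container resolves the way the sender did:

```java
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Hedged sketch of the "fake bundle" hack: fabricate an OSGi manifest at
// runtime from wiring information carried in the stream, instead of trusting
// a statically prepared MANIFEST.MF. All names/values are hypothetical.
public class SyntheticManifest {

    public static Manifest build(String symbolicName, String version, String imports) {
        Manifest mf = new Manifest();
        Attributes attrs = mf.getMainAttributes();
        attrs.putValue("Manifest-Version", "1.0");
        attrs.putValue("Bundle-ManifestVersion", "2");
        attrs.putValue("Bundle-SymbolicName", symbolicName);
        attrs.putValue("Bundle-Version", version);
        // Pin the exact versions taken from the sender's wiring (not ranges),
        // so the receiving container resolves the same way the sender did.
        attrs.putValue("Import-Package", imports);
        return mf;
    }
}
```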

The situation might change if OSGi defined a way to provide the bundle 
manifest not as a statically prepared MANIFEST.MF
inside the jar file, but as a separate piece of information provided 
dynamically at runtime.


A little rant now...

This container-centric view is IMHO quite restricting.
There is no reason why resolution of code dependencies (aka preparing 
the wiring)
must be done by the particular container where the client software is 
executed.
It might as well be done somewhere else (ie. in the service provider 
environment)
and sent to the client already prepared for execution.

OSGI way is not the only way.

I do not see any particular problem with downloading data together with 
the code that manipulates this data.

I do it very often, viewing tens (if not more) of web sites daily.
I haven't found any major issues with that - quite the opposite - it seems 
like this is a pretty damn good way of doing things.


End of rant...

Thanks,
Michal

Niclas Hedhman wrote:

As I think you know, the whole purpose of OSGi is to NOT tie the resolution
to Bundles, but to explicitly Exported (with versions) packages. If people
screw up and don't declare the Import/Export package meta data correctly,
then ordinary OSGi may fail just as spectacularly. The difference being
that Java serialization is slightly more sensitive to class changes than
the code itself, and that was an issue from the start. "Back then" it was
quickly realized that "long-term" storage of serialized state was a bad
idea, but "long-term" is relative to code releases and with highly
distributed systems, this is now a reality in networked applications as
well as storing serialized objects to disk. With that in mind, one should
seriously reconsider the status of Java Serialization in one's application,
realize that every serialized class, including possible exceptions, is a
published API and needs to be treated as such, just as a REST API, SOAP
interface or HTTP itself. Sorry for the rant...

Back to OSGi; even if you really need to tie a class to a particular
bundle, you can do that with attributes and the 'mandatory' directive.
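For reference, the directive Niclas mentions looks roughly like this in bundle manifests (package and attribute names invented for illustration): the exporter marks the `vendor` attribute as mandatory, so only importers that explicitly specify it are wired to this bundle.

```
Export-Package: com.example.api;version="1.2.0";vendor="acme";mandatory:="vendor"
Import-Package: com.example.api;vendor="acme"
```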

Cheers
Niclas

On Wed, Feb 1, 2017 at 3:29 AM, Michał Kłeczek
wrote:


Unfortunately it is




Re: OSGi

2017-01-31 Thread Michał Kłeczek (XPro Sp. z o. o.)

I meant "of course it is NOT too intelligent". Freudian mistake :D

Michał Kłeczek (XPro Sp. z o. o.) wrote:

Of course it is too intelligent.

What I am saying is that it is at service provider's discretion to 
decide how to load its own proxy classes.
If a service decides that the full container is necessary and the 
client does not have it - well...


On the other hand. How do you manage upgrades of the OSGI container 
due to - let's say - a security issue in current implementation?


Thanks,
Michal

Niclas Hedhman wrote:

It doesn't sound very intelligent to download an OSGi Container to a
client. It surely is something wrong with that... Proxy should depend on
the deployed services, locally some more... What am I missing, other than
you are trying to convey an absurdity?

On Tue, Jan 31, 2017 at 4:10 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


My point throughout the whole thread is that to support these scenarios:

1. Manipulating class streams (like in Voyager) is not necessary (quite
frankly - I think it is actually a bad idea since it assumes a single
namespace for classes, which precludes class evolution)
2. Dictating a particular "class conveyance mechanism" is not necessary
either

What I am proposing is:
1. Abstract over a "class conveyance mechanism" (by making codebases
serializable objects which classes implement a specific contract)
2. Change ClassProvider API to support the above (accept abstract
codebases instead of only Strings)
3. (Optionally) - provide a default class conveyance mechanism that:
a) allows resolving classes in non-hierarchical way (similar to
ClassWorlds or JBossModules or... OSGI)
b) supports coexisting of other "class conveyance mechanisms" in the same
JVM

Point 1 and 3b) will make the whole solution really dynamic allowing a
"class conveyance mechanism" to be dynamically downloaded by the client.
So - how do you make sure a service deployed in OSGi container may send
its proxy to a non-OSGI client? Yes! You let the client download the OSGI
container dynamically!

What's more - once you abstract over how the classes are downloaded - it
is possible to support downloading code through relays etc.

Thanks,
Michal

Gregg Wonderly wrote:


The annotation for the exported services/classes is what is at issue
here.  Here’s the perspectives I’m trying to make sure everyone sees.

1) Somehow, exported classes from one JVM need to be resolved in another
JVM (at a minimum).  The source of those classes today, is the codebase
specified by the service.  A directed graph of JVMs exchanging classes
demands that all service like JVMs provide a codebase for client like JVMs
to be able to resolve the classes for objects traveling to the client from
the service.  This is nothing we all don’t already know I believe.

2) If there is a 3rd party user of a class from one JVM which is handed
objects resolved by a middle man JVM (as Michal is mentioning here), there
is now a generally required class which all 3 JVMs need to be able to
resolve.  As we know, Jini’s current implementation and basic design is
that a services codebase has to provide a way for clients to resolve the
classes it exports in its service implementation.  In the case Michal is
mentioning, the demand would be for the middle man service to have the
classes that it wants the 3rd service to resolve, in some part of its
codebase.  This is why I mentioned Objectspace Voyager earlier.  I wanted to
use it as an example of a mechanism which always packages class definitions
into the byte stream that is used for sending objects between VMs.  Voyager
would extract the class definitions from the jars, wrap them into the
stream, and the remote JVM would be able to then resolve the classes by
constructing instances of the class using the byte[] data for the class
definition.
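The Voyager-style mechanism Gregg describes boils down to defining classes directly from bytes carried in the stream. A minimal, hypothetical sketch (no security checks or caching):

```java
import java.util.Map;

// Hypothetical sketch: class bytes travel with the object stream and are
// defined directly on the receiving side, so no codebase URL is needed.
public class ByteArrayClassLoader extends ClassLoader {
    private final Map<String, byte[]> classBytes;

    public ByteArrayClassLoader(Map<String, byte[]> classBytes, ClassLoader parent) {
        super(parent);
        this.classBytes = classBytes;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] bytes = classBytes.get(name);
        if (bytes == null) {
            throw new ClassNotFoundException(name);
        }
        // Turn the transported byte[] into a usable Class in this JVM.
        return defineClass(name, bytes, 0, bytes.length);
    }
}
```

The cost, as Michał notes elsewhere in the thread, is a single flat namespace per loader, which makes class evolution awkward.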

Ultimately, no matter what the source of the byte[] data for the class
definition is, it has to be present, at some point in all VMs using that
definition/version of the class.  That’s what I am trying to say.  The
issue is simply where would the class resolve from?  I think that class
definition conveyance, between JVMs is something that we have choices on.
But, practically, you can’t change “annotations” to make this work.  If the
middle man above is a “proxy” service which bridges two different networks,
neither JVM on each network would have routing to get to the one on the
other side of the proxy JVM.  This is why a mechanism like Objectspace
Voyager would be one way to send class definitions defined on one network
to another JVM on another network via this proxy service.

Of course other mechanisms for class conveyance are possible and in fact
already exist.  Maven and even OSGi provide class, version oriented
conveyance from a distribution point, into a particular JVM instance.  Once
the class definition exists inside of one of those JVMs then we have all the 
other details about TCCL and creation of proper versions and resolution from 
proper class loaders.

Re: OSGi

2017-01-31 Thread Michał Kłeczek (XPro Sp. z o. o.)

Of course it is too intelligent.

What I am saying is that it is at service provider's discretion to 
decide how to load its own proxy classes.
If a service decides that the full container is necessary and the client 
does not have it - well...


On the other hand. How do you manage upgrades of the OSGI container due 
to - let's say - a security issue in current implementation?


Thanks,
Michal

Niclas Hedhman wrote:

It doesn't sound very intelligent to download an OSGi Container to a
client. It surely is something wrong with that... Proxy should depend on
the deployed services, locally some more... What am I missing, other than
you are trying to convey an absurdity?

On Tue, Jan 31, 2017 at 4:10 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


My point throughout the whole thread is that to support these scenarios:

1. Manipulating class streams (like in Voyager) is not necessary (quite
frankly - I think it is actually a bad idea since it assumes a single
namespace for classes, which precludes class evolution)
2. Dictating a particular "class conveyance mechanism" is not necessary
either

What I am proposing is:
1. Abstract over a "class conveyance mechanism" (by making codebases
serializable objects which classes implement a specific contract)
2. Change ClassProvider API to support the above (accept abstract
codebases instead of only Strings)
3. (Optionally) - provide a default class conveyance mechanism that:
a) allows resolving classes in non-hierarchical way (similar to
ClassWorlds or JBossModules or... OSGI)
b) supports coexisting of other "class conveyance mechanisms" in the same
JVM

Point 1 and 3b) will make the whole solution really dynamic allowing a
"class conveyance mechanism" to be dynamically downloaded by the client.
So - how do you make sure a service deployed in OSGi container may send
its proxy to a non-OSGI client? Yes! You let the client download the OSGI
container dynamically!

What's more - once you abstract over how the classes are downloaded - it
is possible to support downloading code through relays etc.

Thanks,
Michal

Gregg Wonderly wrote:


The annotation for the exported services/classes is what is at issue
here.  Here’s the perspectives I’m trying to make sure everyone sees.

1) Somehow, exported classes from one JVM need to be resolved in another
JVM (at a minimum).  The source of those classes today, is the codebase
specified by the service.  A directed graph of JVMs exchanging classes
demands that all service like JVMs provide a codebase for client like JVMs
to be able to resolve the classes for objects traveling to the client from
the service.  This is nothing we all don’t already know I believe.

2) If there is a 3rd party user of a class from one JVM which is handed
objects resolved by a middle man JVM (as Michal is mentioning here), there
is now a generally required class which all 3 JVMs need to be able to
resolve.  As we know, Jini’s current implementation and basic design is
that a services codebase has to provide a way for clients to resolve the
classes it exports in its service implementation.  In the case Michal is
mentioning, the demand would be for the middle man service to have the
classes that it wants the 3rd service to resolve, in some part of its
codebase.  This is why I mentioned Objectspace Voyager earlier.  I wanted to
use it as an example of a mechanism which always packages class definitions
into the byte stream that is used for sending objects between VMs.  Voyager
would extract the class definitions from the jars, wrap them into the
stream, and the remote JVM would be able to then resolve the classes by
constructing instances of the class using the byte[] data for the class
definition.

Ultimately, no matter what the source of the byte[] data for the class
definition is, it has to be present, at some point in all VMs using that
definition/version of the class.  That’s what I am trying to say.  The
issue is simply where would the class resolve from?  I think that class
definition conveyance, between JVMs is something that we have choices on.
But, practically, you can’t change “annotations” to make this work.  If the
middle man above is a “proxy” service which bridges two different networks,
neither JVM on each network would have routing to get to the one on the
other side of the proxy JVM.  This is why a mechanism like Objectspace
Voyager would be one way to send class definitions defined on one network
to another JVM on another network via this proxy service.

Of course other mechanisms for class conveyance are possible and in fact
already exist.  Maven and even OSGi provide class, version oriented
conveyance from a distribution point, into a particular JVM instance.  Once
the class definition exists inside of one of those JVMs then we have all
the other details about TCCL and creation of proper versions and resolution
from proper class loaders.

I don’t think we have to dictate that a particular class conveyance mechanism 
is the only one.

Re: OSGi

2017-01-31 Thread Michał Kłeczek (XPro Sp. z o. o.)

My point throughout the whole thread is that to support these scenarios:

1. Manipulating class streams (like in Voyager) is not necessary (quite 
frankly - I think it is actually a bad idea since it assumes a single 
namespace for classes, which precludes class evolution)
2. Dictating a particular "class conveyance mechanism" is not necessary 
either


What I am proposing is:
1. Abstract over a "class conveyance mechanism" (by making codebases 
serializable objects which classes implement a specific contract)
2. Change ClassProvider API to support the above (accept abstract 
codebases instead of only Strings)

3. (Optionally) - provide a default class conveyance mechanism that:
a) allows resolving classes in non-hierarchical way (similar to 
ClassWorlds or JBossModules or... OSGI)
b) supports coexisting of other "class conveyance mechanisms" in the 
same JVM


Point 1 and 3b) will make the whole solution really dynamic allowing a 
"class conveyance mechanism" to be dynamically downloaded by the client.
So - how do you make sure a service deployed in OSGi container may send 
its proxy to a non-OSGI client? Yes! You let the client download the 
OSGI container dynamically!


What's more - once you abstract over how the classes are downloaded - it 
is possible to support downloading code through relays etc.
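Points 1 and 2 of the proposal could look roughly like this (a sketch under my own naming assumptions - `CodeBase` and `UrlCodeBase` are illustrative, not a proposed final API): the codebase travels as a serializable object that knows how to materialize a ClassLoader on the receiving side, and a plain-URL implementation reproduces what today's string annotation expresses.

```java
import java.io.Serializable;
import java.net.URL;
import java.net.URLClassLoader;

// Sketch: a codebase is a serializable object implementing a contract,
// not a space-separated list of URLs.
public interface CodeBase extends Serializable {
    // Create (or look up) a ClassLoader able to serve this codebase's classes.
    ClassLoader createLoader(ClassLoader parent);
}

// One possible "class conveyance mechanism": plain URLs, equivalent to the
// current annotation format. Other implementations could download an entire
// container, fetch code through relays, etc.
class UrlCodeBase implements CodeBase {
    private final URL[] urls;

    UrlCodeBase(URL... urls) {
        this.urls = urls;
    }

    public ClassLoader createLoader(ClassLoader parent) {
        return new URLClassLoader(urls, parent);
    }
}
```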


Thanks,
Michal

Gregg Wonderly wrote:

The annotation for the exported services/classes is what is at issue here.  
Here’s the perspectives I’m trying to make sure everyone sees.

1) Somehow, exported classes from one JVM need to be resolved in another JVM 
(at a minimum).  The source of those classes today, is the codebase specified 
by the service.  A directed graph of JVMs exchanging classes demands that all 
service like JVMs provide a codebase for client like JVMs to be able to resolve 
the classes for objects traveling to the client from the service.  This is 
nothing we all don’t already know I believe.

2) If there is a 3rd party user of a class from one JVM which is handed objects 
resolved by a middle man JVM (as Michal is mentioning here), there is now a 
generally required class which all 3 JVMs need to be able to resolve.  As we 
know, Jini’s current implementation and basic design is that a services 
codebase has to provide a way for clients to resolve the classes it exports in 
its service implementation.  In the case Michal is mentioning, the demand would 
be for the middle man service to have the classes that it wants the 3rd service 
to resolve, in some part of its codebase.  This is why I mentioned Objectspace 
Voyager earlier.  I wanted to use it as an example of a mechanism which always 
packages class definitions into the byte stream that is used for sending 
objects between VMs.  Voyager would extract the class definitions from the 
jars, wrap them into the stream, and the remote JVM would be able to then 
resolve the classes by constructing instances of the class using the byte[] 
data for the class definition.

Ultimately, no matter what the source of the byte[] data for the class 
definition is, it has to be present, at some point in all VMs using that 
definition/version of the class.  That’s what I am trying to say.  The issue is 
simply where would the class resolve from?  I think that class definition 
conveyance, between JVMs is something that we have choices on.  But, 
practically, you can’t change “annotations” to make this work.  If the middle 
man above is a “proxy” service which bridges two different networks, neither 
JVM on each network would have routing to get to the one on the other side of 
the proxy JVM.  This is why a mechanism like Objectspace Voyager would be one 
way to send class definitions defined on one network to another JVM on another 
network via this proxy service.

Of course other mechanisms for class conveyance are possible and in fact 
already exist.  Maven and even OSGi provide class, version oriented conveyance 
from a distribution point, into a particular JVM instance.  Once the class 
definition exists inside of one of those JVMs then we have all the other 
details about TCCL and creation of proper versions and resolution from proper 
class loaders.

I don’t think we have to dictate that a particular class conveyance mechanism 
is the only one.  But, to solve the problem of how to allow classes hop between 
multiple JVMs, we have to  specify how that might work at the level that 
service instances are resolved and some kind of class loading context is 
attached to that service.

The reason I am talking specifically about directed graphs of class loading is 
because I am first focused on the fact that there is a lot less flexibility in 
trying to resolve through a large collection of specific classes rather than an 
open set of classes resolved through a directed graph of the code execution 
path which exposes the places and moments of object use in a much more 
controlled and natural way to me.

Gregg




Re: OSGi

2017-01-30 Thread Michał Kłeczek (XPro Sp. z o. o.)

Let me once again provide a simple example:

interface ForClient {
}

interface ImplementationDetail {
}

class ServiceProxy implements ForClient, java.io.Serializable {
  private final ImplementationDetail implementationDetail;

  ServiceProxy(ImplementationDetail implementationDetail) {
    this.implementationDetail = implementationDetail;
  }
}

class ServiceBackend { //not implementing any remote interface for simplicity

  public void start() {
    ServiceRegistrar reggie = ...;
    ImplementationDetail implDetail = lookupImplIn(reggie);
    ServiceProxy myProxy = new ServiceProxy(implDetail);
    publish(myProxy, reggie);
  }

}

We have two codebases:
1) ImplementationDetailServiceCodebase
2) ServiceProxyCodebase

Now the question:
How to make it possible to deserialize ServiceProxy in client 
environment assuming:
1) ServiceBackend dynamically downloads classes of ImplementationDetail 
proxy

2) Client dynamically downloads classes of ServiceProxy
3) Client is NOT aware of the ImplementationDetail interface (well... since it 
is an implementation detail)


Thanks,
Michal


Re: OSGi

2017-01-30 Thread Michał Kłeczek (XPro Sp. z o. o.)

It looks to me like we are talking past each other.

Thread local resolution context is needed - we both agree on this.
What we do not agree on is whether the context should be a single 
ClassLoader. It has to be a set of ClassLoaders, to support situations 
where dependencies are not hierarchical.


The use case is simple - I want to implement "decorator" services that 
provide smart proxies wrapping (smart) proxies of other services.
I also want to have Exporters provided as dynamic services which would 
allow my services to adapt to changing network environment.


And I would like to stress - I am actually quite negative about OSGI 
being the right environment for this.


Thanks,
Michal

Gregg Wonderly wrote:

Maybe you can help me out here by explaining how it is that execution context 
and class visibility are both handled by OSGi bundles.  For example, one of my 
client applications is a desktop environment.  It does service look up for all 
services registrations providing a “serviceUI”.  It then integrates all of 
those services into a desktop view where the UIs are running at the same time 
with each one embedded in a JDesktopPane or a JTabbedPane or a JFrame or 
JDialog.  There are callbacks from parts of that environment into my 
application which in turn is interacting with the ServiceUI component.  You 
have AWT event threads which are calling out, into the ServiceUIs and lots of 
other threads of execution which all, ultimately, must have different class 
loading environments so that the ServiceUI components can know where to load 
code from.

It’s exactly TCCL that allows them to know that based on all the other class 
loading standards.  The ClassLoader is exactly the thing that all of them have 
in common if you include OSGi bundles as well.  The important detail, is that 
if the TCCL is not used as new ClassLoaders are created, then there is no 
context for those new ClassLoaders to reference, universally.

The important details are:

1) The desktop application has to be able to prefer certain Entry 
classes which define details that are presented to the user.
2) When the user double clicks on a services icon, or right clicks and 
selects “Open in new Frame”, an async worker thread needs a TCCL pointing at 
the correct parent class loader for the service’s URLClassLoader to reference 
so that the preferred classes work.
3) Anytime that the AWT Event thread might be active inside of the 
services UI implementation, it also needs to indicate the correct parent class 
loader if that UI component causes other class loading to occur.
4) I am speaking specifically in the context of deferred class loading 
which is controlled outside of the service discovery moment.



On Jan 30, 2017, at 4:04 AM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  wrote:

What I think Jini designers did not realize is that class loading can be 
treated exactly as any other capability provided by a (possibly remote) service.
Once you realize that - it is possible to provide a kind of a "universal container 
infrastructure" where different class loading implementations may co-exist in a 
single JVM.


That’s precisely what ClassLoader is for.  TCCL is precisely to allow “some 
class” to know what context to associate newly loaded classes with, so that in 
such an environment, any code can load classes on behalf of some other 
code/context.  It doesn’t matter if it is TCCL or some other class management 
scheme such as OSGi bundles.  We are talking about the same detail, just 
implemented in a different way.
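The TCCL pattern Gregg describes - one piece of code loading classes on behalf of another context - is in practice a save/set/restore around the delegated work. A minimal sketch (the `ContextRunner` name is my own):

```java
// Small sketch of the TCCL pattern: run a task so that any class loading it
// triggers resolves against a chosen service context.
public class ContextRunner {

    public static void runWith(ClassLoader serviceLoader, Runnable task) {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(serviceLoader);
        try {
            task.run(); // callbacks, nested deserialization, etc. see serviceLoader
        } finally {
            current.setContextClassLoader(previous); // always restore the caller's context
        }
    }
}
```

Anything the task triggers (ServiceUI callbacks, work dispatched from the AWT event thread, nested class loading) that consults the TCCL will then resolve against the chosen service loader.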


What's more - these class loading implementations may be dynamic themselves - 
ie. it is a service that provides the client with a way to load its own (proxy) 
classes.

In other words: "there is not enough Jini in Jini itself”.


I am not sure I understand where the short coming is at then.  Maybe you can 
illustrate with an example where TCCL fails to allow some piece of code to load 
classes on behalf of another piece of code?

In my desktop application environment, there is a abstract class which is used 
by each serviceUI to allow the desktop to know if it provides the ability to 
open into one of the above mentioned JComponent subclasses.  That class is 
preferred and provided and resolved using the codebase of the desktop client.  
That class loading environment is then the place where the service is finally 
resolved and classes created so that the proxy can be handed to the serviceUI 
component which ultimately only partially resolves from the services codebase.

It’s this class compatibility which needs to be lightweight.


We have _all_ the required pieces in place:
- dynamic code loading and execution (ClassLoaders),
- security model and implementation that allows restricting rights of the 
downloaded code,
- and a serialization/deserialization which allows sending arbitrary data (and 
yes - code too) over the wire.

It is just the matter of gluing the pieces together.

Re: OSGi

2017-01-30 Thread Michał Kłeczek (XPro Sp. z o. o.)
What I think Jini designers did not realize is that class loading can be 
treated exactly as any other capability provided by a (possibly remote) 
service.
Once you realize that - it is possible to provide a kind of a "universal 
container infrastructure" where different class loading implementations 
may co-exist in a single JVM.
What's more - these class loading implementations may be dynamic 
themselves - ie. it is a service that provides the client with a way to 
load its own (proxy) classes.


In other words: "there is not enough Jini in Jini itself".

We have _all_ the required pieces in place:
- dynamic code loading and execution (ClassLoaders),
- security model and implementation that allows restricting rights of 
the downloaded code,
- and a serialization/deserialization which allows sending arbitrary 
data (and yes - code too) over the wire.


It is just the matter of gluing the pieces together.

Thanks,
Michal


Gregg Wonderly wrote:


I am not an OSGi user.  I am not trying to be an OSGi opponent.  What I am 
trying to say is that I consider all the commentary in those articles about 
TCCL not working to be just inexperience and argument to try and justify a 
different position or interpretation of what the real problem is.

The real problem is that there is not one “module” concept in Java (another one 
is almost here in JDK 9/Jigsaw).  No one is working together on this, and OSGi 
is solving problems in a small part of the world of software.   It works well 
for embedded, static systems.  I think OSGi misses the mark on dynamic systems 
because of the piecemeal loading and resolving of classes.  I am not sure that 
OSGi developers really understand everything that Jini can do because of the 
choices made (and not made) in the design.  The people who put Jini together 
had many years of experience piecing together systems which needed 
to work well with a far greater degree of variability and adaptation to the 
environment than what most people seem to experience in their class and work 
environments, which are locked down by extremely controlled distribution 
strategies that end up slowing development in an attempt to control everything, 
even what doesn’t actually cause quality to suffer.

Gregg






Re: OSGi

2017-01-29 Thread Michał Kłeczek (XPro Sp. z o. o.)

I absolutely agree with the requirements you state.

The problem with Jini's (and hence River's) usage of the TCCL is that it assumes 
a parent-child relationship between class loaders - which in turn causes 
the issues with transferring object graphs I've described earlier.


What I understood when working on this is also that OSGI is not the 
right choice either :)
What I also understood is that even having a "module" concept in Java is 
not enough :)


Any solution needs to make a distinction between:
- a static notion of a module (in OSGI it is a bundle, in current Jini - 
a module represented by a codebase string)
- a dynamic notion of a module (in OSGi it's a BundleWiring, in current 
Jini it would be the _hierarchy_ of class loaders) - it is the runtime 
state representing a module with its resolved dependencies
And the most important conclusion is that it is the latter that must be 
sent between parties in a distributed system to make it all work.


To support class (API) evolution the solution must also allow 
"open ends" in the module dependency graph. The difference from 
PreferredClassProvider is that these "open ends" must allow selecting 
one of many existing class loaders, instead of only the one particular 
ClassLoader that a client has set as the TCCL.
The TCCL is still needed to support many separate services in a single JVM. 
It is used to select a proper subset of class loaders as the set of 
candidates to choose from when resolving code bases. It makes 
sure that a single "static module" may produce many instances of a 
"dynamic module", each resolved differently in the same JVM.


I am working on this and hope to be able to provide an initial 
implementation soon.
The solution I am working on assumes code bases are represented as 
serializable objects, so any example I am giving is based on that.


The basic idea might be presented as follows (of course details related 
to concurrency, weak references and proper method visibility to make the 
thing secure are left out):


class ClassResolver {
  static Map<ClassLoader, ClassResolver> globalResolverMap;
  static ClassResolver getContextClassResolver() {...} // impl returns a 
resolver based on the TCCL

  final Map<CodeBase, ClassLoader> codeBaseMap;
  final Map<CodeBase, ClassLoader> existingLoadersMap;
  final Set<CodeBase> apiImplementations;

  ClassLoader getClassLoader(CodeBase cb) {
    ClassLoader loader = lookup(cb);
    if (loader == null) {
      loader = resolve(cb).createLoader(this);
      // update the caches etc.
    }
    return loader;
  }

  CodeBase resolve(CodeBase cb) {
    if (cb instanceof ApiCodeBase) {
      return resolveApi((ApiCodeBase) cb);
    }
    return cb;
  }

  CodeBase resolveApi(ApiCodeBase apiCb) {
    // Java 8 style; simplified - in reality we want to select a "best match"
    return existingLoadersMap.keySet().stream()
        .filter(apiCb::matchesCodeBase)
        .findFirst()
        .orElse(apiCb);
  }
}

abstract class CodeBase {
  // creates a ClassLoader using the provided resolver to resolve any dependencies
  protected abstract ClassLoader createLoader(ClassResolver resolver);
}

class ApiCodeBase extends CodeBase {
  Predicate<CodeBase> matcher;
  CodeBase defaultImplementation;

  protected ClassLoader createLoader(ClassResolver resolver) {
    return defaultImplementation.createLoader(resolver);
  }

  boolean matchesCodeBase(CodeBase cb) {
    return matcher.test(cb);
  }

}

So now, when a service provider initializes its runtime 
environment, it creates a set of instances of CodeBase subclasses
connected to each other in a way specific to a particular class loading 
implementation.
Some of them might be ApiCodeBase instances that will use their default 
implementations to create a class loader in the service provider 
environment (since they are resolved against a "clean" ClassResolver on startup).


Any client that will deserialize an object graph provided by the service 
will either:
1. Not have a matching CodeBase to select from when resolving an ApiCodeBase 
(the situation similar to service provider startup)

2. Have a matching CodeBase to select from:
a) it is an ApiCodeBase - will possibly be again resolved when 
transferring further

b) it is not an ApiCodeBase - will "lock" an "open end"

So any party is free to mark modules as "private" or "public".

The tricky thing is still handling codebase bounce-backs.
If not careful when specifying an API, a service might encounter a "lost 
codebase" problem, which might be mitigated by having API modules 
consist of interfaces only.


Thanks,
Michal

Gregg Wonderly wrote:

But codebase identity is not a single thing.  If you are going to allow a 
client to interact with multiple services and use those services, together, to 
create a composite service or just be a client application, you need all of the 
classes to interact.  One of the benefits of dynamic class loading and the 
selling point of how Jini was first presented (and I still consider this to the 
a big deal), is the notion that you can introduce a new version 

Re: OSGi

2017-01-28 Thread Michał Kłeczek (XPro Sp. z o. o.)
Sorry - you haven't shown how to avoid the situation where a single instance 
of BImpl is assignable to Child1.myImpl but NOT assignable to Child2.myImpl.


This is simply NOT possible without having information about the exact 
bundle wiring on the other end.


Thanks,
Michal

Peter wrote:

Hmm, bundle wiring, not sure, will think about it.

Yep, guilty of laziness.

So continuing on, with the deserialization process...
The stream reads in the stream class information, now Root's 
ClassLoader is on the stack,

Child2 is loaded using Root's Bundle.
Next, Child2's bundle C2's ClassLoader is pushed onto the stack.
Child2's fields are now read from the stream, in this case a reference 
number is read in from the stream,
this reference number is looked up in the stream reference table, 
and the same instance of BImpl contained

in Class1's field is returned.
Serialization doesn't send an object twice, unless it's written unshared.
Now Class2's instance is created, it's ClassLoader popped off the 
stack, root's second field is written, it's ClassLoader popped off the 
stack and its instance returned.


If Child2's field were another instance of BImpl, the class is stored 
in the ObjectStreamClass cache and has already been verified and loaded, 
so we don't need to resolve it again.  If BImpl was actually a different 
version and was loaded by Bundle BImpl2.0.0, then the annotation won't 
imply BImpl1.0.0,

so it will load a new bundle.

The Bundle wiring will depend on a combination of the local Bundle's 
or downloaded Bundle's manifest package import declarations.


The wiring may be slightly different at each end.  A PackageAdmin 
service may decide to change the loaded bundles; we need to ensure 
that we allow them to communicate across compatible versions.


Hope this helps clarify it a little better.

Cheers & thanks,

Peter.

On 28/01/2017 10:11 PM, "Michał Kłeczek (XPro Sp. z o. o.)" wrote:

Ahh... You've missed the important part :) - child2.
You cannot assume BImpl class loaded in context of C1 is assignable 
to child2 - it depends on what other unrelated bundles have been 
loaded earlier and how the container chooses to resolve them.


Imagine:

BImpl has:
Require-Bundle bundle.with.exact.child2Api.version 1.0.0

Child2 has:
Import-Package packageOf.child2Api range [1.0.0, 2.0.0)
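In OSGi manifest syntax, the two declarations above would look roughly like 
this (the bundle symbolic names are the hypothetical ones from the example):

```
Bundle-SymbolicName: BImpl
Require-Bundle: bundle.with.exact.child2Api.version;bundle-version="[1.0.0,1.0.0]"

Bundle-SymbolicName: Child2
Import-Package: packageOf.child2Api;version="[1.0.0,2.0.0)"
```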

It might be perfectly fine in one container where only 
bundle.with.exact.child2Api.version 1.0.0 was installed. But might be 
wrong in another where there are two "packageOf.child2Api" exporting 
bundles installed.


So:
1. The information about BImpl is no longer on the stack and not 
available when loading Child2
2. Even if it was available (because for example you modify your 
algorithm so that you remember all previously loaded bundles) - it 
means that when loading Child2 you no longer can depend on the 
container bundle resolution.


And 2) is the point I am trying to make from the beginning: when 
transferring object graphs you MUST serialize the BundleWiring 
information and recreate it when deserializing. You need the snapshot 
of runtime state - declarative dependency information is not enough.


Thanks,
Michal

Peter wrote:

Ah yes, that's easy, assuming all classes are Serializable :)

Firstly we register a BundleListener with our OSGi framework and 
keep track of installed Bundles.


So Root, is the object graph root, Classes Child1 and Child2 are 
visible to Root.

The first thing we do is push the ClassLoader of Root onto our stack.
When the stream reads in the fields of Root, the first field child1, 
it will read the stream class information, the class Child1 will 
resolve from Bundle BR, since it is visible from there.
Now we obtain Child1's ClassLoader (which provides a BundleReference 
to Bundle C1) and push it onto our stack.
The stream reads in the fields of Child1. The first field, of type 
Child1Api, holds an instance of BImpl; the stream first attempts to 
use Bundle C1, but assuming that BImpl isn't visible from Bundle C1, 
we use the codebase annotation read in from the stream (read in for 
each class) to find any installed bundles implied by the annotation's URI.
Upon finding BImpl's bundle from the BundleContext, we first ask the 
bundle to load the class, then check, using Class.isAssignableFrom, 
that the class is assignable to field myImpl in Child1 before Child1 
is instantiated.
If this fails and there are other versions of BImpl installed we 
might iterate through them until we have a match.
If there are no BImpl bundles installed, then we use the annotation 
to load a new Bundle.
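The iteration described above - trying candidate classes (e.g. the same class name loaded by different bundles) until one can be assigned to the field's declared type - can be sketched in plain Java; the `candidates` list stands in for the classes returned by the installed bundles:

```java
import java.util.ArrayList;
import java.util.List;

class AssignabilityCheck {
    // Return the first candidate class that is assignable to the field's
    // declared type, or null if none match (in which case the
    // deserializer would fall back to loading a new bundle).
    static Class<?> firstAssignable(Class<?> fieldType, List<Class<?>> candidates) {
        for (Class<?> c : candidates) {
            if (fieldType.isAssignableFrom(c)) {
                return c;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Simulate two candidate "versions": only ArrayList is assignable to List.
        Class<?> match = firstAssignable(List.class, List.of(String.class, ArrayList.class));
        System.out.println(match); // class java.util.ArrayList
    }
}
```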

At this time BImpl's ClassLoader is pushed onto the stack.
However BImpl has no fields to deserialize, so a new instance of 
BImpl is created and before it is used to set the field myImpl in 
Child1, BImpl's ClassLoader is popped off the stack.
Now that all fields of Child1 have been created, Child1 can be 
created, and bundle C1's ClassLoader is popped off the stack.

The same process is repeated for

Re: OSGi

2017-01-28 Thread Michał Kłeczek (XPro Sp. z o. o.)

Ahh... You've missed the important part :) - child2.
You cannot assume BImpl class loaded in context of C1 is assignable to 
child2 - it depends on what other unrelated bundles have been loaded 
earlier and how the container chooses to resolve them.


Imagine:

BImpl has:
Require-Bundle bundle.with.exact.child2Api.version 1.0.0

Child2 has:
Import-Package packageOf.child2Api range [1.0.0, 2.0.0)

It might be perfectly fine in one container where only 
bundle.with.exact.child2Api.version 1.0.0 was installed. But might be 
wrong in another where there are two "packageOf.child2Api" exporting 
bundles installed.


So:
1. The information about BImpl is no longer on the stack and not 
available when loading Child2
2. Even if it was available (because for example you modify your 
algorithm so that you remember all previously loaded bundles) - it means 
that when loading Child2 you no longer can depend on the container 
bundle resolution.


And 2) is the point I am trying to make from the beginning: when 
transferring object graphs you MUST serialize the BundleWiring 
information and recreate it when deserializing. You need the snapshot of 
runtime state - declarative dependency information is not enough.


Thanks,
Michal

Peter wrote:

Ah yes, that's easy, assuming all classes are Serializable :)

Firstly we register a BundleListener with our OSGi framework and keep 
track of installed Bundles.


So Root, is the object graph root, Classes Child1 and Child2 are 
visible to Root.

The first thing we do is push the ClassLoader of Root onto our stack.
When the stream reads in the fields of Root, the first field child1, 
it will read the stream class information, the class Child1 will 
resolve from Bundle BR, since it is visible from there.
Now we obtain Child1's ClassLoader (which provides a BundleReference 
to Bundle C1) and push it onto our stack.
The stream reads in the fields of Child1. The first field, of type 
Child1Api, holds an instance of BImpl; the stream first attempts to 
use Bundle C1, but assuming that BImpl isn't visible from Bundle C1, 
we use the codebase annotation read in from the stream (read in for 
each class) to find any installed bundles implied by the annotation's URI.
Upon finding BImpl's bundle from the BundleContext, we first ask the 
bundle to load the class, then check, using Class.isAssignableFrom, 
that the class is assignable to field myImpl in Child1 before Child1 
is instantiated.
If this fails and there are other versions of BImpl installed we might 
iterate through them until we have a match.
If there are no BImpl bundles installed, then we use the annotation to 
load a new Bundle.

At this time BImpl's ClassLoader is pushed onto the stack.
However BImpl has no fields to deserialize, so a new instance of BImpl 
is created and before it is used to set the field myImpl in Child1, 
BImpl's ClassLoader is popped off the stack.
Now that all fields of Child1 have been created, Child1 can be 
created, and bundle C1's ClassLoader is popped off the stack.

The same process is repeated for Child2.

Cheers,

Peter.




On 28/01/2017 7:41 PM, "Michał Kłeczek (XPro Sp. z o. o.)" wrote:
I fail to see how it could possibly work. Could you walk step-by-step 
serialize/deserialize with the following object graph:


Bundle API:
interface Api {}

Bundle BR:
class Root {
  Child1 child1;
  Child2 child2;

  Api getApi() {
return isEven(getRandom()) ? child1.impl : child2.impl;
  }

}

Bundle C1
class Child1 {
  Child1Api myImpl;
}

Bundle C2
class Child2 {
  Child2Api myImpl;
}

Bundle Api1:
interface Child1Api extends Api {}

Bundle Api2:
interface Child2Api extends Api {}

Bundle BImpl:
class Impl implements Child1Api, Child2Api {}

Object graph:
impl = new BImpl()
root = new Root(new Child1(impl), new Child2(impl));

Serialize and deserialize root.
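As a plain-Java baseline (no OSGi, all classes in one ClassLoader), the shared BImpl reference survives the round trip because the stream's reference table deduplicates it; the classes below mirror the graph above but are otherwise a hypothetical sketch:

```java
import java.io.*;

class DiamondDemo {
    interface Api extends Serializable {}
    interface Child1Api extends Api {}
    interface Child2Api extends Api {}
    static class BImpl implements Child1Api, Child2Api {}
    static class Child1 implements Serializable { Child1Api myImpl; Child1(Child1Api i) { myImpl = i; } }
    static class Child2 implements Serializable { Child2Api myImpl; Child2(Child2Api i) { myImpl = i; } }
    static class Root implements Serializable {
        Child1 child1; Child2 child2;
        Root(Child1 c1, Child2 c2) { child1 = c1; child2 = c2; }
    }

    // Serialize root and read it back; returns true if both children
    // still share a single BImpl instance after deserialization.
    static boolean roundTripSharesImpl() {
        try {
            BImpl impl = new BImpl();
            Root root = new Root(new Child1(impl), new Child2(impl));
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) { oos.writeObject(root); }
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                Root copy = (Root) ois.readObject();
                // The stream reference table guarantees one shared instance.
                return copy.child1.myImpl == copy.child2.myImpl;
            }
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTripSharesImpl()); // prints "true" in a single-ClassLoader JVM
    }
}
```

The interesting OSGi question, of course, is what happens when the single-ClassLoader assumption is dropped and each class above lives in its own bundle.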

Thanks,
Michal

Peter wrote:

So here's how we can put it together:

Our OIS contains a stack we can use to track ClassLoader's at each 
branch in our serialized object graph.


  1. First object being read in by the OIS, check the cache see if the
 URI annotation is implied.
  2. If yes, use this Bundle to deserialize, place this Bundle's
 BundleReference [ClassLoader] on the top of the stack.
  3. If no, walk the stack, find the first BundleReference, dereference
 its Bundle, obtain the BundleContext and request a new Bundle
 with the new URL.
  4. For each field in the current object:

   * Try to load its class from the current Bundle.
   * If successful push Bundle's ClassLoader onto stack and return
 class.
   * Catch ClassNotFoundException, it's likely this is a dependency
 injected class the parent object's ClassLoader or Bundle
 doesn't have type visibility for.
   * Check the Bundle cache to see if the URI annotation is 
implied.

   * If yes, then iterate through each, load a class from this
 bundle, check that the returned class can be assigned to 

Re: OSGi

2017-01-28 Thread Michał Kłeczek (XPro Sp. z o. o.)
I would say that using the TCCL is a poor man's approach to class 
resolution. Once you have codebase identity done right - it is not 
needed anymore.


Thanks,
Michal

Gregg Wonderly wrote:

The commentary in the first document indicates that there is no rhyme or reason 
to the use of the context class loader.  For me, the reason was very obvious.  
Any time you are going to create a new class loader, you should set the 
parent class loader to the context class loader, so that the calling thread's 
class-loading context will allow classes referenced by the new class loader 
to resolve to classes that the thread's execution can already resolve.

What other use of a context class loader would happen?
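The parenting convention described above is a one-liner in plain Java; the empty URL array below is a placeholder for a real codebase:

```java
import java.net.URL;
import java.net.URLClassLoader;

class TcclParentDemo {
    // Create a codebase loader whose parent is the calling thread's
    // context ClassLoader, so classes the caller can already resolve
    // (e.g. service API interfaces) resolve to the same Class objects.
    static URLClassLoader newCodebaseLoader(URL[] codebase) {
        return new URLClassLoader(codebase, Thread.currentThread().getContextClassLoader());
    }

    // Sanity check: classes visible through the parent chain resolve
    // to the very same Class objects the caller uses.
    static boolean parentDelegationWorks() {
        try (URLClassLoader cl = newCodebaseLoader(new URL[0])) {
            return cl.loadClass("java.util.ArrayList") == java.util.ArrayList.class;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(parentDelegationWorks()); // prints "true"
    }
}
```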

Gregg


On Jan 27, 2017, at 11:39 AM, Bharath Kumar  wrote:

Yes Peter. Usage of thread context class loader is discouraged in OSGi
environment.

http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html

Some of the problems are hard to solve in OSGi environment. For example,
creating dynamic java proxy from 2 or more interfaces that are located in
different bundles.

http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1

This problem can be solved using composite class loader. But it is
difficult to write it correctly. Because OSGi environment is dynamic.
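A minimal composite ClassLoader of the kind the linked post describes - delegating to several loaders in turn so that a dynamic proxy can see interfaces from more than one bundle - might look like the sketch below. Plain ClassLoaders stand in for bundle loaders, and the dynamic-rebinding problem mentioned above is deliberately not handled:

```java
import java.lang.reflect.Proxy;
import java.util.List;

class CompositeClassLoader extends ClassLoader {
    private final List<ClassLoader> delegates;

    CompositeClassLoader(List<ClassLoader> delegates) {
        super(null); // no single parent; we consult each delegate in turn
        this.delegates = delegates;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        for (ClassLoader cl : delegates) {
            try {
                return cl.loadClass(name);
            } catch (ClassNotFoundException ignored) {
                // try the next delegate
            }
        }
        throw new ClassNotFoundException(name);
    }

    public static void main(String[] args) {
        // Here both interfaces happen to come from the same loader; in
        // OSGi each delegate would be a different bundle's ClassLoader.
        ClassLoader app = CompositeClassLoader.class.getClassLoader();
        CompositeClassLoader composite = new CompositeClassLoader(List.of(app));
        Object proxy = Proxy.newProxyInstance(composite,
                new Class<?>[] { Runnable.class, java.io.Closeable.class },
                (p, m, a) -> null);
        System.out.println(proxy instanceof Runnable && proxy instanceof java.io.Closeable);
    }
}
```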

I believe that it is possible to provide enough abstraction in river code,
so that service developers don't even require to use context class loader
in their services.



Thanks & Regards,
Bharath


On 27-Jan-2017 6:25 PM, "Peter"  wrote:


Thanks Gregg,

Thoughts inline below.

Cheers,

Peter.


On 27/01/2017 12:35 AM, Gregg Wonderly wrote:


Is there any thought here about how a single client might use both an
OSGi deployed service and a conventionally deployed service?


Not yet, I'm currently considering how to support OSGi by implementing an
RMIClassLoaderSPI, similar to how Dennis has for Maven in Rio.

I think once a good understanding of OSGi has developed, we can consider
how an implementation could support that, possibly by exploiting something
like Pax URL built into PreferredClassProvider.


  The ContextClassLoader is a good abstraction mechanism for finding “the”

appropriate class loader.  It allows applications to deploy a composite
class loader in some form that would be able to resolve classes from many
sources and even provide things like preferred classes.


Yes, it works well for conventional frameworks and is utilised by
PreferredClassProvider, but its use in OSGi is discouraged; I'd like to
consider how its use can be avoided in an OSGi env.



In a Java desktop application, would a transition from a background
thread, interacting with a service to get an object from a service which is
not completely resolved to applicable loaders still resolve correctly in an
EventDispatch Thread?  That event dispatch thread can have the context
class loader set on it by the thread which got the object, to be the class
loader of the service object, to make sure that the resolution of classes
happens with the correct class loader such that there will not be a problem
with the service object having one set of definitions and another service
or the application classpath having a conflicting class definition by the
same name.

I’ve had to spend quite a bit of time to make sure that these scenarios
work correctly in my Swing applications.


Have you got more information?  I'm guessing this relates to delayed
unmarshalling into the EventDispatch thread.

It's early days yet; I'm still working out what information is required
to resolve the correct ClassLoaders & bundles, but this is an important
question. Bharath mentioned Entry's can be utilised for versioning and this
seems like a good idea.

What follows are thoughts and observations.

A bundle can be created from a URL, provided the codebase the URL refers
to has an OSGi bundle manifest, so this could allow any of the existing URL
formats to deliver a proxy codebase for an OSGi framework.  When OSGi loads
the bundle, the package dependencies will be wired up by the local env.  If
the URL doesn't reference a bundle, then we could use Bharath's approach
and subclass the client's ClassLoader; this does make all the client's
classes visible to the proxy, but that would happen anyway in a
standard Jini / River environment.

OSGi gives preference to already loaded bundles when resolving package
dependencies, so the client should be careful not to unmarshal any proxies
that might require a higher version than the bundle already installed when
the client bundle resolved its dependencies.

One of the proxy bundle dependencies will be the service api bundle.  The
proxy bundle can limit the service api package / packages to a specific
version or version range, which it could advertise in an Entry.  Boolean
logic comparison of Entry's would have to be performed locally, after
matching on the service type (this 

Re: OSGi

2017-01-28 Thread Michał Kłeczek (XPro Sp. z o. o.)
In general I think implementing class resolution logic in stream 
implementation is bad. It has to be decoupled.


Thanks,
Michal

Peter wrote:

So here's how we can put it together:

Our OIS contains a stack we can use to track ClassLoader's at each 
branch in our serialized object graph.


  1. First object being read in by the OIS, check the cache see if the
 URI annotation is implied.
  2. If yes, use this Bundle to deserialize, place this Bundle's
 BundleReference [ClassLoader] on the top of the stack.
  3. If no, walk the stack, find the first BundleReference, dereference
 its Bundle, obtain the BundleContext and request a new Bundle
 with the new URL.
  4. For each field in the current object:

   * Try to load its class from the current Bundle.
   * If successful push Bundle's ClassLoader onto stack and return
 class.
   * Catch ClassNotFoundException, it's likely this is a dependency
 injected class the parent object's ClassLoader or Bundle
 doesn't have type visibility for.
   * Check the Bundle cache to see if the URI annotation is implied.
   * If yes, then iterate through each, load a class from this
 bundle, check that the returned class can be assigned to the
 current field's type.  If not, discard and continue until a
 class is found that can be assigned to the current field.
   * If not, load the new bundle.
   * Push the current Bundle's ClassLoader on the top of the stack.

  5. Pop the last (Object in the graph) field's ClassLoader off the 
stack.


When the process completes we have a resolved object graph with 
correct class visibility.
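The stack-tracking idea in the numbered steps above can be sketched with a plain ObjectInputStream subclass. An explicit Deque of ClassLoaders replaces the bundle cache lookup of steps 1-3 (the part that actually needs OSGi), and the pop bookkeeping is simplified:

```java
import java.io.*;
import java.util.ArrayDeque;
import java.util.Deque;

class StackedLoaderInputStream extends ObjectInputStream {
    // Stack of ClassLoaders tracking the current branch of the graph;
    // in the OSGi scheme these would be BundleReference ClassLoaders.
    private final Deque<ClassLoader> stack = new ArrayDeque<>();

    StackedLoaderInputStream(InputStream in, ClassLoader rootLoader) throws IOException {
        super(in);
        stack.push(rootLoader);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        // Resolve against the loader on top of the stack, then push the
        // resolved class's own loader for this branch of the graph.
        Class<?> c = Class.forName(desc.getName(), false, stack.peek());
        ClassLoader cl = c.getClassLoader();
        if (cl != null) {
            stack.push(cl); // popped implicitly when the branch completes (simplified)
        }
        return c;
    }

    // Demo helper: write obj, read it back through this stream.
    static Object roundTrip(Object obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) { oos.writeObject(obj); }
            try (StackedLoaderInputStream in = new StackedLoaderInputStream(
                    new ByteArrayInputStream(bos.toByteArray()),
                    StackedLoaderInputStream.class.getClassLoader())) {
                return in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```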


Regards,

Peter.

On 28/01/2017 1:49 PM, Peter wrote:
Having implemented an ObjectInputStream (OIS), I can share the 
following insights:


   * The object at the head of the graph is externally visible and
 encapsulates all other objects and primitives.
   * The objects and primitives at the tails of the graph are fields.
   * Circular references in object graphs are unsafe with untrusted
 input...
   * A serialized object graph without circular references is a tree.
   * Fields may have been dependency injected.
   * Each branch in the tree has an object at the head of the branch.
   * The OIS knows the types of fields in the local class as well as
 the types of fields in the deserialized class before it has been
 resolved.

The information required to deserialize an OSGi graph is in the OIS 
implementation.  The OIS implementation is not visible to the 
extending MarshalInputStream implementation.


MarshalInputStream contains a ClassLoader field, defaultLoader.

So the codebase annotation is critical for the identity of the first 
object,


Regards,

Peter.

On 28/01/2017 3:39 AM, Bharath Kumar wrote:

Yes Peter. Usage of thread context class loader is discouraged in OSGi
environment.

http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html 



Some of the problems are hard to solve in OSGi environment. For 
example,
creating dynamic java proxy from 2 or more interfaces that are 
located in

different bundles.

http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1 



This problem can be solved using composite class loader. But it is
difficult to write it correctly. Because OSGi environment is dynamic.

I believe that it is possible to provide enough abstraction in river 
code,
so that service developers don't even require to use context class 
loader

in their services.



Thanks & Regards,
Bharath


On 27-Jan-2017 6:25 PM, "Peter"  wrote:


Thanks Gregg,

Thoughts inline below.

Cheers,

Peter.


On 27/01/2017 12:35 AM, Gregg Wonderly wrote:


Is there any thought here about how a single client might use both an
OSGi deployed service and a conventionally deployed service?

Not yet, I'm currently considering how to support OSGi by 
implementing an

RMIClassLoaderSPI, similar to how Dennis has for Maven in Rio.

I think once a good understanding of OSGi has developed, we can 
consider
how an implementation could support that, possibly by exploiting 
something

like Pax URL built into PreferredClassProvider.


   The ContextClassLoader is a good abstraction mechanism for 
finding “the”
appropriate class loader.  It allows applications to deploy a 
composite
class loader in some form that would be able to resolve classes 
from many

sources and even provide things like preferred classes.


Yes, it works well for conventional frameworks and is utilised by
PreferredClassProvider, but its use in OSGi is discouraged; I'd like to
consider how its use can be avoided in an OSGi env.



In a Java desktop application, would a transition from a background
thread, interacting with a service to get an object from a service 
which is
not completely resolved to applicable loaders still resolve 
correctly in an
EventDispatch Thread?  That event dispatch thread can have the 
context
class loader 

Re: OSGi

2017-01-28 Thread Michał Kłeczek (XPro Sp. z o. o.)
I fail to see how it could possibly work. Could you walk step-by-step 
serialize/deserialize with the following object graph:


Bundle API:
interface Api {}

Bundle BR:
class Root {
  Child1 child1;
  Child2 child2;

  Api getApi() {
return isEven(getRandom()) ? child1.impl : child2.impl;
  }

}

Bundle C1
class Child1 {
  Child1Api myImpl;
}

Bundle C2
class Child2 {
  Child2Api myImpl;
}

Bundle Api1:
interface Child1Api extends Api {}

Bundle Api2:
interface Child2Api extends Api {}

Bundle BImpl:
class Impl implements Child1Api, Child2Api {}

Object graph:
impl = new BImpl()
root = new Root(new Child1(impl), new Child2(impl));

Serialize and deserialize root.

Thanks,
Michal

Peter wrote:

So here's how we can put it together:

Our OIS contains a stack we can use to track ClassLoader's at each 
branch in our serialized object graph.


  1. First object being read in by the OIS, check the cache see if the
 URI annotation is implied.
  2. If yes, use this Bundle to deserialize, place this Bundle's
 BundleReference [ClassLoader] on the top of the stack.
  3. If no, walk the stack, find the first BundleReference, dereference
 its Bundle, obtain the BundleContext and request a new Bundle
 with the new URL.
  4. For each field in the current object:

   * Try to load its class from the current Bundle.
   * If successful push Bundle's ClassLoader onto stack and return
 class.
   * Catch ClassNotFoundException, it's likely this is a dependency
 injected class the parent object's ClassLoader or Bundle
 doesn't have type visibility for.
   * Check the Bundle cache to see if the URI annotation is implied.
   * If yes, then iterate through each, load a class from this
 bundle, check that the returned class can be assigned to the
 current field's type.  If not, discard and continue until a
 class is found that can be assigned to the current field.
   * If not, load the new bundle.
   * Push the current Bundle's ClassLoader on the top of the stack.

  5. Pop the last (Object in the graph) field's ClassLoader off the 
stack.


When the process completes we have a resolved object graph with 
correct class visibility.


Regards,

Peter.

On 28/01/2017 1:49 PM, Peter wrote:
Having implemented an ObjectInputStream (OIS), I can share the 
following insights:


   * The object at the head of the graph is externally visible and
 encapsulates all other objects and primitives.
   * The objects and primitives at the tails of the graph are fields.
   * Circular references in object graphs are unsafe with untrusted
 input...
   * A serialized object graph without circular references is a tree.
   * Fields may have been dependency injected.
   * Each branch in the tree has an object at the head of the branch.
   * The OIS knows the types of fields in the local class as well as
 the types of fields in the deserialized class before it has been
 resolved.

The information required to deserialize an OSGi graph is in the OIS 
implementation.  The OIS implementation is not visible to the 
extending MarshalInputStream implementation.


MarshalInputStream contains a ClassLoader field, defaultLoader.

So the codebase annotation is critical for the identity of the first 
object,


Regards,

Peter.

On 28/01/2017 3:39 AM, Bharath Kumar wrote:

Yes Peter. Usage of thread context class loader is discouraged in OSGi
environment.

http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html 



Some of the problems are hard to solve in OSGi environment. For 
example,
creating dynamic java proxy from 2 or more interfaces that are 
located in

different bundles.

http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1 



This problem can be solved using composite class loader. But it is
difficult to write it correctly. Because OSGi environment is dynamic.

I believe that it is possible to provide enough abstraction in river 
code,
so that service developers don't even require to use context class 
loader

in their services.



Thanks & Regards,
Bharath


On 27-Jan-2017 6:25 PM, "Peter"  wrote:


Thanks Gregg,

Thoughts inline below.

Cheers,

Peter.


On 27/01/2017 12:35 AM, Gregg Wonderly wrote:


Is there any thought here about how a single client might use both an
OSGi deployed service and a conventionally deployed service?

Not yet, I'm currently considering how to support OSGi by 
implementing an

RMIClassLoaderSPI, similar to how Dennis has for Maven in Rio.

I think once a good understanding of OSGi has developed, we can 
consider
how an implementation could support that, possibly by exploiting 
something

like Pax URL built into PreferredClassProvider.


   The ContextClassLoader is a good abstraction mechanism for 
finding “the”
appropriate class loader.  It allows applications to deploy a 
composite
class loader in some form that would be able to resolve classes 
from many


Re: OSGi

2017-01-25 Thread Michał Kłeczek (XPro Sp. z o. o.)

I also think about adding leasing to the scheme.
If CodeBaseModule can be leased (and the client is capable of handling 
declines of lease renewals) - it would be quite straightforward to 
implement auto-upgrade: the lease for a module "mymodule" ver 1.1 
expires and you have to ask the code server for a new CodeBaseModule - 
which in turn could return a newer patched version of it.


Cheers,
Michal

Michał Kłeczek (XPro Sp. z o. o.) wrote:
So for a client and a service to be able to communicate they must 
agree on a common set of interchangeable CodeRepositories that would 
allow them to have a common understanding of names.
In other words - to be able to work - any party first has to contact a 
CodeRepository that can authenticate itself as a particular principal. 
The issue is that to find the CodeRepository one needs to communicate 
with ServiceRegistrar. And to communicate with ServiceRegistrar you 
need a CodeRepository! So there needs to be some bootstrapping in 
place:

- either ServiceRegistrar and CodeRepository constitute a single entity
- there is a bootstrap well known CodeRepository (Maven central?) - 
its implementation is based on a well known URL and its implementation 
code is shipped with the framework.


Thanks,
Michal

Michał Kłeczek (XPro Sp. z o. o.) wrote:
Honestly - since I am fixed ( :-) ) on having mobile code treated as 
any other object - I see it something like:


interface CodeBaseModule {
  ClassLoader createLoader() throws AnyImportantException;
}

interface CodeRepository {
  CodeBaseModule getCodeBaseModule(String name, Version version);
  boolean isSameNamespace(CodeRepository other);
}

class NamedCodeBase {
  String name; Version version;
  CodeRepository repository;
  boolean equals(Object other) { //check name, version and repo }
}

Now the question is about the implementation of "isSameNamespace". 
Since the protocol(s) to access code repository might differ (and 
there might be multiple available at the same time), location based 
equality won't work (although is the easiest to implement). My rough 
idea is for the CodeRepository to be able to authenticate as any of a 
set of Principals ( ie. satisfy the ServerMinPrincipal constraint ). 
Two CodeRepository instances are interchangeable if intersection of 
their principal sets is non-empty.
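The principal-set intersection test sketched above is simple to state in code; `SimplePrincipal` below is a hypothetical stand-in for whatever authenticated principals a real CodeRepository could prove:

```java
import java.security.Principal;
import java.util.Set;

class PrincipalNamespace {
    // Hypothetical principal type; real code would use authenticated
    // principals obtained from the repository's login context.
    record SimplePrincipal(String name) implements Principal {
        public String getName() { return name; }
    }

    // Two repositories are interchangeable if the intersection of
    // their principal sets is non-empty.
    static boolean sameNamespace(Set<Principal> a, Set<Principal> b) {
        return a.stream().anyMatch(b::contains);
    }

    public static void main(String[] args) {
        Set<Principal> repo1 = Set.of(new SimplePrincipal("org.example.code"));
        Set<Principal> repo2 = Set.of(new SimplePrincipal("org.example.code"),
                                      new SimplePrincipal("org.other"));
        System.out.println(sameNamespace(repo1, repo2)); // true: one shared principal
    }
}
```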


At first I thought about having a global naming scheme - then 
cryptographic hash would constitute the part of the name. But that 
would make names obscure and difficult to remember and write by hand.
So I came up with an idea to abstract it away - according to "all 
problems in CS can be solved by introducing another level of 
indirection" :)


Thanks,
Michal

Peter wrote:

codebase identity

So River codebase identity is currently any number of space delimited RFC 3986 
normalised URI strings.

httpmd uses a location filename and message digest.

But should location be part of identity?  How can you relocate a codebase once 
remote objects are deployed?

OSGi and Maven use a name and version to identify a codebase.  


Might we also need codebase signers (if any) to be part of identity?

If no, why not and if yes why?

Regards,

Peter.

Sent from my Samsung device.
  

---- Original message ----
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 26/01/2017 08:30:58 am
To:d...@riverapache.org
Subject: Re: OSGi

I haven't been aware of ObjectSpace Voyager. I just briefly looked at it 
and it seems like it is based on Java 1.x (ancient beast) and - as I 
understand it - the issues you describe are mainly caused by having only 
a single class name space (single ClassLoader).


But sending IMHO class bytes in-band is not necessary (nor good).

What is needed is:
1. Encoding dependency information in codebases (either in-band or by 
providing a downloadable descriptor) so that it is possible to recreate 
proper ClassLoader structure (hierarchy or rather graph - see below) on 
the client.
2. Provide non-hierarchical class loading to support arbitrary object 
graph deserialization (otherwise there is a problem with "diamond 
shaped" object graphs).


A separate issue is with the definition of codebase identity. I guess 
originally Jini designers wanted to avoid this issue and left it 
undefined... but it is unavoidable :)


Thanks,
Michal

Gregg Wonderly wrote:

  That’s what I was suggesting.  The code works, but only if you put the 
required classes into codebases or class paths.  It’s not a problem with mobile 
code, it’s a problem with resolution of objects in mobile code references.  
That’s why I mentioned ObjectSpace Voyager.  It automatically sent/sends class 
definitions with object graphs to the remote VM.

  Gregg


  On Jan 23, 2017, at 3:03 PM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>   wrote:

  The problem is that we only support (smart) proxies that reference only 
objects of cl

Re: OSGi

2017-01-25 Thread Michał Kłeczek (XPro Sp. z o. o.)
Unfortunately this is not that easy - the assumption that "the same 
bundle is going to be resolved in the same way in another OSGi 
container" is false.
Not only the containers must share common provisioning infrastructure 
(single view of the names) but - most importantly - the particular 
wiring of a bundle is TIME dependent. It can depend on what other 
bundles were loaded earlier and how the container resolved them. The 
container is free to choose any wiring that meets the requirements.


Thanks,
Michal

Peter wrote:
  
So let's say, for argument's sake, that we've got River "bundles" that are annotated with package imports (dependencies) and exports.


Using Bharath's proposed 3 bundle nomenclature for services...

Lets say that a third party services defines a service api in a bundle.  
Service api must only change in a backward compatible manner.

A client imports the service api packages.

A service proxy imports the service api packages.

The service api classes are already loaded in the client jvm because the client 
imported them.

The service proxy is deserialised in the client jvm.  Before the proxy can be 
deserialized, the RMIClassLoader must first determine whether the proxy's bundle 
(exact version) exists; if not, it needs to request OSGi to provision & load 
that bundle.

When the proxy bundle is loaded, it imports the same service api packages 
visible to the client.

But how do we ensure we have a compatible service api in the client jvm?

Because the lookup service finds matching interfaces.  When those service api 
interfaces were marshalled as arguments to the lookup service by the client, 
they were matched on serial form, so the client will only ever receive 
compatible results.

Regards,

Peter.

Sent from my Samsung device.
  

---- Original message ----
From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
Sent: 26/01/2017 08:30:58 am
To: dev@river.apache.org
Subject: Re: OSGi

I haven't been aware of ObjectSpace Voyager. I just briefly looked at it 
and it seems like it is based on Java 1.x (ancient beast) and - as I 
understand it - the issues you describe are mainly caused by having only 
a single class name space (single ClassLoader).


But sending IMHO class bytes in-band is not necessary (nor good).

What is needed is:
1. Encoding dependency information in codebases (either in-band or by 
providing a downloadable descriptor) so that it is possible to recreate 
proper ClassLoader structure (hierarchy or rather graph - see below) on 
the client.
2. Provide non-hierarchical class loading to support arbitrary object 
graph deserialization (otherwise there is a problem with "diamond 
shaped" object graphs).


A separate issue is with the definition of codebase identity. I guess 
originally Jini designers wanted to avoid this issue and left it 
undefined... but it is unavoidable :)


Thanks,
Michal

Gregg Wonderly wrote:

  That’s what I was suggesting.  The code works, but only if you put the 
required classes into codebases or class paths.  It’s not a problem with mobile 
code, it’s a problem with resolution of objects in mobile code references.  
That’s why I mentioned ObjectSpace Voyager.  It automatically sent/sends class 
definitions with object graphs to the remote VM.

  Gregg


  On Jan 23, 2017, at 3:03 PM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>   wrote:

  The problem is that we only support (smart) proxies that reference only 
objects of classes from their own code base.
  We do not support cases where a (smart) proxy wraps a (smart) proxy of another 
service (annotated with a different codebase).

  This precludes several scenarios - for example "dynamic exporters": 
exporters that are actually smart proxies.

  Thanks,
  Michal












Re: OSGi

2017-01-25 Thread Michał Kłeczek (XPro Sp. z o. o.)
So for a client and a service to be able to communicate, they must agree 
on a common set of interchangeable CodeRepositories that gives 
them a common understanding of names.
In other words - to be able to work, each party first has to contact a 
CodeRepository that can authenticate itself as a particular principal. 
The issue is that to find the CodeRepository one needs to communicate 
with a ServiceRegistrar - and to communicate with the ServiceRegistrar you 
need a CodeRepository! So some bootstrapping needs to be in place:

- either the ServiceRegistrar and the CodeRepository constitute a single entity, or
- there is a well known bootstrap CodeRepository (Maven Central?) - its 
implementation is based on a well known URL and its implementation code 
is shipped with the framework.


Thanks,
Michal

Michał Kłeczek (XPro Sp. z o. o.) wrote:
Honestly - since I am fixed ( :-) ) on having mobile code treated as 
any other object - I see it something like:


interface CodeBaseModule {
  ClassLoader createLoader() throws AnyImportantException;
}

interface CodeRepository {
  CodeBaseModule getCodeBaseModule(String name, Version version);
  boolean isSameNamespace(CodeRepository other);
}

class NamedCodeBase {
  String name;
  Version version;
  CodeRepository repository;
  boolean equals(Object other) { /* check name, version and repository */ }
  int hashCode() { /* must be consistent with equals */ }
}

Now the question is how to implement "isSameNamespace". 
Since the protocol(s) used to access a code repository might differ (and 
multiple might be available at the same time), location-based 
equality won't work (although it is the easiest to implement). My rough 
idea is for the CodeRepository to be able to authenticate as any of a 
set of Principals (i.e. satisfy the ServerMinPrincipal constraint). 
Two CodeRepository instances are interchangeable if the intersection of 
their principal sets is non-empty.


At first I thought about having a global naming scheme - then a 
cryptographic hash would constitute part of the name. But that 
would make names obscure and difficult to remember and write by hand.
So I came up with the idea of abstracting it away - in line with "all 
problems in CS can be solved by introducing another level of 
indirection" :)
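The principal-set interchangeability test described above can be sketched in plain Java. PrincipalSetRepository and its String principals are purely illustrative stand-ins for a real CodeRepository authenticating as a set of Principals:

```java
import java.util.Collections;
import java.util.Set;

// Illustrative model only - not River API. A repository is identified by
// the set of principals it can authenticate as.
class PrincipalSetRepository {
    private final Set<String> principals;

    PrincipalSetRepository(Set<String> principals) {
        this.principals = principals;
    }

    // Two repositories are interchangeable iff their principal sets intersect.
    boolean isSameNamespace(PrincipalSetRepository other) {
        return !Collections.disjoint(this.principals, other.principals);
    }
}
```

Note that this relation is not transitive (A may intersect B, and B intersect C, without A intersecting C), which a real design would have to account for.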


Thanks,
Michal

Peter wrote:

codebase identity

So River codebase identity is currently any number of space-delimited, 
RFC 3986-normalised URI strings.

httpmd uses a location (filename) plus a message digest.

But should location be part of identity?  How can you relocate a codebase once 
remote objects are deployed?

OSGi and Maven use a name and version to identify a codebase.  


Might we also need codebase signers (if any) to be part of identity?

If no, why not and if yes why?
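For concreteness, the identity schemes under discussion look roughly like this (all hosts, digests and coordinates below are made-up examples):

```text
# location-based (current River): space-delimited, RFC 3986-normalised URIs
http://codeserver.example:8080/service-dl.jar

# location plus content digest (httpmd)
httpmd://codeserver.example:8080/service-dl.jar;md5=5a105e8b9d40e1329780d62ea2265d8a

# name and version (OSGi / Maven style, e.g. via a pax-url mvn: handler)
mvn:org.example/service-api/1.0.0
```

Only the first form ties identity to a location; the digest form survives relocation, and the name/version form requires some naming authority.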

Regards,

Peter.


Re: OSGi

2017-01-23 Thread Michał Kłeczek (XPro Sp. z o. o.)
The problem is that we only support (smart) proxies that reference only 
objects of classes from their own code base.
We do not support cases when a (smart) proxy wraps a (smart) proxy of 
another service (annotated with different codebase).


This precludes several scenarios such as for example "dynamic exporters" 
- exporters that are actually smart proxies.


Thanks,
Michal

Gregg Wonderly wrote:

I guess I am not sure then what you are trying to show with your example.

Under what case would the SpacePublisher be sent to another VM, and how is that 
different from normal SmartProxy deserialization?

Gregg


On Jan 23, 2017, at 2:39 PM, Michał Kłeczek (XPro Sp. z o. 
o.)<michal.klec...@xpro.biz>  wrote:



Gregg Wonderly wrote:

michal.klec...@xpro.biz  wrote:

The use case and the ultimate test to implement is simple - have a
listener that publishes remote events to a JavaSpace acquired dynamically
from a lookup service:

class SpacePublisher implements RemoteEventListener, Serializable {
    private final JavaSpace space;
    public void notify(RemoteEvent evt) {
        space.write(createEntry(evt), ...);
    }
}

It is NOT possible to do currently. It requires non-hierarchical class

loading. It is not easy to solve. It would open a whole lot of
possibilities.

I am probably too ignorant to see it; What exactly is "NOT possible" with
the above use-case snippet?

With currently implemented PreferredClassProvider it is not possible to 
deserialize such an object graph.

This can happen, but what’s necessary is that the codebase of the 
SpacePublisher needs to include all the possible RemoteEvent classes, or the 
javaspace’s classpath has to include them.

I am not sure I understand.
The problem does not have anything to do with RemoteEvent (sub)classes. The 
issue is that SpacePublisher cannot be deserialized at all (except in the one 
case when the JavaSpace interface is available from the context class loader 
and is not marked as preferred in the SpacePublisher code base).

Michal







Re: OSGi

2017-01-22 Thread Michał Kłeczek (XPro Sp. z o. o.)

Hi,

comments below.

Niclas Hedhman wrote:

On Mon, Jan 23, 2017 at 1:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


I would say fully declarative approach in OSGI would be to only annotate

with a package version range (and let the OSGI container do the resolution
- it is designed to do it efficiently).

Of course then we have a problem with classes that are not from any

exported package but from internal bundle class path - then bundle id and
version might be necessary.

Then of course there is a question of classes from fragment bundles -

this is totally unsolvable since AFAIK there is no way to get the fragment
information based on the class loader of a class.

Not that I grasp why it is needed, but you can get the Bundle information
from the classloader, IF the class was loaded by OSGi. OSGi defines
BundleClassLoader, so you simply cast the classloader to that, and with
that you can get the Bundle and the BundleContext, which together should be
able to give you all the meta info that you could need.


It is needed to properly annotate a class during serialization.
AFAIK in OSGi you cannot find out which fragment (if any) was used to 
load a class.

Based on the class loader you can get the bundle, but not the fragment.

The concept of "download the proxy, both code and state" was quite 
novel in 1999 when I first heard of Jini. But I think it should be 
recognized that it both introduces a lot of security problems as well 
as not being the only way this can be done. For instance, the proxy 
could be an HTML page, which downloads JavaScript which binds into a 
sandbox provided. I think that the "serialized object" should not be a 
requirement eventually, and with that removed, the OSGi environment 
can help quite considerably aiding the dynamic nature of River. 

My view is different:

1. Without this "serialized object" approach there is no such thing as 
River, since there is nothing interesting and novel left.
2. Indeed - the "serialized object" approach is not popular... But my 
take is that this is because the concept has not been implemented 
correctly - both in terms of security and in terms of class loading.


As Peter observed, Pax URL allows a whole bunch of URL schemes, which 
you could be annotated in the serialized objects, just like URL 
codebases are today. For instance, Maven "coordinates" could be in the 
annotations and OSGi loads bundles on-demand. Paremus also implemented 
a "bundle garbage collector" in some manner, which unloaded unused 
bundles eventually. Furthermore, there are defined hooks in OSGi for 
the resolution of service registration and service lookup, which I 
think River should exploit. There seems to be a huge complementary 
intersection right there. 

There are no hooks for _class_ resolution - that's what is needed.


The use case and the ultimate test to implement is simple - have a

listener that publishes remote events to a JavaSpace acquired dynamically
from a lookup service:

class SpacePublisher implements RemoteEventListener, Serializable {
   private final JavaSpace space;
   public void notify(RemoteEvent evt) {
 space.write(createEntry(evt), ...);
   }
}

It is NOT possible to do currently. It requires non-hierarchical class

loading. It is not easy to solve. It would open a whole lot of
possibilities.

I am probably too ignorant to see it; What exactly is "NOT possible" with
the above use-case snippet?
With currently implemented PreferredClassProvider it is not possible to 
deserialize such an object graph.


Thanks,
Michal


Re: OSGi

2017-01-22 Thread Michał Kłeczek (XPro Sp. z o. o.)

Hi,

Bharath Kumar wrote:


2. We can annotate the proxy object using the OSGi bundle symbolic name and 
version.
3. The RMIClassLoader provider can check whether the proxy bundle is 
installed or not; if it is not installed, it can install it from a configured 
repo (like OBR). We can even use pax-url, which adds different URL handlers.
4. Load the class/proxy class from the correct proxy bundle.


If it were only that easy, it would have been done a long time ago :)

The question "how to annotate a class" is difficult to answer.
I can see two approaches:

1. Declarative specification of _requirements_ to successfully 
deserialize an object (ie. a proxy). The client must have all the 
necessary mechanisms in place to resolve, download and install anything 
required to satisfy these requirements.


2. Fully specifying what code to download (and ideally - how). This is 
the approach of River at this moment. The client simply uses provided 
URLs to create an URLClassLoader (of course adding security and 
preferred resources but in general that's how it works)


Annotating with a bundle id and version is actually half way - it is neither 
self-contained and sufficient to download the code, nor does it allow 
the client to fully decide how the requirement should be satisfied.


And since the client bundle and the service bundle might be resolved 
differently by the container - it may lead to all sorts of seemingly random 
ClassCastExceptions.


I would say a fully declarative approach in OSGi would be to only annotate 
with a package version range (and let the OSGi container do the 
resolution - it is designed to do it efficiently).
Of course then we have a problem with classes that are not from any 
exported package but from the internal bundle class path - then a bundle id 
and version might be necessary.
Then of course there is the question of classes from fragment bundles - 
this is totally unsolvable, since AFAIK there is no way to get the 
fragment information based on the class loader of a class.


And the main issue with this approach IMHO is that it requires a central 
authority that governs the naming and versioning of bundles.


Approach 2 in OSGi would require annotating a class not with a bundle 
but with a piece of information that would allow the client to download 
the _BundleWiring_ of the proxy - it unambiguously specifies the class 
loader graph that the client is required to recreate in order to 
deserialize an object.


When I tried to implement it, it appeared to me that the main issue is 
that it simply makes the whole of OSGi - with its declarative dependency 
resolution - moot: we need to totally bypass it to transfer objects 
between containers anyway. So the only thing we would actually use from 
an OSGi container is its capability for non-hierarchical class loading.


One step towards solving the above issues is to implement the idea of 
code base annotations being objects implementing a well known interface 
- that would at least allow us to abstract away from the exact format of 
the annotation data. But I do not have a clear idea of how to solve the 
other OSGi integration issues.


The use case and the ultimate test to implement is simple - have a 
listener that publishes remote events to a JavaSpace acquired 
dynamically from a lookup service:


class SpacePublisher implements RemoteEventListener, Serializable {
  private final JavaSpace space;
  public void notify(RemoteEvent evt) {
space.write(createEntry(evt), ...);
  }
}

It is NOT possible to do currently. It requires non-hierarchical class 
loading. It is not easy to solve. It would open a whole lot of 
possibilities.


My 2 cents :)
Michal



Re: site revamp

2016-12-22 Thread Michał Kłeczek (XPro Sp. z o. o.)

Hi,

Great job!

Could you link the mother of all Jini guides?
https://jan.newmarch.name/java/jini/tutorial/Jini.html

Thanks,
Michal


Geoffrey Arnold 
December 22, 2016 at 8:44 PM
Hey Zsolt, really fantastic job. Well done!






Zsolt Kúti 
December 22, 2016 at 6:24 PM
Hello,

The revamped site is now staged and can be reviewed here:
http://river.staging.apache.org/

Community decides when to publish it.

Cheers,
Zsolt





Re: Maven Build

2016-11-17 Thread Michał Kłeczek (XPro Sp. z o. o.)
Indeed - it is possible to specify your dependencies as tightly as you 
want in OSGi. The issue is that:

1) it defeats one of the main purposes of OSGi in the first place;
2) nobody does that, which would mean the only objects you could send over 
the wire would be instances of classes from specially crafted bundles.


I agree the same problems are present in standard Java class loading - 
this is one of the reasons why I say River class loading is broken and 
needs to be fixed :)


The discussion about JBoss Modules goals is not really important in this 
context. I am only using it as an implementation of non-hierarchical 
class loading. Module identification and dependency resolution needs to 
be River specific anyway. After doing some experiments and analysis I 
dumped OSGi (which was my first choice and I am really willing to 
discuss OSGi integration - would love to see it done properly). JBoss 
Modules is a "bare bones" library that provides the mechanism while not 
imposing any policy. OSGi container can be (and is) implemented on top 
of it. River can be implemented on top of it as well.


The requirements for me generally are:
- downloaded code must not be executed before it is verified
- it should be easy to provide "composite services" - for example 
RemoteEventListener implementation wrapping JavaSpace proxy and 
publishing events to the space. At this moment this is impossible and in 
general requires non-hierarchical class loading.
- programmers should not be required to provide any River specific 
information inside jar files or any River specific "descriptors". This 
is difficult so if any such descriptor is absolutely necessary - build 
tools should be provided that automate this process.

- no specific deployment format and/or structure should be imposed

The first two are must-haves and the latter two nice-to-haves (or should 
be as close as possible).


Some design decisions (in no particular order):

- codebase is an implementation of an interface (instead of being a 
String). This will make the mechanism extensible and - hopefully - 
future proof.
- codebase implementation classes are going to be resolved recursively 
using exactly the same algorithms (eat your own dog food). Recursion 
depth limited by a (configurable?) constant.
- codebase identity based on object equality (codebase implementation 
required to implement equals and hashcode). The implementation should be 
(directly or indirectly) based on cryptographic hashes equality. It is 
important not to base codebase identity on names only.
- since codebases are objects - they can be verified before use so no 
downloaded code is executed before use. What's more - since classes of 
objects are known to be trusted already - objects may safely verify 
themselves! What it means in general is that all TrustVerifier concept 
and implementation is not needed anymore in River. Objects can simply 
use built in serialization/ObjectInputValidation mechanisms and/or 
implement a River specific interface for this.
- JBoss Modules as an implementation of non-hierarchical class loading - 
BUT it is only one of possible implementations. Implementation based on 
PreferredClassLoader is possible as well (and might be provided as 
legacy fallback - but not that important at this moment since it is 
broken and I see the whole effort as River 3.0 breaking backwards 
compatibility anyway).
- basically "CodeBase" implementation needs to provide only one method: 
"ClassLoader createClassLoader()" (some more might be required to handle 
related issues like granting permissions to codebases so that they can 
create class loaders)
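The decision above to base codebase identity on cryptographic hashes rather than names can be sketched in plain Java. HashedCodeBase is an illustrative name; a real implementation would hash the actual codebase content (e.g. jar bytes):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Illustrative sketch: identity derived from content, not from a name
// or a download location, so relocating the code does not change identity.
final class HashedCodeBase {
    private final byte[] digest;

    HashedCodeBase(byte[] content) {
        try {
            this.digest = MessageDigest.getInstance("SHA-256").digest(content);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-256 must be present in every JRE", e);
        }
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof HashedCodeBase
                && Arrays.equals(digest, ((HashedCodeBase) other).digest);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(digest);
    }
}
```

Two codebases downloaded from different locations but with identical bytes compare equal; any change to the bytes changes the identity.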


Some outstanding design questions:
- how to handle "private" codebases: for example my RemoteEventListener 
wraps a JavaSpace proxy and we want to treat the JavaSpace proxy 
codebase as "local" to the wrapper - it should get a separate 
ClassLoader even if the actual code is the same and is downloaded from 
the same location. It basically means location or content itself is not 
enough to establish identity. We also need something more to distinguish 
them. Changing the identity of the codebase might cause "lost codebase" 
issues so it has to be done properly.
- How to implement permission grants properly? Granting to class loaders 
requires duplicating class loaders just to have separate permission sets 
per object. Maybe something else is required based on object (not class) 
identity. That would also allow solving the problem above.
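The earlier design decision that verified objects "may safely verify themselves" through the built-in serialization machinery can be sketched with the standard ObjectInputValidation callback (SelfVerifying and its invariant are illustrative):

```java
import java.io.*;

// Illustrative self-validating serializable object: the invariant check
// runs after the whole object graph has been deserialized.
class SelfVerifying implements Serializable, ObjectInputValidation {
    private static final long serialVersionUID = 1L;
    private final int value;

    SelfVerifying(int value) { this.value = value; }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        // Register this object for validation once the graph is complete.
        in.registerValidation(this, 0);
        in.defaultReadObject();
    }

    @Override
    public void validateObject() throws InvalidObjectException {
        // Example invariant; a codebase object would verify trust here.
        if (value < 0) {
            throw new InvalidObjectException("negative value: " + value);
        }
    }

    int value() { return value; }
}
```

If validation fails, the top-level readObject() throws InvalidObjectException before the caller ever sees the object.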


Thanks,
Michal

Niclas Hedhman
November 16, 2016 at 11:53 PM
On Wed, Nov 16, 2016 at 8:43 PM, "Michał Kłeczek (XPro Sp. z o. o.)"<
michal.klec...@xpro.biz>  wrote:


3. My comment about OSGi being "non-deterministic" in resolving

dependencies means that the

same bundle installed in two different environments is going to be linked

with different dependent

bundles. I

Re: Maven Build

2016-11-16 Thread Michał Kłeczek (XPro Sp. z o. o.)
ll remain;  the existing
   ClassLoader
>>> isolation and the complexities surrounding multiple copies of
   the same or
>>> different versions of the same classes interacting within the
   same jvm.
>>> Maven will present a new alternative of maximum sharing, where
   different
>>> service principals will share the same identity.
>>>>
>>>> Clearly, the simplest solution is to avoid code download and
   only use
>>> reflection proxy's
>>>>
>>>> An inter process call isn't remote, but there is a question of
   how a
>>> reflection proxy should behave when a subprocess is terminated.
>>>>
>>>> UndeclaredThrowableException seems appropriate.
>>>>
>>>> It would plug in via the existing ClassLoading RMIClassLoader
   provider
>>> mechanism, it would be a client concern, transparent to the
   service or
>>> server.
>>>>
>>>> The existing behaviour would remain default.
>>>>
>>>> So there can be multiple class resolution options:
>>>>
>>>> 1. Existing PrefferedClassProvider.
>>>> 2. Maven class resolution, where maximum class sharing
   exists.  This
>> may
>>> be preferable in situations where there is one domain of trust,
   eg within
>>> one corporation or company.  Max performance.
>>>> 3. Process Isolation.  Interoperation between trusted
   entities, where
>>> code version incompatibilities may exist, because of separate
   development
>>> teams and administrators.  Each domain of trust has it's own
   process
>>> domain.  Max compatibility, but slower.
>>>> 4. OSGi.
>>>>
>>>> There may be occassions where simpler (because developers
   don't need to
>>> understand ClassLoaders), slow, compatible and reliable wins
   over fast
>> and
>>> complex or broken.
>>>>
>>>> A subprocess may host numerous proxy's and codebases from one
   principal
>>> trust domain (even a later version of River could be
   provisioned using
>>> Maven).  A subprocess would exist for each trust domain. So if
   there are
>>> two companies, code from each remains isolated and communicates
   only
>> using
>>> common api.  No unintended code versioning conflicts.
>>>>
>>>> This choice would not prevent or exclude other methods of
    >> communication,
>>> the service, even if isolated within it's own process will still
>>> communicate remotely over the network using JERI, JSON etc. 
   This is

>>> orthogonal to and independant of remote communication protocols.
>>>>
>>>> OSGi would of course be an alternative option, if one wished
   to execute
>>> incompatible versions of libraries etc within one process, but
   different
>>> trust domains will have a shared identity, again this may not
   matter
>>> depending on the use case.
>>>>
>>>> Cheers,
>>>>
>>>> Peter.
>>>>
>>>> Sent from my Samsung device.
>>>>
>>>>  Include original message
>>>>  Original message 
>>>> From: "Michał Kłeczek (XPro Sp. z o. o.)"
   <michal.klec...@xpro.biz <javascript:;>
>>> <javascript:;>>
>>>> Sent: 15/11/2016 10:30:29 pm
>>>> To: dev@river.apache.org <javascript:;> <javascript:;>
>>>> Subject: Re: Maven Build
>>>>
>>>> While I also thought about out-of-process based mechanism for
   execution
>>> of dynamically downloaded code, I came to the conclusion that
   in the
>>> context of River/Java in-process mechanism is something that
   MUST be done
>>> right. All other things can (and should) be built on that.
>>>>
>>>> I think that the proposal to implement "remote calls on smart
   proxy
>>> interfaces that aren't remote" is somewhat a misnomer. The call
   is either
>>> remote or local - you cannot have both at the same time. AFAIK Jini
>>> community always believed there is no possibility to have
   local/remote
>>> transparency. That is why there exists java.rmi.Remote marker
   interface
>> in
>>> the first place.
>>>>
>>>> T

Re: Maven Build

2016-11-15 Thread Michał Kłeczek (XPro Sp. z o. o.)
While I also thought about out-of-process based mechanism for execution 
of dynamically downloaded code, I came to the conclusion that in the 
context of River/Java in-process mechanism is something that MUST be 
done right. All other things can (and should) be built on that.


I think that the proposal to implement "remote calls on smart proxy 
interfaces that aren't remote" is something of a misnomer. The call is 
either remote or local - you cannot have both at the same time. AFAIK 
the Jini community has always believed there is no possibility of 
local/remote transparency. That is why there exists the java.rmi.Remote 
marker interface in the first place.
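The marker interface makes that distinction visible in the type system: a remote contract extends Remote and declares RemoteException on every method, while a local contract does not. A minimal sketch (the BankAccount and LocalCache interfaces are hypothetical, for illustration only):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

public class RemoteMarkerDemo {
    // Hypothetical remote contract: extends Remote, and every method must
    // declare RemoteException, forcing callers to handle partial failure.
    public interface BankAccount extends Remote {
        long balance() throws RemoteException;
    }

    // Hypothetical local contract: a plain call, no failure mode in the signature.
    public interface LocalCache {
        long cachedBalance();
    }

    // Exporters can use the marker to decide what is remotely callable.
    public static boolean isRemote(Class<?> iface) {
        return Remote.class.isAssignableFrom(iface);
    }

    public static void main(String[] args) {
        System.out.println(isRemote(BankAccount.class)); // true
        System.out.println(isRemote(LocalCache.class));  // false
    }
}
```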


There is also the question about the level of isolation you want to 
achieve. Simple "out-of-process" is not enough, chroot is not enough, 
CGROUPS/containers/jails/zones might not be enough, virtual machines 
might not be enough :) - going the route you propose opens up a whole 
world of new questions to answer. At the same time you lose the most 
important advantages of in-process execution:
- simplicity of communication between components (basic function call, 
no need to do anything complicated to implement callbacks etc.)

- performance

In the end you either standardize on a well known set of communication 
protocols (such as JERI) OR you say "end of protocols" by allowing 
execution of dynamically downloaded code in-process.
If River chooses the first route, IMHO it is going to fail, since it 
does not offer anything competitive compared to the current mainstream 
HTTP(S)/REST/JSON stack.


Thanks,
Michal

Peter 
November 15, 2016 at 8:28 AM
I've been thinking about process isolation (instead of using 
ClassLoaders for isolation).  Typically, smart proxies are isolated 
in their own ClassLoader, with their own copies of classes; however, 
with Maven, a lot more class sharing occurs.  Since River uses 
codebase annotations for identity, using Maven codebase annotations 
will result in proxies from different services sharing identity.


A better way to provide for different identities coexisting on the 
same node would be to use subprocess JVMs for each service's server 
principal identity, keeping classes from different services in 
different processes.


This way, each principal would have their own process & Maven 
namespace for their proxies.


Presently JERI only exports interfaces in reflection proxies that 
implement Remote, so I'd need an endpoint that can export all 
interfaces across a local interprocess connection, to allow remote 
calls on smart proxy interfaces that aren't remote.
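One way to lift that restriction can be sketched with a plain reflection proxy spanning all of an object's interfaces, not only the Remote ones. The names here are illustrative, and the invocation handler stands in for a real endpoint that would marshal calls across the local interprocess connection:

```java
import java.io.Serializable;
import java.lang.reflect.Proxy;

public class ExportAllDemo {
    public interface Audit { int failureCount(); }   // note: not java.rmi.Remote

    // Stand-in for a smart proxy implementation class.
    public static class SmartProxy implements Audit, Serializable {
        public int failureCount() { return 0; }
    }

    // Wrap impl in a reflection proxy over ALL of its interfaces, not only
    // those extending Remote. A real endpoint would marshal the call in
    // the handler instead of invoking impl directly.
    public static Object exportAll(Object impl) {
        return Proxy.newProxyInstance(
                impl.getClass().getClassLoader(),
                impl.getClass().getInterfaces(),
                (proxy, method, args) -> method.invoke(impl, args));
    }

    public static void main(String[] args) {
        Audit exported = (Audit) exportAll(new SmartProxy());
        System.out.println(exported.failureCount()); // call routed through the handler
    }
}
```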


This also means that memory resource consumption of smart proxies can 
be controlled by the client, and a smart proxy's process can be killed 
without killing the client JVM.
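The lifecycle-control part of this is already expressible with plain ProcessBuilder. A rough sketch, using `-version` as a stand-in for a real proxy-host main class (the class and method names are invented for illustration):

```java
import java.util.concurrent.TimeUnit;

public class ProxyHostLauncher {
    // Launch a child JVM that would host one service's smart proxy, then
    // show that the client can bound its lifetime and kill it independently.
    public static boolean runAndReap() throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                System.getProperty("java.home") + "/bin/java", "-version");
        Process proxyJvm = pb.start();
        if (!proxyJvm.waitFor(30, TimeUnit.SECONDS)) {
            proxyJvm.destroyForcibly();   // child dies; this JVM survives
            proxyJvm.waitFor();
        }
        return true;                      // client JVM still alive here
    }

    public static void main(String[] args) throws Exception {
        System.out.println("client survived child JVM: " + runAndReap());
    }
}
```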


Cheers,

Peter.



Dawid Loubser 
November 15, 2016 at 8:50 AM
As a very heavy Maven user, I wanted to say that this is great news.
This is encouraging indeed!

Dawid


Peter 
November 15, 2016 at 4:08 AM
Some other news that might encourage participation: I've been working 
on Dennis Reedy's script to modularise the codebase. I haven't run the 
test suites against it, it isn't generating stubs yet, and I'll 
need to modify the platform modules for the IoT effort after the 
conversion is complete.


Here's the output of the River maven build:

Reactor Summary:

River-Internet Project  SUCCESS [0.689s]
Module :: River Policy  SUCCESS [8.395s]
Module :: River Resources . SUCCESS [0.607s]
Module :: River Platform .. SUCCESS [23.521s]
Module :: River Service DL Library  SUCCESS [8.999s]
Module :: River Service Library ... SUCCESS [8.014s]
Module :: River Service Starter ... SUCCESS [3.930s]
Module :: River SharedGroup Destroy ... SUCCESS [3.018s]
Module :: Outrigger ... SUCCESS [0.056s]
Module :: Outrigger Service Download classes .. SUCCESS [2.416s]
Module :: Outrigger Service Implementation  SUCCESS [4.118s]
Module :: Outrigger Snaplogstore .. SUCCESS [3.273s]
Module :: Lookup Service .. SUCCESS [0.048s]
Module :: Reggie Service Download classes . SUCCESS [3.966s]
Module :: Reggie Service Implementation ... SUCCESS [3.621s]
Module :: Mahalo .. SUCCESS [0.436s]
Module :: Mahalo Service Download classes . SUCCESS [2.059s]
Module :: Mahalo Service Implementation ... SUCCESS [4.175s]
Module :: Mercury the Event Mailbox ... SUCCESS [0.497s]
Module :: Mercury Service Download classes  SUCCESS [3.622s]
Module :: Mercury Service Implementation .. SUCCESS [3.562s]
Module :: Norm  SUCCESS [0.013s]
Module :: Norm Service Download classes 

Re: another interesting link

2016-07-26 Thread Michał Kłeczek (XPro Sp. z o. o.)
I am well aware of StartNow since that is the first Jini "support 
library" I have used. Indeed - it is really easy to use.
But it is only one side of the issue - the API and some support 
code that is supposed to be linked statically with the service 
implementation.


What I am talking about is actually "externalizing" most aspects of a 
service implementation so that:
- you do not have to package any (for some meaning of "any" :) ) 
libraries statically (since all code can be downloaded dynamically)
- you do not have to provide any (for some meaning of "any" :) ) static 
configuration (ie. configuration files) - a service should simply use 
other services and "reconfigure" itself when those change
It would go towards some kind of an "agent architecture", with movable 
objects (ie. "services") being "hosted" by... well, other movable 
objects :). The idea is less appealing today when we have all the cloud 
infrastructure, virtualization, software defined networking etc. 
Nevertheless still interesting IMHO.


Thanks,
Michal

Gregg Wonderly 
July 26, 2016 at 1:28 PM
My StartNow project on Java.net aimed directly at this mode of 
operation a decade ago. I wanted conventions that provided use of 
configuration with defaults.


You just extend PersistantJiniService and call start(serviceName). 
Subclasses could override the default implementations of how the 
conventions in the APIs create implementation objects, through code or 
configuration.


The intent was to create THE API to provide the conventions of service 
creation.


We have a Window/JWindow class and don't have to do all the decorating 
ourselves.


Jini service construction should work the same way!

Gregg

Sent from my iPhone


Tom Hobbs 
July 26, 2016 at 11:50 AM
I would say the comment on that blog sums everything about Jini up.

It’s just too hard to set up and get working.

That’s why I think simplifying reggie is possibly a first step. Make a 
/small/ and simple reggie jar that just handles service registration 
and not proxy downloading etc. Make it really easy to register your 
services without needing class loaders etc, preferably via some 
convention rather than configuration. (This is what I’m trying to find 
the time to work on.)


I’d really like to be able to type;

$ java -jar reggie.jar

And have a reggie running with all the defaults ready to register my 
services with. Or perhaps, as an option;


$ java -jar reggie.jar —ipv6

Security, class loading, proxy downloading and all the rest of it 
could then be put back in by specifying more advanced configuration 
options.


My Scala service would be great if I could define it just as;

object MyCoolService extends LazyLogging with ReggieRegistration with 
ReggieLookup


Or in Java with default interface methods;

class MyCoolService implements ReggieRegistration, ReggieLookup

And that would be it, congratulations you’ve started a reggie and 
registered your service and have methods available to help you find 
other services.
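Java 8 default methods make that shape expressible today. A hypothetical sketch - ReggieRegistration and ReggieLookup do not exist in River, and the method bodies are placeholders for real registrar interaction:

```java
public class ConventionDemo {
    public interface ReggieRegistration {
        default String register() {
            // A real implementation would discover a registrar and register a proxy.
            return "registered " + getClass().getSimpleName();
        }
    }

    public interface ReggieLookup {
        default String lookup(String serviceName) {
            // A real implementation would query the registrar for a matching proxy.
            return "proxy-for-" + serviceName;
        }
    }

    // The service picks up registration and lookup purely by convention.
    public static class MyCoolService implements ReggieRegistration, ReggieLookup { }

    public static void main(String[] args) {
        MyCoolService s = new MyCoolService();
        System.out.println(s.register());        // registered MyCoolService
        System.out.println(s.lookup("printer")); // proxy-for-printer
    }
}
```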


This would satisfy use cases where the network was private and/or 
trusted. And security on top would, ideally, be up to configuration 
again or perhaps injecting some alternative implementation of some 
bean somewhere. But the core premise is, make it easy to startup, demo 
and see if it fits what you want it for.





Peter 
July 26, 2016 at 3:58 AM
Note the comment about security on the blog?

Steps I've taken to simplify security (that could also be adopted by 
river):
1. Deprecate proxy trust, replace with authenticate service prior to 
obtaining proxy.
2. proxy codebase jars contain a list of requested permissions to be 
granted to the jar signer and url (client need not know in advance).
3. Policy file generation, least privilege principles (need to set up 
command line based output for admin verification of each permission 
during policy generation).

4. Input validation for serialization.
5. DownloadPermission automatically granted to authenticated 
registrars (to signer and url, very specific) during multicast discovery.


More work is needed to simplify certificate management.

Regards,

Peter.
Sent from my Samsung device.

  Include original message
 Original message 
From: Peter 
Sent: 26/07/2016 10:27:59 am
To: dev@river.apache.org 
Subject: another interesting link

https://blogs.oracle.com/hinkmond/entry/jini_iot_edition_connecting_the


Sent from my Samsung device.







Re: another interesting link

2016-07-26 Thread Michał Kłeczek (XPro Sp. z o. o.)
In my dreams I always thought of "self configuring" and "adapting" 
services. So instead of reading a "configuration", a service would simply 
search for other services and use them - an exporter service being an example.
Ideally - the only thing that should be configured would be the 
"identity" (ie. credentials) of a service principal(s).


That would be possible once dynamic code downloading is done right :)

And one more remark - a dynamic proxy does not imply avoiding codebases 
and code downloading. You still have to download the service interface 
classes and the invocation handler implementation class.


Thanks,
Michal

Peter 
July 26, 2016 at 12:43 PM

Perhaps a script that detects the environment, asks a few questions 
and creates the config files?  These can be edited for more complex 
configurations.


I've added a couple of default methods to ServiceRegistrar; the new 
lookup method doesn't return the service proxies, so the registrar is 
basically used just for search - the client then contacts each service 
it's interested in.


Starting off with simple services that only use dynamic proxies 
(java.lang.reflect.Proxy instances) avoids codebases.
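The reason java.lang.reflect.Proxy sidesteps codebases is that the proxy class is synthesized locally by the client's JVM - only the service interface must already be resolvable on the client side. A minimal sketch (the Echo interface and the handler body are illustrative, not River API):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class DynamicProxyDemo {
    // Assumed to be on the client's classpath already - no download needed.
    public interface Echo { String echo(String msg); }

    public static Echo clientSideProxy() {
        // No downloaded proxy class: the JVM generates it locally, so no
        // codebase annotation is needed for the proxy object itself.
        InvocationHandler handler = (proxy, method, args) -> {
            // A real handler would marshal the call to the service endpoint here.
            return "echo:" + args[0];
        };
        return (Echo) Proxy.newProxyInstance(
                Echo.class.getClassLoader(), new Class<?>[] { Echo.class }, handler);
    }

    public static void main(String[] args) {
        System.out.println(clientSideProxy().echo("hi")); // echo:hi
    }
}
```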


Reggie still requires a codebase; however, if we sign it, this 
addresses code trust.


Regards,

Peter.

Sent from my Samsung device

  Include original message
 Original message 
From: Tom Hobbs 
Sent: 26/07/2016 07:50:43 pm
To: dev@river.apache.org
Subject: Re: another interesting link
