Re: Patricia Shanahan

2021-07-19 Thread Gregg Wonderly
That’s sad news Roy!  Thanks for letting everyone know and for your warm 
insight into her contributions!

Gregg Wonderly

> On Jul 19, 2021, at 11:56 AM, Roy T. Fielding  wrote:
> 
> We received the sad news last week that our friend and PMC member,
> Patricia Shanahan, has passed away peacefully after a long battle
> with cancer. I have put together a memorial page for her at
> 
>   https://www.apache.org/memorials/patricia_shanahan.html 
> 
> and will eventually update the River site as well. Please let me know
> if you would like to add anything to that page.
> 
> Roy
> 



Serializing data in River

2021-03-05 Thread Gregg Wonderly
I don’t recall if we’ve talked about Google Protocol Buffers as another means 
of serialization.  This seems like something that could be investigated as a 
gateway to support for many other languages/platforms that already have such 
support.

https://developers.google.com/protocol-buffers 


Gregg Wonderly
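For reference, a minimal Java round-trip sketch (an illustration only, assuming the
protobuf-java runtime is on the classpath). It uses the well-known StringValue wrapper
type so it compiles without a generated schema; a real service would define its own
messages in a .proto file.

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.StringValue;

public class ProtoRoundTrip {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        StringValue original = StringValue.newBuilder().setValue("hello river").build();
        byte[] wire = original.toByteArray();           // language-neutral wire format
        StringValue copy = StringValue.parseFrom(wire); // readable from any protobuf binding
        System.out.println(copy.getValue());
    }
}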

Re: Thinking about Extensible Serialization support.

2021-01-30 Thread Gregg Wonderly
ter, as well as discovering the 
> calling class, and to provide access to the stream for writing, optionally 
> supported).   PutField is simply a name -> value list of internal state, 
> however the PutField parameter would need to be caller sensitive, so that 
> each class in an object's inheritance hierarchy has its own private state 
> namespace.
> 
> So basically a different Serialization protocol layer would have 
> implementations of ObjectInput and ObjectOutput and access the objects passed 
> via the Invocation layer using the public Serialization Layer API.
> 
> Currently I have not implemented any such serialization API.
> 
> -- 
> Regards,
> Peter
> 
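An illustrative sketch of what such a pluggable serialization provider interface might
look like; all names and signatures here are hypothetical, not River API.

import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.io.OutputStream;

public interface SerializationProvider {

    /** Wrap a transport-layer output stream in a protocol-specific ObjectOutput. */
    ObjectOutput createMarshalOutput(OutputStream out, ClassLoader defaultLoader)
            throws IOException;

    /** Wrap a transport-layer input stream in a protocol-specific ObjectInput. */
    ObjectInput createMarshalInput(InputStream in, ClassLoader defaultLoader)
            throws IOException;
}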
>> On 30/01/2021 10:25 am, Gregg Wonderly wrote:
>> Can you speak to why it would be different than the stream of bytes that 
>> existing serialization creates through Object methods to help clarify?
>> 
>> Gregg
>> 
>> Sent from my iPhone
>> 
>>>> On Jan 29, 2021, at 3:46 PM, Peter Firmstone  
>>>> wrote:
>>> 
>>> A question came up recently about supporting other serialization protocols.
>>> 
>>> JERI currently has three layers to its protocol stack:
>>> 
>>> Invocation Layer,
>>> Object identification layer
>>> Transport layer.
>>> 
>>> Java Serialization doesn't have a public API, I think this would be one 
>>> reason there is no serialization layer in JERI.
>>> 
>>> One might wonder, why does JERI need a serialization layer, people can 
>>> implement an Exporter, similar to IIOP and RMI.  Well, the answer is quite 
>>> simple, it allows separation of the serialization layer from the transport 
>>> layer, eg TLS, TCP, Kerberos or other transport layer people may wish to 
>>> implement.   Currently someone implementing an Exporter would also require 
>>> a transport layer and that may or may not already exist.
>>> 
>>> In recent years I re-implemented de-serialization for security reasons, 
>>> while doing so, I created a public and explicit de-serialization API, I 
>>> have not implemented an explicit serialization API, it, or something 
>>> similar could easily be used as a serialization provider interface, which 
>>> would allow wrappers for various serialization protocols to be implemented.
>>> 
>>> -- 
>>> Regards,
>>> Peter Firmstone
>>> 0498 286 363
>>> Zeus Project Services Pty Ltd.
>>> 



Re: Thinking about Extensible Serialization support.

2021-01-29 Thread Gregg Wonderly
Can you speak to why it would be different than the stream of bytes that 
existing serialization creates through Object methods to help clarify?

Gregg

Sent from my iPhone

> On Jan 29, 2021, at 3:46 PM, Peter Firmstone  
> wrote:
> 
> A question came up recently about supporting other serialization protocols.
> 
> JERI currently has three layers to its protocol stack:
> 
> Invocation Layer,
> Object identification layer
> Transport layer.
> 
> Java Serialization doesn't have a public API, I think this would be one 
> reason there is no serialization layer in JERI.
> 
> One might wonder, why does JERI need a serialization layer, people can 
> implement an Exporter, similar to IIOP and RMI.  Well, the answer is quite 
> simple, it allows separation of the serialization layer from the transport 
> layer, eg TLS, TCP, Kerberos or other transport layer people may wish to 
> implement.   Currently someone implementing an Exporter would also require a 
> transport layer and that may or may not already exist.
> 
> In recent years I re-implemented de-serialization for security reasons, while 
> doing so, I created a public and explicit de-serialization API, I have not 
> implemented an explicit serialization API, it, or something similar could 
> easily be used as a serialization provider interface, which would allow 
> wrappers for various serialization protocols to be implemented.
> 
> -- 
> Regards,
> Peter Firmstone
> 0498 286 363
> Zeus Project Services Pty Ltd.
> 



Re: Git migration

2021-01-18 Thread Gregg Wonderly
I think that separate repositories is a good idea.  It might be interesting for 
one of those repositories to require a specific layout of the repositories and 
provide a script to “pull” all the correlated versions etc.  I sometimes 
struggle with all the variations on how this gets done.  At some point we need 
to pull all the details into view in a way that is also “easy” to consume.

Gregg

> On Jan 18, 2021, at 4:44 PM, Peter Firmstone  
> wrote:
> 
> Hello River folk,
> 
> Just an update on progress, the git mirror was out of date, it has been 
> deleted to clear the way for copying our current SVN.
> 
> https://issues.apache.org/jira/browse/INFRA-21216?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel
> 
> Also I think it would be cleaner to have separate git repositories for 
> separate components, such as the ldj test suite or other contributions that 
> aren't part of the main release, so that River is easier for new users to 
> become familiar with, rather than having a super repository that contains all 
> components as SVN does currently.
> 
> I welcome suggestions as to how the git repositories should be structured.
> 
> -- 
> Regards,
> Peter Firmstone
> 0498 286 363
> Zeus Project Services Pty Ltd.
> 



Re: SSL Secure Endpoints never fully utilised by River services

2018-04-21 Thread Gregg Wonderly
There are lots of details around lost login context.  I had to wire up some of 
that in my swing/awt infrastructure.  This is required so that those 
event/callbacks also assert the right credentials.

Gregg

Sent from my iPhone
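A minimal sketch of the idea, assuming a logged-in Subject and a remote listener are
available in context (illustrative only, not the actual wiring referred to above).

import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;
import net.jini.core.event.RemoteEvent;
import net.jini.core.event.RemoteEventListener;

public class SubjectPreservingNotifier {
    /** Deliver the event with the service's credentials rather than anonymously. */
    public static void notifyWithSubject(final Subject serviceSubject,
                                         final RemoteEventListener listener,
                                         final RemoteEvent event) throws Exception {
        Subject.doAsPrivileged(serviceSubject, (PrivilegedExceptionAction<Void>) () -> {
            listener.notify(event);   // secure endpoint can now authenticate the caller
            return null;
        }, null);
    }
}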

> On Apr 21, 2018, at 1:06 AM, Peter  wrote:
> 
> To be more accurate it limits the call backs to anon client connections, 
> which is vulnerable to man in the middle attacks.
> 
> The way to fix this is to ensure the login context is preserved and utilised 
> when making call backs.
> 
>> On 21/04/2018 9:57 AM, Peter wrote:
>> It's clear to me now that the Jini team never fully completed the 
>> integration of JERI with Jini.
>> 
>> The evidence: call backs to event listeners are not run with the service's 
>> logged in subject, this prevents secure endpoints from establishing 
>> connections for call backs.
>> 
>> I have rectified this in my local code and am running tests.
>> 
>> Just thought you might be interested to know.
>> 
>> Regards,
>> 
>> Peter.
>> 
> 


Re: Prioritize Modernizing The Specification

2018-02-03 Thread Gregg Wonderly
The principal benefit of Jini is mobile code.  Everything else is just network 
communications.  The primary problem is inexperienced developers or web 
developers who just want to send a user interface around.  ServiceUI makes that 
possible in Jini, but the lease services along with transaction services and 
all natures of mobile code allow you to create the complete UI/UX in one 
language with the ability to not write CSS, HTML and JavaScript all glued 
together.  Instead, you get an end to end, uniform development and runtime 
environment.

The Web is full of mobile code in the form of JavaScript and other dynamically 
loaded and bound pieces.  But it suffers from single threaded user interfaces 
and the limitations of the web, in general, around network restrictions.

Gregg

Sent from my iPhone

> On Feb 1, 2018, at 5:39 AM, Peter  wrote:
> 
> Hello Gerard,
> 
> Help is always welcomed, the Jini standards are quite old, so yes, I think 
> it's an area definitely in need of some love.  Documentation or standards 
> that explain the philosophies / design patterns River is based on, I can see 
> how that adds appeal.   I'll certainly jump in and help with reviews, there 
> might be others interested in becoming involved as well.
> 
> Thanks,
> 
> Peter.
> 
>> On 1/02/2018 12:09 PM, Gerard Fulton wrote:
>> Hi Guys,
>> 
>> 
>> 
>> I wanted to float an idea by list that has been in my head for several
>> years. The idea is to prioritize the modernization of the River
>> specification into a set of language and transport agnostic architectural
>> principles. River currently supports architectural concepts like discovery,
>> events, proxies and more! In reality, both the implementation language and
>> communication transport are minor details. For example a discovery service
>> implementation could be backed by DNS and exposed by a WebSockets
>> communications transport protocol. In my opinion the most important part of
>> the DNS discovery service example is the application protocol which
>> potentially could be defined by a request/response model.
>> 
>> 
>> 
>> As a Java developer, I fear that the wider adoption and growth of River are
>> being impeded by our laser-like focus on River's Java reference
>> implementation.
>> 
>> 
>> 
>> 
>> 
>> Feedback is a gift!
>> 
>> 
>> 
>> -Gerard Fulton
>> 
> 


Re: OSGi [PREVIOUSLY]Re: Maven Build

2017-10-02 Thread Gregg Wonderly
I like the constructor argument mechanism as it becomes a thread-local detail, 
rather than a VM-level detail.  Too many “platform” things in Java have ended up 
being “service type platform” like, instead of “multiple services platform” like. 
Keeping things specific to the thread of execution helps us support a “multiple 
services platform” more readily.  It also affords CodeSource-level security 
controls more readily.

Gregg
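An illustrative contrast of the point above (all names hypothetical): a VM-level choice
is a single static setting shared by every service in the JVM, while a constructor
argument keeps the choice local to the configured instance and its thread of execution.

public final class PreparerWiringSketch {

    interface ProxyClassLoaderProvisioner { /* see the interface quoted below */ }

    // VM-level: one global choice for everything running in the JVM.
    static final class GlobalWiring {
        static volatile ProxyClassLoaderProvisioner provisioner;
    }

    // Instance-level: each configured preparer carries its own provisioner.
    static final class ScopedPreparer {
        private final ProxyClassLoaderProvisioner provisioner;
        ScopedPreparer(ProxyClassLoaderProvisioner provisioner) {
            this.provisioner = provisioner;
        }
    }
}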

> On Oct 2, 2017, at 5:01 AM, Peter  wrote:
> 
> I'm considering api that may be required for improved OSGi support.
> 
> In order for an OSGi environment to deserialize a proxy, it needs to first 
> install a bundle for the proxy and resolve any dependencies.  For OSGi a 
> ProxyPreparer must first locally marshall (create a MarshalledInstance) a 
> java.lang.reflect.Proxy (implementing ServiceProxyAccessor, 
> ServiceCodebaseAccessor, ServiceAttributesAccessor) instance (returned by 
> SafeServiceRegistrar.lookup) and unmarshall it, passing in the bundle 
> ClassLoader as the default ClassLoader.  This ensures the ServiceProxy's 
> Bundle ClassLoader becomes the default ClassLoader for the underlying JERI 
> endpoint.  Only at this time will a call 
> ServiceProxyAccessor.getServiceProxy() result in a correctly unmarshalled 
> proxy.   If this step isn't performed the default ClassLoader for the JERI 
> Endpoint will be the SafeServiceRegistrar's proxy ClassLoader, and a 
> ClassNotFoundException will be thrown when calling 
> ServiceProxyAccessor.getServiceProxy().
> 
> Given there's some complexity in the above, it would be prudent to implement 
> this in say a convenience class, perhaps called OSGiProxyPreparer, so 
> developers don't have to (boilerplate).
> 
> But we still need something from the underlying modular framework, to install 
> a Bundle for the service proxy and to ensure OSGiProxyPreparer receives a 
> ClassLoader, while avoiding a dependency on OSGi.   The OSGiProxyPreparer 
> could accept a ProxyClassLoaderProvisioner (see below) as a constructor 
> argument?   Keep in mind the ProxyPreparer is a configuration concern.
> 
> The discovery infrastructure (LookupLocator, LookupLocatorDiscovery and 
> LookupDiscovery classes) also needs a way to receive a ClassLoader to 
> deserialize lookup service proxies.  The codebase URL can be provided in a 
> multicast response, the same interface would need to be used as in 
> ProxyPreparation.
> 
> Please provide feedback, thoughts or suggestions.
> 
> /*
>  * Copyright 2017 The Apache Software Foundation.
>  *
>  * Licensed under the Apache License, Version 2.0 (the "License");
>  * you may not use this file except in compliance with the License.
>  * You may obtain a copy of the License at
>  *
>  *      http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package net.jini.loader;
> 
> import java.io.IOException;
> 
> /**
>  * Allows client code to implement provisioning of a ClassLoader for a URI path,
>  * where the use of codebase annotations is unsuitable.
>  *
>  * The first URI in the path, must be the proxy URI, any additionally appended
>  * URI are dependants, it's possible these may not be loaded directly by the
>  * returned ClassLoader, but their classes should still be visible and
>  * resolvable from it.
>  * Only the proxy URI classes are guaranteed to be loaded by a returned
>  * ClassLoader.
>  * Dependant URI are not guaranteed to be loaded if suitable versions are
>  * already.
>  *
>  * Some systems, notably OSGi, manage ClassLoader visibility differently than
>  * Java's typical hierarchical ClassLoader relationships, implementors of this
>  * interface may implement this to provision codebases into a ClassLoader,
>  * prior to deserializing a proxy into the resolved ClassLoader.
>  *
>  * A proxy may be an instance of {@link java.lang.reflect.Proxy} and have
>  * no associated codebase, in this case, a new ClassLoader should be returned
>  * into which a dynamic proxy class can be provisioned.
>  *
>  * The implementing class must have {@link java.lang.RuntimePermission}
>  * "getClassLoader" and "createClassLoader".  The implementing class must
>  * ensure that the caller has {@link java.lang.RuntimePermission}
>  * "createClassLoader" as well.
>  */
> public interface ProxyClassLoaderProvisioner {
> 
>     /**
>      * Create a new ClassLoader, given a space separated list of URL's.
>      *
>      * The first URL must contain the proxy class.
>      *
>      * The 
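A rough sketch of the re-marshalling step described above, assuming a bundle
ClassLoader has already been provisioned for the proxy codebase; MarshalledInstance is
the net.jini.io class, the rest of the names here are illustrative only.

import net.jini.io.MarshalledInstance;

public final class BundleUnmarshalSketch {

    /** Re-marshal locally, then unmarshal with the bundle loader as the default. */
    public static Object unmarshalIntoBundle(Object bootstrapProxy, ClassLoader bundleLoader)
            throws Exception {
        MarshalledInstance mi = new MarshalledInstance(bootstrapProxy);
        // verifyCodebaseIntegrity is false here purely for brevity in this sketch.
        // The caller would then invoke ServiceProxyAccessor.getServiceProxy() on the
        // result to obtain the correctly unmarshalled service proxy.
        return mi.get(bundleLoader, false, null, null);
    }
}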

Re: OSGi [PREVIOUSLY]Re: Maven Build

2017-09-27 Thread Gregg Wonderly
Do you have anything planned around ServiceUI?  I really use ServiceUI as a 
discovery mechanism to find services which export a UI that a user can interact 
with.  What can happen at registration time, besides Entry specification to 
help with codebase where ServiceUI bits are at?  Are you just relying on the 
“service” setup to include all of that detail, or is there something we can do, 
to wrap ServiceUI into the mechanism you are talking about here?

Gregg

> On Sep 27, 2017, at 3:59 AM, Peter  wrote:
> 
> Some updates on thoughts about OSGi:
> 
>  1. In JGDMS, SafeServiceRegistrar (extends ServiceRegistrar),
> ServiceDiscoveryManager and ProxyPreparer allow provisioning of
> OSGi bundles for Jini services.
>  2. SafeServiceRegistrar lookup results contain only instances of
> java.lang.reflect.Proxy (implementing ServiceProxyAccessor,
> ServiceCodebaseAccessor, ServiceAttributesAccessor) which a user
> remarshalls and unmarshalls into their OSGi bundle provisioned
> ClassLoader, prior to retrieving the actual service proxy using
> ServiceProxyAccessor.
>  3. As a result different service principals using identical proxy
> codebases, needn't share a ClassLoader, addressing the trust
> domain issue previously alluded to.
>  4. There's no current mechanism to allow provisioning of a bundle for
> a Registrar.
>  5. Existing discovery providers accept ClassLoader arguments for
> unmarshalling Registrar's.
>  6. Existing Multicast responses allow for additional information to
> be appended; a codebase resource for example.
>  7. LookupLocator, LookupDiscovery and LookupLocatorDiscovery classes
> don't utilise discovery providers ClassLoader arguments.
>  8. Need to allow bundles to be provisioned for lookup services after
> multicast discovery, by exposing discovery provider ClassLoader
> arguments and allowing client to manage provisioning of bundle
> into a ClassLoader, then passing that in during unicast discovery.
>  9. Don't break backward compatibility.
> 
> Cheers,
> 
> Peter.
> 
> On 16/11/2016 4:18 PM, Dawid Loubser wrote:
>> +1 for OSGi providing the best solution to the class resolution problem,
>> though I think some work will have to be done around trust, as you say.
>> 
>> 
>> On 16/11/2016 02:23, Peter wrote:
>>> 
>>> The conventional alternatives will remain;  the existing ClassLoader 
>>> isolation and the complexities surrounding multiple copies of the same or 
>>> different versions of the same classes interacting within the same jvm.  
>>> Maven will present a new alternative of maximum sharing, where different 
>>> service principals will share the same identity.
>>> 
>>> Clearly, the simplest solution is to avoid code download and only use 
>>> reflection proxy's
>>> 
>>> An inter process call isn't remote, but there is a question of how a 
>>> reflection proxy should behave when a subprocess is terminated.
>>> 
>>> UndeclaredThrowableException seems appropriate.
>>> 
>>> It would plug in via the existing ClassLoading RMIClassLoader provider 
>>> mechanism, it would be a client concern, transparent to the service or 
>>> server.
>>> 
>>> The existing behaviour would remain default.
>>> 
>>> So there can be multiple class resolution options:
>>> 
>>> 1. Existing PreferredClassProvider.
>>> 2. Maven class resolution, where maximum class sharing exists.  This may be 
>>> preferable in situations where there is one domain of trust, eg within one 
>>> corporation or company.  Max performance.
>>> 3. Process Isolation.  Interoperation between trusted entities, where code 
>>> version incompatibilities may exist, because of separate development teams 
>>> and administrators.  Each domain of trust has its own process domain.  Max 
>>> compatibility, but slower.
>>> 4. OSGi.
>>> 
>>> There may be occasions where simpler (because developers don't need to 
>>> understand ClassLoaders), slow, compatible and reliable wins over fast and 
>>> complex or broken.
>>> 
>>> A subprocess may host numerous proxy's and codebases from one principal 
>>> trust domain (even a later version of River could be provisioned using 
>>> Maven).  A subprocess would exist for each trust domain. So if there are 
>>> two companies, code from each remains isolated and communicates only using 
>>> common api.  No unintended code versioning conflicts.
>>> 
>>> This choice would not prevent or exclude other methods of communication, 
>>> the service, even if isolated within its own process will still 
>>> communicate remotely over the network using JERI, JSON etc.  This is 
>>> orthogonal to and independent of remote communication protocols.
>>> 
>>> OSGi would of course be an alternative option, if one wished to execute 
>>> incompatible versions of libraries etc within one process, but different 
>>> trust domains will have a shared identity, again this may not matter 
>>> depending on the use case.
>>> 
>>> Cheers,
>>> 
>>> Peter.
>>> 

Re: Roadmap proposal

2017-03-18 Thread Gregg Wonderly
Looks like a great plan of attack!

Gregg

Sent from my iPhone

> On Mar 17, 2017, at 11:15 PM, Peter Firmstone  
> wrote:
> 
> Proposed Release roadmap:
> 
> River 3.0.1 - thread leak fix
> River 3.1 - Modular build restructure (& binary release)
> River 3.2 - Input validation 4 Serialization, delayed unmarshalling & safe 
> ServiceRegistrar lookup service.
> River 3.3 - OSGi support
> 
> Changes in the modular build and delayed unmarshalling would set us up for 
> later OSGi support.
> 
> I think this might allay any fears people have regarding the pace of change, 
> in the past, latent race conditions prevented stabilisation, hence a 
> significant amount of work was required leading up to River 3.0's release.
> 
> Thoughts?
> 
> Regards,
> 
> Peter.
> 
> Sent from my Samsung device.
>  



Re: reggie not hearing

2017-02-27 Thread Gregg Wonderly
This file has the reggie config in it:  ${myconfigs}/start-reggie.config 

Can you send that?

Gregg

> On Feb 27, 2017, at 3:33 PM, Timothy C Haas  wrote:
> 
> ${myconfigs}/start-reggie.config 



Re: reggie not hearing

2017-02-27 Thread Gregg Wonderly
What is your configuration for reggie’s endpoints?

Gregg
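A minimal sketch of the kind of endpoint setting being asked about, assuming the host
and port values used here; in a real deployment this would appear in reggie's .config
file rather than Java source.

import net.jini.export.Exporter;
import net.jini.jeri.BasicILFactory;
import net.jini.jeri.BasicJeriExporter;
import net.jini.jeri.tcp.TcpServerEndpoint;

public final class ReggieExporterSketch {
    /** Bind the registrar to a fixed host/port so clients know where to connect. */
    public static Exporter serverExporter(String host, int port) {
        return new BasicJeriExporter(TcpServerEndpoint.getInstance(host, port),
                                     new BasicILFactory());
    }
}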


> On Feb 27, 2017, at 10:38 AM, Timothy C Haas  wrote:
> 
> Folks;
> 
> I've fixed my phoenix ClassNotFound problem.  Turns out, when
> I log into the San Diego cluster computer, I'm assigned randomly
> to one of their 640 nodes.  Almost always, a different one each time.
> I wasn't emptying the phoenix log directory that stored the
> node's address from the previous run
> of phoenix.  So, phoenix was always trying to reach classserver
> on the wrong node.  I now delete all files in the phoenix log
> directory before I start phoenix.  I noticed this problem only
> because I acted on a suggestion from this forum to use curl to
> see if classserver could be reached.  This experience might be
> interesting to others working with Apache River on a cluster
> computer platform.
> 
> I'm now trying to get my application to register with reggie.  The
> following error message indicates I'm timing out on the socket.
> The output shows that the reggie service (I turned on reggie debugging)
> is indeed running and a socket has been connected to.
> My SpaceAccessor.java code that tried to register follows.  I'm
> using only Apache River jars.  The reggie configuration files
> follow that.  I think reggie can be reached on port 4160 but it does
> say a different port in the dump (which changes every time the
> script runs).  I think reggie is not hearing the registrar request
> but I don't know how to debug this or fix it.
> 
> Regards,
> 
> -Tim
> 
> 
> 
> INFO: ClassServer started 
> [[/projects/builder-group/jpg/apache-river/lib-dl/,/projects/builder-group/jpg/apache-river/lib/,
>  /projects/builder-group/jpg/apache-river/lib-ext/], port 4160]
> Feb 27, 2017 8:04:54 AM org.apache.river.outrigger.OutriggerServerImpl 
> INFO: Outrigger server started: 
> org.apache.river.outrigger.OutriggerServerImpl@502438db
> Feb 27, 2017 8:04:54 AM org.apache.river.phoenix.Activation init
> INFO: activation daemon started
> Feb 27, 2017 8:04:55 AM org.apache.river.reggie.RegistrarImpl$Unicast 
> INFO: Reggie Unicast Discovery listening on port 33,814
> Feb 27, 2017 8:04:55 AM org.apache.river.reggie.RegistrarImpl$3 run
> INFO: started Reggie: 8ed3b539-4a66-4941-941a-17715df5eea9, [nonsecure], 
> jini://tscc-2-53.sdsc.edu:33814/
> java.net.SocketTimeoutException: Read timed out
>at java.net.SocketInputStream.socketRead0(Native Method)
>at java.net.SocketInputStream.read(SocketInputStream.java:152)
>at java.net.SocketInputStream.read(SocketInputStream.java:122)
>at java.io.DataInputStream.readFully(DataInputStream.java:195)
>at 
> org.apache.river.discovery.DiscoveryV2.doUnicastDiscovery(DiscoveryV2.java:460)
>at 
> net.jini.core.discovery.LookupLocator$2.performDiscovery(LookupLocator.java:347)
>at 
> org.apache.river.discovery.internal.MultiIPDiscovery.getSingleResponse(MultiIPDiscovery.java:153)
>at 
> org.apache.river.discovery.internal.MultiIPDiscovery.getResponse(MultiIPDiscovery.java:82)
>at 
> net.jini.core.discovery.LookupLocator.getRegistrar(LookupLocator.java:341)
>at 
> net.jini.core.discovery.LookupLocator.getRegistrar(LookupLocator.java:315)
>at SpaceAccessor.<init>(SpaceAccessor.java:83)
> 
> - Socket info from SpaceAccessor.java ---
> 
> spaceaccessor: jiniURL= jini://tscc-2-53.sdsc.edu
> locator= jini://tscc-2-53.sdsc.edu:4160/
> Port: 4160
> Canonical Host Name:  tscc-2-53.sdsc.edu
> Host Address: 132.249.107.73
> 
> Local Address:/132.249.107.73
> Local Port:   40406
> Local Socket Address: /132.249.107.73:40406
> 
> Receive Buffer Size:  87379
> Send Buffer Size: 330075
> 
> Keep-Alive:   false
> SO Timeout:   0
> isConnected= true
> bad request "
> bad request "
> Read timed out
> bad request "
> 
> - SpaceAccessor.java code fragment --
> 
> import net.jini.core.discovery.LookupLocator;
> import net.jini.core.lookup.*;
> import net.jini.core.entry.Entry;
> import net.jini.space.JavaSpace;
> import net.jini.lookup.entry.*;
> import java.io.*;
> import java.rmi.*;
> import java.net.*;
> import java.util.*;
> import net.jini.discovery.LookupDiscovery;
> import net.jini.discovery.DiscoveryListener;
> import net.jini.discovery.DiscoveryEvent;
> import net.jini.discovery.Constants;
> 
> public class SpaceAccessor {
> 
> static String jiniURL   = "jini://" + Id.mstrip;
> 
> static final long MAX_LOOKUP_WAIT = 2000L;
> static final int WAIT = 10;
> 
> JavaSpace space;
> 
> public SpaceAccessor() {
> LookupLocator locator = null;
> ServiceRegistrar registrar = null;
> 
> try {
>   System.setSecurityManager(new SecurityManager());
> }
> catch (Exception e) {
>   e.printStackTrace();
> }
> 
> if (Id.client) {
>   jiniURL = "jini://" + Id.mstrip + ":4160/";
> }
> 

Re: reggie exception

2017-02-23 Thread Gregg Wonderly
What is the setting  
java.rmi.server.useCodebaseOnly

on your client jvms?  The default was changed to true recently which keeps code 
downloading from happening.  Set it to false to make sure that your client can 
download needed classes from the servers' codebases!

Gregg


Sent from my iPhone
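A minimal sketch of what that means on the client side. The property is normally
passed as -Djava.rmi.server.useCodebaseOnly=false on the command line; setting it
programmatically, as shown here, only helps if done before any RMI/JERI marshalling
classes are initialized.

public final class CodebaseDownloadSetting {
    public static void main(String[] args) {
        // false allows classes to be downloaded from the server's codebase annotation
        System.setProperty("java.rmi.server.useCodebaseOnly", "false");
        System.out.println(System.getProperty("java.rmi.server.useCodebaseOnly"));
    }
}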

> On Feb 23, 2017, at 10:10 AM, Timothy C Haas  wrote:
> 
> Folks;
> 
> I am trying to start a reggie lookup service with Apache River 3.0.0 on a
> unix machine.  Below is the error message followed by the script and its
> two configuration files that I used.  I get a ClassNotFoundException but
> I don't know what jar in what directory I should include where.
> 
> Regards,
> -Tim
> 
> 
> 
> Activation.main: an exception occurred: java.io.IOException: log recover failed
> with exception: java.lang.ClassNotFoundException: com.sun.jini.phoenix.Activation
> java.io.IOException: log recover failed with exception:
> java.lang.ClassNotFoundException: com.sun.jini.phoenix.Activation
>at sun.rmi.log.ReliableLog.recover(ReliableLog.java:226)
>at sun.rmi.server.Activation.startActivation(Activation.java:220)
>at sun.rmi.server.Activation.main(Activation.java:2081)
> Feb 23, 2017 7:59:42 AM org.apache.river.start.HTTPDStatus httpdWarning
> WARNING: Problem accessing desired URL[http://132.249.107.70:4160/reggie.jar]:
> java.io.FileNotFoundException: http://132.249.107.70:4160/reggie.jar.
> Feb 23, 2017 7:59:42 AM org.apache.river.start.HTTPDStatus httpdWarning
> WARNING: Problem accessing desired URL[http://132.249.107.70:4160/jsk-policy.jar]:
> java.io.FileNotFoundException: http://132.249.107.70:4160/jsk-policy.jar.
> Feb 23, 2017 7:59:42 AM org.apache.river.reggie.RegistrarImpl$Unicast 
> INFO: Reggie Unicast Discovery listening on port 45,086
> Feb 23, 2017 7:59:42 AM org.apache.river.reggie.RegistrarImpl$3 run
> INFO: started Reggie: 0011c9db-a890-400f-8a1b-56ea7dfd823a, [nonsecure], 
> jini://tscc-2-56.sdsc.edu:45086/
> 
> ---
> 
> rm -f -r /home/haas/jinitmp/
> echo [Deleting jinitmp directory]
> #
> mkdir /home/haas/jinitmp
> echo [Creating jinitmp directory]
> #
> rh="/projects/builder-group/jpg/apache-river"
> myconfigs="/home/haas/jsutils"
> #
> # Start an http server.
> #
> java -jar ${rh}/lib/classserver.jar -port 4160 \
> -dir lib:${rh}/lib-dl $* \
>> /home/haas/jinitmp/http.out \
> 2> /home/haas/jinitmp/http.err &
> #
> # Activation daemon
> #
> rmid -J-Djava.security.policy=policy.all &
> #
> # Start a reggie lookup service.
> #
> java -Djava.security.policy=policy.all \
> -Djava.ext.dirs=${rh}/lib-ext/:${rh}/lib-dl/:${rh}/lib/:${rh}/dep-libs/groovy/
>  \
>-jar ${rh}/lib/start.jar ${myconfigs}/start-reggie.config
> 
> --- start-reggie.config 
> 
> import org.apache.river.config.ConfigUtil;
> import org.apache.river.start.NonActivatableServiceDescriptor;
> import org.apache.river.start.ServiceDescriptor;
> 
> org.apache.river.start {
>   private static policy = "policy.all";
>   port="4160";
>   private static codebasePrefix = "http://" + ConfigUtil.getHostAddress()
>   + ":" + port + "/";
>   private static codebase = codebasePrefix + "reggie-dl.jar" +
>  codebasePrefix + "jsk-dl.jar" +
>  codebasePrefix + "reggie.jar" +
>  codebasePrefix + "jsk-policy.jar";
>private static classpath = "lib${/}reggie.jar";
>private static config = "jrmp-reggie.config";
> 
>static serviceDescriptors = new ServiceDescriptor[] {
> new NonActivatableServiceDescriptor(
>   codebase, policy, classpath,
>"org.apache.river.reggie.TransientRegistrarImpl",
>new String[] { config })
>};
> }
> 
> -- jrmp-reggie.config --
> 
> /* Configure source file for JRMP reggie */
> 
> import net.jini.jrmp.JrmpExporter;
> 
> org.apache.river.reggie {
> 
>serverExporter = new JrmpExporter();
>initialMemberGroups = new String[] { "nonsecure" };
> 
> }//end org.apache.river.reggie
> 


Re: javaspace in 3.0.0?

2017-02-16 Thread Gregg Wonderly
Tim, which JDK/JRE version are you using?

Gregg


> On Feb 16, 2017, at 4:41 PM, Timothy C Haas  wrote:
> 
> Folks;
> 
> I'm trying to start a javaspace using apache river 3.0.0.  The script
> below runs on a unix system at the San Diego Supercomputer center.  It
> breaks on the reggie command just before the exit.  It says it can't find
> a groovy class.  As you can see, I am currently letting it see the
> groovy-all.2.4.5.jar file in dep-libs
> but that has caused the ConfigUtils class to not be found:
> 
> Feb 16, 2017 12:13:41 PM org.apache.river.start.ServiceStarter main
> SEVERE: Problem reading configuration file.
> net.jini.config.ConfigurationException: start-hello-service.config:27: class not
> found: com.sun.jini.config.ConfigUtil
>at net.jini.config.ConfigurationFile.oops(ConfigurationFile.java:2773)
> 
> It is when I leave-off the deps-lib/groovy directory that I get the
> groovy class problem.
> 
> I was successfully running a javaspace with the previous version of
> Apache River.
> 
> Any help would be greatly appreciated,
> 
> -Tim Haas, Associate Professor, Lubar School of Business,
> University of Wisconsin-Milwaukee
> 
> ---
> 
> rm -f -r /home/haas/jinitmp/
> echo [Deleting jinitmp directory]
> #
> mkdir /home/haas/jinitmp
> echo [Creating jinitmp directory]
> #
> rh="/projects/builder-group/jpg/apache-river"
> myconfigs="/home/haas/jsutils"
> cd ${rh}/examples/home/src/main/home
> #
> # Start an http server.  Was 4160
> #
> java -jar ${rh}/lib/classserver.jar -port 8080 \
> -dir lib:${rh}/lib-dl $* \
>> /home/haas/jinitmp/http.out \
> 2> /home/haas/jinitmp/http.err &
> #java -jar ${rh}/lib/tools.jar -port 8080 \
> # -dir lib:${rh}/lib-dl $* -verbose
> echo - HTTP Server Running -
> #
> # Activation daemon
> #
> #java -Djava.security.manager= \
> #-Djava.security.policy=policy/all.policy \
> #-Djava.rmi.server.codebase="http://$host:8080/phoenix-dl.jar 
> http://$host:8080/jsk-dl.jar" \
> #-DserverHost=$host \
> #-jar ${rh}/lib/phoenix.jar  \
> #configs/jeri/phoenix/phoenix.config
> rmid -J-Djava.security.policy=${rh}/qa/harness/policy/all.policy \
>   -log /home/haas/jinitmp &
> #
> # Start a reggie lookup service.
> #
> java -Djava.security.policy=${rh}/qa/harness/policy/all.policy \
> -Djava.ext.dirs=${rh}/lib-ext:${rh}/lib-dl:${rh}/lib:${rh}/dep-libs/groovy \
>-jar ${rh}/lib/start.jar start-hello-service.config
> # ${myconfigs}/start-reggie.config
> exit
> echo - Lookup Service Running -
> #
> # Start the JavaSpace.
> #
> java -Djava.security.policy=policy/all.policy \
>   -jar ${rh}/lib/start.jar \
>configs/jeri/outrigger/outrigger.config
> echo - JavaSpace Running -
> 
> 



Re: OSGi NP Complete Was: OSGi - deserialization remote invocation strategy

2017-02-16 Thread Gregg Wonderly
The important detail is to understand that there is nearly a decade of 
development and design with experiences driving most of that around what exists 
today.  Neither Peter, nor really anyone any longer, can “answer” all the questions 
you might have in a short amount of time.  I sense that you have a view of how 
things should work, and believe that because that is not the case, that we need 
to “fix it.”   I am not suggesting, nor do I sense Peter is, that there is 
nothing to fix or improve with River.  However, it’s important to understand 
how River was designed to work, and that will require you to study, from 
several angles, the details.  Yes, the code is hard to read.  It doesn’t just 
calculate numbers, or arrange data in collections.  Instead, it is interacting 
with the various details of security, class loading and JVM reflection to 
provide a flexible mechanism for RPC.  I know, that you know, that there are a 
lot of technologies that exist today, which did not exist at the time that 
River was created as Jini.  Instead, people without knowledge of many things 
that already existed to solve their problems went off to make software that 
works and looks the way that they thought it should for RPC or messaging at 
some RPC like level, and now we have a diverse set of technologies which all, 
in the end, allow network based communications to happen.

River’s way of doing it, is but one.  It’s not perfect and it needs work.  
Please understand that Peter and others have ideas for things/changes which 
will improve the user experience of River.  What we are trying to do, is to 
understand your perspective better.  The questions and comments/answers here 
are not going to be very good if you are just demanding our time, and not 
spending your time to learn the details of what Peter is pointing out about how 
River works.

Gregg


> On Feb 15, 2017, at 1:00 AM, Michał Kłeczek  wrote:
> 
> They are valid questions and you haven't answered any of them.
> I've described _your_ way of thinking (which I do not agree with).
> 
> Apache River has many problems both technical and organizational.
> But I find the idea interesting and was expecting openness
> for contributions and open discussion.
> 
> This is an open source project and there are no obligations to take part in 
> the discussion nor answer any questions.
> But I find your patronizing statement disincentive to contribute to this 
> project - especially that you are one of its main contributors.
> 
> Regards,
> Michal
> 
> Peter wrote:
>> Finding the answer to this question should assist you to discover answers to 
>> many of the other questions you've had.
>> 
>> While I've done my best to answer as many of your questions as I can, time 
>> is limited and I haven't had time to answer all of them or rebutt or confirm 
>> all arguments /  assumptions.  Sometimes the right questions are more 
>> important than answers.
>> 
>> Regards,
>> 
>> Peter.
>> 
>> Sent from my Samsung device.
>> Include original message
>>  Original message 
>> From: Peter
>> Sent: 15/02/2017 12:58:55 pm
>> To: dev@river.apache.org
>> Subject: Re: OSGi NP Complete Was: OSGi - deserialization remote invocation 
>> strategy
>> 
>> The PreferredClassLoader will attempt to download the jar file in order to 
>> get the preferred list.
>> 
>> DownloadPermission should be called DefineClassPermission, I don't think it 
>> will prevent download of the jar per se.
>> 
>> Why must the bootstrap proxy be loaded by the codebase ClassLoader?
>> 
>> Regards,
>> 
>> Peter.
>> 
>> Sent from my Samsung device.
>> Include original message
>>  Original message 
>> From: Michał Kłeczek
>> Sent: 15/02/2017 06:20:37 am
>> To: dev@river.apache.org
>> Subject: Re: OSGi NP Complete Was: OSGi - deserialization remote invocation 
>> strategy
>> 
>> So I've given it some thought and the only explanation I can come up with is:
>> 1. To create an instance of the bootstrap proxy you need the codebase annotation.
>> 2. Codebase annotation is needed because you want the bootstrap proxy's class to be defined in the proper codebase ClassLoader.
>> 3. Since you do not want to allow any code downloads before placing constraints on the bootstrap proxy - it has to be a dynamic proxy. That way its class can be defined by the codebase loader and yet no code is downloaded.
>> So the overall sequence is as follows:
>> 1. Get the codebase annotation and create the codebase loader.
>> 2. Create an instance of a dynamic proxy of a class defined by the codebase loader.
>> 3. IMPORTANT - before creating the proxy instance DO NOT grant any download permissions - that way we are sure the proxy does not trigger any code download and execution due to it implementing some foreign interfaces.
>> 4. Once the proxy is instantiated - grant its ClassLoader download permissions.
>> 5. Place the 

Re: OSGi - deserialization remote invocation strategy

2017-02-07 Thread Gregg Wonderly

> On Feb 7, 2017, at 8:56 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> Comments inline
> 
> Niclas Hedhman wrote:
>> 4. For Server(osgi)+Client(osgi), number of options goes up. In this space,
>> Paremus has a lot of experience, and perhaps willing to share a bit,
>> without compromising the secret sauce? Either way, Michal's talk about
>> "wiring" becomes important and that wiring should possibly be
>> re-established on the client side. The insistence on "must be exactly the
>> same version" is to me a reflection of "we haven't cared about version
>> management before", and I think it may not be in the best interest to load
>> many nearly identical bundles just because they are a little off, say stuff
>> like guava, commons-xyz, slf4j and many more common dependencies.
> This problem is generally unsolvable because there are contradicting 
> requirements here:
> 1. The need to transfer object graphs of (unknown) classes
> 2. The need to optimize the number of class versions (and the number of 
> ClassLoaders) in the JVM
> 
> It might be tempting to do the resolution on the client but it is (AFAIR) 
> NP-hard
> - the object graph is a set of constraints on possible module (bundle) 
> versions. Plus there is a whole
> set of constraints originating from the modules installed in the container 
> prior to graph deserialization.
> 
> So the only good solution for a library is to provide a client with an 
> interface to implement:
> Module resolve(Module candidate) (or Module[] resolve(Module[] candidates))
> and let it decide what to do.
> 
>> 
>> Peter wrote;
>>> This is why the bundle must be given first
>>> attempt to resolve an objects class and rely on the bundle dependency
>> resolution process.
>>> OSGi must be allowed to wire up dependencies, we must avoid attempting to
>> make decisions about
>>> compatibility and use the current bundle wires instead (our stack).
>> 
>> Well, not totally sure about that. The 'root object classloader' doesn't
>> have visibility to serialized objects, and will fail if left to do it all
>> by itself. And as soon as you delegate to another BundleClassLoader, you
>> have made the resolution decision, not the framework. Michal's proposal to
>> transfer the BundleWiring (available in runtime) from the server to the
>> client, makes it somewhat possible to do the delegation. And to make
>> matters worse, it is quite common that packages are exported from more than
>> one bundle, so the question is what is included in the bundleWiring coming
>> across the wire.
> The whole issue with proposals based on the stream itself is the fact that to 
> resolve properly
> one has to walk the whole graph first to gather all modules and their 
> dependencies.
> 
> It is much better to simply provide the module graph (wiring) first (at the 
> beginning of the stream)
> and only after resolution of all the modules - deserialize the objects.

The missing notion of versioning in class loaders was meant to be solved by URLs 
in the annotations.  That would provide explicit versioning control and through 
the use of the TCCL, code and objects could be isolated from other versions.  
However, that’s not perfect, and so the preferred class loading mechanism is 
also a path that allows a “platform” or “hosting” environment to declare the 
classes that it will use to  interact with a client/proxy.
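A minimal sketch of that idea, assuming a codebase annotation URL and class name are
already known: RMIClassLoader resolves against the annotation, with the thread context
ClassLoader (TCCL) consulted as the default loader.

import java.rmi.server.RMIClassLoader;

public final class CodebaseLoadSketch {
    public static Class<?> loadAnnotated(String codebase, String className)
            throws Exception {
        ClassLoader context = Thread.currentThread().getContextClassLoader();
        // The default loader is tried first; the codebase URL is used if not found there.
        return RMIClassLoader.loadClass(codebase, className, context);
    }
}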

Practically, it’s exactly the NP-hard problem you say though.  There are 
“pretty good” solutions, but realistically there is not a perfect solution 
until there is an exact dependency graph which allows perfect specification of 
dependencies.

Gregg

> 
> Thanks,
> Michal



Re: Changing TCCL during deserialization

2017-02-07 Thread Gregg Wonderly
There are lots of places like this where I have done exactly this, to make sure 
that the visibility of the current class loader is available for a thread whose 
history I don’t know.  There is nothing that controls how the calling 
thread might decide (via code, or some other magic) what class loader to use 
for the parent of any newly created class loader.

In my serviceUI work, I’ve used readObject in smart proxies and data classes 
referenced by such classes.  Ultimately, I would usually have already set the 
TCCL via some other mechanism related to AWT/Swing events.   But, in some 
cases, you might be on a platform where the TCCL is not set correctly, and thus 
you might find that you had to add this for your software to work on that 
platform.

Gregg

> On Feb 7, 2017, at 1:20 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> I am not sure how OSGI relates to this question. But I can imagine the 
> situation like this:
> 
> class MySmartAssWrappingObject implements Serializable {
> 
>  Object myMember;
> ...
> 
> private void readObject(ObjectInputStream ois) {
>  Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
>  myMember = ois.readObject();
> }
> 
> }
> 
> That would allow you to do something similar to what you wanted to do with 
> class resolution by remembering the stack of class loaders.
> 
> So my question is:
> is it something that people do?
> 
> Thanks,
> Michal
> 
> Peter wrote:
>>  In PreferredClassProvider, no the callers ClassLoader (context) is the 
>> parent ClassLoader of the codebase loader.
>> 
>> It depends on the ClassLoader hierarchy and chosen strategy used to resolve 
>> annotations.
>> 
>> But the index key for PreferrefClassProvider is  URI[] and parent loader 
>> (callers loader).
>> 
>> This strategy allows codebases to be duplicated for different calling 
>> context.
>> 
>> OSGi however, only loads one Bundle per URL, but as Bharath has 
>> demonstrated, the codebase loader doesn't have to be a BundleReference.
>> 
>> There are some caveats if the proxy codebase loader isn't a BundleReference, 
>> one is your dependencies aren't version managed for you, and you can only 
>> see public classes imported by the parent BundleReference.
>> 
>> The strategy of switching context wouldn't work with PreferredClassProvider.
>> 
>> Regards,
>> 
>> Peter.
>> 
>> Sent from my Samsung device.
>> Include original message
>>  Original message 
>> From: "Michał Kłeczek (XPro Sp. z o. o.)"<michal.klec...@xpro.biz>
>> Sent: 07/02/2017 07:20:59 am
>> To: dev@river.apache.org
>> Subject: Re: Changing TCCL during deserialization
>> 
>> This still does not answer my question - maybe I am not clear enough.
>> Do you have a need to set a TCCL DURING a remote call that is in progress?
>> Ie. you execute a remote call and DURING deserialization of the return value 
>> you change the TCCL (so one class is resolved using one context loader and 
>> another using a different one when reading THE SAME stream)
>> 
>> Thanks,
>> Michal
>> 
>> Gregg Wonderly wrote:
>> Anytime that a thread might end up being the one to download code, you need 
>> that thread's CCL to be set.  The AWTEvent thread(s) in particular are a 
>> sticking point.  I have a class which I use for managing threading in 
>> AWT/Swing.  It’s called ComponentUpdateThread.  It works as follows.
>> 
>> new ComponentUpdateThread<List<Item>>( itemList, actionButton1, 
>> actionButton2, checkbox1 ) {
>>     public void setup() {
>>         // In event thread
>>         setBusyCursorOn( itemList );
>>     }
>>     public List<Item> construct() {
>>         try {
>>             return service.getListOfItems( filterParm1 );
>>         } catch( Exception ex ) {
>>             reportException(ex);
>>         }
>>         return null;
>>     }
>>     public void finished() {
>>         List<Item> lst;
>>         if( (lst = get()) != null ) {
>>             itemList.getModel().setContents( lst );
>>         }
>>     }
>> }.start();
>> 
>> This class will make the passed components disabled to keep them from being 
>> clicked on again, set up processing to use a non-AWTEvent thread for getting 
>> data with other components of the UI still working, and finally mark the 
>> disabled components back to enabled, and load the list with the returned 
>> ite

Re: Changing TCCL during deserialization

2017-02-07 Thread Gregg Wonderly
I am not sure about “locked”.  In my example about ServiceUI, imagine that 
there is a common behavior that your ServiceUI hosting environment provides to 
all ServiceUI Components.  It can be that there is a button press or something 
else where an AWTEvent thread is going to take action.  It’s that specific 
thread whose TCCL must be changed, each time, to the codebase of the service 
you are interacting with.  If it calls out to the service proxy and that is a 
smart proxy, imagine that the smart proxy might use a different service each 
time, and that’s where the TCCL must be set appropriately so that any newly 
created classes are parented by the correct environment in your ServiceUI 
hosting platform.

Gregg
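A minimal sketch of that pattern, assuming a downloaded smart proxy instance is at
hand: the event thread's TCCL is switched to the proxy's loader for the duration of
the call and then restored.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public final class TcclSwitchingListener implements ActionListener {
    private final Object serviceProxy;   // downloaded smart proxy (assumed)

    public TcclSwitchingListener(Object serviceProxy) {
        this.serviceProxy = serviceProxy;
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader();
        t.setContextClassLoader(serviceProxy.getClass().getClassLoader());
        try {
            // invoke the service through its proxy here
        } finally {
            t.setContextClassLoader(previous);   // always restore the original TCCL
        }
    }
}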

> On Feb 6, 2017, at 11:28 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> What I was specifically asking for is whether this is needed during 
> deserialization or after deserialization.
> 
> In other words - if I can lock the TCCL to an instance of MarshalInputStream 
> existing for the duration of a single remote call.
> 
> Thanks,
> Michal
> 
> Gregg Wonderly wrote:
>> The predominant place where it is needed is when you download a serviceUI 
>> component from a proxy service which just advertises some kind of “browsing” 
>> interface to find specific services and interact with them, and that 
>> serviceUI is embedded in another application with its own codebase
>> 
>> appl->serviceUI-for-browsing->Service-to-use->That-Services-ServiceUI
>> 
>> In this case, TCCL must be set to the serviceui classes classloader so that 
>> the “serviceui-for-browsing” will have a proper parent class pointer.
>> 
>> Anytime that downloaded code might download more code, it should always set 
>> TCCL to its own class loader so that the classes it downloads reflect 
>> against the existing class definitions.
>> 
>> Gregg
>> 
>>> On Feb 6, 2017, at 12:03 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>>> <michal.klec...@xpro.biz> <mailto:michal.klec...@xpro.biz> wrote:
>>> 
>>> Hi,
>>> 
>>> During my work on object based annotations I realized it would be more 
>>> efficient not to look for TCCL upon every call to "load class" (when 
>>> default loader does not match the annotation).
>>> It might be more effective to look it up upon stream creation and using it 
>>> subsequently for class loader selection.
>>> 
>>> But this might change semantics of deserialization a little bit - it would 
>>> not be possible to change the context loader during deserialization.
>>> My question is - are there any scenarios that require that?
>>> I cannot think of any but...
>>> 
>>> Thanks,
>>> Michal
>> 
> 



Re: Changing TCCL during deserialization

2017-02-06 Thread Gregg Wonderly
Anytime that a thread might end up being the one to download code, you need 
that thread's CCL to be set.  The AWTEvent thread(s) in particular are a 
sticking point.  I have a class which I use for managing threading in 
AWT/Swing.  It’s called ComponentUpdateThread.  It works as follows.

new ComponentUpdateThread<List<Item>>( itemList, actionButton1, actionButton2, 
checkbox1 ) {
    public void setup() {
        // In event thread
        setBusyCursorOn( itemList );
    }
    public List<Item> construct() {
        try {
            return service.getListOfItems( filterParm1 );
        } catch( Exception ex ) {
            reportException(ex);
        }
        return null;
    }
    public void finished() {
        List<Item> lst;
        if( (lst = get()) != null ) {
            itemList.getModel().setContents( lst );
        }
    }
}.start();

This class will make the passed components disabled to keep them from being 
clicked on again, set up processing to use a non-AWTEvent thread for getting 
data with other components of the UI still working, and finally mark the 
disabled components back to enabled, and load the list with the returned items, 
if there were any returned.

There is the opportunity for 3 or more threads to be involved here.  First, 
there is the calling thread.  It doesn’t do anything but start the work.  Next, 
there is an AWTEvent thread which will invoke setup().  Next there is a worker 
thread which will invoke construct().  Finally, there is (possibly another) 
AWTEvent thread which will invoke finished().

In total there could be up to four different threads involved, all of which 
must have TCCL set to the correct class loader.  My convention in the 
implementation is that it will be this.getClass().getClassLoader().

This is all managed inside of the implementation of ComponentUpdateThread so 
that I don’t have to worry about it any more.  But it’s important to 
understand that if you don’t do that, then the classes that the calling thread 
can resolve, Item in this specific case in particular, may not be the ones you 
intended; you could see Item come from another class loader (the service's 
class loader with “null” as the parent), and this will result in 
either a CNFE or a CCE.

Gregg

> On Feb 6, 2017, at 11:28 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> What I was specifically asking for is whether this is needed during 
> deserialization or after deserialization.
> 
> In other words - if I can lock the TCCL to an instance of MarshalInputStream 
> existing for the duration of a single remote call.
> 
> Thanks,
> Michal
> 
> Gregg Wonderly wrote:
>> The predominant place where it is needed is when you download a serviceUI 
>> component from a proxy service which just advertises some kind of “browsing” 
>> interface to find specific services and interact with them, and that 
>> serviceUI is embedded in another application with its own codebase
>> 
>> appl->serviceUI-for-browsing->Service-to-use->That-Services-ServiceUI
>> 
>> In this case, TCCL must be set to the serviceui classes classloader so that 
>> the “serviceui-for-browsing” will have a proper parent class pointer.
>> 
>> Anytime that downloaded code might download more code, it should always set 
>> TCCL to its own class loader so that the classes it downloads reflect 
>> against the existing class definitions.
>> 
>> Gregg
>> 
>>> On Feb 6, 2017, at 12:03 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>>> <michal.klec...@xpro.biz> <mailto:michal.klec...@xpro.biz> wrote:
>>> 
>>> Hi,
>>> 
>>> During my work on object based annotations I realized it would be more 
>>> efficient not to look for TCCL upon every call to "load class" (when 
>>> default loader does not match the annotation).
>>> It might be more effective to look it up upon stream creation and using it 
>>> subsequently for class loader selection.
>>> 
>>> But this might change semantics of deserialization a little bit - it would 
>>> not be possible to change the context loader during deserialization.
>>> My question is - are there any scenarios that require that?
>>> I cannot think of any but...
>>> 
>>> Thanks,
>>> Michal
>> 
> 



Re: OSGi

2017-02-06 Thread Gregg Wonderly
I still feel that RMIClassLoaderSPI can provide this mechanism.  There is an 
execution context required, but predominantly, the URL string can still reflect 
the source of the code you want to use.

Gregg

> On Feb 5, 2017, at 11:34 PM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> Once you realize you need some codebase metadata different than mere list of 
> URLs
> the next conclusion is that annotations should be something different than... 
> a String :)
> 
> The next thing to ask is: "what about mixed OSGI and non-OSGI environments"
> Then you start to realize you need to abstract over the class loading 
> environment itself.
> 
> Then you start to realize that to support all the scenarios you need to 
> provide a class loading environment that is "pluggable"
> - ie allows using it with other class loading environments and allow the user 
> to decide which classes should be loaded
> by which environment.
> 
> This is what I am working on right now :)
> 
> Thanks,
> Michal
> 
> Peter wrote:
>> My phone sent the previous email before I completed editing.
>> 
>> ...If api classes are already loaded locally by client code, then a smart 
>> proxy codebase bundle will resolve imports to those packages (if they're 
>> within the imported version range), when the proxy bundle is downloaded, 
>> resolved and loaded.
>> 
>> The strategy should be, deserialize using the callers context until a class 
>> is not found, then switch to the object containing the current field being 
>> deserialized (which may be a package private implementation class in the 
>> service api bundle) and if that fails use the codebase annotation (the smart 
>> proxy).  This is similar in some ways to never preferred, where locally 
>> visible classes will be selected first.
>> 
>> The strategy is to let OSGi do all the dependency wiring from bundle 
>> manifests.  Classes not visible will be visible from a common package import 
>> class, except for poorly designed services, which is outside of scope.
>> 
>> Only match api version compatible services.
>> 
>> No allowances made for split packages or other complexities.
>> 
>> If deserialization doesn't succeed, look up another service.
>> 
>> Cheers,
>> 
>> Peter.
>> 
>> Sent from my Samsung device.
>> Include original message
>>  Original message 
>> From: Peter
>> Sent: 06/02/2017 02:59:09 pm
>> To: dev@river.apache.org
>> Subject: Re: OSGi
>> 
>>  Thanks Nic,
>> 
>> If annot
>> 
>> You've identified the reason we need an OSGi specific RMIClassLoaderSpi 
>> implementation; so we can capture and provide Bundle specific annotation 
>> information.
>> 
>> Rmiclassloaderspi's loadClass method expects a ClassLoader to be passed in, 
>> the context ClassLoader is used by PreferredClassProvider when the 
>> ClassLoader argument is null.
>> 
>> Standard Java serialization's OIS walks the call stack and selects the first 
>> non system classloader (it's looking for the application class loader), it 
>> deserializes into the application ClassLoader's context.  This doesn't  work 
>> in OSGi because the application classes are loaded by a multitude of 
>> ClassLoaders.
>> 
>> It also looks like we'll need an OSGi specific InvocationLayerFactory to 
>> capture ClassLoader information to pass to our MarshalInputStream then to 
>> our RMIClassLoaderSpi during deserialization at both endpoints.
>> 
>> We also need to know the bundle (ClassLoader) of the class that calls a 
>> java.lang.reflect.Proxy on the client side, this is actually quite easy to 
>> find, walk the stack, find the Proxy class and obtain the BundleReference / 
>> ClassLoader of the caller.
>> 
>> Currently the java.lang.reflect.Proxy dynamically generated subclass instance 
>> proxy's ClassLoader is used, this is acceptable when the proxy bytecode is 
>> loaded by the the Client's ClassLoader or smart proxy ClassLoader in the 
>> case where a smart proxy is utilised
>> 
>> 
>> 
>> If the caller changes, so does the calling context.
>> 
>> 
>> Each bundle provides access to all classes within that bundle, including any 
>> public classes from imported packages.
>> 
>> 
>> 
>> 
>> 
>> Sent from my Samsung device.
>> Include original message
>>  Original message 
>> From: Niclas Hedhman
>> Sent: 04/02/2017 12:43:28 pm
>> To: dev@river.apache.org
>> Subject: Re: OSGi
>> 
>> 
>> 
>> Further, I think the only "sane" approach in a OSGi environment is to create 
>> a new bundle for the Remote environment, all codebases not part of the API 
>> goes into that bundle and that the API is required to be present in the OSGi 
>> environment a priori. I.e. treat the Remote objects in OSGi as it is treated 
>> in plain Java; one classloader, one chunk, sort out its own serialization 
>> woes. Likewise for the server; treat it as ordinary RMI, without any 
>> mumbo-jambo OSGi stuff to be figured out at a non-OSGi-running JVM. An 
>> 

Re: Changing TCCL during deserialization

2017-02-06 Thread Gregg Wonderly
The predominant place where it is needed is when you download a serviceUI 
component from a proxy service which just advertises some kind of “browsing” 
interface to find specific services and interact with them, and that serviceUI 
is embedded in another application with its own codebase:

appl->serviceUI-for-browsing->Service-to-use->That-Service's-ServiceUI

In this case, TCCL must be set to the serviceUI classes' classloader so that the 
“serviceUI-for-browsing” will have a proper parent class loader reference.

Anytime that downloaded code might download more code, it should always set 
TCCL to its own class loader so that the classes it downloads resolve against 
the existing class definitions.
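
As a rough sketch of that pattern (the helper class name here is made up, not an 
existing River API): install your own loader as the context ClassLoader for the 
duration of the work that triggers further class loading, and restore the 
previous one afterwards.

public final class TcclScope {
    // Run 'task' with 'loader' installed as the thread's context ClassLoader,
    // restoring the previous context loader afterwards.
    public static void runWithContext(ClassLoader loader, Runnable task) {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(loader);
        try {
            task.run();
        } finally {
            current.setContextClassLoader(previous);
        }
    }
}

A browsing serviceUI would then wrap the code that unmarshals the selected 
service in something like TcclScope.runWithContext(getClass().getClassLoader(), ...), 
where the unmarshalling code itself is specific to the application.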

Gregg

> On Feb 6, 2017, at 12:03 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> Hi,
> 
> During my work on object based annotations I realized it would be more 
> efficient not to look for TCCL upon every call to "load class" (when default 
> loader does not match the annotation).
> It might be more effective to look it up upon stream creation and using it 
> subsequently for class loader selection.
> 
> But this might change semantics of deserialization a little bit - it would 
> not be possible to change the context loader during deserialization.
> My question is - are there any scenarios that require that?
> I cannot think of any but...
> 
> Thanks,
> Michal



Re: OSGi

2017-02-04 Thread Gregg Wonderly

> On Feb 4, 2017, at 5:09 AM, Niclas Hedhman  wrote:
> 
> see below
> 
> On Sat, Feb 4, 2017 at 6:21 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
> michal.klec...@xpro.biz> wrote:
>> In the end all of the arguments against Java Object Serialization boil
> down to:
>> "It is easy to use but if not used carefully it will bite you - so it is
> too easy to use"
> 
> Well, kind of...
> If you ever need to deserialize a serialVersionUid=1 with a codebase where
> it is now serialVersionUid != 1, I wouldn't call it "easy to use" anymore.
> Back in the days when I used this stuff heavily, I ended up never change
> serialVersionUid. If I needed to refactor it enough to lose compatibility,
> I would create a new class and make an adapter.

And this is one of the patterns that you had to learn.  I also rarely change 
serialVersionUID beyond 1, as you suggest here.  Instead, I use an internal, 
private version number in a class field to help me know how to evolve the data. 
For each version, I know which “data” will not be initialized.  I can have a 
plan for a version 10 object to know how to initialize data introduced in 
versions 4, 7 and 8, which will be null references or otherwise unusable.  The 
readObject() can initialize, manufacture or otherwise evolve that object 
correctly.
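
As a rough sketch of that pattern (the class and its fields here are made up, 
purely illustrative):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

public class ServiceInfo implements Serializable {
    private static final long serialVersionUID = 1L;  // never changes

    private int version = 3;        // internal version, bumped when fields are added
    private String name;            // present since version 1
    private String description;     // added in version 2
    private String iconResource;    // added in version 3

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Fields written by an older version arrive as null; initialize them here.
        if (version < 2 && description == null) {
            description = "";
        }
        if (version < 3 && iconResource == null) {
            iconResource = "default-icon.png";
        }
        version = 3;  // evolved to the current in-memory layout
    }
}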

Gregg

Re: OSGi

2017-02-04 Thread Gregg Wonderly

> On Feb 4, 2017, at 4:21 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> Once you transfer the code with your data - the issue of code version 
> synchronization disappears, doesn't it?
> It also makes the wire data format irrelevant. At least for "short lived 
> serialized states".
> 
> I fail to understand how JSON or XML changes anything here.
> 
> In the end all of the arguments against Java Object Serialization boil down 
> to:
> "It is easy to use but if not used carefully it will bite you - so it is too 
> easy to use"
> 
> What I do not like about Java Object Serialization has nothing to do with the 
> format of persistent data
> but rather with the APIs - it is inherently blocking by design.

Yes, it is computationally involved and that can cause some problems with the 
thread of execution that encounters it.  My work on delayed unmarshalling 
and the notion of never-preferred classes precisely targets this issue, so 
that you only encounter that “blocking” at the moment you have to.
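
As a very rough illustration of the delayed unmarshalling idea (River's own 
work in this area is different; this just uses java.rmi.MarshalledObject to 
show the shape of it):

import java.io.IOException;
import java.rmi.MarshalledObject;

public final class Delayed<T> {
    private final MarshalledObject<T> marshalled;
    private T value;

    public Delayed(MarshalledObject<T> marshalled) {
        this.marshalled = marshalled;
    }

    // Codebase downloads and class resolution only happen here, when the
    // caller actually needs the object, not when it first arrives.
    public synchronized T get() throws IOException, ClassNotFoundException {
        if (value == null) {
            value = marshalled.get();
        }
        return value;
    }
}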

Gregg

> 
> Thanks,
> Michal
> 
> Niclas Hedhman wrote:
>> Gregg,
>> I know that you can manage to "evolve" the binary format if you are
>> incredibly careful and not make mistakes. BUT, that seems really hard,
>> since EVEN Sun/Oracle state that using Serialization for "long lived objects"
>> are highly discouraged. THAT is a sign that it is not nearly as easy as you
>> make it sound to be, and it is definitely different from XML/JSON as once
>> the working codebase is lost (i.e. either literally lost (yes, I have been
>> involved trying to restore that), or modified so much that compatibility
>> broke, which happens when serialization is not the primary focus of a
>> project) then you are pretty much screwed forever, unlike XML/JSON.
>> 
>> Now, you may say, that is for "long lived serialized states" but we are
>> dealing with "short lived" ones. However, in today's architectures and
>> platforms, almost no organization manages to keep all parts of a system
>> synchronized when it comes to versioning. Different parts of a system is
>> upgraded at different rates. And this is essentially the same as "long
>> lived objects" ---  "uh this was serialized using LibA 1.1, LibB 2.3 and
>> JRE 1.4, and we are now at LibA 4.6, LibB 3.1 and Java 8", do you see the
>> similarity? If not, then I will not be able to convince you. If you do,
>> then ask "why did Sun/Oracle state that long-lived objects with Java
>> Serialization was a bad idea?", or were they also clueless on how to do it
>> right, which seems to be your actual argument.
>> 
>> And I think (purely speculative) that many people saw exactly this problem
>> quite early on, whereas myself I was at the time mostly in relatively small
>> confined and controlled environments, where up-to-date was managed. And
>> took me much longer to realize the downsides that are inherent.
>> 
>> Cheers
>> Niclas
>> 
>> 
> 



Re: OSGi

2017-02-04 Thread Gregg Wonderly
ronments, where up-to-date was managed. And
> took me much longer to realize the downsides that are inherent.
> 
> Cheers
> Niclas
> 
> On Sat, Feb 4, 2017 at 3:35 PM, Gregg Wonderly <ge...@cox.net> wrote:
> 
>> 
>>> On Feb 3, 2017, at 8:43 PM, Niclas Hedhman <nic...@hedhman.org> wrote:
>>> 
>>> On Fri, Feb 3, 2017 at 12:23 PM, Peter <j...@zeus.net.au> wrote:
>>> 
>>>> 
>>>> No serialization or Remote method invocation framework currently
>> supports
>>>> OSGi very well, one that works well and can provide security might gain
>> a
>>>> lot of new interest from that user base.
>>> 
>>> 
>>> What do you mean by this? Jackson's ObjectMapper doesn't have problems on
>>> OSGi. You are formulating the problem wrongly, and if formulated
>> correctly,
>>> perhaps one realizes why Java Serialization fell out of fashion rather
>>> quickly 10-12 years ago, when people realized that code mobility (as done
>>> in Java serialization/RMI) caused a lot of problems.
>> 
>> I’ve seen and heard of many poorly designed pieces of software.  But, the
>> serialization for Java has some very easily managed details which can
>> trivially allow you to be 100% successful with the use of Serialization.
>> I’ve never encountered problems with serialization.  I learned early on
>> about using explicit versioning for any serialization format, and then
>> providing evolution based changes instead of replacement based changes.  It
>> takes some experience and thought for sure.  But, in the end, it’s really
>> no different from using JSON, XML or anything else.  The format of what you
>> send has to be able to change, the content which must remain in a
>> compatible way has to remain accessible in the same way.  I really am
>> saddened by the thought that so many people never learn about binary
>> structured data in their classes or through materials they might read to
>> learn about such things.
>> 
>> What generally happens is that people forget to design extensibility into
>> their data systems, and then end up with all kinds of problems.   Here’s
>> some of the rules I always try to follow.
>> 
>> 1. Remote interfaces should almost always pass non native type objects
>> that wrap the data needed.  This will make sure you can seamlessly add more
>> data without changing method signatures.
>> 2. Always put a serial version id on your serialized classes.  Start with
>> 1, and increment it as you make changes by more than just ‘1’.
>> 3. When you are going to add a new value, think about how you can make
>> that independent of existing serialized data.  For example, when you
>> override readObject or writeObject methods, how will you make sure that
>> those methods can cast the data for “this” version of the data without
>> breaking past or future versions of the object.
>> 4. Data values inside of serialized classes should be carefully designed
>> so that there is a “not present” value that is in line with a “not
>> initialized” value so that you can always insert a new format in between
>> those two (see rule 2 above about leaving holes in the versions).
>> 
>> The purpose of serializing objects is so that you can also send the
>> correct code.  If you can’t send the correct code (you are just sending
>> JSON), and instead have to figure out how to make your new data compatible
>> with code that can’t change, how is that any less complex than designing
>> readObject and writeObject implementations that must do the same thing when
>> you load an old serialization of an object into a new version of the
>> object?  In this case, readObject() needs to be able to inspect the new
>> values that the new code uses in readObject and provide initial values for
>> them just like the constructor(s) would do if the object was created new.
>> 
>> I really have never found anything that shipping JSON around makes any
>> simpler.   You still have to have a parsable JSON string value.  You still
>> have to migrate data formats when there is an old object received by new 
>> code.
>> 
>> The biggest problem of old was people not using an explicit serial version
>> id.  Several times, I have had to add an explicit serial version id to old
>> code so that it would deserialize correctly into new classes.  Sometimes it
>> is hard to do that.  But, that’s not a problem with the system as much as
>> it is a lack of understanding or actual neglect in following the design
>> standards of the serialization process.
>> 
>

Re: OSGi

2017-02-04 Thread Gregg Wonderly
“The web” is, in fact, an example of a mobile code platform different from 
Jini.  It doesn’t work better and in many cases I find it worse than Jini.  It 
has the same problems set we have.  The JSON or XML or whatever “data” you send 
must be in sync with the Javascript running in the browser.  Browser caches 
create problems with old code and new data.  You have to put versioning in your 
service calls to cleanly evolve your services so that they force the client to 
reload and get the new code so that it is in sync with the new data.

With Jini, you have to get a new service to get “different” data, and thus you 
will get a new proxy that can use the new data correctly.

Different browsers are like different Java platforms: each has some 
interesting and often frustrating nuances in what the “platform” provides as a 
correct implementation of the Javascript-accessible browser services.

The idea of Ajax for RPC out of the application is exactly in line with a 
multi-threaded Java application using Jini services, but it doesn’t work if 
the service can’t force the browser to load new code that knows 
how to use the new data coming from the service after it was restarted.

There is a wide range of “packages” of massive Javascript which try to “fix” 
problems with different ways that inexperienced developers have tried to create 
web applications.  Large web platforms like Facebook have their own platform of 
massive Javascript that their application is based on.  Everything is different 
in each of these platforms.  It’s really all completely broken from a 
“developer” perspective because wherever you go, you will likely experience a 
completely different platform, all running as Javascript.

I agree that we, as users tend to have a fairly good experience with all the 
mess, but realistically, there is still so much churn in “how” to do web apps 
correctly, that I am not convinced, at all, that there is success as a concept 
in the Javascript platform.  It’s entirely too flexible, too primitive and 
precisely inefficient at directing development to create reusable, portable 
code between these platforms.  Every “page”, “panel” or “form” has huge amounts 
of wiring into the “platform” that was used.  You can not simply take a page 
from Facebook and reuse it on your own.  Single page applications like that are 
just mammoth collections of tightly coupled code.

The only reusable code items are things that are simple blocks of code which 
use local data items in my experience…

Okay, rant completed.

But still, the difference between the web and Jini is the concept of the Jini 
Proxy where the correct code for the service endpoint is always running in the 
client.  That’s a dramatic simplifying detail compared to what you have to 
worry about on the web.  On the web, the problem is knowing what the client is 
running compared to what data the service is providing.

Endpoints using non-fixed ports make the proxy stop working when the service 
is restarted, which forces the client to rediscover the service, reconnect to a 
working instance, and get the proxy for that working service, so 
that the client doesn’t have the wrong data for a new version of the service.  
The service can have a new readObject() for that class which can migrate the 
old data format to the new data format needed by the service.

What other details of Jini vs the Web are problem areas to you?  What makes the 
“web” better than Jini as a mobile code platform?  The “browser” is a platform. 
The notions of serviceUI were developed to create a “browser”-like platform as 
a starting point for services to export a complete client experience, the way a 
web page does today.  Are you using ServiceUI to take advantage of that, or do your 
clients have to create their own page or task to talk to your services?

Gregg

> On Feb 4, 2017, at 3:14 AM, Niclas Hedhman  wrote:
> 
> The latter...
> 
> It works rather well for JavaScript in web browsers. I think that is the
> most interesting "mobile code" platform to review as a starting point.
> 
> On Sat, Feb 4, 2017 at 2:54 PM, "Michał Kłeczek (XPro Sp. z o. o.)" <
> michal.klec...@xpro.biz> wrote:
> 
>> Are you opposing the whole idea of sending data and code (or instructions
>> how to download it) bundled together? (the spec)
>> Or just the way how it is done in Java today. (the impl)
>> 
>> If it is the first - we are in an absolute disagreement.
>> If the second - I agree wholeheartedly.
>> 
>> Thanks,
>> Michal
>> 
>> Niclas Hedhman wrote:
>> 
>>> FYI in case you didn't know; Jackson ObjectMapper takes a POJO structure
>>> and creates a (for instance) JSON document, or the other way around. It is
>>> not meant for "any object to binary and back".
>>> My point was, Java Serialization (and by extension JERI) has a scope that
>>> is possibly wrongly defined in the first place. More constraints back then
>>> might have been a good thing...
>>> 
>>> 

Re: OSGi

2017-02-04 Thread Gregg Wonderly
Okay, then I think you should investigate my replacement of the 
RMIClassLoaderSPI implementation with a pluggable mechanism.  

import java.io.IOException;
import java.net.URL;
import java.security.AccessControlContext;

public interface CodebaseClassAccess {
    public Class<?> loadClass(String codebase,
                              String name) throws IOException, ClassNotFoundException;
    public Class<?> loadClass(String codebase,
                              String name,
                              ClassLoader defaultLoader) throws IOException, ClassNotFoundException;
    public Class<?> loadProxyClass(String codebase,
                                   String[] interfaceNames,
                                   ClassLoader defaultLoader) throws IOException, ClassNotFoundException;
    public String getClassAnnotation(Class<?> cls);
    public ClassLoader getClassLoader(String codebase) throws IOException;
    public ClassLoader createClassLoader(URL[] urls,
                                         ClassLoader parent,
                                         boolean requireDlPerm,
                                         AccessControlContext ctx);
    /**
     * This should return the class loader that represents the system
     * environment.  This might often be the same as
     * {@link #getSystemContextClassLoader(ClassLoader)} but may not be in
     * certain circumstances where container mechanisms isolate certain
     * parts of the classpath between various contexts.
     */
    public ClassLoader getParentContextClassLoader();
    /**
     * This should return the class loader that represents the local system
     * environment that is associated with never-preferred classes.
     */
    public ClassLoader getSystemContextClassLoader(ClassLoader defaultLoader);
}

I have forgotten what Peter renamed it to.  But this base interface is what all 
of the Jini codebase uses to load classes.  The annotation is in the “codebase” 
parameter.  From this you can explore how the annotation can move from being a 
URL (which you could still recognize and use) to your own indicator 
for another platform, such as a Maven or OSGi targeted codebase.

Thus, you can still use the annotation, but use it to specify the type of 
stream instead of what to download via HTTP.
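
As a hypothetical sketch of that idea (none of these class or method names 
exist in River, and the scheme prefixes are made up too), an implementation 
could branch on the annotation's scheme and hand the work to a platform 
specific resolver:

import java.io.IOException;

public class DispatchingClassAccess {
    public Class<?> loadClass(String codebase, String name, ClassLoader defaultLoader)
            throws IOException, ClassNotFoundException {
        if (codebase == null) {
            return Class.forName(name, false, defaultLoader);
        }
        if (codebase.startsWith("mvn:")) {
            return loadFromMaven(codebase, name, defaultLoader);      // hypothetical
        }
        if (codebase.startsWith("osgi:")) {
            return loadFromBundle(codebase, name, defaultLoader);     // hypothetical
        }
        return loadFromHttpCodebase(codebase, name, defaultLoader);   // hypothetical
    }

    // Placeholders for the platform specific resolution strategies.
    private Class<?> loadFromMaven(String codebase, String name, ClassLoader dl)
            throws ClassNotFoundException { throw new ClassNotFoundException(name); }
    private Class<?> loadFromBundle(String codebase, String name, ClassLoader dl)
            throws ClassNotFoundException { throw new ClassNotFoundException(name); }
    private Class<?> loadFromHttpCodebase(String codebase, String name, ClassLoader dl)
            throws ClassNotFoundException { throw new ClassNotFoundException(name); }
}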

Gregg


> On Feb 4, 2017, at 2:02 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> My annotated streams replace codebase resolution with object based one (ie - 
> not using RMIClassLoader).
> 
> Michal
> 
> Gregg Wonderly wrote:
>> What specific things do you want your AnnotatedStream to provide?
>> 
>> Gregg
>> 
>> 
> 



Re: OSGi

2017-02-03 Thread Gregg Wonderly
There is a lot of code in the base implementation of the endpoints.  It’s quite 
deep to “replace”, but it is replaceable.  I’ve sometimes started to create 
various types of protocols instead of marshalled data, but, practically, I’ve 
not decided to do that.  There are other technologies, like RabbitMQ and 
various other queue-based technologies, that are better for “data” streaming.  The 
explicit nature of RPC and of the layers here makes them complex to replace.  
There are many things wired into multiple dependencies.

What specific things do you want your AnnotatedStream to provide?

Gregg

> On Feb 4, 2017, at 1:43 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> I know that.
> And while it is better than Java RMI for several reasons (extensibility being 
> one of them) - it is still not perfect:
> 
> 1) It is inherently blocking
> 2) Does not support data streaming (in general you need a separate comm 
> channel for this)
> 3) invocation layer depends on particular object serialization implementation 
> - Marshall input/output streams (this is my favorite - to plug in my new 
> AnnotatedStream implementations I must basically rewrite the invocation layer)
> 
> Thanks,
> Michal
> 
> 
> Peter wrote:
>> FYI.  JERI != Java RMI.
>> 
>> There's no reason these layers couldn't be provided as OSGi services and 
>> selected from the service registry either.
>> 
>> Cheers,
>> 
>> Peter.
>> 
>> 
>>   Protocol Stack
>> 
>> The Jini ERI architecture has a protocol stack with three layers as shown in 
>> the following table, with interfaces representing the abstractions of each 
>> layer on the client side and the server side as shown:
>> 
>> Layer                       | Client-side abstractions                            | Server-side abstractions
>> Invocation layer            | InvocationHandler                                   | InvocationDispatcher
>> Object identification layer | ObjectEndpoint                                      | RequestDispatcher
>> Transport layer             | Endpoint, OutboundRequestIterator, OutboundRequest  | ServerCapabilities, ServerEndpoint, InboundRequest
>> 
>> The client-side and server-side implementations of each layer are chosen for 
>> a particular remote object as part of exporting the remote object. The 
>> design is intended to allow plugging in different implementations of one 
>> layer without affecting the implementations of the other layers.
>> 
>> The client side abstractions correspond to the structure of the client-side 
>> proxy for a remote object exported with Jini ERI, with the invocation layer 
>> implementation containing the object identification layer implementation and 
>> that, in turn, containing the transport layer implementation.
>> 
>> Which invocation constraints are supported for remote invocations to a 
>> particular remote object exported with Jini ERI is partially dependent on 
>> the particular implementations of these layers used for the remote object 
>> (most especially the transport layer implementation).
>> 
>> 
>> 
>> On 4/02/2017 3:51 PM, Peter wrote:
>>> Thanks Nic,
>>> 
>>> JERI shouldn't be considered as being limited to or dependant on Java 
>>> Serialization, it's only a transport layer, anything that can write to an 
>>> OutputStream and read from an InputStream will do.
>>> 
>>> The JSON document could be compressed and sent as bytes, or UTF strings 
>>> sent as bytes.
>>> 
>>> See the interfaces InboundRequest and OutboundRequest.
>>> 
>>> Cheers,
>>> 
>>> Peter.
>>> 
>>> On 4/02/2017 3:35 PM, Niclas Hedhman wrote:
 FYI in case you didn't know; Jackson ObjectMapper takes a POJO structure
 and creates a (for instance) JSON document, or the other way around. It is
 not meant for "any object to binary and back".
 My point was, Java Serialization (and by extension JERI) has a scope that
 is possibly wrongly defined in the first place. More constraints back then
 might have been a good thing...
 
 
 
 On Sat, Feb 4, 2017 at 12:36 PM, Peter  wrote:
 
> On 4/02/2017 12:43 PM, Niclas Hedhman wrote:
> 
>> On Fri, Feb 3, 2017 at 12:23 PM, Peter   wrote:
>> 
>> No serialization or Remote method invocation framework currently supports
>>> OSGi very well, one that works well and can provide security might gain 
>>> a
>>> lot of new interest from that user base.
>>> 
>> What do you mean by this? Jackson's ObjectMapper doesn't have problems on
>> OSGi. You are formulating the problem wrongly, and if formulated
>> correctly,
>> perhaps one realizes why Java Serialization fell out of fashion rather
>> quickly 10-12 years ago, when people realized that code mobility (as done
>> in Java serialization/RMI) caused a lot of problems.
>> 
> Hmm, I didn't know that, sounds like an option for JERI.
> 
> 
> IMHO, RMI/Serialization's design is flawed. Mixing 

Re: OSGi

2017-02-03 Thread Gregg Wonderly
I meant RMIClassLoaderSPI below.  And note that this mechanism is currently a 
VM-wide implementation detail, instead of being a service proxy specific 
mechanism.  That’s where we need to be able to allow something specific to 
plug in, per service, to use a specific platform for resolving proxy 
and serviceUI components.

Gregg

> On Feb 3, 2017, at 9:52 AM, Gregg Wonderly <ge...@cox.net> wrote:
> 
> I was merely speaking of a ClassLoader which might composite several 
> ClassLoaders using something like the codebase annotation.  If we want to 
> make it possible to load service proxies and serviceUI codebases from 
> “anywhere” in Jini, we should be prepared to utilize whatever “compositing” 
> technique that platform provides/uses, while still trying to provide a 
> concrete view of ClassLoader passed around as TCCL so that specific isolation 
> that will occur, is maintained in code that is shared across all of these 
> codesource types.  
> 
> There may need to be some kind of factory involved at some level that is 
> parameterized by platform type, as is implied by the RMIClassLoaderAPI 
> mechanisms use of the codebase annotation.
> 
> Gregg
> 
>> On Feb 3, 2017, at 12:52 AM, Peter <j...@zeus.net.au> wrote:
>> 
>> Just in case I've misunderstood. Was your reference to composite ClassLoader 
>> is a reference to Bharath's earlier posted link?
>> 
>> http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1
>> 
>> Regards,
>> 
>> Peter.
>> 
>> Sent from my Samsung device.
>> 
>>  Include original message
>>  Original message 
>> From: Peter <j...@zeus.net.au>
>> Sent: 03/02/2017 02:23:25 pm
>> To: dev@river.apache.org <d...@riverapache.org>
>> Subject: Re: OSGi
>> 
>> Thanks Gregg,
>> 
>> I realise that Jini may have been a lot more successful had your experiences 
>> with desktop applications been given greater consideration.
>> 
>> Any thoughts on JavaFX, fxml for serviceui?
>> 
>> Criticism had also in the past, been levelled at the lookup service's 
>> inability to perform boolean logic comparisons of Entry fields.  Delayed 
>> unmarshalling allows local Entry comparisons, improved security and a more 
>> responsive ui.
>> 
>> I understand why you wouldn't want to "download the internet" (which really 
>> means download transitive dependencies at startup) when a user first opens 
>> their desktop application, quick start up is important on the desktop.  Less 
>> so on the server where run time performance is more important.
>> 
>> I get that you're not prepared to take a backwards step with desktop start 
>> up performance.
>> 
>> OSGi support is targeted towards Iot and growing our developer base, it's a 
>> convenient time for me to work on that now, as I'm working on a modular 
>> maven build, but it's still my understanding that we (River) also intend to 
>> continue supporting other developers who don't utilise OSGi or Maven (Rio 
>> users).
>> 
>> Maven pulls in a lot of unnecessary transitive dependencies (which isn't a 
>> problem for a server or build env), OSGi reduces them by using package 
>> dependencies at runtime.  Classdep can reduce the number further to only the 
>> class files necessary, but it doesn't support versioning.
>> 
>> Jigsaw is a modular framework that doesn't use ClassLoaders to manage module 
>> visibility but it has no versioning support either
>> 
>> Containers and modularity are developer concerns, but we need to ensure this 
>> is as easy as possible for developers to implement their chosen design and 
>> environment.  Right now OSGi is difficult because no one has written 
>> serialization that takes its ClassLoader visibility into account.
>> 
>> OSGi and Maven provide some good tools for automating and simplifying the 
>> build process.  Bnd generates manifests automatically with package imports 
>> and exports for OSGi and also reads annotations.  Modules certainly make the 
>> code easier to read and understand for the maintainer and allows components 
>> that don't change to be released on different timeframes.  The qa test suite 
>> is still an ant build, which I'm currently working on running with the new 
>> jar names on the class path (affects test policy file grants).  Clearly 
>> tests will need to be written for OSGi.
>> 
>> The TCCL classloader will continue to work as it always has, when an OSGi 
>> framework is not detected.
>> 
>> At the end of the day though, no framework is mand

Re: OSGi

2017-02-03 Thread Gregg Wonderly
 developed a bug in recent times that affects the build 
> (occassionally drops deps) and there has been little maintenance on it 
> recently.
> 
> OSGi could also determine dependency graphs, however would be OSGi specific 
> (package deps) and I'm not yet certain it's necessary.  I also think that 
> dependency graphs will tend to be a framework concern, whether they are 
> class, package or module based dependencies they are specific.
> 
> At this time, choosing a specific container or framework for developers to 
> utilise is a non goal, we're just working on better support for 
> deserialization in an OSGi framework, because there are visibility problems 
> that the TCCL can't completely solve for OSGi.
> 
> No serialization or Remote method invocation framework currently supports 
> OSGi very well, one that works well and can provide security might gain a lot 
> of new interest from that user base.
> 
> A modular River also reduces the api users need to learn to get started, 
> which makes River easier to learn for Maven or OSGi developers.  In this case 
> they already understand their build tools and probably want to continue using 
> them.
> 
> It's a non goal to chose any particular modular ity framework for developers 
> to use at this time.
> 
> Regards,
> 
> Peter.
> 
> Sent from my Samsung device .
>  
>   Include original message
>  Original message 
> From: Gregg Wonderly <ge...@cox.net>
> Sent: 03/02/2017 01:25:35 am
> To: dev@river.apache.org
> Subject: Re: OSGi
> 
> I am a fan of “one jar” because I get real tired of spending time “packaging” 
> when the class loading mechanisms already provide “segregation”.  I 
> understand how “pretty” packaging is and how everyone can be completely 
> excited about a clean view of dependencies.  However, when I package 
> something that works, I don’t want to hand someone 5 things.  The “jar” file 
> has been “click to run” file type for a long time.  That, for me means that 
> you should be able to get everything you need from that jar.  The problem 
> that clickable jar files suffer from, is that lack of use of Java for desktop 
> apps.  Instead, the server mentality of 10s of jars from 10s of places, 
> integration, versioning, etc., have created the “lots of pieces is fine” 
> viewpoint and tooling around that has made it pretty much impossible to 
> easily create one jar, without “custom” packaging. 
> 
> I like to solve problems once, not over and over again.  We need to make sure 
> and think about “tools” that make whatever packaging is decided on, trivial 
> to create.  Think about using annotations to segregate pieces into the 
> packages that you want them to be in.  Think about runtime dependency graphs 
> being expressible in annotations as well, so that we might be able to utilize 
> a composite class loader to “get” dependent jars from an appropriate source.  
> This would allow great, dynamic binding to occur in the class loader, and 
> still provide a single class loader view of the context so that TCCL and 
> other parts of the Java runtime will still work in non-jini packages. 
> 
> Gregg 
> 
>>  On Feb 1, 2017, at 7:44 PM, Peter <j...@zeus.net.au> wrote: 
>>   
>>  Thanks Gregg, 
>>   
>>  I think it's necessary to continue supporting preferred class loading for 
>> those who don't use osgi or maven.  Rio already has a maven class resolver 
>> RMIClassLoaderSPI implementation.  But we also need to ensure we can still 
>> solve the same problems that preferred class loading addresses in modular 
>> environments. 
>>   
>>  We also need to consider how existing implementations can transition to a 
>> modular framework, should developers want to. 
>>   
>>  River / Jini's classdepandjar duplicates classes in jar files.  Maven or 
>> OSGi modules usually don't. 
>>   
>>  In a modular version of Gregg's use case scenario, the shared Entries 
>> wouldn't be included in the proxy codebase but instead be imported from 
>> another module / bundle / package. 
>>   
>>  The Entry's would be imported by the client and proxy modules / bundles, 
>> avoiding unnecessary downloads and ensuring shared visibility.  The client 
>> and proxy will need to have an overlapping import package version range and 
>> the currently utilised package at the client will need to be within the 
>> proxy's imported version range, so it will be wired / resolved correctly. 
>>   
>>  We should look at implementing a modular test case of what your doing, to 
>> test our OSGiClassProvider. 
>>   
>>  Supporting OSGi is likely to require delayed unmarshalling.  Logical 
>> comparis

Re: OSGi

2017-02-02 Thread Gregg Wonderly
I am a fan of “one jar” because I get really tired of spending time “packaging” 
when the class loading mechanisms already provide “segregation”.  I understand 
how “pretty” packaging is and how everyone can be completely excited about a 
clean view of dependencies.  However, when I package something that works, I 
don’t want to hand someone 5 things.  The “jar” file has been a “click to run” 
file type for a long time.  That, for me, means that you should be able to get 
everything you need from that jar.  The problem that clickable jar files suffer 
from is the lack of use of Java for desktop apps.  Instead, the server 
mentality of 10s of jars from 10s of places, integration, versioning, etc., 
has created the “lots of pieces is fine” viewpoint, and tooling around that has 
made it pretty much impossible to easily create one jar without “custom” 
packaging.

I like to solve problems once, not over and over again.  We need to make sure 
to think about “tools” that make whatever packaging is decided on trivial to 
create.  Think about using annotations to segregate pieces into the packages 
that you want them to be in.  Think about runtime dependency graphs being 
expressible in annotations as well, so that we might be able to utilize a 
composite class loader to “get” dependent jars from an appropriate source.  
This would allow great, dynamic binding to occur in the class loader, and still 
provide a single class loader view of the context so that TCCL and other parts 
of the Java runtime will still work in non-Jini packages.
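
As a purely hypothetical illustration of what such an annotation might look 
like (nothing like this exists in River today):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Names the packaging "piece" a class belongs to, plus the pieces it needs at
// runtime, so build tools and a composite class loader could both read it.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.PACKAGE})
public @interface CodebasePart {
    String value();                  // e.g. "proxy", "serviceui", "entries"
    String[] dependsOn() default {}; // other pieces this one needs at runtime
}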

Gregg

> On Feb 1, 2017, at 7:44 PM, Peter <j...@zeus.net.au> wrote:
> 
> Thanks Gregg,
> 
> I think it's necessary to continue supporting preferred class loading for 
> those who don't use osgi or maven.  Rio already has a maven class resolver 
> RMIClassLoaderSPI implementation.  But we also need to ensure we can still 
> solve the same problems that preferred class loading addresses in modular 
> environments.
> 
> We also need to consider how existing implementations can transition to a 
> modular framework, should developers want to.
> 
> River / Jini's classdepandjar duplicates classes in jar files.  Maven or OSGi 
> modules usually don't.
> 
> In a modular version of Gregg's use case scenario, the shared Entries 
> wouldn't be included in the proxy codebase but instead be imported from 
> another module / bundle / package.
> 
> The Entry's would be imported by the client and proxy modules / bundles, 
> avoiding unnecessary downloads and ensuring shared visibility.  The client 
> and proxy will need to have an overlapping import package version range and 
> the currently utilised package at the client will need to be within the 
> proxy's imported version range, so it will be wired / resolved correctly.
> 
> We should look at implementing a modular test case of what your doing, to 
> test our OSGiClassProvider.
> 
> Supporting OSGi is likely to require delayed unmarshalling.  Logical 
> comparisons of Package version Entry's will be required before proxy's can be 
> downloaded/ unmarshalled.
> 
> The lookup service only provides exact matching.  However it would be 
> possible to perform a limited range of version matching with wild cards 
> without delayed unmarshalling.
> 
> Modular frameworks reduce downloads by utilising already downloaded code when 
> compatible.
> 
> Regards,
> 
> Peter
> 
> Sent from my Samsung device.
>  
>   Include original message
>  Original message 
> From: Gregg Wonderly <ge...@cox.net>
> Sent: 02/02/2017 06:56:43 am
> To: dev@river.apache.org
> Subject: Re: OSGi
> 
> Part of the “preferred” is to keep downloads from happening.  But the other 
> is the fact that the UI is already using/linked to specific sources of the 
> Entry classes that it uses for finding the name of the service, the icon and 
> other details.  There are serviceUI classes which are also already bound at 
> the time of service discovery and the serviceUI for that service needs to 
> resolve to those classes, not any in the codebase jars for the service. 
> 
> Gregg 
> 
>>  On Feb 1, 2017, at 5:52 AM, Peter <j...@zeus.net.au> wrote: 
>>   
>>  Gregg, 
>>   
>>  Have you got some more detail on your Entry classes that need to be 
>> preferred? 
>>   
>>  Thanks, 
>>   
>>  Peter. 
>>   
>>  Sent from my Samsung device. 
>>
>>Include original message 
>>   Original message  
>>  From: Gregg Wonderly <ge...@cox.net> 
>>  Sent: 31/01/2017 12:56:56 am 
>>  To: dev@river.apache.org 
>>  Subject: Re: OSGi 
>>   
>>  Maybe you can help me out here by explaining how it is that execution 
>> context and class visibility are 

Re: OSGi

2017-02-01 Thread Gregg Wonderly
Part of the “preferred” is to keep downloads from happening.  But the other is 
the fact that the UI is already using/linked to specific sources of the Entry 
classes that it uses for finding the name of the service, the icon and other 
details.  There are serviceUI classes which are also already bound at the time 
of service discovery and the serviceUI for that service needs to resolve to 
those classes, not any in the codebase jars for the service.

Gregg

> On Feb 1, 2017, at 5:52 AM, Peter <j...@zeus.net.au> wrote:
> 
> Gregg,
> 
> Have you got some more detail on your Entry classes that need to be preferred?
> 
> Thanks,
> 
> Peter.
> 
> Sent from my Samsung device.
>  
>   Include original message
> ---- Original message 
> From: Gregg Wonderly <ge...@cox.net>
> Sent: 31/01/2017 12:56:56 am
> To: dev@river.apache.org
> Subject: Re: OSGi
> 
> Maybe you can help me out here by explaining how it is that execution context 
> and class visibility are both handled by OSGi bundles.  For example, one of 
> my client applications is a desktop environment.  It does service look up for 
> all services registrations providing a “serviceUI”.  It then integrates all 
> of those services into a desktop view where the UIs are running at the same 
> time with each one imbedded in a JDesktopPane or a JTabbedPane or a JFrame or 
> JDialog.  There are callbacks from parts of that environment into my 
> application which in turn is interacting with the ServiceUI component.  You 
> have AWT event threads which are calling out, into the ServiceUIs and lots of 
> other threads of execution which all, ultimately, must have different class 
> loading environments so that the ServiceUI components can know where to load 
> code from. 
> 
> It’s exactly TCCL that allows them to know that based on all the other class 
> loading standards.  The ClassLoader is exactly the thing that all of them 
> have in common if you include OSGi bundles as well.  The important detail, is 
> that if the TCCL is not used as new ClassLoaders are created, then there is 
> no context for those new ClassLoaders to reference, universally. 
> 
> The important details are: 
> 
> 1) The desktop application has to be able to prefer certain Entry classes 
> which define details that are presented to the user. 
> 2) When the user double clicks on a services icon, or right clicks and 
> selects “Open in new Frame”, an async worker thread needs a TCCL pointing at 
> the correct parent class loader for the service’s URLClassLoader to reference 
> so that the preferred classes work. 
> 3) Anytime that the AWT Event thread might be active inside of the 
> services UI implementation, it also needs to indicate the correct parent 
> class loader if that UI component causes other class loading to occur. 
> 4) I am speaking specifically in the context of deferred class loading 
> which is controlled outside of the service discovery moment. 
> 
>   
>>  On Jan 30, 2017, at 4:04 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>> <michal.klec...@xpro.biz> wrote: 
>>   
>>  What I think Jini designers did not realize is that class loading can be 
>> treated exactly as any other capability provided by a (possibly remote) 
>> service. 
>>  Once you realize that - it is possible to provide a kind of a "universal 
>> container infrastructure" where different class loading implementations may 
>> co-exist in a single JVM. 
> 
> That’s precisely what ClassLoader is for.  TCCL is precisely to allow “some 
> class” to know what context to associate newly loaded classes with, so that 
> in such an environment, any code can load classes on behalf of some other 
> code/context.  It doesn’t matter if it is TCCL or some other class management 
> scheme such as OSGi bundles.  We are talking about the same detail, just 
> implemented in a different way. 
> 
>>  What's more - these class loading implementations may be dynamic themselves 
>> - ie. it is a service that provides the client with a way to load its own 
>> (proxy) classes. 
>>   
>>  In other words: "there not enough Jini in Jini itself”. 
> 
> I am not sure I understand where the short coming is at then.  Maybe you can 
> illustrate with an example where TCCL fails to allow some piece of code to 
> load classes on behalf of another piece of code? 
> 
> In my desktop application environment, there is an abstract class which is 
> used by each serviceUI to allow the desktop to know if it provides the 
> ability to open into one of the above mentioned JComponent subclasses.  That 
> class is preferred and provided and resolved using the codebase of the 
> desktop client.  That class loading en

Re: OSGi

2017-01-30 Thread Gregg Wonderly
The annotation for the exported services/classes is what is at issue here.  
Here are the perspectives I’m trying to make sure everyone sees.

1) Somehow, exported classes from one JVM need to be resolved in another JVM 
(at a minimum).  The source of those classes today is the codebase specified 
by the service.  A directed graph of JVMs exchanging classes demands that all 
service-like JVMs provide a codebase for client-like JVMs to be able to resolve 
the classes for objects traveling to the client from the service.  I believe this 
is nothing we don’t all already know.

2) If there is a 3rd party user of a class from one JVM which is handed objects 
resolved by a middle man JVM (as Michal is mentioning here), there is now a 
generally required class which all 3 JVMs need to be able to resolve.  As we 
know, Jini’s current implementation and basic design is that a service’s 
codebase has to provide a way for clients to resolve the classes it exports in 
its service implementation.  In the case Michal is mentioning, the demand would 
be for the middle man service to have the classes that it wants the 3rd service 
to resolve, in some part of its codebase.  This is why I mentioned ObjectSpace 
Voyager earlier.  I wanted to use it as an example of a mechanism which always 
packages class definitions into the byte stream that is used for sending 
objects between VMs.  Voyager would extract the class definitions from the 
jars, wrap them into the stream, and the remote JVM would then be able to 
resolve the classes by constructing instances of the class using the byte[] 
data for the class definition.
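
As a minimal sketch of that idea (not Voyager's actual API; assume the map of 
class bytes was already read from the incoming stream):

import java.util.Map;

public class ByteClassLoader extends ClassLoader {
    private final Map<String, byte[]> transmitted;  // class name -> class bytes

    public ByteClassLoader(ClassLoader parent, Map<String, byte[]> transmitted) {
        super(parent);
        this.transmitted = transmitted;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] bytes = transmitted.get(name);
        if (bytes == null) {
            throw new ClassNotFoundException(name);
        }
        return defineClass(name, bytes, 0, bytes.length);
    }
}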

Ultimately, no matter what the source of the byte[] data for the class 
definition is, it has to be present, at some point in all VMs using that 
definition/version of the class.  That’s what I am trying to say.  The issue is 
simply where would the class resolve from?  I think that class definition 
conveyance, between JVMs is something that we have choices on.  But, 
practically, you can’t change “annotations” to make this work.  If the middle 
man above is a “proxy” service which bridges two different networks, neither 
JVM on each network would have routing to get to the one on the other side of 
the proxy JVM.  This is why a mechanism like Objectspace Voyager would be one 
way to send class definitions defined on one network to another JVM on another 
network via this proxy service.

Of course other mechanisms for class conveyance are possible and in fact 
already exist.  Maven and even OSGi provide class, version oriented conveyance 
from a distribution point, into a particular JVM instance.  Once the class 
definition exists inside of one of those JVMs then we have all the other 
details about TCCL and creation of proper versions and resolution from proper 
class loaders.

I don’t think we have to dictate that a particular class conveyance mechanism 
is the only one.  But, to solve the problem of how to allow classes to hop between 
multiple JVMs, we have to specify how that might work at the level where 
service instances are resolved and some kind of class loading context is 
attached to that service.

The reason I am talking specifically about directed graphs of class loading is 
that I am first focused on the fact that there is a lot less flexibility in 
trying to resolve through a large collection of specific classes than through an 
open set of classes resolved along a directed graph of the code execution 
path, which exposes the places and moments of object use in a much more 
controlled and natural way to me.

Gregg

> On Jan 30, 2017, at 9:14 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> It looks to me like we are talking past each other.
> 
> Thread local resolution context is needed - we both agree on this.
> What we do not agree on is that the context should be a single ClassLoader. 
> It has to be a set of ClassLoaders to support situations when dependencies 
> are not hierarchical.
> 
> The use case is simple - I want to implement "decorator" services that 
> provide smart proxies wrapping (smart) proxies of other services.
> I also want to have Exporters provided as dynamic services which would allow 
> my services to adapt to changing network environment.
> 
> And I would like to stress - I am actually quite negative about OSGI being 
> the right environment for this.
> 
> Thanks,
> Michal
> 
> Gregg Wonderly wrote:
>> Maybe you can help me out here by explaining how it is that execution 
>> context and class visibility are both handled by OSGi bundles.  For example, 
>> one of my client applications is a desktop environment.  It does service 
>> look up for all services registrations providing a “serviceUI”.  It then 
>> integrates all of those services into a desktop view where the UIs are 
>> running at the same time

Re: OSGi

2017-01-30 Thread Gregg Wonderly
Maybe you can help me out here by explaining how it is that execution context 
and class visibility are both handled by OSGi bundles.  For example, one of my 
client applications is a desktop environment.  It does service look up for all 
services registrations providing a “serviceUI”.  It then integrates all of 
those services into a desktop view where the UIs are running at the same time 
with each one imbedded in a JDesktopPane or a JTabbedPane or a JFrame or 
JDialog.  There are callbacks from parts of that environment into my 
application which in turn is interacting with the ServiceUI component.  You 
have AWT event threads which are calling out, into the ServiceUIs and lots of 
other threads of execution which all, ultimately, must have different class 
loading environments so that the ServiceUI components can know where to load 
code from.

It’s exactly TCCL that allows them to know that based on all the other class 
loading standards.  The ClassLoader is exactly the thing that all of them have 
in common if you include OSGi bundles as well.  The important detail is that 
if the TCCL is not used as new ClassLoaders are created, then there is no 
context for those new ClassLoaders to reference, universally.

The important details are:

1) The desktop application has to be able to prefer certain Entry 
classes which define details that are presented to the user.
2) When the user double clicks on a services icon, or right clicks and 
selects “Open in new Frame”, an async worker thread needs a TCCL pointing at 
the correct parent class loader for the service’s URLClassLoader to reference 
so that the preferred classes work.
3) Anytime that the AWT Event thread might be active inside of the 
services UI implementation, it also needs to indicate the correct parent class 
loader if that UI component causes other class loading to occur.
4) I am speaking specifically in the context of deferred class loading 
which is controlled outside of the service discovery moment.

 
> On Jan 30, 2017, at 4:04 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> What I think Jini designers did not realize is that class loading can be 
> treated exactly as any other capability provided by a (possibly remote) 
> service.
> Once you realize that - it is possible to provide a kind of a "universal 
> container infrastructure" where different class loading implementations may 
> co-exist in a single JVM.

That’s precisely what ClassLoader is for.  TCCL is precisely to allow “some 
class” to know what context to associate newly loaded classes with, so that in 
such an environment, any code can load classes on behalf of some other 
code/context.  It doesn’t matter if it is TCCL or some other class management 
scheme such as OSGi bundles.  We are talking about the same detail, just 
implemented in a different way.

> What's more - these class loading implementations may be dynamic themselves - 
> ie. it is a service that provides the client with a way to load its own 
> (proxy) classes.
> 
> In other words: "there not enough Jini in Jini itself”.

I am not sure I understand where the short coming is at then.  Maybe you can 
illustrate with an example where TCCL fails to allow some piece of code to load 
classes on behalf of another piece of code?

In my desktop application environment, there is an abstract class which is used 
by each serviceUI to allow the desktop to know if it provides the ability to 
open into one of the above mentioned JComponent subclasses.  That class is 
preferred and provided and resolved using the codebase of the desktop client.  
That class loading environment is then the place where the service is finally 
resolved and classes created so that the proxy can be handed to the serviceUI 
component which ultimately only partially resolves from the services codebase.

It’s this class compatibility which needs to be lightweight.
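
Roughly, the shape of that abstract class is something like this (the name and 
methods here are invented for illustration, not my actual code):

import javax.swing.JComponent;

public abstract class DesktopUIFactory {
    // Lets the desktop ask which container kinds this serviceUI supports.
    public abstract boolean supportsInternalFrame();
    public abstract boolean supportsTab();
    // Creates the UI for the given service proxy, resolved in the right loader.
    public abstract JComponent createUI(Object serviceProxy);
}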

> 
> We have _all_ the required pieces in place:
> - dynamic code loading and execution (ClassLoaders),
> - security model and implementation that allows restricting rights of the 
> downloaded code,
> - and a serialization/deserialization which allows sending arbitrary data 
> (and yes - code too) over the wire.
> 
> It is just the matter of glueing the pieces together.

Correct, but it’s a matter of class compatibility where a client environment 
has to interact with a service and the serviceUI components where TCCL excels 
and providing the ability to create class loaders with the correct parent 
context, for Java based code.  OSGi introduces the opportunity for some extra 
bells and whistles.  But I don’t see that it can completely eliminate the 
nature of TCCL and how it was intended to be used.

> 
> Thanks,
> Michal
> 
> 
> Gregg Wonderly wrote:
>> 
>> I am not an OSGi user.  I am not trying to 

Re: OSGi

2017-01-29 Thread Gregg Wonderly
But codebase identity is not a single thing.  If you are going to allow a 
client to interact with multiple services and use those services, together, to 
create a composite service or just be a client application, you need all of the 
classes to interact.  One of the benefits of dynamic class loading, and the 
selling point of how Jini was first presented (and I still consider this to be 
a big deal), is the notion that you can introduce a new version of a service 
alongside one that already exists, to try out the new version.  Thus, the 
same class name can have multiple versions presented by multiple jars or 
bundles.  You need to load the right one, and expose it to the client(s) in a 
way that keeps things distinctly separated.

    service A -> codesource1, codesource2, codesource3
new service A -> codesource4, codesource2, codesource5

If you get the new service A, you need (as if it was a separate service), to 
resolve it using the proper code sources.  I understand how to do this with 
TCCL, and I also understand how it might be done with some other class loading 
mechanism.  The question is, for OSGi bundles, how does a bundle loader manager 
make that any different from TCCL?  Ultimately, you still have a “tree” 
or “set” of dependencies that are resolved into the composite codebase.  
Bundles introduce a larger collection of active classes and mechanisms managing 
the dependency graph.  There are tools to assemble bundles and lots of other 
associated details.  TCCL makes it trivial to “know” what codesource is active, 
and it is no different in complexity than casting a class loader back to a bundle 
class loader to find fields which detail the collection of involved classes.  

I am not an OSGi user.  I am not trying to be an OSGi opponent.  What I am 
trying to say is that I consider all the commentary in those articles about 
TCCL not working to be just inexperience and argument to try and justify a 
different position or interpretation of what the real problem is.

The real problem is that there is not one “module” concept in Java (another one 
is almost here in JDK 9/Jigsaw).  No one is working together on this, and OSGi 
is solving problems in a small part of the world of software.   It works well 
for embedded, static systems.  I think OSGi misses the mark on dynamic systems 
because of the piecemeal loading and resolving of classes.  I am not sure that 
OSGi developers really understand everything that Jini can do because of the 
choices made (and not made) in the design.  The people who put Jini together 
had many years of experience piecing together systems which needed 
to work well with a higher degree of variability and adaptation to the 
environment than what most people seem to experience in their classes and work 
environments, which are locked down by extremely controlled distribution 
strategies which end up slowing development in an attempt to control everything 
that doesn’t actually cause quality to suffer.

Gregg

> On Jan 28, 2017, at 3:46 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> I would say that using TCCL as is a poor man's approach to class resolution. 
> Once you have codebase identity done right - it is not needed anymore.
> 
> Thanks,
> Michal
> 
> Gregg Wonderly wrote:
>> The commentary in the first document indicates that there is no rhyme or 
>> reason to the use of the context class loader.  For me, the reason was very 
>> obvious.  Anytime that you are going to create a new class loader, you 
>> should set the parent class loader to the context class loader so that the 
>> calling thread environment's class loading context will allow for classes 
>> referenced by the new class loader to resolve to classes that thread’s 
>> execution already can resolve.
>> 
>> What other use of a context class loader would happen?
>> 
>> Gregg
>> 
>>> On Jan 27, 2017, at 11:39 AM, Bharath Kumar <bharathkuma...@gmail.com> 
>>> <mailto:bharathkuma...@gmail.com> wrote:
>>> 
>>> Yes Peter. Usage of thread context class loader is discouraged in OSGi
>>> environment.
>>> 
>>> http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html 
>>> <http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html>
>>> 
>>> Some of the problems are hard to solve in OSGi environment. For example,
>>> creating dynamic java proxy from 2 or more interfaces that are located in
>>> different bundles.
>>> 
>>> http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1 
>>> <http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1>
>>> 
>>> This problem can be solved using composite class loader. But

Re: OSGi

2017-01-27 Thread Gregg Wonderly
The commentary in the first document indicates that there is no rhyme or reason 
to the use of the context class loader.  For me, the reason was very obvious.  
Anytime that you are going to create a new class loader, you should set the 
parent class loader to the context class loader so that the calling thread 
environment's class loading context will allow classes referenced by the new 
class loader to resolve to classes that thread’s execution can already resolve.
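
As a minimal sketch of that rule (the helper class name is made up):

import java.net.URL;
import java.net.URLClassLoader;

public final class CodebaseLoaders {
    // Any new loader for downloaded code takes the calling thread's context
    // ClassLoader as its parent, so classes the caller can already resolve
    // stay shared instead of being loaded a second time.
    public static ClassLoader newCodebaseLoader(URL[] codebase) {
        ClassLoader parent = Thread.currentThread().getContextClassLoader();
        return new URLClassLoader(codebase, parent);
    }
}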

What other use of a context class loader would happen?

Gregg

> On Jan 27, 2017, at 11:39 AM, Bharath Kumar <bharathkuma...@gmail.com> wrote:
> 
> Yes Peter. Usage of thread context class loader is discouraged in OSGi
> environment.
> 
> http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html
> 
> Some of the problems are hard to solve in OSGi environment. For example,
> creating dynamic java proxy from 2 or more interfaces that are located in
> different bundles.
> 
> http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1
> 
> This problem can be solved using composite class loader. But it is
> difficult to write it correctly. Because OSGi environment is dynamic.
> 
> I believe that it is possible to provide enough abstraction in river code,
> so that service developers don't even require to use context class loader
> in their services.
> 
> 
> 
> Thanks & Regards,
> Bharath
> 
> 
> On 27-Jan-2017 6:25 PM, "Peter" <j...@zeus.net.au> wrote:
> 
>> Thanks Gregg,
>> 
>> Thoughts inline below.
>> 
>> Cheers,
>> 
>> Peter.
>> 
>> 
>> On 27/01/2017 12:35 AM, Gregg Wonderly wrote:
>> 
>>> Is there any thought here about how a single client might use both an
>>> OSGi deployed service and a conventionally deployed service?
>>> 
>> 
>> Not yet, I'm currently considering how to support OSGi by implementing an
>> RMIClassLoaderSPI, similar to how Dennis has for Maven in Rio.
>> 
>> I think once a good understanding of OSGi has developed, we can consider
>> how an implementation could support that, possibly by exploiting something
>> like Pax URL built into PreferredClassProvider.
>> 
>> 
>>  The ContextClassLoader is a good abstraction mechanism for finding “the”
>>> approriate class loader.  It allows applications to deploy a composite
>>> class loader in some form that would be able to resolve classes from many
>>> sources and even provide things like preferred classes.
>>> 
>> 
>> Yes, it works well for conventional frameworks and is utilised by
>> PreferredClassProvider, but it's use in OSGi is discouraged, I'd like to
>> consider how it's use can be avoided in an OSGi env.
>> 
>> 
>>> 
>>> In a Java desktop application, would a transition from a background
>>> thread, interacting with a service to get an object from a service which is
>>> not completely resolved to applicable loaders still resolve correctly in an
>>> EventDispatch Thread?  That event dispatch thread can have the context
>>> class loader set on it by the thread which got the object, to be the class
>>> loader of the service object, to make sure that the resolution of classes
>>> happens with the correct class loader such that there will not be a problem
>>> with the service object having one set of definitions and another service
>>> or the application classpath having a conflicting class definition by the
>>> same name.
>>> 
>>> I’ve had to spend quite a bit of time to make sure that these scenarios
>>> work correctly in my Swing applications.
>>> 
>> 
>> Have you got more information?  I'm guessing this relates to delayed
>> unmarshalling into the EventDispatch thread.
>> 
>> It's early days yet, I'm still working it out what information is required
>> to resolve the correct ClassLoaders & bundles, but this is an important
>> question, Bharath mentioned Entry's can be utilised for versioning and this
>> seems like a good idea.
>> 
>> What follows are thoughts and observations.
>> 
>> A bundle can be created from a URL, provided the codebase the URL refers
>> to has an OSGi bundle manifest, so this could allow any of the existing URL
>> formats to deliver a proxy codebase for an OSGi framework.  When OSGi loads
>> the bundle, the package dependencies will be wired up by the local env.  If
>> the URL doesn't reference a bundle, then we could use Bharath's approach
>> and subclass the client's ClassLoader, this does make all the clients
>> classes visible to the proxy how

Re: OSGi

2017-01-27 Thread Gregg Wonderly
The ultimate issue is visibility of classes.  From most perspectives, there is 
an execution graph that implies moments when new classes are touched/needed.  
Where OSGi introduces bundles, it erases the ability to depend on that graph to 
“expose” the correct version of classes to the correct threads of execution.  
This is where the bundle model breaks down.  When there are a varied set of 
services with a varied set of versions of various jars, some of which are in 
conflict with each other, the context class loader exposes the correct version 
for code to use at that moment the code is running.

This is the number one detriment to how bundles work and resolve in OSGi.  It's 
nearly impossible to use OSGi unless you are in charge of every bundle and 
update all versions of all software simultaneously.

Without context class loading on a per thread basis, you end up pretty 
restricted in how software can evolve and how deployment of services can be 
unrelated to each other.
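
A minimal sketch of what "context class loading on a per thread basis" means in 
practice (the names here are illustrative only):

class ContextLoaderSketch {
    // Run a task with the service's class view installed on the current thread,
    // restoring the previous context class loader afterwards.
    static void runWithLoader(ClassLoader serviceLoader, Runnable task) {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(serviceLoader);
        try {
            task.run();
        } finally {
            current.setContextClassLoader(previous);
        }
    }
}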

Gregg


> On Jan 27, 2017, at 11:39 AM, Bharath Kumar <bharathkuma...@gmail.com> wrote:
> 
> Yes Peter. Usage of thread context class loader is discouraged in OSGi
> environment.
> 
> http://njbartlett.name/2012/10/23/dreaded-thread-context-classloader.html
> 
> Some of the problems are hard to solve in OSGi environment. For example,
> creating dynamic java proxy from 2 or more interfaces that are located in
> different bundles.
> 
> http://blog.osgi.org/2008/08/classy-solutions-to-tricky-proxies.html?m=1
> 
> This problem can be solved using composite class loader. But it is
> difficult to write it correctly. Because OSGi environment is dynamic.
> 
> I believe that it is possible to provide enough abstraction in river code,
> so that service developers don't even require to use context class loader
> in their services.
> 
> 
> 
> Thanks & Regards,
> Bharath
> 
> 
> On 27-Jan-2017 6:25 PM, "Peter" <j...@zeus.net.au> wrote:
> 
>> Thanks Gregg,
>> 
>> Thoughts inline below.
>> 
>> Cheers,
>> 
>> Peter.
>> 
>> 
>> On 27/01/2017 12:35 AM, Gregg Wonderly wrote:
>> 
>>> Is there any thought here about how a single client might use both an
>>> OSGi deployed service and a conventionally deployed service?
>>> 
>> 
>> Not yet, I'm currently considering how to support OSGi by implementing an
>> RMIClassLoaderSPI, similar to how Dennis has for Maven in Rio.
>> 
>> I think once a good understanding of OSGi has developed, we can consider
>> how an implementation could support that, possibly by exploiting something
>> like Pax URL built into PreferredClassProvider.
>> 
>> 
>> The ContextClassLoader is a good abstraction mechanism for finding “the”
>>> approriate class loader.  It allows applications to deploy a composite
>>> class loader in some form that would be able to resolve classes from many
>>> sources and even provide things like preferred classes.
>>> 
>> 
>> Yes, it works well for conventional frameworks and is utilised by
>> PreferredClassProvider, but it's use in OSGi is discouraged, I'd like to
>> consider how it's use can be avoided in an OSGi env.
>> 
>> 
>>> 
>>> In a Java desktop application, would a transition from a background
>>> thread, interacting with a service to get an object from a service which is
>>> not completely resolved to applicable loaders still resolve correctly in an
>>> EventDispatch Thread?  That event dispatch thread can have the context
>>> class loader set on it by the thread which got the object, to be the class
>>> loader of the service object, to make sure that the resolution of classes
>>> happens with the correct class loader such that there will not be a problem
>>> with the service object having one set of definitions and another service
>>> or the application classpath having a conflicting class definition by the
>>> same name.
>>> 
>>> I’ve had to spend quite a bit of time to make sure that these scenarios
>>> work correctly in my Swing applications.
>>> 
>> 
>> Have you got more information?  I'm guessing this relates to delayed
>> unmarshalling into the EventDispatch thread.
>> 
>> It's early days yet, I'm still working it out what information is required
>> to resolve the correct ClassLoaders & bundles, but this is an important
>> question, Bharath mentioned Entry's can be utilised for versioning and this
>> seems like a good idea.
>> 
>> What follows are thoughts and observations.
>> 
>> A bundle can be created from a URL, provided the codebase the URL refers

Re: OSGi

2017-01-25 Thread Gregg Wonderly
>> and it seems like it is based on Java 1.x (ancient beast) and - as I  
>>>> understand it - the issues you describe are mainly caused by having only  
>>>> a single class name space (single ClassLoader). 
>>>> 
>>>> But sending IMHO class bytes in-band is not necessary (nor good). 
>>>> 
>>>> What is needed is: 
>>>> 1. Encoding dependency information in codebases (either in-band or by  
>>>> providing a downloadable descriptor) so that it is possible to recreate  
>>>> proper ClassLoader structure (hierarchy or rather graph - see below) on  
>>>> the client. 
>>>> 2. Provide non-hierarchical class loading to support arbitrary object  
>>>> graph deserialization (otherwise there is a problem with "diamond  
>>>> shaped" object graphs). 
>>>> 
>>>> A separate issue is with the definition of codebase identity. I guess  
>>>> originally Jini designers wanted to avoid this issue and left it  
>>>> undefined... but it is unavoidable :) 
>>>> 
>>>> Thanks, 
>>>> Michal 
>>>> 
>>>> Gregg Wonderly wrote: 
>>>>>  That’s what I was suggesting.  The code works, but only if you put the 
>>>>> required classes into codebases or class paths.  It’s not a problem with 
>>>>> mobile code, it’s a problem with resolution of objects in mobile code 
>>>>> references.  That’s why I mentioned ObjectSpace Voyager.  It 
>>>>> automatically sent/sends class definitions with object graphs to the 
>>>>> remote VM. 
>>>>> 
>>>>>  Gregg 
>>>>> 
>>>>>>  On Jan 23, 2017, at 3:03 PM, Michał Kłeczek (XPro Sp. z o. 
>>>>>> o.)<michal.klec...@xpro.biz> <mailto:michal.klec...@xpro.biz>  wrote: 
>>>>>> 
>>>>>>  The problem is that we only support (smart) proxies that reference only 
>>>>>> objects of classes from their own code base. 
>>>>>>  We do not support cases when a (smart) proxy wraps a (smart) proxy of 
>>>>>> another service (annotated with different codebase). 
>>>>>> 
>>>>>>  This precludes several scenarios such as for example "dynamic 
>>>>>> exporters" - exporters that are actually smart proxies. 
>>>>>> 
>>>>>>  Thanks, 
>>>>>>  Michal 
>>>>>> 
>>>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>> 
> 



Re: OSGi

2017-01-25 Thread Gregg Wonderly
Version ids on jar file names create proper versioning of codebases.  So, you 
can deploy a new service with a well known interface and a different jar file 
with a different name (version number or some other part of the URL different) 
and you get versioning of URLs.  You’ll get a new URLClassLoader instance for 
that URL because it is different.  The HTTPMD protocol handler includes the MD5 
hash on the URL to aid in versioning content too (and yes it is not completely 
unique).   The deployment process you undertake as a service deployer allows 
you to use a symlink to create a different name for a jar you want to be in the 
codebase for example.  Then, you can upgrade new clients but not old clients 
while maintaining the same URI string.
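
For illustration only (the host, jar names and digest below are made up), the two 
styles of codebase identity look roughly like this:

# version carried in the jar file name
java.rmi.server.codebase=http://example.host/mysvc-dl-2.1.jar

# httpmd URL, where the digest changes whenever the jar content changes
java.rmi.server.codebase=httpmd://example.host/mysvc-dl.jar;md5=8b1a9953c4611296a827abf8c47804d7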

Gregg

> On Jan 25, 2017, at 6:45 PM, Peter <j...@zeus.net.au> wrote:
> 
> codebase identity
> 
> So River codebase identity is currently any number of space delimited RFC 
> 3986 normalised URI strings.
> 
> httpmd uses a location filename and message digest.
> 
> But should location be part of identity?  How can you relocate a codebase 
> once remote objects are deployed?
> 
> OSGi and Maven use a name and version to identify a codebase.  
> 
> Might we also need codebase signers (if any) to be part of identity?
> 
> If no, why not and if yes why?
> 
> Regards,
> 
> Peter.
> 
> Sent from my Samsung device.
>  
>   Include original message
>  Original message 
> From: "Michał Kłeczek (XPro Sp. z o. o.)" <michal.klec...@xpro.biz>
> Sent: 26/01/2017 08:30:58 am
> To: d...@riverapache.org
> Subject: Re: OSGi
> 
> I haven't been aware of ObjectSpace Voyager. I just briefly looked at it  
> and it seems like it is based on Java 1.x (ancient beast) and - as I  
> understand it - the issues you describe are mainly caused by having only  
> a single class name space (single ClassLoader). 
> 
> But sending IMHO class bytes in-band is not necessary (nor good). 
> 
> What is needed is: 
> 1. Encoding dependency information in codebases (either in-band or by  
> providing a downloadable descriptor) so that it is possible to recreate  
> proper ClassLoader structure (hierarchy or rather graph - see below) on  
> the client. 
> 2. Provide non-hierarchical class loading to support arbitrary object  
> graph deserialization (otherwise there is a problem with "diamond  
> shaped" object graphs). 
> 
> A separate issue is with the definition of codebase identity. I guess  
> originally Jini designers wanted to avoid this issue and left it  
> undefined... but it is unavoidable :) 
> 
> Thanks, 
> Michal 
> 
> Gregg Wonderly wrote: 
>>  That’s what I was suggesting.  The code works, but only if you put the 
>> required classes into codebases or class paths.  It’s not a problem with 
>> mobile code, it’s a problem with resolution of objects in mobile code 
>> references.  That’s why I mentioned ObjectSpace Voyager.  It automatically 
>> sent/sends class definitions with object graphs to the remote VM. 
>> 
>>  Gregg 
>> 
>>>  On Jan 23, 2017, at 3:03 PM, Michał Kłeczek (XPro Sp. z o. 
>>> o.)<michal.klec...@xpro.biz>  wrote: 
>>> 
>>>  The problem is that we only support (smart) proxies that reference only 
>>> objects of classes from their own code base. 
>>>  We do not support cases when a (smart) proxy wraps a (smart) proxy of 
>>> another service (annotated with different codebase). 
>>> 
>>>  This precludes several scenarios such as for example "dynamic exporters" - 
>>> exporters that are actually smart proxies. 
>>> 
>>>  Thanks, 
>>>  Michal 
>>> 
>>> 
> 
> 
> 



Re: OSGi

2017-01-23 Thread Gregg Wonderly
That’s what I was suggesting.  The code works, but only if you put the required 
classes into codebases or class paths.  It’s not a problem with mobile code, 
it’s a problem with resolution of objects in mobile code references.  That’s 
why I mentioned ObjectSpace Voyager.  It automatically sent/sends class 
definitions with object graphs to the remote VM.

Gregg

> On Jan 23, 2017, at 3:03 PM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> The problem is that we only support (smart) proxies that reference only 
> objects of classes from their own code base.
> We do not support cases when a (smart) proxy wraps a (smart) proxy of another 
> service (annotated with different codebase).
> 
> This precludes several scenarios such as for example "dynamic exporters" - 
> exporters that are actually smart proxies.
> 
> Thanks,
> Michal
> 
> Gregg Wonderly wrote:
>> I guess I am not sure then what you are trying to show with your example.
>> 
>> Under what case would the SpacePublisher be sent to another VM, and how is 
>> that different from normal SmartProxy deserialization?
>> 
>> Gregg
>> 
>>> On Jan 23, 2017, at 2:39 PM, Michał Kłeczek (XPro Sp. z o. o.) 
>>> <michal.klec...@xpro.biz> <mailto:michal.klec...@xpro.biz> wrote:
>>> 
>>> 
>>> 
>>> Gregg Wonderly wrote:
>>>>> michal.klec...@xpro.biz <mailto:michal.klec...@xpro.biz> wrote:
>>>>>>> The use case and the ultimate test to implement is simple - have a
>>>>>> listener that publishes remote events to a JavaSpace acquired dynamically
>>>>>> from a lookup service:
>>>>>>> class SpacePublisher implements RemoteEventListener, Serializable {
>>>>>>>   private final JavaSpace space;
>>>>>>>   public void notify(RemoteEvent evt) {
>>>>>>> space.write(createEntry(evt), ...);
>>>>>>>   }
>>>>>>> }
>>>>>>> 
>>>>>>> It is NOT possible to do currently. It requires non-hierarchical class
>>>>>> loading. It is not easy to solve. It would open a whole lot of
>>>>>> possibilities.
>>>>>> 
>>>>>> I am probably too ignorant to see it; What exactly is "NOT possible" with
>>>>>> the above use-case snippet?
>>>>> With currently implemented PreferredClassProvider it is not possible to 
>>>>> deserialize such an object graph.
>>>> This can happen, but what’s necessary is that the codebase of the 
>>>> SpacePublisher needs to include all the possible RemoteEvent classes, or 
>>>> the javaspace’s classpath has to include them.   
>>> I am not sure I understand.
>>> The problem does not have anything to do with RemoteEvent (sub)classes. The 
>>> issue is that SpacePublisher cannot be deserialized at all ( except one 
>>> case when JavaSpace interface is available from context class loader and it 
>>> is not marked as preferred in SpacePublisher code base).
>>> 
>>> Michal
>> 
>> 
> 



Re: OSGi

2017-01-23 Thread Gregg Wonderly
I guess I am not sure then what you are trying to show with your example.

Under what case would the SpacePublisher be sent to another VM, and how is that 
different from normal SmartProxy deserialization?

Gregg

> On Jan 23, 2017, at 2:39 PM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> 
> 
> Gregg Wonderly wrote:
>>> michal.klec...@xpro.biz <mailto:michal.klec...@xpro.biz> wrote:
>>>> 
>>>>> The use case and the ultimate test to implement is simple - have a
>>>> listener that publishes remote events to a JavaSpace acquired dynamically
>>>> from a lookup service:
>>>>> class SpacePublisher implements RemoteEventListener, Serializable {
>>>>>   private final JavaSpace space;
>>>>>   public void notify(RemoteEvent evt) {
>>>>> space.write(createEntry(evt), ...);
>>>>>   }
>>>>> }
>>>>> 
>>>>> It is NOT possible to do currently. It requires non-hierarchical class
>>>> loading. It is not easy to solve. It would open a whole lot of
>>>> possibilities.
>>>> 
>>>> I am probably too ignorant to see it; What exactly is "NOT possible" with
>>>> the above use-case snippet?
>>> With currently implemented PreferredClassProvider it is not possible to 
>>> deserialize such an object graph.
>> 
>> This can happen, but what’s necessary is that the codebase of the 
>> SpacePublisher needs to include all the possible RemoteEvent classes, or the 
>> javaspace’s classpath has to include them.   
> I am not sure I understand.
> The problem does not have anything to do with RemoteEvent (sub)classes. The 
> issue is that SpacePublisher cannot be deserialized at all ( except one case 
> when JavaSpace interface is available from context class loader and it is not 
> marked as preferred in SpacePublisher code base).
> 
> Michal



Re: OSGi

2017-01-23 Thread Gregg Wonderly

> On Jan 22, 2017, at 6:00 PM, Michał Kłeczek (XPro Sp. z o. o.) 
>  wrote:
> 
> Hi,
> 
> comments below.
> 
> Niclas Hedhman wrote:
>> On Mon, Jan 23, 2017 at 1:48 AM, "Michał Kłeczek (XPro Sp. z o. o.)" <
>> michal.klec...@xpro.biz > wrote:
>>> The use case and the ultimate test to implement is simple - have a
>> listener that publishes remote events to a JavaSpace acquired dynamically
>> from a lookup service:
>>> class SpacePublisher implements RemoteEventListener, Serializable {
>>>   private final JavaSpace space;
>>>   public void notify(RemoteEvent evt) {
>>> space.write(createEntry(evt), ...);
>>>   }
>>> }
>>> 
>>> It is NOT possible to do currently. It requires non-hierarchical class
>> loading. It is not easy to solve. It would open a whole lot of
>> possibilities.
>> 
>> I am probably too ignorant to see it; What exactly is "NOT possible" with
>> the above use-case snippet?
> With currently implemented PreferredClassProvider it is not possible to 
> deserialize such an object graph.

This can happen, but what’s necessary is that the codebase of the 
SpacePublisher needs to include all the possible RemoteEvent classes, or the 
javaspace’s classpath has to include them.   Jini doesn’t, dynamically, create 
codebase references which might flow back to the VM which sent the object, to 
let that VM send the correct class definition.  This has (and does still I 
believe) happen in other platforms.   I first saw this in ObjectSpace Voyager 
which appeared on the scene prior to Jini being open sourced.  

The idea of there being a free-flowing graph of class definitions can create 
conflicts, where different timelines produce different results because different 
versions of the class arrive from different places.
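
As a hedged illustration (the host and jar names are invented), that means the 
exporting VM's codebase annotation has to already carry everything the receiver 
might need, for example:

-Djava.rmi.server.codebase="http://example.host/publisher-dl.jar http://example.host/events-dl.jar"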

Gregg



Re: Maven build

2017-01-07 Thread Gregg Wonderly
This is a nice looking tool! 

Gregg

Sent from my iPhone

> On Jan 7, 2017, at 4:56 AM, Peter  wrote:
> 
> Neat little tool that generates vulnerability reports on dependencies during 
> a maven build. N.B. the following aren't actual dependencies of Phoenix.
> 
> org.owasp
> dependency-check-maven
> 
> Cheers,
> 
> Pete.
> 
> Dependency-Check is an open source tool performing a best effort analysis of 
> 3rd party dependencies; false positives and false negatives may exist in the 
> analysis performed by the tool. Use of the tool and the reporting provided 
> constitutes acceptance for use in an AS IS condition, and there are NO 
> warranties, implied or otherwise, with regard to the analysis or its use. Any 
> use of the tool and the reporting provided is at the user’s risk. In no event 
> shall the copyright holder or OWASP be held liable for any damages whatsoever 
> arising out of or in connection with the use of this tool, the analysis 
> performed, or the resulting report.
> 
> 
> How to read the report
> 
> | Suppressing false positives
> 
> | Getting Help: google group
>  |
> github issues 
> 
> 
>   Project: Module :: Phoenix
> 
> Scan Information (show all):
> 
>   * /dependency-check version/: 1.4.4
>   * /Report Generated On/: Jan 7, 2017 at 19:06:08 EST
>   * /Dependencies Scanned/: 62 (62 unique)
>   * /Vulnerable Dependencies/: 4
>   * /Vulnerabilities Found/: 9
>   * /Vulnerabilities Suppressed/: 0
>   * ...
> 
> 
> Display: Showing Vulnerable Dependencies (click to show all)
> 
> Dependency | CPE | GAV | Highest Severity | CVE Count | CPE Confidence | Evidence Count
> commons-httpclient-3.0.jar | cpe:/a:apache:commons-httpclient:3.0, cpe:/a:apache:httpclient:3.0 | commons-httpclient:commons-httpclient:3.0 | Medium | 4 | HIGHEST | 15
> jackrabbit-jcr-commons-1.5.0.jar | cpe:/a:apache:jackrabbit:1.5.0 | org.apache.jackrabbit:jackrabbit-jcr-commons:1.5.0 | Medium | 2 | HIGHEST | 15
> jackrabbit-webdav-1.5.0.jar | cpe:/a:apache:jackrabbit:1.5.0 | org.apache.jackrabbit:jackrabbit-webdav:1.5.0 | Medium | 2 | HIGHEST | 13
> wagon-webdav-jackrabbit-1.0-beta-6.jar | cpe:/a:apache:jackrabbit:1.0 | org.apache.maven.wagon:wagon-webdav-jackrabbit:1.0-beta-6 | Medium | 1 | LOW | 16
> 
> 
>   Dependencies
> 
> 
> commons-httpclient-3.0.jar
> 
> *Description:* The HttpClient component supports the client-side of RFC 1945 
> (HTTP/1.0) and RFC 2616 (HTTP/1.1) , several related specifications (RFC 2109 
> (Cookies) , RFC 2617 (HTTP Authentication) , etc.), and provides a framework 
> by which new request types (methods) or HTTP extensions can be created easily.
> 
> *License:*
> 
> Apache License: http://www.apache.org/licenses/LICENSE-2.0
> 
> 

Re: site revamp

2016-12-25 Thread Gregg Wonderly
Yes that is what I also experienced on my mobile.

Gregg

> On Dec 23, 2016, at 7:52 AM, Niclas Hedhman <hedh...@gmail.com> wrote:
> 
> On my phone (Nexus 6P) it looks like this;
> https://goo.gl/photos/jxVnHvoES4EtaQh5A
> And probably reflects what Gregg is talking about.
> 
> Cheers
> Niclas
> 
>> On Dec 23, 2016 21:40, "Gregg Wonderly" <gr...@wonderly.org> wrote:
>> 
>> I am just thinking about how people use text messages to share things, and
>> thought it might be a little problematic on mobile.
>> 
>> I too am not sure that it matters, but just wanted to share what I saw.
>> 
>> Gregg
>> 
>> Sent from my iPhone
>> 
>>> On Dec 23, 2016, at 12:15 AM, Zsolt Kúti <la.ti...@gmail.com> wrote:
>>> 
>>> @Dan: Thanks for pointig this out: fixed
>>> @Gregg: Bootstrap supports responsive design, however I am not a web
>>> designer :-)   I am not sure how many will follow us on mobile, either.
>>> Anyway, I will take a look into it, if this can easily be fixed.
>>> @ all of you who liked it: thanks!
>>> 
>>> Zsolt
>>> 
>>>> On Fri, Dec 23, 2016 at 3:23 AM, Luis Matta <matta.l...@gmail.com>
>> wrote:
>>>> 
>>>> Very nice, congrats (and thanks)
>>>> 
>>>> On Thu, Dec 22, 2016 at 10:42 PM, Gregg Wonderly <gr...@wonderly.org>
>>>> wrote:
>>>> 
>>>>> When I look at it in portrait mode on my mobile device it is a little
>> too
>>>>> skinny to consume easily.  The text on the water is nearly invisible
>> due
>>>> to
>>>>> lack of contrast.
>>>>> 
>>>>> It is okay in landscape.   I like the content!
>>>>> 
>>>>> Gregg
>>>>> 
>>>>>> On Dec 22, 2016, at 1:44 PM, Geoffrey Arnold <
>>>> geoffrey.arn...@gmail.com>
>>>>> wrote:
>>>>>> 
>>>>>> Hey Zsolt, really fantastic job.  Well done!
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Thu, Dec 22, 2016 at 11:24 AM, Zsolt Kúti <la.ti...@gmail.com>
>>>>> wrote:
>>>>>>> 
>>>>>>> Hello,
>>>>>>> 
>>>>>>> The revamped site is now staged and can be reviewed here:
>>>>>>> http://river.staging.apache.org/
>>>>>>> 
>>>>>>> Community decides when to publish it.
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> Zsolt
>>>>>>> 
>>>>> 
>>>>> 
>>>> 
>> 
>> 



Re: site revamp

2016-12-23 Thread Gregg Wonderly
I am just thinking about how people use text messages to share things, and 
thought it might be a little problematic on mobile.  

I too am not sure that it matters, but just wanted to share what I saw.

Gregg

Sent from my iPhone

> On Dec 23, 2016, at 12:15 AM, Zsolt Kúti <la.ti...@gmail.com> wrote:
> 
> @Dan: Thanks for pointig this out: fixed
> @Gregg: Bootstrap supports responsive design, however I am not a web
> designer :-)   I am not sure how many will follow us on mobile, either.
> Anyway, I will take a look into it, if this can easily be fixed.
> @ all of you who liked it: thanks!
> 
> Zsolt
> 
>> On Fri, Dec 23, 2016 at 3:23 AM, Luis Matta <matta.l...@gmail.com> wrote:
>> 
>> Very nice, congrats (and thanks)
>> 
>> On Thu, Dec 22, 2016 at 10:42 PM, Gregg Wonderly <gr...@wonderly.org>
>> wrote:
>> 
>>> When I look at it in portrait mode on my mobile device it is a little too
>>> skinny to consume easily.  The text on the water is nearly invisible due
>> to
>>> lack of contrast.
>>> 
>>> It is okay in landscape.   I like the content!
>>> 
>>> Gregg
>>> 
>>>> On Dec 22, 2016, at 1:44 PM, Geoffrey Arnold <
>> geoffrey.arn...@gmail.com>
>>> wrote:
>>>> 
>>>> Hey Zsolt, really fantastic job.  Well done!
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On Thu, Dec 22, 2016 at 11:24 AM, Zsolt Kúti <la.ti...@gmail.com>
>>> wrote:
>>>>> 
>>>>> Hello,
>>>>> 
>>>>> The revamped site is now staged and can be reviewed here:
>>>>> http://river.staging.apache.org/
>>>>> 
>>>>> Community decides when to publish it.
>>>>> 
>>>>> Cheers,
>>>>> Zsolt
>>>>> 
>>> 
>>> 
>> 



Re: site revamp

2016-12-22 Thread Gregg Wonderly
When I look at it in portrait mode on my mobile device it is a little too 
skinny to consume easily.  The text on the water is nearly invisible due to 
lack of contrast.

It is okay in landscape.   I like the content!

Gregg

> On Dec 22, 2016, at 1:44 PM, Geoffrey Arnold  
> wrote:
> 
> Hey Zsolt, really fantastic job.  Well done!
> 
> 
> 
> 
> 
>> On Thu, Dec 22, 2016 at 11:24 AM, Zsolt Kúti  wrote:
>> 
>> Hello,
>> 
>> The revamped site is now staged and can be reviewed here:
>> http://river.staging.apache.org/
>> 
>> Community decides when to publish it.
>> 
>> Cheers,
>> Zsolt
>> 



Re: IoT

2016-11-18 Thread Gregg Wonderly
The important detail for me in this presentation is network penetration via 
“another” device on the network.  Once the network is compromised, “private 
networks are secure” goes out the window.  Helping users manage security, even 
on a “private” network is an important detail to keep in mind.

Gregg

> On Nov 4, 2016, at 7:05 AM, Peter  wrote:
> 
> An interesting link:
> 
> https://codek.tv/v/DIhcDRvHii0/iot-security-fundamentals-that-need-to-be-solved/
> 
> Regards,
> 
> Peter.
> 



Re: Hinkmond Wong 2014 blogs about jini and iot

2016-09-14 Thread Gregg Wonderly
This is also interesting http://www.eclipse.org/californium/.  CoAP will be 
very gravitational for many IoT projects I bet.

Gregg

> On Sep 1, 2016, at 6:34 AM, Peter Firmstone  
> wrote:
> 
> https://blogs.oracle.com/hinkmond/entry/easy_iot_sensor_on_boarding
> 
> Must have missed this earlier.
> 
> Sent from my Samsung device.
>  



Re: another interesting link

2016-08-01 Thread Gregg Wonderly
My griddle project on Java.net investigated the notion of using smart 
comparisons for equality.  Basically, griddle separates the keys from the 
object.  The keys are managed and matched by a matching implementation.  The 
intent is that key values would be native types, not downloaded types, but 
downloaded types are not forbidden.  This would allow you to ask a much richer 
question for “watch matches” by sending your read request or take request with 
an executable matcher which could do ranges or sets etc.
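
A rough sketch of the idea (these types are illustrative, not the actual griddle 
API):

import java.io.Serializable;

// The matcher travels with the read/take request and decides what matches, so
// richer questions than plain field equality (ranges, sets, etc.) are possible.
interface KeyMatcher extends Serializable {
    boolean matches(Object keyValue);
}

class RangeMatcher implements KeyMatcher {
    private final long low, high;
    RangeMatcher(long low, long high) { this.low = low; this.high = high; }
    public boolean matches(Object keyValue) {
        if (!(keyValue instanceof Number)) return false;
        long v = ((Number) keyValue).longValue();
        return v >= low && v <= high;
    }
}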

Gregg

> On Jul 26, 2016, at 11:21 PM, Peter <j...@zeus.net.au> wrote:
> 
> Also, there's no reason why logical comparisons cannot be made with numerical 
> objects during lookup.  
> 
> Although the Entry spec suggests that entry fields are marshalled as 
> MarshalledObject's, reggie doesn't do this for immutable value objects like 
> Integer etc.
> 
> Regards,
> 
> Peter.
> 
> Sent from my Samsung device.
>  
>   Include original message
>  Original message 
> From: Peter <j...@zeus.net.au>
> Sent: 27/07/2016 09:42:09 am
> To: dev@river.apache.org <dev@river.apache.org>
> Subject: Re: another interesting link
> 
> Discovery and lookup are akin to search engines, but distributed rather than 
> reliant on large corporations.
> 
> With IPv6 global announce, it's possible to perform global search and this is 
> where another of Gregg's innovations, delayed unmarshalling, is very 
> important to minimise local processing.
> 
> Regards,
> 
> Peter
> 
> 
> Sent from my Samsung device.
>  
>   Include original message
>  Original message 
> From: Gregg Wonderly <gr...@wonderly.org>
> Sent: 27/07/2016 02:09:50 am
> To: dev@river.apache.org
> Subject: Re: another interesting link
> 
> More formal interface libraries were supposed to solve that problem so that 
> you would have a formal name for such contracts.  That would be the Jini way 
> to start working this direction. The classic issue is that people believe 
> that HTTP is the interface of today.  They don't understand how POST is 
> contractually equivalent to Jini's invocation layer.  
> 
> The path in a URL is the method name and the payload is the same as 
> arguments.  What is possible with Jini is to use lookup instead of hardcoded 
> URLs.  People are using hostnames for lookup services and being satisfied. 
> 
> Gregg 
> 
> Sent from my iPhone 
> 
>>  On Jul 26, 2016, at 8:41 AM, Michał Kłeczek (XPro Sp. z o. o.) 
>> <michalklec...@xpro.biz> wrote: 
>>   
>>  I am well aware of StartNow since that is the first Jini "support library" 
>> I have used. Indeed - it is really easy to use. 
>>  But it is only one side of the issue - the API and some support support 
>> code that is supposed to be linked statically with the service 
>> implementation. 
>>   
>>  What I am talking about is actually "externalizing" most aspects of a 
>> service implementation so that: 
>>  - you do not have to package any (for some meaning of "any" :) ) libraries 
>> statically (since all code can be downloaded dynamically) 
>>  - you do not have to provide any (for some meaning of "any" :) ) static 
>> configuration (ie. configuration files) - a service should simply use other 
>> services and "reconfigure" itself when those change 
>>  It would go towards some kind of an "agent architecture", with movable 
>> objects (ie "services") being "hosted" by well... other movable objects :). 
>> The idea is less appealing today when we have all the cloud infrastructure, 
>> virtualization, software defined networking etc. Nevertheless still 
>> interesting IMHO. 
>>   
>>  Thanks, 
>>  Michal 
>>>  Gregg Wonderly July 26, 2016 at 1:28 PM 
>>>  My StartNow project on Java.net aimed directly at this mode of operation a 
>>> decade ago. I wanted conventions that provided use of configuration with 
>>> defaults. 
>>>   
>>>  You just extend PersistantJiniService and call start(serviceName). 
>>> Subclasses could override default implementation for how the conventions in 
>>> the APIs created implementation objects through code or configuration. 
>>>   
>>>  The intent was to create THE API to provide the conventions of service 
>>> creation. 
>>>   
>>>  We have a Window/JWindow class and don't have to do all the decorating 
>>> ourselves.  
>>>   
>>>  Jini service construction should work the same way! 
>>>   
>>>  Gregg 
>>>   
>>>  Sent from my iPhone 
>>

Re: another interesting link

2016-07-26 Thread Gregg Wonderly
More formal interface libraries were supposed to solve that problem so that you 
would have a formal name for such contracts.  That would be the Jini way to 
start working this direction. The classic issue is that people believe that 
HTTP is the interface of today.  They don't understand how POST is 
contractually equivalent to Jini's invocation layer. 

The path in a URL is the method name and the payload is the same as arguments.  
What is possible with Jini is to use lookup instead of hardcoded URLs.  People 
are using hostnames for lookup services and being satisfied.
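
A minimal sketch of that difference (MyService is a stand-in contract; discovery 
groups, filters and error handling are omitted):

import net.jini.core.lookup.ServiceItem;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.discovery.LookupDiscoveryManager;
import net.jini.lease.LeaseRenewalManager;
import net.jini.lookup.ServiceDiscoveryManager;

class LookupSketch {
    interface MyService { }  // stand-in for the real service interface

    // Instead of hardcoding http://host/path, ask the djinn for any registered
    // implementation of the contract.
    static MyService find() throws Exception {
        ServiceDiscoveryManager sdm = new ServiceDiscoveryManager(
                new LookupDiscoveryManager(new String[] { "" }, null, null),
                new LeaseRenewalManager());
        ServiceTemplate tmpl = new ServiceTemplate(null, new Class[] { MyService.class }, null);
        ServiceItem item = sdm.lookup(tmpl, null);   // null if nothing is registered yet
        return item == null ? null : (MyService) item.service;
    }
}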

Gregg

Sent from my iPhone

> On Jul 26, 2016, at 8:41 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> I am well aware of StartNow since that is the first Jini "support library" I 
> have used. Indeed - it is really easy to use.
> But it is only one side of the issue - the API and some support support code 
> that is supposed to be linked statically with the service implementation.
> 
> What I am talking about is actually "externalizing" most aspects of a service 
> implementation so that:
> - you do not have to package any (for some meaning of "any" :) ) libraries 
> statically (since all code can be downloaded dynamically)
> - you do not have to provide any (for some meaning of "any" :) ) static 
> configuration (ie. configuration files) - a service should simply use other 
> services and "reconfigure" itself when those change
> It would go towards some kind of an "agent architecture", with movable 
> objects (ie "services") being "hosted" by well... other movable objects :). 
> The idea is less appealing today when we have all the cloud infrastructure, 
> virtualization, software defined networking etc. Nevertheless still 
> interesting IMHO.
> 
> Thanks,
> Michal
>> Gregg Wonderly July 26, 2016 at 1:28 PM
>> My StartNow project on Java.net aimed directly at this mode of operation a 
>> decade ago. I wanted conventions that provided use of configuration with 
>> defaults.
>> 
>> You just extend PersistantJiniService and call start(serviceName). 
>> Subclasses could override default implementation for how the conventions in 
>> the APIs created implementation objects through code or configuration.
>> 
>> The intent was to create THE API to provide the conventions of service 
>> creation.
>> 
>> We have a Window/JWindow class and don't have to do all the decorating 
>> ourselves. 
>> 
>> Jini service construction should work the same way!
>> 
>> Gregg
>> 
>> Sent from my iPhone
>> 
>> 
>> Tom Hobbs July 26, 2016 at 11:50 AM
>> I would say the comment on that blog sums everything about Jini up.
>> 
>> It’s just too hard to set up and get working.
>> 
>> That’s why I think simplifying reggie is possibly a first step. Make a 
>> /small/ and simple reggie jar that just handled service registration and not 
>> proxy downloading etc. Make it really easy to register your services without 
>> needing class loaders etc, preferably via some convention rather than 
>> configuration. (This is what I’m trying to find the time to work on.)
>> 
>> I’d really like to be able to type;
>> 
>> $ java -jar reggie.jar
>> 
>> And have a reggie running with all the defaults ready to register my 
>> services with. Or perhaps, as an option;
>> 
>> $ java -jar reggie.jar —ipv6
>> 
>> Security, class loading, proxy downloading and all the rest of it could then 
>> be put back in by specifying more advanced configuration options.
>> 
>> My Scala service would be great if I could define it just as;
>> 
>> object MyCoolService extends LazyLogging with ReggieRegistration with 
>> ReggieLookup
>> 
>> Or in Java with default interface methods;
>> 
>> class MyCoolService implements ReggieRegistration, ReggieLookup
>> 
>> And that would be it, congratulations you’ve started a reggie and registered 
>> your service and have methods available to help you find other services.
>> 
>> This would satisfy use cases where the network was private and/or trusted. 
>> And security on top would, ideally, be up to configuration again or perhaps 
>> injecting some alternative implementation of some bean somewhere. But the 
>> core premise is, make it easy to startup, demo and see if it fits what you 
>> want it for. 
>> 
>> 
>> 
>> 
>> Peter July 26, 2016 at 3:58 AM
>> Note the comment about security on the blog?
>> 
>> Steps I've taken to simplify security (that could also be adopted by 

Re: another interesting link

2016-07-26 Thread Gregg Wonderly
Maven has also been used as part of this solution.  Maven lookup by package 
instead of interface seems like adding another layer of redirection to the 
lookup mechanism.

Gregg

Sent from my iPhone

> On Jul 26, 2016, at 8:41 AM, Michał Kłeczek (XPro Sp. z o. o.) 
> <michal.klec...@xpro.biz> wrote:
> 
> I am well aware of StartNow since that is the first Jini "support library" I 
> have used. Indeed - it is really easy to use.
> But it is only one side of the issue - the API and some support support code 
> that is supposed to be linked statically with the service implementation.
> 
> What I am talking about is actually "externalizing" most aspects of a service 
> implementation so that:
> - you do not have to package any (for some meaning of "any" :) ) libraries 
> statically (since all code can be downloaded dynamically)
> - you do not have to provide any (for some meaning of "any" :) ) static 
> configuration (ie. configuration files) - a service should simply use other 
> services and "reconfigure" itself when those change
> It would go towards some kind of an "agent architecture", with movable 
> objects (ie "services") being "hosted" by well... other movable objects :). 
> The idea is less appealing today when we have all the cloud infrastructure, 
> virtualization, software defined networking etc. Nevertheless still 
> interesting IMHO.
> 
> Thanks,
> Michal
>> Gregg Wonderly July 26, 2016 at 1:28 PM
>> My StartNow project on Java.net aimed directly at this mode of operation a 
>> decade ago. I wanted conventions that provided use of configuration with 
>> defaults.
>> 
>> You just extend PersistantJiniService and call start(serviceName). 
>> Subclasses could override default implementation for how the conventions in 
>> the APIs created implementation objects through code or configuration.
>> 
>> The intent was to create THE API to provide the conventions of service 
>> creation.
>> 
>> We have a Window/JWindow class and don't have to do all the decorating 
>> ourselves. 
>> 
>> Jini service construction should work the same way!
>> 
>> Gregg
>> 
>> Sent from my iPhone
>> 
>> 
>> Tom Hobbs July 26, 2016 at 11:50 AM
>> I would say the comment on that blog sums everything about Jini up.
>> 
>> It’s just too hard to set up and get working.
>> 
>> That’s why I think simplifying reggie is possibly a first step. Make a 
>> /small/ and simple reggie jar that just handled service registration and not 
>> proxy downloading etc. Make it really easy to register your services without 
>> needing class loaders etc, preferably via some convention rather than 
>> configuration. (This is what I’m trying to find the time to work on.)
>> 
>> I’d really like to be able to type;
>> 
>> $ java -jar reggie.jar
>> 
>> And have a reggie running with all the defaults ready to register my 
>> services with. Or perhaps, as an option;
>> 
>> $ java -jar reggie.jar —ipv6
>> 
>> Security, class loading, proxy downloading and all the rest of it could then 
>> be put back in by specifying more advanced configuration options.
>> 
>> My Scala service would be great if I could define it just as;
>> 
>> object MyCoolService extends LazyLogging with ReggieRegistration with 
>> ReggieLookup
>> 
>> Or in Java with default interface methods;
>> 
>> class MyCoolService implements ReggieRegistration, ReggieLookup
>> 
>> And that would be it, congratulations you’ve started a reggie and registered 
>> your service and have methods available to help you find other services.
>> 
>> This would satisfy use cases where the network was private and/or trusted. 
>> And security on top would, ideally, be up to configuration again or perhaps 
>> injecting some alternative implementation of some bean somewhere. But the 
>> core premise is, make it easy to startup, demo and see if it fits what you 
>> want it for. 
>> 
>> 
>> 
>> 
>> Peter July 26, 2016 at 3:58 AM
>> Note the comment about security on the blog?
>> 
>> Steps I've taken to simplify security (that could also be adopted by river):
>> 1. Deprecate proxy trust, replace with authenticate service prior to 
>> obtaining proxy.
>> 2. proxy codebase jars contain a list of requested permissions to be granted 
>> to the jar signer and url (client need not know in advance).
>> 3. Policy file generation, least privilege principles (need to set up 
>> command line based output for admin ve

Re: another interesting link

2016-07-26 Thread Gregg Wonderly
My StartNow project on Java.net aimed directly at this mode of operation a 
decade ago.  I wanted conventions that provided use of configuration with 
defaults.

You just extend PersistantJiniService and call start(serviceName).  Subclasses 
could override default implementation for how the conventions in the APIs 
created implementation objects through code or configuration.
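
In rough outline, use looked like this (only the extend-and-start(serviceName) 
shape comes from the description above; the class name and the rest are 
illustrative, and it assumes the StartNow library on the classpath):

public class MyCoolService extends PersistantJiniService {
    public static void main(String[] args) throws Exception {
        // Conventions plus configuration defaults take care of export,
        // discovery and registration.
        new MyCoolService().start("MyCoolService");
    }
}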

The intent was to create THE API to provide the conventions of service creation.

We have a Window/JWindow class and don't have to do all the decorating 
ourselves.  

Jini service construction should work the same way!

Gregg

Sent from my iPhone

> On Jul 26, 2016, at 5:50 AM, Tom Hobbs  wrote:
> 
> I would say the comment on that blog sums everything about Jini up.
> 
> It’s just too hard to set up and get working.
> 
> That’s why I think simplifying reggie is possibly a first step.  Make a 
> /small/ and simple reggie jar that just handled service registration and not 
> proxy downloading etc.  Make it really easy to register your services without 
> needing class loaders etc, preferably via some convention rather than 
> configuration.  (This is what I’m trying to find the time to work on.)
> 
> I’d really like to be able to type;
> 
> $ java -jar reggie.jar
> 
> And have a reggie running with all the defaults ready to register my services 
> with.  Or perhaps, as an option;
> 
> $ java -jar reggie.jar —ipv6
> 
> Security, class loading, proxy downloading and all the rest of it could then 
> be put back in by specifying more advanced configuration options.
> 
> My Scala service would be great if I could define it just as;
> 
> object MyCoolService extends LazyLogging with ReggieRegistration with 
> ReggieLookup
> 
> Or in Java with default interface methods;
> 
> class MyCoolService implements ReggieRegistration, ReggieLookup
> 
> And that would be it, congratulations you’ve started a reggie and registered 
> your service and have methods available to help you find other services.
> 
> This would satisfy use cases where the network was private and/or trusted.  
> And security on top would, ideally, be up to configuration again or perhaps 
> injecting some alternative implementation of some bean somewhere.  But the 
> core premise is, make it easy to startup, demo and see if it fits what you 
> want it for.  
> 
> 
> 
>> On 26 Jul 2016, at 02:58, Peter  wrote:
>> 
>> Note the comment about security on the blog?
>> 
>> Steps I've taken to simplify security (that could also be adopted by river):
>> 1. Deprecate proxy trust, replace with authenticate service prior to 
>> obtaining proxy.
>> 2. proxy codebase jars contain a list of requested permissions to be granted 
>> to the jar signer and url (client need not know in advance).
>> 3. Policy file generation, least privilege principles (need to set up 
>> command line based output for admin verification of each permission during 
>> policy generation).
>> 4 Input validation for serialization.
>> 5. DownloadPermission automatically granted to authenticated registrars (to 
>> signer and url, very specific) during multicast discovery.
>> 
>> Need to more work around simplification of certificate management.
>> 
>> Regards,
>> 
>> Peter.
>> Sent from my Samsung device.
>> 
>>  Include original message
>>  Original message 
>> From: Peter 
>> Sent: 26/07/2016 10:27:59 am
>> To: dev@river.apache.org 
>> Subject: another interesting link
>> 
>> https://blogs.oracle.com/hinkmond/entry/jini_iot_edition_connecting_the
>> 
>> 
>> Sent from my Samsung device.
> 



Re: IoT

2016-07-24 Thread Gregg Wonderly
The maximum number of devices that can be on the internet with IPV4 is equal to 
the maximum number of IPV4 unique public addresses times the number of ports 
available for TCP at 65535 and UDP at 65535.  Basically, quadrillions if there 
was a single protocol ever active on each device.  Some of the bits in the IPV4 
header protocol field could be used as multipliers for another few quadrillions 
more.  I’m still surprised that there is no visible, let alone widespread, use of 
the protocol bits as network multipliers.  Perhaps there is in some places in 
the world.
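
A back-of-the-envelope version of that arithmetic, as a sketch:

class Ipv4Math {
    public static void main(String[] args) {
        long addresses = 1L << 32;                  // ~4.29 billion IPv4 addresses
        long endpoints = addresses * 65535L * 2L;   // TCP + UDP port spaces
        System.out.println(endpoints);              // ~5.6e14 (address, transport, port) tuples
        // roughly half a quadrillion distinct endpoints, before using the
        // protocol-field bits as further multipliers
    }
}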

There are lots of IPV4 possibilities.  But there are limitations, obviously.  
I’ve been hoping for IPv6 for a decade, and it should have happened two decades 
ago...

Gregg


> On Jul 24, 2016, at 9:22 PM, Peter  wrote:
> 
> An interesting article relating IoT with the underlying IPv6 network protocol 
> it's dependant upon.
> 
> http://www.computerworld.com/article/3071625/internet-of-things/no-iot-without-ipv6.html
> 
> Regards,
> 
> Peter.
> 
> Sent from my Samsung device.
>  



Re: svn commit: r1729654 - in /river/jtsk/trunk: LICENSE NOTICE build.xml

2016-02-11 Thread Gregg Wonderly
I suppose that one of the details of shipping binaries is that it creates 
“users” rather than “community members”.  The interesting question from my 
perspective, is what value, overall, is there in making that distinction.  That 
is, if you only provide source, then a “user” has to “build”, and if they can 
build and have source, they can create diffs and participate in the projects 
community by requesting their diffs become part of the project.  While that is 
an important part making a community function, I think that there are people 
who literally will never make use of something unless it’s already built for 
them.  It comes down to how much time/money do you have to spend on technology.

I’d suggest that, at a minimum, what it takes to build things should be captured 
in a document.  Greg left those details behind in comments for now.  It might 
also be worth asking people to request build artifacts on this list, so we can 
learn whether dropping the binary is actually a detriment to community 
participation.

Gregg


> On Feb 11, 2016, at 1:36 PM, Greg Trasuk  wrote:
> 
> 
> One more data point - 
> 
> - Many Apache projects do not ship binaries.  Check out httpd.apache.org and 
> subversion.apache.org.  Both say they do not officially endorse any binaries 
> (although they do point to committer-created binaries).
> 
> Cheers,
> 
> Greg Trasuk
> 
>> On Feb 11, 2016, at 2:31 PM, Greg Trasuk  wrote:
>> 
>> 
>> A little while ago I asked a question - “Does it make sense to release a 
>> binary package”?  
>> 
>> I don’t think we need to.  Here are a few reasons:
>> 
>> - Apache’s products are source distributions.  Officially, if we build a 
>> binary package, it’s a “convenience binary”, and not a released product.  
>> i.e. Apache doesn’t really recognize a binary package, but will insist that 
>> if we distribute a binary, the LICENSE and NOTICE files need to correctly 
>> reflect the other libraries that are included in that binary.
>> - The build.xml has been modified so it uses Ivy to download the build-time 
>> dependencies when you go to build.  That saves us from having to manage a 
>> “build-deps” library and distribute it separately.  This means that _we_ are 
>> not distributing those dependencies, so we don’t have to reference them in 
>> the NOTICE and LICENSE files.  Which is good, because it doesn’t impose any 
>> requirements on downstream users of River who don’t use ‘asm’. 
>>  (I asked about this on the list - you asked me to go ahead and fix the 
>> issue with distributing jars in the source package).
>> 
>> -  ‘classdep’ is built as part of the build process.  Prior to that, 
>> ‘build.xml’ calls ‘Ivy’ to download ‘asm’.  We don’t distribute ‘classdep’ 
>> through Maven Central.  We don’t even recommend using it, why would we 
>> distribute it?
>> - As I explained before, the JTSK binary on its own doesn’t do anything.  
>> You can’t run “reggie” out of it, for example (this is one reason people 
>> find it so confusing to startup using Jini).  All you can do with the JTSK 
>> distribution is run the tests.  If you run the integration tests, it starts 
>> by recompiling, hence there’s no need for a binary to run the integration 
>> tests.  
>> - We _do_ ship the generated jar files as artifacts in Maven Central, which 
>> is realistically how developers will be using the jar files.  For example, 
>> you can build the examples project without downloading or building the main 
>> River distribution.  Harvester gets its jars from Maven Central.  I’m pretty 
>> sure that Rio does too (not sure if Rio uses Maven or Gradle for its build, 
>> but either one uses Maven Central as the artifact repo).  The pom files 
>> include the transitive dependency references.
>> 
>> I left my question as “how about if I comment out the bits that make the 
>> binary release, and if anyone wants it badly enough they can do the work to 
>> build the binary properly”.  That’s what I did.  There’s a note next to the 
>> commented-out part telling what work needs to be done.  As it stands now, 
>> the ‘release’ target does not generate a binary release artifact, just the 
>> source and doc artifacts.  As I’ve explained above, that makes sense as far 
>> as I can tell.
>> 
>> On a practical level, if you desperately want the binary release, somebody 
>> who is not me has to do the work to generate it properly and then manage the 
>> ‘3.0’ release.  If we’re good to go without the binary artifact, I’ll be 
>> happy to spin the ‘3.0’ release as soon as the vote on ‘2.2.3’ is finished.
>> 
>> Cheers,
>> 
>> Greg Trasuk
>> 
>>> On Feb 11, 2016, at 1:35 PM, Peter  wrote:
>>> 
>>> 
>>> Greg,
>>> 
>>> Please revert this.
>>> 
>>> ASM licensed code exists in classdepend, which classdep uses, in the tools 
>>> package.
>>> 
>>> As far as I'm aware were still releasing a binary for River 3, but you've 
>>> found an issue with how we currently do that?
>>> 
>>> 

Re: River - 3.0.0 Release candidate

2016-01-09 Thread Gregg Wonderly
Sorry for this ending up on the list, it was intended to be private discussion, 
I thought I had edited the To: list appropriately. 

Greg, I am not saying that you should not review the release candidate.  This 
ended up being a reply in this thread when it should not have.  I want and value 
your participation as a member of the community.

The review is indeed needed, no question about that!  Jars in the source 
distribution as Apache policy, also has to be dealt with.

Again, please don’t consider this to be related to anything in this thread.  I 
want the community to function using the Apache process. That’s what will 
provide the best means for the community to function.

Gregg

> On Jan 9, 2016, at 2:22 PM, Greg Trasuk <tras...@stratuscom.com> wrote:
> 
> 
> Gregg:
> 
> So, you’re saying I shouldn’t review the release candidate?  Sorry, but the 
> bit about “no jars in the source distribution” is Apache policy.  We can’t 
> release with the candidate we have.  This isn’t a technical quarrel.
> 
> Cheers,
> 
> Greg Trasuk
> 
>> On Jan 9, 2016, at 3:16 PM, Gregg Wonderly <ge...@cox.net> wrote:
>> 
>> I sent Greg Trasuk a private note asking him to cease and desist on public 
>> badgering and instead to just step back and let the community vote on what 
>> happens with River, as that is the process that is supposed to work.  I 
>> suggested that if he had a plan and members to vote that plan through, that 
>> he could have things however he wanted.  I really do not appreciate his 
>> attitude and lack of appreciation for the experience and expertise that 
>> others have which is different from his own.  I don’t want to badger or 
>> belittle him in any way.  But, we need to use this process and work through 
>> issues by using our brains and our experiences both.  The “web” as we know 
>> it, is “mobile code” just like Jini uses.  Javascript won, because it was 
>> controlled by the browser camp, not by Sun.  Applets were in the browser 
>> first, but the size of PCs memory and computational resources were no where 
>> near mature enough for Java to have won.  I know, I tried to deploy lots of 
>> Java in Applets and applications in that time, to the desktop, but there was 
>> just not enough money spent on desktop machines in the enterprises where my 
>> customers were.  I am, hopefully going to get back out of the .Net world and 
>> back into Java and Jini again, this coming year.  I am looking forward to 
>> that!
>> 
>> Gregg
>> 
>>> On Jan 8, 2016, at 5:56 AM, Peter Firmstone <peter.firmst...@zeus.net.au> 
>>> wrote:
>>> 
>>> The Apache River 3.0.0 Release candidate is available here:
>>> 
>>> http://people.apache.org/~peter_firmstone/
>>> 
>>> Voting on this release will commence in 4 weeks, to allow time for people 
>>> to check they can reproduce these artifacts and test their code and report 
>>> back with any issues.
>>> 
>>> The code is currently in trunk, this will be branched after the 4 week 
>>> review period and Voting passes.
>>> 
>>> See also http://www.apache.org/dev/release-publishing.html
>>> 
>>> Regards,
>>> 
>>> Peter.
>> 
> 



Re: Trunk merge and thread pools

2015-12-06 Thread Gregg Wonderly
Well Peter, there are lots of things one can do about load management.  The 
obvious solutions are visible in current load balancing on web servers.  That 
simple mechanism of receiving the request and dispatching it into the real 
servers provides the ability to manage load with appropriate logic.

So, put your slowest hardware there, use a small fixed-size dispatch pool, and 
tune its size to an appropriate percentage of available time.  That is, measure 
each service request's processing time, and bias those measurements to account 
for variation in processing times.

As Amazon does, you can use a PID mechanism to automate throttling.
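
A minimal sketch of that front-of-the-system throttle (not River code; the pool 
and queue sizes are arbitrary):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class FrontDoorDispatcher {
    // Small fixed pool with a bounded queue; when saturated, CallerRunsPolicy
    // pushes back on the submitting thread instead of growing without limit.
    private final ExecutorService pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(100),
            new ThreadPoolExecutor.CallerRunsPolicy());

    void dispatch(Runnable request) {
        pool.execute(request);   // load is controlled here, before downstream work starts
    }
}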

Gregg

Sent from my iPad

> On Dec 3, 2015, at 3:32 PM, Peter <j...@zeus.net.au> wrote:
> 
> Care to share more of your insight?
> 
> Peter.
> 
> Sent from my Samsung device.
>   Include original message
> ---- Original message 
> From: Gregg Wonderly <ge...@cox.net>
> Sent: 03/12/2015 06:37:15 pm
> To: dev@river.apache.org
> Subject: Re: Trunk merge and thread pool
> 
> The original use of thread  pooling was more than likely about getting work 
> done faster by not undergoing overhead of thread creation, since in 
> distributed systems, deferring work can create deadlock by introducing 
> indefinite wait scenarios if resource limits keep work from being dispatched. 
> 
> As a general rule of thumb, I have found that waiting till the point of 
> thread creation, to introduce load control, is never the right design. 
>  Instead, load control must happen at the head/beginning of any request into 
> a distributed system. 
> 
> Gregg 
> 
> Sent from my iPhone 
> 
>>  On Dec 3, 2015, at 3:26 AM, Peter <j...@zeusnet.au> wrote: 
>>   
>>  Just tried wrapping an Executors.newCachedThreadPool with a thread factory 
>> that creates threads as per the original 
>> org.apache.river.thread.NewThreadAction. 
>>   
>>  Performance is much improved, the hotspot is gone. 
>>   
>>  There are regression tests with sun bug Id's, which cause oome.  I thought 
>> this might  
>>  prevent the executor from running,  but to my surprise both tests pass.   
>> These tests failed when I didn't pool threads and just let them be gc'd.  
>> These tests created over 11000 threads with waiting tasks.  In practise I 
>> wouldn't expect that to happen as an IOException should be thrown.  However 
>> there are sun bug id's 6313626 and 6304782 for these regression tests, if 
>> anyone has a record of these bugs or any information they can share, it 
>> would be much appreciated. 
>>   
>>  It's worth noting that the jvm memory options should be tuned properly to 
>> avoid oome in any case. 
>>   
>>  Lesson here is, creating threads and gc'ing them is much faster than thread 
>> pooling if your thread pool is not well optimised. 
>>   
>>  It's worth noting that ObjectInputStream is now the hotspot for the test, 
>> the tested code's hotspots are DatagramSocket and SocketInputStream. 
>>   
>>  ClassLoading is thread confined, there's a lot of class loading going on, 
>> but because it is uncontended, it only consumes 0.2% cpu, about the same as 
>> our security architecture overhead (non encrypted). 
>>   
>>  Regards, 
>>   
>>  Peter. 
>>   
>>  Sent from my Samsung device. 
>>Include original message 
>>   Original message  
>>  From: Bryan Thompson <br...@systap.com> 
>>  Sent: 02/12/2015 11:25:03 pm 
>>  To: <dev@river.apache.org> <dev@river.apache.org> 
>>  Subject: Re: Trunk merge and thread pools 
>>   
>>  Ah. I did not realize that we were discussing a river specific ThreadPool  
>>  vs a Java Concurrency classes ThreadPoolExecutor.  I assume that it would  
>>  be difficult to just substitute in one of the standard executors?  
>>   
>>  Bryan  
>>   
>>>  On Wed, Dec 2, 2015 at 8:18 AM, Peter <j...@zeus.net.au> wrote:  
>>>   
>>>   First it's worth considering we have a very suboptimal threadpool.  There 
>>>  
>>>   are qa and jtreg tests that limit our ability to do much with ThreadPool. 
>>>  
>>>   
>>>   There are only two instances of ThreadPool, shared by various jeri  
>>>   endpoint implementations, and other components.  
>>>   
>>>   The implementation is allowed to create numerous threads, only limited by 
>>>  
>>>   available memory and oome.  At least two tests cause it to create over  
>>>   11000 threads.  
>>>   
>>>   Also, it previously used a LinkedList queue,  but now uses a  
>>>  

Re: Trunk merge and thread pools

2015-12-04 Thread Gregg Wonderly
With a handful of clients, you can ignore contention.  My applications have 20s 
of threads per client making very frequent calls through the service and this 
means that 10ms delays evolve into seconds of delay fairly quickly.  

I believe that if you can measure the contention with tooling, on your desktop, 
it is a viable goal to reduce it or eliminate it.  
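For measuring it on a desktop, the JDK's own thread management bean is enough 
for a first cut; a rough sketch (not River code) follows:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Enable monitor-contention monitoring, run the workload for a while, then
    // report how often and how long each thread sat blocked on a monitor.
    public class ContentionSnapshot {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            if (mx.isThreadContentionMonitoringSupported()) {
                mx.setThreadContentionMonitoringEnabled(true);
            }
            Thread.sleep(10000);                      // let the workload run
            for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
                if (info != null && info.getBlockedCount() > 0) {
                    System.out.printf("%-40s blocked %d times, %d ms total%n",
                            info.getThreadName(),
                            info.getBlockedCount(),
                            info.getBlockedTime());
                }
            }
        }
    }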

It's like system time vs user time optimizations of old.  Now we are contending 
for processor cores instead of the processor, locked in the kernel, unable to 
dispatch more network traffic where it is always convenient to bury latency.

Gregg

Sent from my iPhone

On Dec 4, 2015, at 9:57 AM, Greg Trasuk  wrote:

>> On Dec 4, 2015, at 1:16 AM, Peter  wrote:
>> 
>> Since ObjectInputStream is a big hotspot,  for testing purposes, I merged 
>> these changes into my local version of River,  my validating 
>> ObjectInputStream outperforms the standard java ois
>> 
>> Then TaskManager, used by the test became a problem, with tasks in 
>> contention up to 30% of the time.
>> 
>> Next I replaced TaskManager with an ExecutorService (River 3, only uses 
>> TaskManager in tests now, it's no longer used by release code), but there 
>> was still contention  although not quite as bad.
>> 
>> Then I notice that tasks in the test call Thread.yield(), which tends to 
>> thrash, so I replaced it with a short sleep of 100ms.
>> 
>> Now monitor state was a maximum of 5%, much better.
>> 
>> After these changes, the hotspot consuming 27% cpu was JERI's 
>> ConnectionManager.connect,  followed by Class.getDeclaredMethod at 15.5%, 
>> Socket.accept 14.4% and Class.newInstance at 10.8%.
> 
> 
> First - performance optimization:  Unless you’re testing with real-life 
> workloads, in real-ife-like network environments, you’re wasting your time.  
> In the real world, clients discover services pretty rarely, and real-world 
> architects always make sure that communications time is small compared to 
> processing time.  In the real world, remote call latency is controlled by 
> network bandwidth and the speed of light.  Running in the integration test 
> environment, you’re seeing processor loads, not network loads.  There isn’t 
> any need for this kind of micro-optimization.  All you’re doing is delaying 
> shipping, no matter how wonderful you keep telling us it is.
> 
> 
>> My validating ois,  originating from apache harmony, was modified to use 
>> explicit constructors during deserialization.  This addressed finalizer 
>> attacks, final field immutability and input stream validation and the ois 
>> itself places a limit on downloaded bytes by controlling


Re: svn commit: r1716613

2015-11-29 Thread Gregg Wonderly
I’ve tried to stress, over the years, how many different issues I have 
encountered regarding contention and locking as well as outright bugs.  Many 
people seem to have use cases which don’t expose all these problems that you 
have worked so hard to take care of.  I encountered lots of problems with SDM 
not working reliably.  DNS and massive downloads also made for huge latency 
problems on desktop applications which use serviceUI for admin and application 
UIs.  The policy stuff… what a nightmare when secure performance is needed…  I 
still encounter lots of people that have no idea how Java 5 JMM changed what 
you must do, because of the non-Intel processors, if you want things to 
actually work on the other processors.  I still loathe the non-volatile boolean 
loop hoist, but cannot convince anyone that it’s actually a huge problem 
because it actually changes the visible execution of the program, with no 
observable details.  Instead, you can log the boolean control and see it change 
and the loop never exits.  Yes it’s a data race, but the JMM says that it may be 
possible to observe the non-volatile write.  With the old memory model, where 
Vector and Hashtable constantly created happens-before edges, it did work reliably.
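For anyone who hasn't been bitten by it, the hazard looks roughly like this 
(illustrative only); with the non-volatile field the JIT is free to hoist the 
read, and the loop may never exit:

    public class StopFlagDemo {
        static boolean stop = false;              // not volatile: the read may be hoisted
        // static volatile boolean stop = false;  // the fix: guarantees visibility

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    while (!stop) {
                        // busy wait; may effectively become while(true) once the
                        // read of 'stop' is hoisted out of the loop
                    }
                    System.out.println("worker observed stop");
                }
            });
            worker.start();
            Thread.sleep(1000);
            stop = true;                          // may never become visible to the worker
            worker.join(5000);
            System.out.println("worker still alive: " + worker.isAlive());
        }
    }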

Gregg

> On Nov 28, 2015, at 9:40 PM, Peter <j...@zeus.net.au> wrote:
> 
> Thanks for your support Gregg,
> 
> This should be an interesting release and it's been a long time in the 
> making.   Changes made are in response to years of requests made on jini 
> users and river dev to fix bottlenecks and performance issues.
> 
> All remaining bottle necks (that I'm aware of) are native methods.
> 
> What's new?
> 
> Elimination of unnecessary DNS calls.
> 
> World's fastest scalable policy provider.
> 
> World's fastest class loading thanks to elimination of contention using 
> thread confinement and RFC3986 compliant URI normalisation.
> 
> Use of modern concurrent executors, deprecated TaskManager.  Stress tests in 
> the qa suite still use TaskManager, but no longer stress their intended 
> targets, instead the tests themselves are hotspots now.
> 
> We've also fixed a heap of race conditions and atomicity bugs, even 
> ServiceDiscoveryManager and DGC work reliably now.  UnresovedPermission's 
> always resolve as they should now too (fixed a race condition in Java using 
> thread confinement).  Safe publication has also been used to fix race 
> conditions in Permission classes that use lazy init, but are documented as 
> being immutable.  All services that implement Startable are safely exported, 
> even when using Phoenix Activation.
> 
> Then there a heap of latent bugs fixed as well, findbugs was used along with 
> visual auditing to find and fix many of them.
> 
> The Jini public api maintains backward compatibility.
> 
> The next step is to get this work back into trunk, the package rename is 
> making merge too difficult, so I think I'll do a diff of the current trunk to 
> the branch point where qa refactor originated, then rename packages in the 
> diff file and apply it against qa refactor namspace.
> 
> Then I'll relace trunk.
> 
> That's the plan, dependant on available time.  Anyone have time to volunteer 
> with River 3.0's release once merging is complete?
> 
> Regards,
> 
> Peter.
> 
> 
> 
> 
> Sent from my Samsung device.
>   Include original message
>  Original message 
> From: Gregg Wonderly <gr...@wonderly.org>
> Sent: 29/11/2015 02:25:53 am
> To: dev@river.apache.org
> Subject: Re: svn commit: r1716613
> 
> These kinds of contention reductions can be a huge gain for overall 
> performance. 
> 
> The fastest time through is never faster than the time through the highest 
> contended spot! 
> 
> Gregg 
> 
> Sent from my iPhone 
> 
>>  On Nov 27, 2015, at 4:46 PM, Peter <j...@zeus.net.au> wrote: 
>>   
>>  Last attempt at sending this to the list: 
>>   
>>  During stress testing, the jeri multiplexer can fail when the jvm runs out 
>> of memory and cannot create new Threads.  The mux lock can also become a 
>> point of thread contention.  The changes avoid creating new objects, using a 
>> bitset and array  (that doesn't allocate new objects) instead of collection 
>> classes. 
>>   
>>  The code changes also reduce the time a monitor is held, thus reducing 
>> contention under load. 
>>   
>>  Peter. 
>>   
>>>   
>>>  In order to properly review changes, it would be great to know what the 
>>> problem it is that you’re fixing - could you share?  
>>>   
>>>  Cheers, 
>>>   
>>>  Greg Trasuk 
>>   
> 
> 
> 
> 



Re: svn commit: r1716613

2015-11-28 Thread Gregg Wonderly
These kinds of contention reductions can be a huge gain for overall performance.

The fastest time through is never faster than the time through the highest 
contended spot!

Gregg

Sent from my iPhone

> On Nov 27, 2015, at 4:46 PM, Peter  wrote:
> 
> Last attempt at sending this to the list:
> 
> During stress testing, the jeri multiplexer can fail when the jvm runs out of 
> memory and cannot create new Threads.  The mux lock can also become a point 
> of thread contention.  The changes avoid creating new objects, using a bitset 
> and array  (that doesn't allocate new objects) instead of collection classes.
> 
> The code changes also reduce the time a monitor is held, thus reducing 
> contention under load.
> 
> Peter.
> 
>> 
>> In order to properly review changes, it would be great to know what the 
>> problem it is that you’re fixing - could you share? 
>> 
>> Cheers,
>> 
>> Greg Trasuk
> 


Re: [Discuss] Lookup Service - was Drop support for Activation?

2015-11-17 Thread Gregg Wonderly


Sent from my iPhone

> On Nov 16, 2015, at 5:01 AM, Peter <j...@zeus.net.au> wrote:
> 
> On 16/11/2015 1:47 PM, Gregg Wonderly wrote:
>>> On Nov 13, 2015, at 10:36 PM, Peter<j...@zeus.net.au>  wrote:
>>> 
>>> comment inline, sorry this phone doesn't quote your message
>>> 
>>> Sent from my Samsung device.
>>>   Include original message
>>>  Original message 
>>> From: Greg Trasuk<tras...@stratuscom.com>
>>> Sent: 14/11/2015 12:01:12 pm
>>> To: dev@river.apache.org
>>> Subject: Re: [Discuss] Drop support for Activation?
>>> 
>>> 
>>>>  On Nov 13, 2015, at 6:53 PM, Peter<j...@zeus.net.au>  wrote:
>>>> 
>>>>  On long lived Objects:
>>>> 
>>>>  one of the design issues with the lookup service is the codebase 
>>>> annotation and
>>>>  proxy are uploaded and stored.  unfortunately these can change over time, 
>>>> and codebase annotations can be lost.
>>> I’m confused here - why would the proxy or codebase annotation change on a 
>>> service that is alive, without the service informing the registrar?  The 
>>> only case where that would happen is if the service dies and a new one 
>>> starts up.  In that case, either the new service would re-use the original 
>>> serviceID, hence overwrite the original registration, or the lease on the 
>>> original registration should expire in a reasonable time, causing the 
>>> original registration to be dropped.
>>> 
>>> REPLY:
>>> 
>>>  There is no mechanism to notify the client that the proxy or codebase has 
>>> been updated.  Although you are correct that the registrar should have a 
>>> marshalled instance of the latest proxy.  We could say failure is the 
>>> mechhanism used to cause the client to rediscover a replacement, but 
>>> partial failure and releasing resources can be problematic.
>> Lease cancellation and/or lease expiry notifies the client.  That’s what 
>> should cause the client to rediscover shouldn’t it?
> 
> Right, currently the client needs to wrap the proxy's it receives and look 
> them up again after cancellation.

Idempotent services and a simple proxy wrapper make this pretty easy to do.  
And with a dynamic, reflection based mechanism, you can create the local 
wrapper proxy with the lookup details, and it can dynamically rediscover on 
lease-driven losses, so your calls through the proxy can proceed almost without 
interruption.
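A rough sketch of such a wrapper, using java.lang.reflect.Proxy (the Lookup 
interface here is an illustrative stand-in for whatever rediscovery you do, 
e.g. through ServiceDiscoveryManager and a LookupCache):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.rmi.RemoteException;

    // Wrapper for an idempotent service: calls go through a dynamic proxy that,
    // on RemoteException, asks a caller-supplied lookup function for a freshly
    // discovered proxy and retries the call once.
    public class RediscoveringProxy implements InvocationHandler {

        public interface Lookup<T> {
            T discover() throws RemoteException;
        }

        private final Lookup<?> lookup;
        private volatile Object delegate;

        private RediscoveringProxy(Lookup<?> lookup) throws RemoteException {
            this.lookup = lookup;
            this.delegate = lookup.discover();
        }

        @SuppressWarnings("unchecked")
        public static <T> T wrap(Class<T> iface, Lookup<T> lookup) throws RemoteException {
            return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                    new Class<?>[] { iface }, new RediscoveringProxy(lookup));
        }

        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            try {
                return method.invoke(delegate, args);
            } catch (InvocationTargetException e) {
                if (e.getCause() instanceof RemoteException) {
                    delegate = lookup.discover();          // rediscover and retry once
                    try {
                        return method.invoke(delegate, args);
                    } catch (InvocationTargetException retry) {
                        throw retry.getCause();
                    }
                }
                throw e.getCause();
            }
        }
    }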

Gregg



Re: tree-based / log(p) communication cost algorithms for River?

2015-09-12 Thread Gregg Wonderly
I guess without more knowledge of what your problem space really is I can’t 
understand why the problem doesn’t decompose into small problems which can be 
farmed out to a java space.  Certainly, there are issues if you are using a 
C-language application on and MDI platform for parallelism.   If the barrier, 
right now, is the speed of the network vs the availability of cycles on the 
GPU, then what you are doing now, probably makes since. But, if it would be 
faster to infiniband a dozen requests out to other idle GPUs, then it would 
seem that there might be something to be had by having a javaspace client 
grabbing requests and JNI calling out to use the GPU for calculations.

Perhaps it feels comfortable now to use more GPU cycles close by and less 
infiniband traffic for transport to other GPUs.  What's the fraction describing 
the GPU time to compute a result versus the infiniband time to transmit such 
data to another GPU?

What I am specifically wondering is what happens when one of the GPU machines 
crashes?  How do you recover and continue processing?

Gregg Wonderly
 
> On Sep 11, 2015, at 4:22 AM, Bryan Thompson <br...@systap.com> wrote:
> 
> Gregg,
> 
> Graphs traversal is in general a non-local problem with irregular, data
> dependent parallelism (the available fine grained parallelism for a vertex
> depends on its edge list, the size of the edge list can vary over many
> orders of magnitude, edges may connect vertices that are non local, and the
> size of the active frontier of vertices during traversal and also vary by
> many orders of magnitude).
> 
> The 2D decomposition minimizes the number of communications that need to be
> performed.  There are also hybrid decompositions that can minimize the
> communication volume (amount of data that needs to be transmitted),
> especially when combined with graph aware partitioning.
> 
> A data layout that ignores these issues will require more communication
> operations and/or have a greater communications volume.  Communication is
> the main barrier to scaling for graphs. So data layouts matter and
> innovation in data layouts is one of the key things driving improved
> scaling.
> 
> We actually use hardware acceleration as well. So each compute node has one
> or more gpus that are used to parallelize operations on the local edges.
> The gpus use an MPI distribution that uses RDMA over infiniband for off
> node communication. This avoids the overhead of having the CPU coordinate
> communication (data is not copied to the CPU but does directly over PCIe to
> the infiniband card).
> 
> The question about tree based communication patterns arises from an
> interest in being able to use java to coordinate data management
> activities.  These activities are not are as performance critical as the
> basic graph traversal, but they should still show good scaling. Tree based
> communication patterns are one way to achieve that scaling.
> 
> Thanks,
> Bryan
> 
> On Thursday, September 10, 2015, Gregg Wonderly <gregg...@gmail.com> wrote:
> 
>> Why doesn’t java spaces let you submit requests and have them worked on
>> and results returned without assigning any particular node any specific
>> responsibility?
>> 
>> Gregg
>> 
>>> On Aug 1, 2015, at 12:06 PM, Bryan Thompson <br...@systap.com
>> <javascript:;>> wrote:
>>> 
>>> First, thanks for the responses and the interest in this topic.  I have
>>> been traveling for the last few days and have not had a chance to follow
>> up.
>>> 
>>> - The network would have multiple switches, but not multiple routers.
>> The
>>> typical target is an infiniband network.  Of course, Java let's us bind
>> to
>>> infiniband now.
>>> 
>>> - As far as I understand it, MPI relies on each node executing the same
>>> logic in a distributed communication pattern.  Thus the concept of a
>> leader
>>> election to determine a balanced tree probably does not show up. Instead,
>>> the tree is expressed in terms of the MPI rank assigned to each node.  I
>> am
>>> not suggesting that the same design pattern is used for river.
>>> 
>>> - We do need a means to define the relationship between a distributed
>>> communication pattern and the manner in which data are decomposed onto a
>>> cluster.  I am not sure that the proposal above gives us this directly,
>> but
>>> some extension of it probably would.   Let me give an example.  In our
>>> application, we are distributing the edges of a graph among a 2-D cluster
>>> of compute nodes (p x p machines).  The distribution is done by assigning
>>> the edges to compute nodes based on so

Re: java.io.EOFException in Jini Client: server is fine

2015-09-11 Thread Gregg Wonderly
The other detail to look at is the ObjectOutputStream cache.  If a 
persistent/long-lived version of that class is used to write all the objects 
across the network, it can present a problem.  That class caches objects and 
sends object ids for duplicates instead of the entire object, so that object 
graphs are recreated with the duplicate references as references instead of as 
new objects.   It may be necessary to use the “reset” method on that class to 
cause the object cache to not grow unbounded or unnecessarily.  If you know 
there are no duplicate objects being sent, that makes it easy to know when to 
call reset.  If there are duplicate objects being sent, but you have no 
execution boundary to line up with the objects being sent, you may have to 
design a different way of transporting the data so that you can minimize 
caching or other unbounded memory use by ObjectOutputStream.
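A minimal sketch of the periodic-reset idea (the batch size is arbitrary, and 
reset() must only happen at a point where the receiver no longer needs 
back-references to earlier objects):

    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.OutputStream;

    // Keeps a long-lived ObjectOutputStream's handle table bounded by calling
    // reset() every so many objects, so the cache cannot grow without limit.
    public class BatchingSender {
        private final ObjectOutputStream out;
        private int written;

        public BatchingSender(OutputStream raw) throws IOException {
            this.out = new ObjectOutputStream(raw);
        }

        public synchronized void send(Object message) throws IOException {
            out.writeObject(message);
            if (++written % 1000 == 0) {   // every 1000 objects, drop the cache
                out.reset();               // clears the stream's object/handle table
                out.flush();
            }
        }
    }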

Gregg Wonderly


> On Sep 11, 2015, at 12:03 PM, Bryan Thompson <br...@systap.com> wrote:
> 
> One suggestion is to separate the messages from the payload and use a
> different protocol for the payload.  For example, having the receiver reach
> back across the network when it is ready to read the payload.  This has
> several advantages:
> 
> - The receiver can impose flow control on the heavy messages by deciding
> when it wants the data.
> - You can use one network for the lighter messages that coordinate activity
> and another network for the bandwidth intensive data transfers.
> - You can avoid Java serialization for heavy if you have simple objects
> such as arrays.
> 
> Bryan
> 
> 
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> 4501 Tower Road
> Greensboro, NC 27410
> br...@systap.com
> http://blazegraph.com
> http://blog.bigdata.com <http://bigdata.com>
> http://mapgraph.io
> 
> Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance
> graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints
> APIs.  Blazegraph is now available with GPU acceleration using our disruptive
> technology to accelerate data-parallel graph analytics and graph query.
> 
> 
> On Fri, Sep 11, 2015 at 12:38 PM, Palash Ray <paa...@gmail.com> wrote:
> 
>> Hi,
>> 
>> I have a Jini server, and I am doing a lookup from a Jini registry and
>> then making a call on the remote.
>> 
>> The client code is:
>>LookupLocator lookupLocator = new LookupLocator(jiniRegistryUrl);
>>return (Remote) lookupLocator.getRegistrar().lookup(new
>> ServiceTemplate(null,
>>new Class[]{serviceInterfaceClass}, new Entry[]{new
>> Name(serviceName)}));
>> We are transmitting pretty heavy objects: ArrayList having a million+
>> rows. It works fine for most part. However, when the list has over 10
>> million rows, the server is still fine. But, the client starts
>> behaving weird and throwing java.io.EOFException. I am pasting the
>> full stack trace.
>> 
>> Any help would be appreciated.
>> 
>> Thanks,
>> Palash.
>> 
>> 
>> Caused by: java.lang.reflect.InvocationTargetException
>>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>at java.lang.reflect.Method.invoke(Method.java:606)
>>at
>> com.imsi.iss.portiss.jasper.query.RmiQueryExecutor.queryReportData(RmiQueryExecutor.java:112)
>>at
>> com.imsi.iss.portiss.jasper.query.PortissQueryExecutor.createDatasource(PortissQueryExecutor.java:41)
>>... 16 more
>> Caused by: java.rmi.UnmarshalException: exception unmarshalling
>> response; nested exception is:
>>java.io.EOFException
>>at
>> net.jini.jeri.BasicInvocationHandler.invokeRemoteMethodOnce(BasicInvocationHandler.java:847)
>>at
>> net.jini.jeri.BasicInvocationHandler.invokeRemoteMethod(BasicInvocationHandler.java:659)
>>at
>> net.jini.jeri.BasicInvocationHandler.invoke(BasicInvocationHandler.java:528)
>

Re: Compatibility

2015-09-10 Thread Gregg Wonderly
Like many network environments, compatibility needs to be an important part of 
how services are resolved.  I think it’s important that any service which can 
be registered should be able to be resolved by any client which can access the 
ServiceRegistrar instance which accepted the registration.  There are of course 
exceptions for platforms you define yourself, by moving downloaded code to 
codebase code.  But, I don’t think we should trivially change the “base” 
platform by adding references in downloaded code, to new codebase stored 
classes.

Gregg Wonderly


> On Sep 10, 2015, at 11:55 AM, Dennis Reedy <dennis.re...@gmail.com> wrote:
> 
> I’m not sure this is about release notes. You seem quite keen on getting 3.0 
> out the door, while I applaud the urgency, let's not dump the baby out with 
> the bath water. The net.jini namespace has not been changed, the 
> implementation of those interfaces has.
> 
> I should be able to discover a ServiceRegistrar started from 3.0 from a 2.x 
> client. The classes required should be dynamically downloaded with the proxy. 
> The change here that has been aded to jsk-platform has resulted in classes 
> (org.apache.river.api.util.ID for starters), not being available. I’m not so 
> sure this is good. It’s certainly not a good thing for projects that may want 
> to use existing tools for discovery.
> 
> Regards
> 
> Dennis
> 
>> On Sep 10, 2015, at 12:42 PM, Bryan Thompson <br...@systap.com> wrote:
>> 
>> I guess the question is whether River 2.x is a breaking change in terms of
>> cross service communications with River 3.x.  As this is a major release, I
>> see it an opportunity to make breaking changes if we need to make them.
>> But there is no reason to break interoperability by accident.
>> 
>> So, are there good reasons why River 2.x will not be able to talk to River
>> 3.x?  If so, can we capture them here and then summarize them in release
>> notes?  Is there a specific location in which the release notes are being
>> developed (SVN file, wiki page, etc.)?
>> 
>> Thanks,
>> Bryan
>> 
>> 
>> Bryan Thompson
>> Chief Scientist & Founder
>> SYSTAP, LLC
>> 4501 Tower Road
>> Greensboro, NC 27410
>> br...@systap.com
>> http://blazegraph.com
>> http://blog.bigdata.com <http://bigdata.com>
>> http://mapgraph.io
>> 
>> Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance
>> graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints
>> APIs.  Blazegraph is now available with GPU acceleration using our disruptive
>> technology to accelerate data-parallel graph analytics and graph query.
>> 
>> 
>> On Thu, Sep 10, 2015 at 12:37 PM, Dennis Reedy <dennis.re...@gmail.com>
>> wrote:
>> 
>>> Hi,
>>> 
>>> I’m building and running an example that I based off of Greg’s example
>>> from the qa-refactor-namespace branch. I had a browser utility that I use
>>> at times running that is based on 2.2.2. I could not discover reggie with
>>> the browser utility because of
>>> 
>>> Caused by: java.lang.ClassNotFoundException: org.apache.river.api.util.ID
>>>   at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
>>>   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
>>> 
>>> The org.apache.river.api.util.ID class is an interface:
>>> 
>>> /**
>>> * A mix in interface that provides an identity to be used as a key in
>>> Collections.
>>> *
>>> * @param <T> Object identity.
>>> * @author peter
>>> */
>>> public interface ID<T> {
>>> 
>>>   /**
>>>* @return object representing identity, usually a Uuid.
>>>*/
>>>   public T identity();
>>> }
>>> 
>>> Seems to be used by the following classes:
>>> 
>>> ./src/org/apache/river/fiddler/FiddlerLease.java:import
>>> org.apache.river.api.uti

Re: Don't let Jini Standards become an impediment to development

2015-09-10 Thread Gregg Wonderly
Peter, have you ever constructed a demo of a massively active Javaspace app with 
and without reflection use for entry deserialization?  It might be a great 
thing to have those numbers to help everyone recognize what you (and I) have 
seen as impeding implementation/architectural attributes.  

Gregg

> On Sep 8, 2015, at 11:30 PM, Peter  wrote:
> 
> Thanks Greg,
> 
> Was it a case of; because we can't set final fields (well not without a 
> Permission anyway), that they shouldn't be included in Entry serialized 
> state, because then we can't deserialize them?
> 
> I've done my best to fix the existing implementations, so hopefully they 
> won't need further fixes, however, the fixes were very difficult and these 
> implementations very difficult to reason about, because there is so much 
> mutable state.  In ServiceDiscoveryManager, a thread holds a lock while 
> waiting for the result of a remote call, there was no solution I could find 
> to remove this lock.
> 
> To quote Keith Edwards "The Special Semantics of Attributes":
> 
>   "All the methods of the object are ignored for purposes of
>   searching, as are "special" data fields: static, transient,
>   non-public, or final fields.  Likewise all fields that are primitive
>   types (such as ints and booleans) are ignored; only references to
>   other objects within an attribute are considered for searching."
> 
> 
> So our choices are (for River 4.0):
> 
>  1. Break backward compatibility and increase scalability, performance
> and reduce bugs, by not ignoring final fields in Entry's, but
> instead mandating them.
>  2. Or continue full compatibility and live with lower performance,
> less scalability and harder to debug code.
> 
> I think there's plenty of time for implementations to prepare for River 4.0, 
> if we start talking about it now.
> 
> Regards,
> 
> Peter.
> 
> How are these for code comments (from ServiceDiscoveryManager)?
> 
>// Don't like the fact that we're calling foreign code while
>// holding an object lock, however holding this lock doesn't
>// provide an opportunity for DOS as the lock only relates to 
> a specific
>// ServiceRegistrar and doesn't interact with client code.
>matches = proxy.lookup(tmpl, Integer.MAX_VALUE);
> 
>   /* The cache must be created inside the listener sync block,
> 
> * otherwise a race condition can occur. This is because the
> * creation of a cache results in event registration which
> * will ultimately result in the invocation of the serviceAdded()
> * method in the cache's listener, and the interruption of any
> * objects waiting on the cache's listener. If the notifications
> * happen to occur before commencing the wait on the listener
> * object (see below), then the wait will never be interrupted
> * because the interrupts were sent before the wait() method
> * was invoked. Synchronizing on the listener and the listener's
> * serviceAdded() method, and creating the cache only after the
> * lock has been acquired, together will prevent this situation
> * since event registration cannot occur until the cache is
> * created, and the lock that allows entry into the serviceAdded()
> * method (which is invoked once the events do arrive) is not
> * released until the wait() method is invoked .
> */
> 
>/**
> * With respect to a given service (referenced by the parameter
> * newItem), if either an event has been received from the given lookup
> * service (referenced by the proxy parameter), or a snapshot of the
> * given lookup service's state has been retrieved, this method
> * determines whether the service's attributes have changed, or whether
> * a new version of the service has been registered. After the
> * appropriate determination has been made, this method applies the
> * filter associated with the current cache and sends the appropriate
> * local ServiceDiscoveryEvent(s).
> *
> * This method is called under the following conditions: - when a new
> * lookup service is discovered, this method will be called for each
> * previously discovered service - when a gap in the events from a
> * previously discovered lookup service is discovered, this method will
> * be called for each previously discovered service - when a 
> MATCH_MATCH
> * event is received, this method will be called for each previously
> * discovered service - when a NOMATCH_MATCH event is received, this
> * method will be called for each previously discovered service Note
> * that this method is never called when a MATCH_NOMATCH event is
> * received; such an event is 

Re: tree-based / log(p) communication cost algorithms for River?

2015-09-10 Thread Gregg Wonderly
Why doesn’t java spaces let you submit requests and have them worked on and 
results returned without assigning any particular node any specific 
responsibility?

Gregg

> On Aug 1, 2015, at 12:06 PM, Bryan Thompson  wrote:
> 
> First, thanks for the responses and the interest in this topic.  I have
> been traveling for the last few days and have not had a chance to follow up.
> 
> - The network would have multiple switches, but not multiple routers.  The
> typical target is an infiniband network.  Of course, Java let's us bind to
> infiniband now.
> 
> - As far as I understand it, MPI relies on each node executing the same
> logic in a distributed communication pattern.  Thus the concept of a leader
> election to determine a balanced tree probably does not show up. Instead,
> the tree is expressed in terms of the MPI rank assigned to each node.  I am
> not suggesting that the same design pattern is used for river.
> 
> - We do need a means to define the relationship between a distributed
> communication pattern and the manner in which data are decomposed onto a
> cluster.  I am not sure that the proposal above gives us this directly, but
> some extension of it probably would.   Let me give an example.  In our
> application, we are distributing the edges of a graph among a 2-D cluster
> of compute nodes (p x p machines).  The distribution is done by assigning
> the edges to compute nodes based on some function (key-range,
> hash-function) of the source and target vertex identifiers.  When we want
> to read all edges in the graph, we need to do an operation that is data
> parallel across either the rows (in-edges) or the columns (out-edges) of
> the cluster. See http://mapgraph.io/papers/UUSCI-2014-002.pdf for a TR that
> describes this communication pattern for a p x p cluster of GPUs.  In order
> to make this work with river, we would somehow have to associate the nodes
> with their positions in this 2-D topology.  For example, we could annotate
> each node with a "row" and "column" attribute that specifies its location
> in the compute grid.  We could then have a communicator for each row and
> each column based on the approach you suggest above.
> 
> The advantage of such tree based communication patterns is quite large.
> They require log(p) communication operations where you would otherwise do p
> communication operations.  So, for example, only 4 communication operations
> vs 16 for a 16 node cluster.
> 
> Thanks,
> Bryan
> 
> 
> On Wed, Jul 29, 2015 at 1:17 PM, Greg Trasuk  wrote:
> 
>> 
>> I’ve wondered about doing this in the past, but for the workloads I’ve
>> worked with, I/O time has been relatively low compared to processing time.
>> I’d guess there’s some combination of message frequency, cluster size and
>> message size that makes it compelling.
>> 
>> The idea is interesting, though, because it could enable things like
>> distributed JavaSpaces, where we’d be distributing the search queries, etc.
>> 
>> I would guess the mechanism would look like:
>> 
>> -Member nodes want to form a multicast group.
>> -They elect a leader
>> -Leader figures out a balanced notification tree, and passes it on to each
>> member
>> -Leader receives multicast message and starts the message passing into the
>> tree
>> -Recipients pass the message to local recipients, and also to their
>> designated repeater recipients (how many?)
>> -Somehow we monitor for disappearing members and then recast the leader
>> election if necessary.
>> 
>> Paxon protocol would be involved, I’d guess.  Does anyone have references
>> to any academic work on presence monitoring and leader election, beyond
>> Lamport’s original paper?
>> 
>> I also wonder, is there a reason not to just use Multicast if it’s
>> available (I realize that it isn’t always supported - Amazon EC2, for
>> instance).
>> 
>> Interesting question!
>> 
>> Cheers,
>> 
>> Greg Trasuk
>> 
>>> On Jul 29, 2015, at 12:40 PM, Bryan Thompson  wrote:
>>> 
>>> Hello,
>>> 
>>> I am wondering if anyone has looked into creating tree based algorithms
>> for
>>> multi-cast of RMI messages for river.  Assuming a local cluster, such
>>> patterns generally have log(p) cost for a cluster with p nodes.
>>> 
>>> For the curious, this is how many MPI messages are communicated under the
>>> hood.
>>> 
>>> Thanks,
>>> Bryan
>> 
>> 



Re: Using an unchecked exception instead of RemoteException

2015-06-12 Thread Gregg Wonderly
One of the primary things in software design is to place services into a 
service layer.  There should be service failure exceptions and those should be 
distinctly different from “transport” or “implementation” exceptions.  If your 
code has “RemoteException” visible in the business logic, then you’ve probably 
not placed the services into a service layer so that you can properly manage 
the implementation of communications separately from whether the service is 
reachable, usable, and working.

Once you have a service layer, then you have the opportunity to “change” the 
service implementation to use varied technologies, or at least provide a hidden 
retry strategy that you can perfect in “one place” instead of having it all 
over the place.
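As an illustration only (none of these types exist in River), a service-layer 
adapter might look like:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // The business-facing interface knows nothing about RemoteException; a thin
    // adapter owns the remote proxy, the retry policy, and the translation into
    // an unchecked "service unavailable" failure.
    public final class OrderServiceAdapter implements OrderService {

        private final RemoteOrderService remote;   // the Jini/JERI proxy

        public OrderServiceAdapter(RemoteOrderService remote) {
            this.remote = remote;
        }

        public Receipt placeOrder(Order order) {
            try {
                return remote.placeOrder(order);   // the remote call is hidden here
            } catch (RemoteException e) {
                // the one place to add retry, failover or rediscovery logic
                throw new ServiceUnavailableException("order service unreachable", e);
            }
        }
    }

    // Supporting illustrative types, included so the sketch compiles on its own.
    interface OrderService { Receipt placeOrder(Order order); }
    interface RemoteOrderService extends Remote { Receipt placeOrder(Order order) throws RemoteException; }
    class Order {}
    class Receipt {}
    class ServiceUnavailableException extends RuntimeException {
        ServiceUnavailableException(String msg, Throwable cause) { super(msg, cause); }
    }

Business code depends only on OrderService; the adapter is the one place that 
knows the implementation happens to be remote.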

Jini’s documentation and much of the things circulating on the internet, 
typically avoid any indication of using a service layer.  Instead, the service 
interfaces with Remote on them, are all at the forefront of the visible code 
structure.

Gregg Wonderly

 On Jun 11, 2015, at 10:29 PM, Palash Ray paa...@gmail.com wrote:
 
 My two cents:
 I am in favour of having runtime exception. We are facing a huge problem as
 our code base is too cluttered with this kind of code:
 
 try{
 remoteProxy.callRemoteMethod();
 } catch (RemoteException e){
 
 }
 
 On Thu, Jun 11, 2015 at 10:52 AM, Dawid Loubser da...@travellinck.com
 wrote:
 
 On 11/06/2015 16:24, Greg Trasuk wrote:
 * It's perfectly fine to still enforce service implementations to
   declare RemoteException, as a tag / reminder, but honestly, it's
   not the client's concern. Depending on the reliability requirements
   of the client, they need to handle unexpected failure in anyway, and
   RemoteException is functionally no different than, say, a
 NullPointer.
 I disagree.  In my experience, communications exceptions need to be
 carefully considered by the service consumer.
 
 Doesn't that effectively leave only two options?
 
  * Remote services can only implement, and be called via, contracts
that were designed at the time that I decided this will be a remote
service. No re-purposing or adapting of existing functionality to,
say, a new remote service. (i.e. I used to call a local database,
now I'm going to call a Jini service).
  * Just in case, make all methods on all contracts ever throw
RemoteException - although some frameworks like EJB 3.x won't like
that for certain types of services. Applying something to
everything, and to nothing, are semantically equivalent.
 
 Interfaces are supposed to promote plug-ability, right? I agree that
 remote services are a leaky abstraction. Furthermore, Java's limited
 type system does not give us elegant ways to re-use, say, the business
 semantics of an interface in different contexts (such as, one that make
 remote method calls). Because of this, I personally would rather have
 the ability to strongly re-use interfaces in all contexts, across all
 implementation technologies, where possible.
 
 If, at a given level of granularity, we strictly apply the semantic that
 checked exceptions are for *service refusal* (precondition not met), and
 RuntimeException and Error for *system failure* (postcondition not
 met, not caller's fault), things are a whole lot simpler in my opinion
 and experience.
 
 Anyway, this has just been a big pain point for me all these years with
 Jini/River. It has nothing to do with the fallacies of distributed
 computing, and everything with re-use, and the separation of
 functionality from implementation technology in Java interfaces.
 
 If it were up to me, I would introduce an UnexpectedRemoteException
 and/or UnexpectedIOException which both extend RuntimeException, and
 which are understood by River components in addition to the usual
 checked [Remote/IO]Exception - which, by defintion, is expected. Then,
 instead of arguing about one solution for an opinionated framework, the
 users of the framework can choose, experiment, etc. I feel that River,
 as infrastructure, should impede as little as possible, and a checked
 RuntimeException is demonstrably limiting to e.g. interface re-use.
 
 warm regards,
 Dawid Loubser
 



Re: Clustered Jini Server? Was: Re: Mirroring to GitHub

2015-06-09 Thread Gregg Wonderly
Any Java spaces client spends most of its time blocked on a read/take from the 
space, which makes it a very synchronous interface.

You would put work into the space and then turn right around and make a blocking 
take in most cases.
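In code, the synchronous shape is roughly this (entry types are illustrative):

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Write a request entry, then immediately block on a matching take for the
    // result that a worker will eventually write back.
    public class BlockingClient {
        public static class Request implements Entry {
            public String jobId;
            public Request() {}
            public Request(String jobId) { this.jobId = jobId; }
        }
        public static class Result implements Entry {
            public String jobId;
            public String payload;
            public Result() {}
        }

        public static Result callThroughSpace(JavaSpace space, String jobId) throws Exception {
            space.write(new Request(jobId), null, Lease.FOREVER);    // hand work to a worker
            Result template = new Result();
            template.jobId = jobId;                                  // match only our result
            return (Result) space.take(template, null, 30000);       // block until done or timeout
        }
    }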

Gregg

Sent from my iPhone

 On Jun 3, 2015, at 6:42 AM, Palash Ray paa...@gmail.com wrote:
 
 Interesting thought about using Java Spaces. However, for us, there is an
 extra maintenance of the Java Spaces server in production, which is my
 worry. Moreover, in our application, it is always a synchronous call from
 the Swing client to the Jini server. It would be a lot of effort to make
 this an asynchronous call to Java Sapces. So for our kind of application,
 this would not be suitable.
 
 Thanks,
 Palash.
 
 On Wed, Jun 3, 2015 at 1:14 AM, Simon Roberts 
 si...@dancingcloudservices.com wrote:
 
 Hard to be sure if this is a sensible comment without knowing more about
 what you're trying to do, but the typical load balance in a Jini
 environment has traditionally been a Java Spaces server, into which jobs
 (probably simply Runnable implementations) are placed. The clustered work
 engines are configured to take (in a transaction) a job from the space,
 process it, and put it back with an attribute indicating completion. On
 putting the job back, the original take transaction is committed.
 Therefore, if the server crashes before the job is completed, the take
 evaporates, and some other work engine gets to re-take, hopefully
 completing successfully. This model allows any number of work engines to be
 load balanced for essentially zero communication between them, and no
 actual load balancer exists (in the sense that no active component has to
 keep track of the work engines). The workers take as fast as they're able
 to do work, but no faster, so they don't get overloaded. You can bring
 workers up, and shut them down, with zero reconfiguration.
 
 Cheers,
 Simon
 
 
 On Tue, Jun 2, 2015 at 8:13 PM, Palash Ray paa...@gmail.com wrote:
 
 Thanks Dennis, I will definitely explore that option.
 
 On Tue, Jun 2, 2015 at 9:37 PM, Dennis Reedy dennis.re...@gmail.com
 wrote:
 
 Hi Palash,
 
 Using reggie as a load balancer does not make the most sense, what you
 may
 want to consider is to maintain a collection of discovered services and
 simply round robin across them. You might want to start looking at the
 ServiceDiscoveryManager and the LookupCache for this.
 
 HTH
 
 Dennis
 
 
 On Tue, Jun 2, 2015 at 9:16 PM, Palash Ray paa...@gmail.com wrote:
 
 Excellent. May be we can help each other here.
 
 Let me start by giving some more context around the problem.
 
 *Problem*
 Our middle tier that is a Jini-based rmi server. We have a Swing
 client
 that connects to it. In the middle tier, we have lot of processing
 logic:
 fetch something from the database, do some calculation intensive
 processing, write the results back to the database.
 
 Of late there has been a huge increase of the loads: the no. of Swing
 clients has increased, as has the bulk of the data to be processed.
 It
 has
 come to a point, where our production server, which is a single
 machine,
 is
 creaking under the load.
 
 So, we have decided to cluster it. We are planning to have at least 3
 or
 4
 Jini servers and a load balancer to spread out the load evenly.
 
 I was doing a proof of concept using the Jini infrasctrure itself.
 These
 were my thoughts:
 
 *Option 1*
 https://github.com/paawak/blog/tree/master/code/jini/unsecure/load-balancing
 
 The load balancing architecture here is very very simple. There is a
 single load balancer with its own reggie running at 6670. This is the
 primary contact point for all clients.
 
 There are multiple reggie involved for load balancing. The following
 convention is followed:
 
 1. The reggie for load balancer is at 6670
 2. The reggie for the actual jini servers are at 5561, 5562, 5563,
 etc.
 
 When the load-balancer recieves a request from client, it does the
 look-up at the appropriate jini-server and returns the remote
 service.
 
 *Option 2*
 https://github.com/paawak/jini-in-a-war
 
 I figured that if we can embed the Jini in a Tomcat and then
 clustering
 the
 Tomcat would be very easy. But this is still work in progress, and
 there
 are lot of details that I need to figure out.
 
 Please let me know if the above makes sense or is around the same
 things
 that interest you. I would like to have a out of the box Jini
 solution
 that *just
 works*. And I am happy to code for any solution that you guys think
 should
 be the way forward.
 
 Thanks,
 Palash.
 
 
 
 
 
 On Tue, Jun 2, 2015 at 2:14 PM, Patricia Shanahan p...@acm.org
 wrote:
 
 Also, if there is any chance the bottleneck is in River, I would be
 very,
 very interested in constructing a benchmark based on your workload
 that
 demonstrates the scaling problem. I would like to run it against
 the
 latest
 unreleased version, which I think may fix some scaling issues. If
 it
 still
 shows scaling 

Re: Clustered Jini Server? Was: Re: Mirroring to GitHub

2015-06-09 Thread Gregg Wonderly
One of the primary issues with trying to scale Jini applications is all of the 
locking and blocking through the security subsystems.  Peter's work in this 
area should be a tremendously visible performance boost for any app which has a 
high call load, such as data processing.

Gregg

Sent from my iPhone

 On Jun 2, 2015, at 8:37 PM, Dennis Reedy dennis.re...@gmail.com wrote:
 
 Hi Palash,
 
 Using reggie as a load balancer does not make the most sense, what you may
 want to consider is to maintain a collection of discovered services and
 simply round robin across them. You might want to start looking at the
 ServiceDiscoveryManager and the LookupCache for this.
 
 HTH
 
 Dennis
 
 
 On Tue, Jun 2, 2015 at 9:16 PM, Palash Ray paa...@gmail.com wrote:
 
 Excellent. May be we can help each other here.
 
 Let me start by giving some more context around the problem.
 
 *Problem*
 Our middle tier that is a Jini-based rmi server. We have a Swing client
 that connects to it. In the middle tier, we have lot of processing logic:
 fetch something from the database, do some calculation intensive
 processing, write the results back to the database.
 
 Of late there has been a huge increase of the loads: the no. of Swing
 clients has increased, as has the bulk of the data to be processed. It has
 come to a point, where our production server, which is a single machine, is
 creaking under the load.
 
 So, we have decided to cluster it. We are planning to have at least 3 or 4
 Jini servers and a load balancer to spread out the load evenly.
 
 I was doing a proof of concept using the Jini infrasctrure itself. These
 were my thoughts:
 
 *Option 1*
 
 https://github.com/paawak/blog/tree/master/code/jini/unsecure/load-balancing
 
 The load balancing architecture here is very very simple. There is a
 single load balancer with its own reggie running at 6670. This is the
 primary contact point for all clients.
 
 There are multiple reggie involved for load balancing. The following
 convention is followed:
 
 1. The reggie for load balancer is at 6670
 2. The reggie for the actual jini servers are at 5561, 5562, 5563, etc.
 
 When the load-balancer recieves a request from client, it does the
 look-up at the appropriate jini-server and returns the remote service.
 
 *Option 2*
 https://github.com/paawak/jini-in-a-war
 
 I figured that if we can embed the Jini in a Tomcat and then clustering the
 Tomcat would be very easy. But this is still work in progress, and there
 are lot of details that I need to figure out.
 
 Please let me know if the above makes sense or is around the same things
 that interest you. I would like to have a out of the box Jini solution
 that *just
 works*. And I am happy to code for any solution that you guys think should
 be the way forward.
 
 Thanks,
 Palash.
 
 
 
 
 
 On Tue, Jun 2, 2015 at 2:14 PM, Patricia Shanahan p...@acm.org wrote:
 
 Also, if there is any chance the bottleneck is in River, I would be very,
 very interested in constructing a benchmark based on your workload that
 demonstrates the scaling problem. I would like to run it against the
 latest
 unreleased version, which I think may fix some scaling issues. If it
 still
 shows scaling problems, I want to track them down and see whether they
 are
 fixable without clustering.
 
 My most recent professional background, before retiring, was as a
 performance architect working on multiprocessor servers for Cray Research
 and Sun Microsystems. When I first got involved in River I was thinking
 of
 doing some performance analysis and improvement, one of my favorite
 games,
 but could not find a suitable benchmark, or an actual user with a scaling
 problem.
 
 Patricia
 
 
 On 6/2/2015 10:24 AM, Greg Trasuk wrote:
 
 
 Palash:
 
 Could you expand on your need for a “clustered Jini server”?  What
 features are you looking for, and what aspects of the application need
 to
 be clustered?  This might provide fertile grounds for development.
 
 Cheers,
 
 Greg Trasuk
 
 On Jun 2, 2015, at 12:38 PM, Palash Ray paa...@gmail.com wrote:
 
 Hi Greg, Patricia,
 
 Really happy to see:
 https://github.com/trasukg/river-container
 
 I think we are off in the right direction. I have been using river for
 almost 2 years now, but only recently started taking an interest in
 the code that makes it tick.
 
 Our organisation is facing some scalability issues with Jini of late,
 well, I am not blaming Jini here, its just that we need a clustered
 Jini server.
 
 To that end I was playing around the code a bit. I have some ideas
 which I can discuss with this group later.
 
 I have created a small proof of concept of embedding Jini in a war and
 running it in a webserver:
 https://github.com/paawak/jini-in-a-war
 
 Also, I keep blogging about Jini with whatever little understanding I
 have:
 http://palashray.com/java/jini/
 
 In the coming days, I look forward to contributing to the river
 project.
 
 Thanks,
 Palash.
 
 
 
 
 On 6/2/15, Greg Trasuk 

Re: Clustered Jini Server? Was: Re: Mirroring to GitHub

2015-06-05 Thread Gregg Wonderly
Greg is on point here.  You really should consider a Java Space (look at Dan 
Creswell's Blitz, http://www.dancres.org/blitz/, for a very performant 
implementation).  Your clients would then put requests for work into the 
Javaspace.   Worker machines that you can add as many as you want of, would 
consume Entry’s from the Javaspace, do the work, and then put the results into 
the database and return a “work done” Entry to the Javaspace.  Your clients 
would then see the entry arrive and proceed.

Gregg Wonderly

 On Jun 3, 2015, at 9:15 AM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 On Jun 3, 2015, at 7:42 AM, Palash Ray paa...@gmail.com wrote:
 
 Interesting thought about using Java Spaces. However, for us, there is an
 extra maintenance of the Java Spaces server in production, which is my
 worry. Moreover, in our application, it is always a synchronous call from
 the Swing client to the Jini server. It would be a lot of effort to make
 this an asynchronous call to Java Sapces. So for our kind of application,
 this would not be suitable.
 
 
 What I’ve done in the past is have an RPC-based service that interacts with 
 the JavaSpace (sort of an orchestration service).  That way the UI can have a 
 simplified interface mechanism, but you can still get the load-balancing 
 aspects of the JavaSpace.  Something like the River Container can host 
 multiple services like the orchestration service and task executor services 
 (JavaSpace workers) quite trivially.  Put multiple copies of that instance on 
 the network to scale the workers.  Then you have another River Container 
 instance that hosts your Java Space, Registrar and Transaction Manager 
 services (i.e. your Jini infrastructure).  In the case of River Container, it 
 automatically adapts to multiple containers on one node (it auto-selects 
 ports for the codebase server, etc), so you can develop and deploy in a 
 flexible fashion. 
 
 The above can also be done with the ServiceStarter framework, but with more 
 manual config.  No doubt Rio can manage trivially as well.
 
 Cheers,
 
 Greg Trasuk
 
 Thanks,
 Palash.
 
 On Wed, Jun 3, 2015 at 1:14 AM, Simon Roberts 
 si...@dancingcloudservices.com wrote:
 
 Hard to be sure if this is a sensible comment without knowing more about
 what you're trying to do, but the typical load balance in a Jini
 environment has traditionally been a Java Spaces server, into which jobs
 (probably simply Runnable implementations) are placed. The clustered work
 engines are configured to take (in a transaction) a job from the space,
 process it, and put it back with an attribute indicating completion. On
 putting the job back, the original take transaction is committed.
 Therefore, if the server crashes before the job is completed, the take
 evaporates, and some other work engine gets to re-take, hopefully
 completing successfully. This model allows any number of work engines to be
 load balanced for essentially zero communication between them, and no
 actual load balancer exists (in the sense that no active component has to
 keep track of the work engines). The workers take as fast at they're able
 to do work, but no faster, so they don't get overloaded. You can bring
 workers up, and shut them down, with zero reconfiguration.
 
 Cheers,
 Simon
 
 
 On Tue, Jun 2, 2015 at 8:13 PM, Palash Ray paa...@gmail.com wrote:
 
 Thanks Dennis, I will definitely explore that option.
 
 On Tue, Jun 2, 2015 at 9:37 PM, Dennis Reedy dennis.re...@gmail.com
 wrote:
 
 Hi Palash,
 
 Using reggie as a load balancer does not make the most sense, what you
 may
 want to consider is to maintain a collection of discovered services and
 simply round robin across them. You might want to start looking at the
 ServiceDiscoveryManager and the LookupCache for this.
 
 HTH
 
 Dennis
 
 
 On Tue, Jun 2, 2015 at 9:16 PM, Palash Ray paa...@gmail.com wrote:
 
 Excellent. May be we can help each other here.
 
 Let me start by giving some more context around the problem.
 
 *Problem*
 Our middle tier that is a Jini-based rmi server. We have a Swing
 client
 that connects to it. In the middle tier, we have lot of processing
 logic:
 fetch something from the database, do some calculation intensive
 processing, write the results back to the database.
 
 Of late there has been a huge increase of the loads: the no. of Swing
 clients has increased, as has the bulk of the data to be processed.
 It
 has
 come to a point, where our production server, which is a single
 machine,
 is
 creaking under the load.
 
 So, we have decided to cluster it. We are planning to have at least 3
 or
 4
 Jini servers and a load balancer to spread out the load evenly.
 
 I was doing a proof of concept using the Jini infrasctrure itself.
 These
 were my thoughts:
 
 *Option 1*
 
 
 
 
 https://github.com/paawak/blog/tree/master/code/jini/unsecure/load-balancing
 
 The load balancing architecture here is very very simple. There is a
 single load balancer with its own reggie running

Re: [Vote] Namespace change from com.sun.jini and com.artima to org.apache.river

2015-06-05 Thread Gregg Wonderly
Netbeans used to start with a custom SecurityManager implementation which was 
not replaceable in an app nor the IDE itself.  I think this was changed to be 
pluggable, but I just don’t remember the details.

Gregg

 On Jun 2, 2015, at 6:14 AM, Peter j...@zeus.net.au wrote:
 
 Yes,
 
 Should run fine from within Netbeans,I haven't tried Eclipse, but it should 
 also work.  Unlike previous builds, it doesn't need cigwin and it  builds on 
 jvm's other than Sun's, such as IBM's J9.
 
 Peter.
 
 On 27/05/2015 6:09 PM, Patricia Shanahan wrote:
 I'll check out that branch and take a look. Any tips on which tool chain to 
 use to compile it on a Windows 8.1 system?
 
 On 5/26/2015 8:30 PM, Dennis Reedy wrote:
 Hi Patricia,
 
 I’ve done the work in the river/jtsk/skunk/qa-refactor-namespace branch, 
 having eyes and hands on this would be great!
 
 Thanks
 
 Dennis
 
 On May 26, 2015, at 10:25 PM, Patricia Shanahan p...@acm.org wrote:
 
 On 5/2/2015 2:16 PM, Dennis Reedy wrote:
 The vote for the namespace change from com.sun.jini and com.artima to
 org.apache.river has passed. I’ll begin this work in the
 skunk/qa_refactor branch. I might require assistance in testing, so
 volunteers would be appreciated.
 
 Is this the right time to volunteer? If so, what would you like me to do?
 
 Patricia
 
 
 



Re: Apacher River over Internet

2015-06-05 Thread Gregg Wonderly
Across the internet, you lose access to multicast-only discovery.  So, your 
client must use unicast and the service must advertise/join a unicast capable 
lookup service (Reggie).   I’ve used Jini across long distance network paths 
for quite some time, and it works just fine as long as you use unicast (direct 
IP addresses).  For access from outside an intranet to an intranet lookup service, 
and then to the discovered service, you will most likely need to take firewall 
configuration into account.  Port forwarding will be required if there are 
networking barriers such as NATing routers, etc.
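
A minimal sketch of unicast-only discovery with LookupLocatorDiscovery; the host
and port are placeholders for your publicly reachable Reggie:

import net.jini.core.discovery.LookupLocator;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupLocatorDiscovery;

public class UnicastDiscoveryExample {
    public static void main(String[] args) throws Exception {
        // Direct, unicast-only discovery of a Reggie reachable across the WAN.
        // Replace host and port with the publicly reachable (forwarded) address.
        LookupLocator locator = new LookupLocator("jini://lookup.example.com:4160");
        LookupLocatorDiscovery discovery =
                new LookupLocatorDiscovery(new LookupLocator[] { locator });
        discovery.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent e) {
                for (ServiceRegistrar reg : e.getRegistrars()) {
                    System.out.println("Found registrar: " + reg.getServiceID());
                }
            }
            public void discarded(DiscoveryEvent e) { /* registrar unreachable */ }
        });
    }
}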

Gregg

 On Jun 3, 2015, at 8:58 PM, Sergio Gomes sergio_go...@fedeltapos.com wrote:
 
 Hi, I have an client/server application using Apache River using the
 BasicJeriExporter over tcp/ip. Now I have a requirement to use it across
 the Internet (currently using local network). How could be it done? I saw
 Apache River can communicate using IIOP, would it be a good approach? Has
 someone tried to use Apache River over IIOP?
 
 Thank you.



Re: [Vote] Namespace change from com.sun.jini and com.artima to org.apache.river

2015-06-05 Thread Gregg Wonderly
Dennis is thinking of the solution I was trying to use to get things to run 
inside of netbeans, the platform, to use it to create Jini enabled clients.  
The SecurityManager issue, I think, still existed.  The problem is that I’ve 
been away from that issue for a few years now and my memory is clouded (no, I 
am not lost in cloud computing yet :-). That solved the problems with resolving 
classes with the parent classloader issue.  In netbeans, there is not a 
strictly hierarchical class loader structure.  Individual modules don’t share 
common classes and thus can’t exchange any object which is not in the platform. 
 My changes allow for resolution of classes to be customized.

But the SecurityManager implementation might still be a problem; I'm just not sure.

Gregg

 On Jun 5, 2015, at 10:06 AM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 Need to define “run within NetBeans”.  I think Gregg’s work you’re thinking 
 of was about being able to build NetBeans plugins that could access services 
 and perhaps export services.
 
 “Running” a program from within Netbeans, i.e. invoking the Ant script or 
 executing “java …” works fine as-is.  And I think Peter was talking about 
 developing with NetBeans, which also is just fine.  The only thing that is a 
 bit of a pain with either NetBeans or Eclipse on the JTSK core project is 
 getting  all the libraries added to the IDE’s class path so that type-ahead, 
 etc, work OK.  In either one, you need to edit the project’s properties.  
 It’s a little easier with a Mavenized project like the river-examples 
 projects, because the IDE picks up on the dependencies called out in the POM.
 
 Cheers,
 
 Greg Trasuk
 
 On Jun 5, 2015, at 9:45 AM, Dennis Reedy dennis.re...@gmail.com wrote:
 
 Hi Gregg,
 
 IIRC, you did some work with the RMIClassLoader that greatly improved 
 interoperability with NB. I don't recall seeing it in the qa-refactor 
 branch. I think it's really important to have that work in the next release.
 
 Any chance you can you merge it into the qa-refactor-namespace branch?
 
 Thanks
 
 Dennis
 
 Sent from my iPhone
 
 On Jun 5, 2015, at 9:04 AM, Gregg Wonderly gregg...@gmail.com wrote:
 
 Netbeans used to start with a custom SecurityManager implementation which 
 was not replaceable in an APP nor the IDE itself.  I think this was changed 
 to be pluggable, but I just don’t remember the details.
 
 Gregg
 
 On Jun 2, 2015, at 6:14 AM, Peter j...@zeus.net.au wrote:
 
 Yes,
 
 Should run fine from within Netbeans,I haven't tried Eclipse, but it 
 should also work.  Unlike previous builds, it doesn't need cigwin and it  
 builds on jvm's other than Sun's, such as IBM's J9.
 
 Peter.
 
 On 27/05/2015 6:09 PM, Patricia Shanahan wrote:
 I'll check out that branch and take a look. Any tips on which tool chain 
 to use to compile it on a Windows 8.1 system?
 
 On 5/26/2015 8:30 PM, Dennis Reedy wrote:
 Hi Patricia,
 
 I’ve done the work in the river/jtsk/skunk/qa-refactor-namespace branch, 
 having eyes and hands on this would be great!
 
 Thanks
 
 Dennis
 
 On May 26, 2015, at 10:25 PM, Patricia Shanahan p...@acm.org wrote:
 
 On 5/2/2015 2:16 PM, Dennis Reedy wrote:
 The vote for the namespace change from com.sun.jini and com.artima to
 org.apache.river has passed. I’ll begin this work in the
 skunk/qa_refactor branch. I might require assistance in testing, so
 volunteers would be appreciated.
 
 Is this the right time to volunteer? If so, what would you like me to 
 do?
 
 Patricia
 
 





Re: River-examples project - followup

2015-04-13 Thread Gregg Wonderly

 On Apr 13, 2015, at 7:40 AM, Greg Trasuk tras...@stratuscom.com wrote:
 
 Some comments intertwined….
 
 Cheers,
 
 Greg Trasuk
 
 From: gregg...@gmail.com
 Subject: Re: River-examples project - followup
 Date: Sat, 11 Apr 2015 23:50:26 -0500
 To: dev@river.apache.org
 
 
 …
 
 
 The other item would be to provide a new Jeri InvocationLayerFactory and 
 Endpoint which would allow a single port to be used for all inbound 
 services.  The basic idea is that the InvocationLayerFactory construction 
 would provide some kind of mechanism for services to be registered and 
 authentication and everything would just work for multiple services on the 
 same endpoint.
 
 
 I could be wrong, but isn’t this already possible?  You can create multiple 
 BasicJeriExporters against the same ServerEndpoint (even HttpServerEndpoint 
 if you want).   What features do you want added to the InvocationLayerFactory?

Yes, but the wiring presents too complex an initial learning curve.  Today's 
software developers are 90% self-taught, non-computer scientists.  If it 
doesn’t work the first time (well, and nearly every time), and takes more than 
15 minutes to figure out, they will move on to something else.  What we’d want to 
do is make the wiring happen with annotations or simple “routing” 
registrations, which are much easier to look at in code, and which developers 
already know how to use.

Currently, all the wiring is visible in the APIs.  It would really be nice to 
not have wiring being one of the first things you have to learn about.  We need to 
invert the learning tree so that you first learn about registering and using a 
service, and that happens with a single function call.  Next, if you need some 
variation, function, configuration different than the default, you can learn 
how to peel back a layer and configure there.

 
 Think about how HTTP routing works in most modern HTTP services.  The user 
 might use Annotations or something to mark the service entry points and we 
 would then be able to use that information to cause the 
 InvocationLayerFactory to call out to the correct class and method.
 
 We really should look at creating an HTTP endpoint with the ability to use 
 POST as an inbound invocation that would deliver a JSON message to the 
 function bound to that HTTP request.  This would allow small, lightweight 
 restful services to be created without a large complex web server 
 underneath it. 
 
 
 In that case, what interface would you publish to Reggie?

You would still publish an interface.  That is what will be called.  What the 
Endpoint and ILF need to manage is the translation from a JSON standard message 
into a native invocation path.  So, all the details of a method signature, 
parameters, and instance GUID would all need to be mapped out into a JSON 
message.  The authentication and authorization would happen through HTTP 
standards (including an SSL/TLS cert etc).

Gregg Wonderly

 
 I think that this would make Jini on Raspberry Pi particularly alluring.
 
 Gregg Wonderly
 
 
 On Apr 8, 2015, at 9:12 PM, Patricia Shanahan p...@acm.org wrote:
 
 Maybe it would be possible to put one or more of the richer functions in 
 an example? That would let us get practical experience before committing 
 to an API change.
 
 Patricia
 
 On 4/8/2015 6:47 PM, Gregg Wonderly wrote:
 I think that it could be beneficial, to provide code examples, in
 some form that do the two different things that are possible to make
 this less visible.  First, show the reader how to use an exit hook in
 the tutorial to see the service registration disappear.  Second, show
 them how to use the lease timeout value to make the change happen
 automatically for the case of a network split or network card or
 computer failure that would keep the exit hook from ever generating
 network traffic to cancel the lease.
 
 I still feel that we actually need new APIs that operate at a bit
 higher level and provide all of these things as parameters to richer
 functions.
 
 Gregg Wonderly
 
 On Apr 6, 2015, at 8:56 PM, Greg Trasuk tras...@stratuscom.com
 wrote:
 
 
 Hi all:
 
 I updated the tutorial to include the discussion below in the
 “hello-service” module.  ‘svn up’ should bring it down to your
 local machine.  I haven’t yet integrated Patricia’s formatting
 suggestions, mainly because I have to dig in to Maven’s site
 command a bit to include the correct css, but I’ll do that before
 we release.
 
 Any feedback is greatly appreciated.
 
 Cheers,
 
 Greg Trasuk
 
 On Apr 6, 2015, at 3:30 PM, Greg Trasuk tras...@stratuscom.com
 wrote:
 
 
 Hi Dan:
 
 Thanks for the great feedback.
 
 I’m pretty sure you already know this, Dan, since you’re a
 long-time Jini user, but let me explain for the newer folks and
 the archives.  This is a case where what you’re seeing is the
 expected behaviour.  When the service registers itself with
 Reggie, it takes out a lease on the registration. That lease is
 usually renewed periodically by the service’s

Re: River-examples project - followup

2015-04-11 Thread Gregg Wonderly
The important thing for me is to provide simple APIs.  The basics of what I 
am talking about are visible in my startnow project, which has been out on 
java.net for about 15 years now.  Some of that is rough and unfinished work.  
But the basics are, I think, that you should be able to start a service like:

public class MyApplication extends PersistentJiniService {
    public static void main(String[] args) {
        new MyApplication(args).startService();
    }

    public MyApplication(String[] args) {
        super(args);
    }
}

The implication is that everything is in your configuration, and the default 
configuration should be a nice set of reasonable defaults which include things 
like a default port for the service's endpoint, which we should register and 
advertise in the well-known service names list.  That would help with firewall 
rule support.  For more than one service/endpoint, we should have automatic 
recognition of more than one service being started in a JVM, and we should be 
able to provide the next port to use through a method in 
PersistentJiniService which could be overridden to return different ports.

There are lots of choices which don't add value to the new user experience.  
It would be best to make reasonable defaults for the average user who would put 
a service up for testing.

The other item would be to provide a new Jeri InvocationLayerFactory and 
Endpoint which would allow a single port to be used for all inbound services.  
The basic idea is that the InvocationLayerFactory construction would provide 
some kind of mechanism for services to be registered and authentication and 
everything would just work for multiple services on the same endpoint.

Think about how HTTP routing works in most modern HTTP services.  The user 
might use Annotations or something to mark the service entry points and we 
would then be able to use that information to cause the InvocationLayerFactory 
to call out to the correct class and method.

We really should look at creating an HTTP endpoint with the ability to use POST 
as an inbound invocation that would deliver a JSON message to the function 
bound to that HTTP request.  This would allow small, lightweight restful 
services to be created without a large complex web server underneath it. 

I think that this would make Jini on Raspberry Pi particularly alluring.
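
A rough sketch of what such annotation-driven routing might look like; the
@Route annotation and RouteTable class are hypothetical, not existing River
APIs, and a real InvocationLayerFactory would also decode the POSTed JSON body
into method arguments:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Hypothetical annotation marking a service method as an HTTP-reachable entry point.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Route {
    String value();   // URI path the method is bound to, e.g. "/orders/create"
}

// Sketch of the routing table such an InvocationLayerFactory might build.
class RouteTable {
    private final Map<String, Method> routes = new HashMap<>();
    private final Object service;

    RouteTable(Object service) {
        this.service = service;
        for (Method m : service.getClass().getMethods()) {
            Route r = m.getAnnotation(Route.class);
            if (r != null) {
                routes.put(r.value(), m);
            }
        }
    }

    Object dispatch(String path, Object[] args) throws Exception {
        Method m = routes.get(path);
        if (m == null) {
            throw new IllegalArgumentException("No route bound to " + path);
        }
        return m.invoke(service, args);
    }
}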

Gregg Wonderly


 On Apr 8, 2015, at 9:12 PM, Patricia Shanahan p...@acm.org wrote:
 
 Maybe it would be possible to put one or more of the richer functions in an 
 example? That would let us get practical experience before committing to an 
 API change.
 
 Patricia
 
 On 4/8/2015 6:47 PM, Gregg Wonderly wrote:
 I think that it could be beneficial, to provide code examples, in
 some form that do the two different things that are possible to make
 this less visible.  First, show the reader how to use an exit hook in
 the tutorial to see the service registration disappear.  Second, show
 them how to use the lease timeout value to make the change happen
 automatically for the case of a network split or network card or
 computer failure that would keep the exit hook from ever generating
 network traffic to cancel the lease.
 
 I still feel that we actually need new APIs that operate at a bit
 higher level and provide all of these things as parameters to richer
 functions.
 
 Gregg Wonderly
 
 On Apr 6, 2015, at 8:56 PM, Greg Trasuk tras...@stratuscom.com
 wrote:
 
 
 Hi all:
 
 I updated the tutorial to include the discussion below in the
 “hello-service” module.  ‘svn up’ should bring it down to your
 local machine.  I haven’t yet integrated Patricia’s formatting
 suggestions, mainly because I have to dig in to Maven’s site
 command a bit to include the correct css, but I’ll do that before
 we release.
 
 Any feedback is greatly appreciated.
 
 Cheers,
 
 Greg Trasuk
 
 On Apr 6, 2015, at 3:30 PM, Greg Trasuk tras...@stratuscom.com
 wrote:
 
 
 Hi Dan:
 
 Thanks for the great feedback.
 
 I’m pretty sure you already know this, Dan, since you’re a
 long-time Jini user, but let me explain for the newer folks and
 the archives.  This is a case where what you’re seeing is the
 expected behaviour.  When the service registers itself with
 Reggie, it takes out a lease on the registration. That lease is
 usually renewed periodically by the service’s JoinManager (that
 isn’t quite the whole story, but it’ll do for now).  When you
 kill the service unexpectedly with ctrl-c, the service doesn’t
 de-register itself, however the lease eventually runs out (now
 that it’s not being renewed by the service) and then the
 registration expires, allowing Reggie to reclaim its resources
 and notify any registrar listeners.
 
 It would be possible to register a vm shutdown hook to
 de-register the service before the vm exits, but in this case I
 think it’s actually better to leave it out, since it demonstrates
 nicely that a dead  service (or at least a dead JoinManager)
 eventually gets

Re: River-examples project - followup

2015-04-08 Thread Gregg Wonderly
I think it could be beneficial to provide code examples, in some form, 
that do the two different things that are possible to make this less visible.  
First, show the reader how to use an exit hook in the tutorial to see the 
service registration disappear.  Second, show them how to use the lease timeout 
value to make the change happen automatically for the case of a network split 
or network card or computer failure that would keep the exit hook from ever 
generating network traffic to cancel the lease.
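
A minimal sketch of the exit-hook half of that, assuming the service keeps a
reference to its JoinManager; the lease-timeout case needs no code at all,
since the registration simply expires once renewals stop:

import net.jini.lookup.JoinManager;

class ShutdownHookExample {
    // Register a JVM exit hook that de-registers the service from lookup services.
    // If the process dies without running the hook (crash, network split), the
    // lease just runs out later and Reggie drops the registration on its own.
    static void installExitHook(final JoinManager joinManager) {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                joinManager.terminate();   // cancels leases; registration vanishes at once
            }
        }));
    }
}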

I still feel that we actually need new APIs that operate at a bit higher level 
and provide all of these things as parameters to richer functions.

Gregg Wonderly

 On Apr 6, 2015, at 8:56 PM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 Hi all:
 
 I updated the tutorial to include the discussion below in the “hello-service” 
 module.  ‘svn up’ should bring it down to your local machine.  I haven’t yet 
 integrated Patricia’s formatting suggestions, mainly because I have to dig in 
 to Maven’s site command a bit to include the correct css, but I’ll do that 
 before we release.
 
 Any feedback is greatly appreciated.
 
 Cheers,
 
 Greg Trasuk
 
 On Apr 6, 2015, at 3:30 PM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 Hi Dan:
 
 Thanks for the great feedback.  
 
 I’m pretty sure you already know this, Dan, since you’re a long-time Jini 
 user, but let me explain for the newer folks and the archives.  This is a 
 case where what you’re seeing is the expected behaviour.  When the service 
 registers itself with Reggie, it takes out a lease on the registration. That 
 lease is usually renewed periodically by the service’s JoinManager (that 
 isn’t quite the whole story, but it’ll do for now).  When you kill the 
 service unexpectedly with ctrl-c, the service doesn’t de-register itself, 
 however the lease eventually runs out (now that it’s not being renewed by 
 the service) and then the registration expires, allowing Reggie to reclaim 
 its resources and notify any registrar listeners. 
 
 It would be possible to register a vm shutdown hook to de-register the 
 service before the vm exits, but in this case I think it’s actually better 
 to leave it out, since it demonstrates nicely that a dead  service (or at 
 least a dead JoinManager) eventually gets dropped from the registrar.
 
 You said the duplicate service instances “worked”, in that you can show info 
 and browse the service, but of course, you’re really just looking at the 
 information that’s in the registry - the registrar and service browser don’t 
 actually contact the service.  Reggie has no knowledge of the “liveness” of 
 the service, and doesn’t attempt to do any “health check”.  
 
 In fact, it’s a common misconception that if the service renews the lease, 
 it must be “live”.  This turns out to be false for many reasons.  (1) The 
 service could have delegated its lease renewals to a different service.  (2) 
 There’s no guarantee that failure of the actual service thread would also 
 cause failure of the lease renewal thread, even if they are in the same 
 process (embedded programmers might recognize this as being similar to the 
 “resetting the watchdog in a timer-triggered interrupt service routine” 
 problem).  (3) Even if there were a health check task, the service could 
 fail in the instant just after the health check.  The most a health check, 
 monitor or heartbeat can do is place a limit on how long it takes to find 
 out a service has failed.  The only way to say with certainty that a service 
 “works” is to attempt to use it.
 
 The lease is purely for the convenience of the registrar (or generically, 
 the service granting the lease).  If ever the lease is not renewed, the 
 landlord can go ahead and reclaim whatever resources were dedicated to the 
 lease.  In the case of Reggie, if the lease isn’t renewed, Reggie drops the 
 registration.  So there’s little risk of “stuck registrations”.  And since 
 the lease can be renewed, there’s no need for any kind of extended default 
 timeout.
 
 So, I think I’ll put most of the above explanation into the tutorial, unless 
 anyone has other thoughts.
 
 Cheers,
 
 Greg Trasuk
 
 On Apr 6, 2015, at 1:42 PM, Dan Rollo danro...@gmail.com wrote:
 
 Hi Greg,
 
 I finally took some time to try this out. It really looks great to me!
 
 I noticed one minor thing that I thought might confuse users: While going 
 through tutorial steps, I decided to stop (via ctrl+c) and restart the 
 hello-service a couple times. This resulted in the service being shown 
 multiple times in the service browser (screenshot attached). It appeared 
 all the duplicate instances in the browser “worked” (I could “show info” 
 and “browse service” on all of them). Eventually, the duplicate 
 registrations “cleaned up” and I was left with just one. I’m not sure how 
 best to avoid confusion about this situation. Would more doc about 
 “why”/“how” that works just complicate things? Is there any sort

Re: Security

2015-02-24 Thread Gregg Wonderly
I think the next big thing is going to be HIP networks where Jini could excel 
as a communications platform via service discovery and the other parts of the 
platform that make it fast and easy to put together remote communications.

Gregg

 On Feb 21, 2015, at 9:22 PM, Peter j...@zeus.net.au wrote:
 
 - Original message -
 
 Yes, “accidental” DOS certainly could apply, which is why I say that
 simple measures (like limiting the number of bytes that
 PreferredClassLoader will download before giving up) are a good idea. 
 But I think that any radical re-imagining of object serialization is
 outside the scope of the River project.
 
 Ok, I'll bite, the work I've done doesn't fit into the radical re-imagining 
 category by any stretch, it uses the existing ObjectInputStream public api 
 and the public serial form of existing objects.  It does however allow people 
 to implement an additional constructor by declaring an annotation, so they 
 can check invariants.  These invariant checks won't be performed by the 
 standard ObjectInputStream, but the classes are compatible with either.
 
 My implementation also significantly outperforms java's standard 
 ObjectInputStream, reflectively calling one constructor is more performant 
 than reflectively setting every field in each class of an Object's hierarchy.
 
 I've decided I'll work on this on github, where interested parties can 
 participate if they want.
 
 
 
 
 Cheers,
 
 Greg Trasuk
 
 On Feb 19, 2015, at 11:39 AM, Patricia Shanahan p...@acm.org wrote:
 
 I generally agree, but do have a question.
 
 In other contexts, I've seen unintentional bugs, rather than
 deliberate DOS, lead to behavior similar to DOS. A program goes wrong,
 and tries to e.g. allocate far too much memory, or goes into a loop.
 In contexts where that can happen, work to protect against DOS also
 makes the software more robust.
 
 In shared service situations, an apparently non-critical program can
 cause a DOS that also affects more important programs. Either all
 programs have to be designed, reviewed, and tested to the reliability
 requirements of the most sensitive program with which they share
 resources, or there has to be isolation between them.
 
 Does this sort of consideration apply in reality to River?
 
 On 2/19/2015 6:58 AM, Greg Trasuk wrote:
 
 The type of issues you’re talking about seem to be centred on putting
 Jini services on the open internet, and allowing untrusted, unknown
 clients to access those services safely.
 
 Personally, my interest is more along the lines of Jini’s original
 goal, which was LAN-scoped or datacenter-scoped SOA.   Further, I   use
 it on more controlled networks.   As far as I’m concerned, only code
 that I trust gets on the network.   In a larger corporate scenario, I
 might lock down access to Reggie, but beyond that, I don’t consider
 DOS a threat.   I think it would make sense to be able to put a byte
 limit on the stream used to load the class, and possibly a time
 limit, but beyond that, I think you’re adding complexity that isn’t
 needed.   If you want to put a service on the web, use RESTful
 services, not Jini.   I’m sure there’s a discoverability tool out
 there, if needed, but typically it isn’t.
 
 Also, since object serialization is not specific to River, I wonder
 if there’s a better forum for these kinds of deep discussions.   I
 think it makes River look far harder than it is.
 
 Cheers,
 
 Greg Trasuk.
 
 On Feb 19, 2015, at 9:03 AM, Peter j...@zeus.net.au wrote:
 
 What are your thoughts on security?
 
 Is it important to you?   Is it important for River?
 
 Regards,
 
 Peter.
 
 
 



Re: Requesting for error tracking for openmeetings in Xulrunner.

2014-11-17 Thread Gregg Wonderly
I do not see any river code involved here.  Are you posting to the correct 
mailing list?

Gregg Wonderly

 On Nov 13, 2014, at 1:08 AM, amit batajoo batajooseam...@gmail.com wrote:
 
 Dear Developer Team,
 
 I am Amit Batajoo, research student at Wakkanai University, Hokaido Japan.
 I have installed latest version of openmeetings in my Debian server at my
 university and running in xulrunner browser. during compilation of the
 project i faced the following error.Can you suggest me the solution for the
 following error.
 
 XULRunner home:
 F:\wakkanai-project-main\jyaguchi2012\jyaguchi-client\lib\xulrunner
 Profile directory:
 C:\Users\Amit\AppData\Local\Temp\swing-mozilla8366514100773041324
 Platform: Win32
 Java: 17.0-b17, Sun Microsystems Inc.
 
 --
 
 org.mozilla.browser.MozillaException: org.mozilla.xpcom.XPCOMException:
 Failed to register JavaXPCOM methods  (0x80460003)
at org.mozilla.browser.MozillaExecutor.mozInit(MozillaExecutor.java:220)
at
 org.mozilla.browser.MozillaInitialization.initialize(MozillaInitialization.java:143)
at org.mozilla.browser.MozillaPanel.init(MozillaPanel.java:147)
at org.mozilla.browser.MozillaPanel.init(MozillaPanel.java:116)
at
 client.TabbedPaneWebBrowser.createNewTab(TabbedPaneWebBrowser.java:87)
at client.TabbedPaneWebBrowser.init(TabbedPaneWebBrowser.java:16)
at client.ClientDisplayFrame.init(ClientDisplayFrame.java:21)
at
 client.JyaGuchiJDesktopPaneNew.init(JyaGuchiJDesktopPaneNew.java:126)
at
 client.JyaGuchiJDesktopPaneNew$6.run(JyaGuchiJDesktopPaneNew.java:1132)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
at
 java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
at
 java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
at
 java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
 Caused by: org.mozilla.xpcom.XPCOMException: Failed to register JavaXPCOM
 methods  (0x80460003)
at
 org.mozilla.xpcom.internal.JavaXPCOMMethods.registerJavaXPCOMMethodsNative(Native
 Method)
at
 org.mozilla.xpcom.internal.JavaXPCOMMethods.registerJavaXPCOMMethods(JavaXPCOMMethods.java:60)
at
 org.mozilla.xpcom.internal.MozillaImpl.initialize(MozillaImpl.java:48)
at org.mozilla.xpcom.Mozilla.initialize(Mozilla.java:668)
at
 org.mozilla.browser.MozillaInitialization$2.run(MozillaInitialization.java:155)
at org.mozilla.browser.MozillaExecutor$1.run(MozillaExecutor.java:191)
 
 Please find the solution for my problem.
 
 Thank you in Advance.
 
 Regards.
 Thank you
 -- 
 Amit Batajoo
 Researcher
 Wakkanai Hokusei Gakuen University
 Wakkanai, Hokkaido, Japan
 ---
 Skype : abatajoo7
 Mobile:+977-9814156363
 Telephone:+977-014428090
 E-Mail : abata...@yagiten.com,batajooseamu...@gmail.com



Re: My apologies for file replacement with Netbeans and SVN commit.

2014-10-26 Thread Gregg Wonderly
Most likely, you used the default settings in the NetBeans editor configuration, 
which I believe replace all tabs with spaces.  A sad default…

Gregg Wonderly

 On Oct 26, 2014, at 8:27 AM, Peter Firmstone j...@zeus.net.au wrote:
 
 I've just noticed that in my last svn commit, made using NetBeans on Windows, 
 entire files were replaced, even when only very minor changes were made (one 
 line).  I'm not sure if this is something to do with Windows text files or a 
 setting somewhere.
 
 This has occurred on at least one other previous occasion.
 
 I'm going to stop developing on River until I resolve this issue or set up a 
 Unix computer for development.
 
 Regards,
 
 Peter.



Re: SerialReflectionFactory - got a better name?

2014-06-30 Thread Gregg Wonderly
So, maybe transportable or transported or forwarded…

Gregg

On Jun 29, 2014, at 4:41 AM, Peter Firmstone j...@zeus.net.au wrote:

 Hi Gregg,
 
 Thinking out loud:
 
 Transferable, I think it's close; it works for TransferableObjectFactory, 
 it's created on demand to transfer a non-serializable object via a 
 serialization stream, and it transfers a factory from one JVM to another, in 
 order to recreate the original object in another JVM.
 
 As for Distributed, transfer would imply the original object is moved from 
 one place to another (deleted and recreated), while that could occur, it's 
 also possible for the originating object to be duplicated as well, so it 
 works for what's currently called SerialReflectionFactory, but Transferable 
 wouldn't be a candidate for the Distributed interface.
 
 If the original object has state, then it would be a snapshot of the original 
 object at some point in time, hence memento.
 
 It could be called TransferableMemento or SerializableMemento, it's created 
 within a ObjectOutputStream and replaced in an ObjectInputStream.
 
 Then that leaves Distributed, perhaps Distributable is better?  Other words 
 are Propagatable, Disseminatable.  Latin: disseminare scattering seeds
 
 interface Distributable {
SerializableMemento distribute();
 }
 
 Dissemino - dis (in all directions) + semino (I plant, I sow).
 
 Cheers,
 
 Peter.
 
 On 29/06/2014 1:32 PM, Gregg Wonderly wrote:
 TransferableObjectFactory?
 
 Gregg
 
 Sent from my iPhone
 
 On Jun 23, 2014, at 7:14 AM, Stefano Marianis.mari...@unibo.it  wrote:
 
 
 Il giorno 23/giu/2014, alle ore 13:24, Peter 
 Firmstonej...@zeus.net.aumailto:j...@zeus.net.au  ha scritto:
 
 recreate themselves remotely
 
 Why not
 
  *   RemoteRecreationFactory
  *   DistributedCloningFactory
  *   or a combination of the above?
 
 This way the name is after the goal of the class, not its implementation...
 
 
 Stefano Mariani
 PhD student @ DISI - Alma Mater Studiorum, Bologna
 s.mari...@unibo.itmailto:s.mari...@unibo.it
 stefanomariani.apice.unibo.ithttp://apice.unibo.it/xwiki/bin/view/StefanoMariani/
 
 
 



Re: SerialReflectionFactory - got a better name?

2014-06-28 Thread Gregg Wonderly
TransferableObjectFactory?

Gregg

Sent from my iPhone

 On Jun 23, 2014, at 7:14 AM, Stefano Mariani s.mari...@unibo.it wrote:
 
 
 Il giorno 23/giu/2014, alle ore 13:24, Peter Firmstone 
 j...@zeus.net.aumailto:j...@zeus.net.au ha scritto:
 
 recreate themselves remotely
 
 Why not
 
  *   RemoteRecreationFactory
  *   DistributedCloningFactory
  *   or a combination of the above?
 
 This way the name is after the goal of the class, not its implementation...
 
 
 Stefano Mariani
 PhD student @ DISI - Alma Mater Studiorum, Bologna
 s.mari...@unibo.itmailto:s.mari...@unibo.it
 stefanomariani.apice.unibo.ithttp://apice.unibo.it/xwiki/bin/view/StefanoMariani/
 
 


Re: Interesting test failure

2014-06-12 Thread Gregg Wonderly
So something somewhere is writing to stdout.  I would suggest making the test 
suite redirect go through a pair of InputStream/OutputStream proxies which 
would hex dump all data to private files for subsequent review.
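
A minimal sketch of such an OutputStream proxy (the InputStream side would
mirror it); the class and dump-file names are illustrative:

import java.io.FileOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Passes bytes through unchanged while hex-dumping them to a side file.
// FilterOutputStream routes the array-based write() calls through write(int),
// so overriding the single-byte method captures everything (slowly, but this
// is for debugging a test harness).
class HexDumpOutputStream extends FilterOutputStream {
    private final OutputStream dump;

    HexDumpOutputStream(OutputStream real, String dumpFile) throws IOException {
        super(real);
        this.dump = new FileOutputStream(dumpFile);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        dump.write(String.format("%02x ", b & 0xff).getBytes("US-ASCII"));
    }

    @Override
    public void close() throws IOException {
        super.close();
        dump.close();
    }
}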

Gregg Wonderly

On Jun 12, 2014, at 5:50 AM, Peter Firmstone j...@zeus.net.au wrote:

 The usual suspects for build failures on Jenkins are ClassDep and ports in 
 use.  Haven't seen any concurrency failures lately.
 
 However I came accross an interesting test failure recently:
 
 https://builds.apache.org/view/M-R/view/River/job/river-PolicySecurityLoaderUrlTests/10/
 
 This error is produced identically by 5 tests in this particular test run, 
 but I have not been able reproduce it:
 
 [java] -
 [java] com/sun/jini/test/impl/start/ActivateWrapperRegisterBadImplClass.td
 [java] Test Failed: Test Failed: com.sun.jini.qa.harness.TestException: 
 Unexpected Exception; nested exception is:
 [java]Problem creating service for net.jini.event.EventMailbox; 
 nested exception is:
 [java]Failed to start the shared nonactivatable group; nested 
 exception is:
 [java]NonActivatableGroupAdmin: Failed to exec the group; nested 
 exception is:
 [java]invalid stream header: 4572726F
 [java]
 [java] -
 
 
 The test suite redirects the System.in System.out streams to communicate with 
 a subprocess, where the exception occurs.
 
 Interestingly the invalid stream header, when converted from hex to ascii 
 reads:
 
 Erro
 
 Regards,
 
 Peter.



Re: River/Jini with WebSocket

2014-05-28 Thread Gregg Wonderly
I think of this working at the endpoint level.  An HTTP endpoint, capable of 
consuming any of the appropriate web serializations (through a parameterized 
CODEC passed to a constructor), that would then devise a Method object to 
invoke would do the trick.

The EndPoint would know about Annotations that would provide mappings between 
URIs and Methods, using a pluggable web technology based factory.

This would then allow Jini services to become Web services seamlessly.

Gregg

Sent from my iPhone

 On May 28, 2014, at 4:21 AM, Bishnu Gautam bishn...@hotmail.com wrote:
 
 
 Thanks Dawid for your experience. It sounds pretty interesting. Definitely, 
 it would be great to see your solution regarding Web Socket or Web-Service. I 
  think if we are able to expose Jini services through WebSocket, Jini/River 
  can bring fresh momentum to the distributed object field. Please keep it up and 
  let's share the experience and also the source code.
  Regards, Bishnu
 
 Bishnu Prasad Gautam
 
 
 Date: Wed, 28 May 2014 10:27:31 +0200
 From: da...@travellinck.com
 To: dev@river.apache.org
 Subject: Re: River/Jini with WebSocket
 
 I've gone 'half way' - I've created a service that tracks and publishes
 Jini services as web services - either as SOAP using JAX-WS, or as a
 RESTful resource using JAX-RS (or both at that same time). I did this
 using an embedded Grizzly HTTP container, and it's all built in Dennis
 Reedy's Rio framework.
 
 As both of these technologies are annotations-driven, I had quite a
 tough time with this. The thing to ultimately expose as a service ends
 up being a generated smart proxy (i.e. a Rio proxy), and this has lost
 all the annotations. I had to use BCEL to generate, on the fly, a new
 proxy on top of this smart proxy, one which has the annotations from the
 original service interface applied to it, so that standard frameworks
 can expose it as a valid web service. This generated proxy also needed
 to properly track the service disappearing and re-appearing, so that it
 can publish/unpublish the service.
 
 I imagine it won't be extremely different to expose a service as a web
 socket channel - one would just have to figure out what you want this
 channel to map to, i.e. requests/responses from some Jini service, or
 Jini events, or whatever.
 
 I have not been in a position to open-source my Jini web container just
 yet (even though I wrote it almost two years ago, gulp!). When I am in a
 position to do so, I'll definitely share it on this list.
 
  Good luck! In the world of today, Jini really needs seamless web
 interoperability - both to consume and expose services.
 
 Dawid Loubser
 
 
 
 On 28/05/2014 07:51, Bishnu Gautam wrote:
 Hi all
  Has anyone tried to integrate a River application with a WebSocket 
  application? If anyone has experience, could you share it? That would be a 
  great help.
  Regards, Bishnu
 


Re: New Chair for Apache River PMC

2014-05-15 Thread Gregg Wonderly
There is little total value in doing anything to the non qa_refactor code.  It  
is what it is, and should just be left as it is for use of anyone who might 
still want to use it for some odd reason.

It would be best to just spend the effort on making qa_refactor into the 3.0.0 
release.

Peter has stated that there are no compatibility issues with the code in 
qa_refactor.   He understands the API/source as well as the binary compatibility 
issue, and the binary compatibility is the larger issue for “evaluating” this 
codebase in any existing environment.

The suggestion to use a compatibility layer for the rename takes care of that 
issue.

That just leaves it up to the community to jump in and do testing.

I would strongly suggest that community testing should operate off of the 
existing testing infrastructure, and we would want to extend those tests with 
anything that we believe our own software might depend on working appropriately 
so that others can help with that testing, as well as having such dependencies 
documented in the test suite that will help keep that compatibility in place 
for the future.

Gregg

On May 14, 2014, at 9:04 AM, Rafał Krupiński rafal.krupin...@sorcersoft.com 
wrote:

 On Wednesday, 14 May 2014 at 06:26:26, Bryan Thompson wrote:
 What is the argument for pushing out the qa_refactor based release?  Do you
 believe that it is not ready to evaluate in production systems?  Or do you
 believe that the rename is more important?  If so, why?  Just curious about
 people's perspectives here.
 
 These changes (build and rename) can be made to current branch. It will be 
 easier to find any problems with the migration, if any.
 
 Regards
 Rafał



Re: New Chair for Apache River PMC

2014-05-13 Thread Gregg Wonderly
We might want to separate the two paths from a release perspective.

The namespace changes should happen on a major numbered release.  The build 
change might be better targeted at 3.1?

Just my thoughts on making things happen sooner with smaller overall number of 
issues that might then occur and need fixes.  

If 3.0 is too volatile, that won't be good!

Gregg

Sent from my iPhone

 On May 13, 2014, at 6:10 AM, Dennis Reedy dennis.re...@gmail.com wrote:
 
 I think also need to decide if qa_refactor does become defacto 3.0, do we
 do the following:
 
 Change the com.sun.jini namespace to org.apache.river
 Change the com.artima namespace to org.apache.river
 Move to a Maven project and decide on module group and artifact ids
 
 Regards
 
 Dennis
 
 
 On Tue, May 13, 2014 at 6:26 AM, Bryan Thompson br...@systap.com wrote:
 
 Why don't we do a pre-release from this branch?  Does apache support this
 concept?  Give it some time in the wild to shake down the bugs?
 
 If not. Let's just release it and document that there is a lot of churn.
 Give it a 3.0 designation and be prepared to release a series of updates
 as bugs are identified.  The key would be API stability so people could try
 it and roll back as necessary for production deployments onto a known good
 code base.
 
 Bryan
 
 On May 13, 2014, at 3:18 AM, Peter Firmstone j...@zeus.net.au wrote:
 
 On 13/05/2014 9:59 AM, Dennis Reedy wrote:
 Apologies for not chiming in earlier, I've been running around with my
  hair
 on fire for the past couple of weeks. As to whether River is dead, I
 don't
 think it is, maybe mostly dead (in which case a visit to Miracle Max
 may be
 in order). I think River is static, but not dead. The technology is so
 worth at least maintaining, fixing bugs and continued care and feeding.
 
 The issue to me is that the project has no direction, and River has no
 community that participates and makes decisions as a community. There
 has
 been tons of work in qa_refactor, is that the future for River? Or is
 it a
 fork?
 
 There are developers who are concerned about the number of fixes made in
 qa-refactor, but no one yet has identified an issue I haven't been able to
 fix very quickly.  In any case the public api and serial form is backward
 compatible.
 
 I encourage the community to test it, find out for themselves and report
 any issues.
 
 Regards
 
 Dennis
 
 
 On Mon, May 12, 2014 at 9:59 AM, Greg Trasuktras...@stratuscom.com
 wrote:
 
 On May 11, 2014, at 12:30 AM, Peterj...@zeus.net.au  wrote:
 
 
 Ultimately, if community involvement continues to decline, we may have
 to send River to the attic.
 Distributed computing is difficult and we often bump into the
 shortcomings of the java platform, I think these difficulties are why
 developers have trouble agreeing on solutions.
 But I think more importantly we need increased user involvement.
 
 Is there any advise or resources we can draw on from other Apache
 projects?
 It may be, ultimately, that the community has failed and River is
 headed
 to the Attic.  The usual question is “Can the project round up the 3
 ‘+1’
 votes required to make an Apache release?”  Historically, we have been
 able
 to do that, at least for maintenance releases, and I don’t see that
 changing, at least for a while.
 
 The problem is future development and the ongoing health of the
 project.
 On this point, we don’t seem to have consensus on where we want the
 project to go, and there’s limited enthusiasm for user-focused
 requirements.  Also, my calls to discuss the health of the project
 have had
 no response (well, there was a tangent about the build system, but
 personally I think that misses the point).
 
 I will include in the board report the fact that no-one has expressed
 an
 interest in taking over as PMC chair, and ask if there are any other
 expert
 resources that can help.
 
 Cheers,
 
 Greg Trasuk.
 
 


Re: Decision process for a Modular build tool

2014-04-30 Thread Gregg Wonderly

On Apr 23, 2014, at 8:01 AM, Peter j...@zeus.net.au wrote:

 - Original message -
 
 On Apr 22, 2014, at 7:47 PM, Peter Firmstone
 peter.firmst...@zeus.net.au wrote:
 
 
 
 From: Peter Firmstone peter.firmst...@zeus.net.au
 Subject: Decision process for a Modular build tool
 Date: April 22, 2014 at 7:40:29 PM EDT
 To: d...@apache.river.org
 Reply-To: Peter Firmstone peter.firmst...@zeus.net.au
 
 
 I started qa-refactor with the intent of fixing latent bugs, an
 unintentional benefit is significantly reduced processing times,
 contention and increased scalability. 
 
 Changes in timing exposed more bugs. 
 
  Up until recently an occasional build failure would be experienced
 due to classdep only partially writing a dep file, resulting in
 ClassNotFoundException during testing. Knowing that
 RFC3986URLClassLoader is much faster resolving classes than
 URLClassLoader, I thought, I'd try using it in ClassDep. 
 
  Guess what the result was? That's right, lots more
  ClassNotFoundExceptions 
 
 
 
 That seems kind of odd.   Since ClassDep is single-threaded (it’s
 basically a command line utility after all), how would faster class path
 resolution have any impact on the output file?  
 
 Ok, fair call, ClassDep has a bug, I'm not sure of the exact cause. 

I would suggest just adding a call to System.out.flush() at the end of main() 
and even System.out.close() just because you may be using a broken library that 
is not successfully flushing and closing.  Look at the size of the files that 
are output.  Are they multiples of some power of 2 that would be like a block 
write size?  That would indicate that blocks are being written as each block's 
worth of output is created.
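
A sketch of what the suggested ending of main() would look like; this is not
the actual ClassDep source, just the shape of the change:

public class ClassDepFlushSketch {
    public static void main(String[] args) {
        // ... existing ClassDep processing that prints the dependency list ...

        // Suggested ending: force any buffered output to the redirected file
        // before the JVM exits, in case a library left System.out unflushed.
        System.out.flush();
        System.out.close();
    }
}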

Gregg Wonderly



Re: RemoteEvent specification - proposal

2014-04-18 Thread Gregg Wonderly
The simple programming mechanism I use for unordered but inclusive events is a 
map or set.  I use a map for what is expected and a map for what has happened.  
I use a thread responding to notifications to fill in the results data and then 
either call out or notify another thread of the new results.  It’s that final 
code activity that checks for “do I have everything I need to do more work?”  
It will then react when that moment occurs, retry, re-dispatch or whatever the 
appropriate action is.  That way, everything is separated and still involves 
testable behaviors.
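
A minimal sketch of that bookkeeping, assuming events are keyed by their
sequence numbers; the class and callback names are illustrative:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import net.jini.core.event.RemoteEvent;

// Collects events in any order and fires a callback once all expected ones arrive.
class EventCollector {
    private final Set<Long> expected = ConcurrentHashMap.newKeySet();
    private final Set<Long> received = ConcurrentHashMap.newKeySet();
    private final Runnable onComplete;

    EventCollector(Set<Long> expectedSequenceNumbers, Runnable onComplete) {
        this.expected.addAll(expectedSequenceNumbers);
        this.onComplete = onComplete;
    }

    // Called from the notification thread; arrival order does not matter.
    void record(RemoteEvent ev) {
        received.add(ev.getSequenceNumber());
        if (received.containsAll(expected)) {
            onComplete.run();   // everything needed has arrived; react now
        }
    }
}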

Gregg Wonderly

On Apr 17, 2014, at 7:56 PM, Peter j...@zeus.net.au wrote:

 Thanks Greg, I agree, remote events have an event id and sequence number, so 
 it's very easy for clients to order them if necessary.
 
 I think for the test I'll create a simple comparator that orders the events 
 at the client.
 
 The test only needs to ensure that all expected events are received and 
 provide sufficient information allowing them to be correctly ordered.
 
 Regards,
 
 Peter.
 
 - Original message -
 
 Hi Peter:
 
 You should probably create a JIRA enhancement ticket to track discussion
 if you’re picturing adding some kind of order-guaranteeing comparator to
 the API.   But I don’t think you really need to do that, because the
 usage would be so dependent on the client’s architecture that it
 probably isn’t sensible to put it in the API.
 
 On the actual question, I’d suggest that Reggie should make no
 guarantees on the order of event delivery (as per the event spec).   That
 being the case, imposing some kind of order is a client problem, not
 Reggie’s.   I would suggest modifying the test simply to ensure that all
 the expected events have been received in the required time, regardless
 of the order.   Perhaps also add some clarification to the service
 registrar spec.
 
 Cheers,
 
 Greg Trasuk.
 
 On Apr 17, 2014, at 7:30 AM, Peter j...@zeus.net.au wrote:
 
 
 
 From: Peter j...@zeus.net.au
 Subject: RemoteEvent specification - proposal
 Date: April 17, 2014 at 7:28:13 AM EDT
 To: d...@apache.river.org
 Reply-To: Peter j...@zeus.net.au
 
 
 The Jini Remote Event specification clearly states that remote events
 may arrive out of order, yet some lookup tck tests in the qa test
 suite require events to arrive in order. 
 
 Presently I have an Executor in Reggie, used specifically for sending
 event notifications, however it is single threaded, to ensure events
 arrive in an order identical to client registration, to avoid qa test
 failures. 
 
 I propose creating a comparator clients can use to order events as
 they arrive. This will allow qa tests, when utilising this comparator,
 to pass when Reggie is configured to use a multi threaded event
 notifier executor. This would increase Reggie's scalability for event
 notifications. 
 
 Thoughts? 
 
 Regards, 
 
 Peter. 
 
 
 
 
 
 



Re: Health of the Apache River Project

2014-04-12 Thread Gregg Wonderly

On Apr 10, 2014, at 4:12 PM, Rafał Krupiński rafal.krupin...@sorcersoft.com 
wrote:

 On Thursday, 2014-04-10, at 14:40 -0500, Gregg Wonderly wrote:
 
 Maybe you can explain at this point.  Is the problem that  you can’t build, 
 at all, to test your changes?  Is this because you don’t have ANT?
 
 Are you, by any chance, being sarcastic?

I am not trying to be sarcastic, I am trying to find out what keeps you from 
editing River files to make changes you want, testing those changes and 
submitting them.  One of the chief things for me, is the question of how does a 
“build” system cause a “source tree” to be uneditable, untestable etc.

There are very few things about River’s “jars” that are cast in stone.  You can 
pretty much create two jars, one from services and one from proxies and be 
done.  The multiple *-dl.jar files are a separation of “function” that might 
reduce overall downloads, but in the end, the overhead of multiple http 
transactions is probably much larger than the overhead of downloading all the 
proxies in a single jar.

I know that it is “work” to create a build system if you need something 
different than what something comes distributed with.  But, that’s the question 
here.  If the build system is keeping people from contributing, then why isn’t 
there a “github” distribution of the appropriate tooling that everyone can try 
and see how much better it is?

I am not trying to push back and be sarcastic.  I am trying to be serious about 
what the real problem is.

I am one of those developers who will only waste/spend so much time fighting 
with something before I either throw it away, or roll my own so that I know 
what is going on and don’t waste my time with lack of transparency that keeps 
me from understanding how my software is actually working.

 
  It seems it’s because  you don’t know how to use the ANT build system, 
 which I can understand.  But also, you need to understand that there are 
 people who have no idea how to use Maven either.
 
 So, overall, how can we simplify things if there are always new and 
 different build tools/standards that some people know and others don’t?
 
 Is learning a new tool the problem or how would the migration address
 the problem of lack of contributions and new committers?

Learning a new tool is the question at hand.  If you find Maven to be your 
build tool of choice, then I think you’ve already decided to learn a new tool.  
If you don’t know how to use ant to build with the build mechanism that exists, 
then that creates a problem.  If that is THE PROBLEM for everyone, then that’s 
what we need to understand and remedy it would seem.  

Gregg

 Regards,
 Rafał
 



Re: Health of the Apache River Project

2014-04-10 Thread Gregg Wonderly

On Apr 10, 2014, at 2:35 PM, Rafał Krupiński rafal.krupin...@sorcersoft.com 
wrote:

 On Thursday, 2014-04-10, at 14:40 -0400, Greg Trasuk wrote:
 Hi Rafal:
 
 
 On Apr 10, 2014, at 2:15 PM, Rafał Krupiński 
 rafal.krupin...@sorcersoft.com wrote:
 
 
 I think you missed the point.
 
 
 Could be.  I guess the question is, what are you wanting to contribute?  If 
 you’re going to debug or modify current code, then yes, the build system is 
 an obstacle that you need to overcome.  In which case, maybe changing parts 
 of it could be a great first contribution.  I’m just saying that’s going to 
 be a pretty big job, no matter who does it.
 
 If you want patches and committers it shouldn't be a problem to change a
 few lines, or even half a class in the core River. But it's not, so you
 get no patches nor new committers.

Maybe you can explain at this point.  Is the problem that  you can’t build, at 
all, to test your changes?  Is this because you don’t have ANT?  It seems it’s 
because  you don’t know how to use the ANT build system, which I can 
understand.  But also, you need to understand that there are people who have no 
idea how to use Maven either.

So, overall, how can we simplify things if there are always new and different 
build tools/standards that some people know and others don’t?

Gregg


  And it’s going to be a contentious subject (as it always has been in the 
 past), because every developer has their favourite build system.
 
 It's not the issue here.
 
 (...)
 Don’t get me wrong - I’m not defending the current project structure.
 
 Then I guess I don't understand what are you doing.
 
 Regards,
 Rafał
 



Re: Health of the Apache River Project

2014-04-10 Thread Gregg Wonderly
I’d like to understand what the issue is with building using ANT?

Is it that you can’t “build and run” in your IDE that supports Maven?  What is 
the real issue here?

Years ago, I altered the ant build, slightly, to work in my local environment 
due to some path/tool location issues as I recall.  But, since that time, I’ve 
just been able to build using ANT.   It runs for a while if there are lots of 
changes, but not so long for a small number of changes.

How will moving to Maven “reduce” the build time, or uncomplicate the build 
process if the same artifacts come out of the build that are built at this time?

I’m trying to understand the overall issue, not be harsh or pushing back on 
change.

Gregg Wonderly

On Apr 10, 2014, at 1:40 PM, Greg Trasuk tras...@stratuscom.com wrote:

 
 Hi Rafal:
 
 
 On Apr 10, 2014, at 2:15 PM, Rafał Krupiński rafal.krupin...@sorcersoft.com 
 wrote:
 
 
 I think you missed the point.
 
 
 Could be.  I guess the question is, what are you wanting to contribute?  If 
 you’re going to debug or modify current code, then yes, the build system is 
 an obstacle that you need to overcome.  In which case, maybe changing parts 
 of it could be a great first contribution.  I’m just saying that’s going to 
 be a pretty big job, no matter who does it.  And it’s going to be a 
 contentious subject (as it always has been in the past), because every 
 developer has their favourite build system.
 
 On the other hand, if you’re looking at contributing something that will end 
 up being in a different jar file (like I think you mentioned downloadable 
 URLStream handlers), then ignore the current build system and create a new 
 module with whatever build system and integration tests you like.  We’ll 
 create a new git repository for it, and release it as a separate module that 
 a River user could add to their class path.  It’s still a part of the River 
 project, and users of River will greatly appreciate it.
 
 
 I'm already a user, and I'm perfectly happy with the current build
 system. In fact I couldn't care less about the build system.
 Provided I remain a user.
 
 But becoming a contributor, or even a committer is entirely another
 matter. I don't understand the project structure and I don't want to
 touch those ant scripts, especially classanddepjar task, with a stick,
 let alone modify it.
 
 
 Don’t get me wrong - I’m not defending the current project structure.  I 
 completely agree that I don’t want to touch the existing scripts either.  But 
 that doesn’t have to get in the way of contributing.  If you’re adding new 
 features, you don’t have to plug into the existing project.
 
 
 Cheers,
 Greg Trasuk



Re: River-436 - need some explanation of preferred class provider

2014-03-14 Thread Gregg Wonderly
So, for example, let’s say that you might add the permission class, 
ClassLoaderDomainPermission with the object name being the domain.  Anytime 
that a jar is loaded, if that URL/Codesource doesn’t have an associated 
instance of ClassLoaderDomainPermission(“domain”), a security exception is 
thrown.  If security passes, then a recursive map of PreferredClassLoader 
instances is consulted/constructed using String.split(domain,”.”) elements.  
Finally a new PreferredClassLoader is created with the URL of the jar file and 
the last PreferredClassLoader found/constructed in that path as its parent.  
Now, you have the ability to create your hierarchy.  Clearly, there are some 
resolution items to manage in terms of making all of the jars visible in 
all of the parent domains.  But, that’s really not a huge deal since the 
existing annotation can provide multiple jar files that can be inspected and 
homed into their respective domains.
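
To make that concrete, here is a rough sketch of the idea.  The names
ClassLoaderDomainPermission, the domain map and the loader wiring are all made
up for this email, not anything in the current code base, and a real
implementation would build PreferredClassLoader instances rather than the
plain URLClassLoaders used here:

import java.net.URL;
import java.net.URLClassLoader;
import java.security.BasicPermission;
import java.util.HashMap;
import java.util.Map;

// Hypothetical permission guarding which codebases may attach to a domain.
class ClassLoaderDomainPermission extends BasicPermission {
    ClassLoaderDomainPermission(String domain) { super(domain); }
}

class DomainClassLoading {
    // One loader per dotted domain path, e.g. "app", "app.service1".
    private final Map<String, ClassLoader> domains = new HashMap<String, ClassLoader>();

    synchronized ClassLoader loaderFor(String domain, URL jar, ClassLoader root) {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Throws SecurityException unless this domain has been granted.
            sm.checkPermission(new ClassLoaderDomainPermission(domain));
        }
        ClassLoader parent = root;
        StringBuilder path = new StringBuilder();
        for (String element : domain.split("\\.")) {
            if (path.length() > 0) path.append('.');
            path.append(element);
            ClassLoader found = domains.get(path.toString());
            if (found == null) {
                // A real implementation would use PreferredClassLoader here.
                found = URLClassLoader.newInstance(new URL[0], parent);
                domains.put(path.toString(), found);
            }
            parent = found;
        }
        // Finally, the jar itself is loaded beneath the last loader in the path.
        return URLClassLoader.newInstance(new URL[] { jar }, parent);
    }
}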

Gregg Wonderly

On Mar 10, 2014, at 11:35 AM, Gregg Wonderly gr...@wonderly.org wrote:

 My point is that you have to formalize it in a way that you can then 
 recognize.  For both OSGi and Netbeans and many platforms, the knowledge is 
 hardcoded into a relationship between the platform's class loading mechanism, 
 and the jar content.
 
 I don’t want to always use OSGi.  I don’t want to always use Netbeans.  I 
 want to use the appropriate mechanism for where the ‘client’ lives, not for 
 ‘how the service is constructed’.   This is why PreferredClassLoader has been 
 working so well.  It’s something that the ‘client’ dictates and the ‘service 
 jar’ has to standardize on.  So, now that you have something ‘new’ that you 
 want to implement, in terms of a client standard (a common client jar that is 
 not in the clients class path), you are going to have to provide a way for 
 that to work.
 
 One way that comes to mind, is to take River-336 concepts and go just a bit 
 further by adding the notion of a “domain” for class loader hierarchy.  
 Imagine that every jar could have an additional meta-inf property called 
 “Domain”, which would be prepended by the URL’s path, without the jar file 
 name, that it was loaded from.  This would then create a class loading 
 relationship in a graph described by the ‘.’ separated components.
 
 http://server1/jars/Util.jar = Domain=app
 http://server1/jars/service1.jar = Domain=app.service1
 http://server1/jars/service2.jar = Domain=app.service2
 http://server1/jars/service3.jar = Domain=app.service3
 http://server1/jars/service4.jar = Domain=app.service4
 
 would produce a graph of class loaders with ‘app’ at the parent so that 
 everything in Util.jar or any other associated libraries would be there, and 
 the services would have a parent reference to them.
 
 You could add a security permission associated with Domain creation/access so 
 that other services that you had not authorized at server1, could not glue 
 themselves into the class loader hierarchy.
 
 This kind of mechanism lets the app developer designate exactly what they 
 want to have happen and control it from the service where it should be 
 controlled.
 
 Gregg
 
 On Mar 10, 2014, at 1:08 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 Actually it is even worse. Since RMIClassProvider API is stateless the 
 client 
 has only one list of URLs at a time...
 
 Regards,
 
 On Sunday, March 09, 2014 10:54:57 PM Michał Kłeczek wrote:
 The whole point of my example is that the client has no knowledge of Util
 interface - it is simply not interested in it.
 
 The problem is not that the client cannot plug-in RMIClassProvider
 dynamically. It is just that with current format of codebase annotation the
 client cannot do anything. It simply does not have enough data to decide
 what to do - just two lists of URLs without any dependency information
 encoded.
 
 Regards,
 
 On Sunday, March 09, 2014 02:33:03 PM Gregg Wonderly wrote:
 All you have to provide in the client is a class loading implementation
 that knows about Util and pins it into a parent class loader from the
 class loaders that proxies load.  In NetBeans, this happens because meta
 data declares that such a relationship exists.  In OSGi, this happens
 because meta data declares that such a relationship exists.  All you have
 to do, is create meta data that specifies that such a relationship
 exists, and then plug in a River-336 compatible class loading
 implementation in your client.
 
 My point is not that River-336 provides the answer, but rather it provides
 a
 mechanism that an application can use.  Not every application has such a
 need, and not every known implementation uses the same model.  Thus, there
 isn’t a single answer that can exist ahead of time.
 
 If you want to use OSGi, plug it in.  If you want to use Netbeans, plug it
 in.   If you want to use both at the same time, work it out and plug it
 in.
 
 There is room for a single standard to eventually win.  But, there isn’t
 a
 
 single standard

Re: River-436 - need some explanation of preferred class provider

2014-03-10 Thread Gregg Wonderly
My point is that you have to formalize it in a way that you can then recognize. 
 For both OSGi and Netbeans and many platforms, the knowledge is hardcoded into 
a relationship between the platform's class loading mechanism and the jar 
content.

I don’t want to always use OSGi.  I don’t want to always use Netbeans.  I want 
to use the appropriate mechanism for where the ‘client’ lives, not for ‘how the 
service is constructed’.   This is why PreferredClassLoader has been working so 
well.  It’s something that the ‘client’ dictates and the ‘service jar’ has to 
standardize on.  So, now that you have something ‘new’ that you want to 
implement, in terms of a client standard (a common client jar that is not in 
the clients class path), you are going to have to provide a way for that to 
work.

One way that comes to mind, is to take River-336 concepts and go just a bit 
further by adding the notion of a “domain” for class loader hierarchy.  Imagine 
that every jar could have an additional meta-inf property called “Domain”, 
which would be prepended by the URL’s path, without the jar file name, that it 
was loaded from.  This would then create a class loading relationship in a 
graph described by the ‘.’ separated components.

http://server1/jars/Util.jar = Domain=app
http://server1/jars/service1.jar = Domain=app.service1
http://server1/jars/service2.jar = Domain=app.service2
http://server1/jars/service3.jar = Domain=app.service3
http://server1/jars/service4.jar = Domain=app.service4

would produce a graph of class loaders with ‘app’ at the parent so that 
everything in Util.jar or any other associated libraries would be there, and 
the services would have a parent reference to them.
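
As a rough sketch of how that might be read (the “Domain” attribute name and
the path-prefix rule are only what I am proposing here, nothing that exists
today):

import java.io.IOException;
import java.net.URL;
import java.util.jar.JarInputStream;
import java.util.jar.Manifest;

class DomainAttribute {
    // Reads the proposed META-INF "Domain" attribute and prefixes it with the
    // path the jar was served from, so jars from different locations end up
    // in different domain graphs.
    static String domainOf(URL jarUrl) throws IOException {
        JarInputStream in = new JarInputStream(jarUrl.openStream());
        try {
            Manifest mf = in.getManifest();
            String domain = (mf == null) ? null
                : mf.getMainAttributes().getValue("Domain");
            if (domain == null) return null;   // no domain declared, old behavior
            String path = jarUrl.getPath();    // e.g. "/jars/service1.jar"
            String dir = path.substring(0, path.lastIndexOf('/') + 1);
            return jarUrl.getHost() + dir.replace('/', '.') + domain;
        } finally {
            in.close();
        }
    }
}

So http://server1/jars/service1.jar carrying Domain=app.service1 would come out 
as something like server1.jars.app.service1 in the loader graph above.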

You could add a security permission associated with Domain creation/access so 
that other services that you had not authorized at server1, could not glue 
themselves into the class loader hierarchy.

This kind of mechanism lets the app developer designate exactly what they want 
to have happen and control it from the service where it should be controlled.

Gregg

On Mar 10, 2014, at 1:08 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:

 Actually it is even worse. Since RMIClassProvider API is stateless the client 
 has only one list of URLs at a time...
 
 Regards,
 
 On Sunday, March 09, 2014 10:54:57 PM Michał Kłeczek wrote:
 The whole point of my example is that the client has no knowledge of Util
 interface - it is simply not interested in it.
 
 The problem is not that the client cannot plug-in RMIClassProvider
 dynamically. It is just that with current format of codebase annotation the
 client cannot do anything. It simply does not have enough data to decide
 what to do - just two lists of URLs without any dependency information
 encoded.
 
 Regards,
 
 On Sunday, March 09, 2014 02:33:03 PM Gregg Wonderly wrote:
 All you have to provide in the client is a class loading implementation
 that knows about Util and pins it into a parent class loader from the
 class loaders that proxies load.  In NetBeans, this happens because meta
 data declares that such a relationship exists.  In OSGi, this happens
 because meta data declares that such a relationship exists.  All you have
 to do, is create meta data that specifies that such a relationship
 exists, and then plug in a River-336 compatible class loading
 implementation in your client.
 
 My point is not that River-336 provides the answer, but rather it provides
 a
 mechanism that an application can use.  Not every application has such a
 need, and not every known implementation uses the same model.  Thus, there
 isn’t a single answer that can exist ahead of time.
 
 If you want to use OSGi, plug it in.  If you want to use Netbeans, plug it
 in.   If you want to use both at the same time, work it out and plug it
 in.
 
 There is room for a single standard to eventually win.  But, there isn’t
 a
 
 single standard that is standing alone right now that I see.
 
 Gregg Wonderly
 
 -- 
 Michał Kłeczek
 XPro Sp. z o. o.
 ul. Borowskiego 2
 03-475 Warszawa
 Polska
 -- 
 Michał Kłeczek
 XPro Sp. z o. o.
 ul. Borowskiego 2
 03-475 Warszawa
 Polska



Re: River-436 - need some explanation of preferred class provider

2014-03-09 Thread Gregg Wonderly
All you have to provide in the client is a class loading implementation that 
knows about Util and pins it into a parent class loader from the class loaders 
that proxies load.  In NetBeans, this happens because meta data declares that 
such a relationship exists.  In OSGi, this happens because meta data declares 
that such a relationship exists.  All you have to do, is create meta data that 
specifies that such a relationship exists, and then plug in a River-336 
compatible class loading implementation in your client.  

My point is not that River-336 provides the answer, but rather it provides a 
mechanism that an application can use.  Not every application has such a need, 
and not every known implementation uses the same model.  Thus, there isn’t a 
single answer that can exist ahead of time.  

If you want to use OSGi, plug it in.  If you want to use Netbeans, plug it in.  
 If you want to use both at the same time, work it out and plug it in.  There 
is room for a single standard to eventually win.  But, there isn’t a single 
standard that is standing alone right now that I see.

Gregg Wonderly

On Mar 7, 2014, at 12:17 PM, Michał Kłeczek michal.klec...@xpro.biz wrote:

 Greg, please look at my example in the first message of this thread. And
 tell me how the client can decide what ClassLoader should load Util
 interface assuming it does not have it in it's classpath.
 
 Regards,
 7 Mar 2014 18:51 Gregg Wonderly gr...@wonderly.org wrote:
 
 Okay, I don’t have to reply to all of the exchanges I missed, but I really
 want to make it clear, that my class loading changes in River-336, do in
 fact fix ALL CLASSLOADING ISSUES!  The reason I “scream” that out, is
 because it encapsulates every single way that class loading occurs.  If you
 don’t have a preferred list in your jar, then preferred class loader is
 going to always “ask” the parent to load the class, and the call into the
 River-336 provided code can delegate loading in whatever mechanism is
 appropriate for the “platform” that the client wants to use.
 
 This makes it possible to get the class from wherever is needed, and puts
 the client in complete control of how class loader resolution occurs, as
 well as how class objects are loaded into class loaders as “owners” of the
 classes.
 
 Just because the methods have names indicating “parent” or other
 hierarchal relationships doesn’t mean that the actions taken there have to
 create any sort of hierarchy.
 
 Gregg Wonderly
 
 On Mar 7, 2014, at 10:32 AM, Michał Kłeczek michal.klec...@xpro.biz
 wrote:
 
 Sure there is a need for code downloading for JERI proxies. You seem to
 assume
 no custom endpoint implementations.
 
 There is really no difference between dynamic proxy and normal object.
 
 Regards,
 
 On Friday, March 07, 2014 09:32:04 AM Greg Trasuk wrote:
 
 Now, dynamic proxies are a different story, and JERI already uses the
 dynamic proxy mechanism.  There’s no need, for example to download an
 implementation class for an object that is directly exported - you only
 really need the service interface to be available locally.
 
 
 Cheers,
 
 Greg Trasuk
 
 --
 Michał Kłeczek
 XPro Sp. z o. o.
 ul. Borowskiego 2
 03-475 Warszawa
 Polska
 Michał Kłeczek (XPro).vcf
 
 



Re: River-436 - need some explanation of preferred class provider

2014-03-07 Thread Gregg Wonderly
Okay, I don’t have to reply to all of the exchanges I missed, but I really want 
to make it clear, that my class loading changes in River-336, do in fact fix 
ALL CLASSLOADING ISSUES!  The reason I “scream” that out, is because it 
encapsulates every single way that class loading occurs.  If you don’t have a 
preferred list in your jar, then preferred class loader is going to always 
“ask” the parent to load the class, and the call into the River-336 provided 
code can delegate loading in whatever mechanism is appropriate for the 
“platform” that the client wants to use.

This makes it possible to get the class from wherever is needed, and puts the 
client in complete control of how class loader resolution occurs, as well as 
how class objects are loaded into class loaders as “owners” of the classes.

Just because the methods have names indicating “parent” or other hierarchal 
relationships doesn’t mean that the actions taken there have to create any sort 
of hierarchy.

Gregg Wonderly

On Mar 7, 2014, at 10:32 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:

 Sure there is a need for code downloading for JERI proxies. You seem to 
 assume 
 no custom endpoint implementations.
 
 There is really no difference between dynamic proxy and normal object.
 
 Regards,
 
 On Friday, March 07, 2014 09:32:04 AM Greg Trasuk wrote:
 
 Now, dynamic proxies are a different story, and JERI already uses the
 dynamic proxy mechanism.  There’s no need, for example to download an
 implementation class for an object that is directly exported - you only
 really need the service interface to be available locally.
 
 
 Cheers,
 
 Greg Trasuk
 
 -- 
 Michał Kłeczek
 XPro Sp. z o. o.
 ul. Borowskiego 2
 03-475 Warszawa
 Polska
 Michał Kłeczek (XPro).vcf



Re: River-436 - need some explanation of preferred class provider

2014-03-04 Thread Gregg Wonderly
In Jini/River, there are two ways that any class is resolved.  First, it is 
resolved because a class under construction has a reference to it.  Second is 
that it is resolved by a client class that needs it to specify an 
interface/class to use for service discovery/usage.

In the first case, we don’t care where it is resolved, if the second case 
doesn’t occur.   For the first case, every service JAR file should carry the 
definition of every single class that it depends on, marking those that will 
never be encountered for the Second case as preferred.

This is just a simple fact of how mobile code works.  If the “jar” that the 
service uses to resolve classes for the first case can be the same jar as the 
client uses to resolve the classes in the second case (OSGI, Maven and other 
non-mobile code, jar distribution mechanisms), then you can have one class 
loader having a view of the class.

The Preferred class loader will do the right thing, automatically as long as 
you create the correct preferred list.  If you don’t do that correctly, then 
you can encounter problems.  Without a preferred list, the PreferredClassLoader 
is going to always look in the parent class loader, and that is usually the 
right thing to do.  Most people encounter problems when the list something as 
preferred which is then passed around to another “proxy” that has another 
instance of the same class that is not resolved to the first class loader and 
then you see class cast exception.

The preferred list should only really ever contain the names of classes that 
are not publicly visible in the API or any reference those public API classes 
have.
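
For reference, the preferred list lives in META-INF/PREFERRED.LIST inside the 
download jar; from memory the format looks roughly like this, with implementation 
and proxy classes marked preferred and the public API classes simply left out so 
they default to not preferred:

PreferredResources-Version: 1.0

Name: com/example/service/proxy/MyServiceProxy.class
Preferred: true

Name: com/example/service/impl/ServerImpl.class
Preferred: true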

The basic reason to not use Maven or OSGi or other static class resolution 
mechanisms is that it provides one the flexibility to have a much more 
dynamically evolving runtime environment including test scenarios where it 
doesn’t make sense to “publish” a jar file that others may then have access to.

Gregg Wonderly

On Mar 4, 2014, at 12:02 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:

 The real problem is that Util interface is in two codebases. It should be
 in a single codebase shared between UtilProxy and WrapperProxy.
 But to make it possible we would need to have peer class loading like in
 ClassWorlds or OSGI.
 It is not solvable in a standard hierarchical class loading scheme.
 
 Anyway... It is not really River-436 problem so my patch proposal is going
 to have the same issue since it is just a replacement for String
 annotations and not change in class loading scheme.
 
 Thanks,
 Michal
 4 Mar 2014 06:38 Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 1. The problem is there is no such thing as the service interface. It is
 context dependent. What is the service interface for service browser?
 
 2. In this particular case Util interface is an implementation detail of
 WrapperProxy. It is Wrapper interface the client is interested in. So I
 would say it should be preferred in WrapperProxy codebase.
 
 3. Even if Util is not preferred in WrapperProxy codebase we still have
 ClassCastException if the client does not have Util in its classpath. Why
 should it? it is interested in Wrapper not in Util. So either
 a. We always get ClassCastException if Util is preferred in WrapperProxy
 codebase, or
 b. We get ClassCastException anyway if a client does not have Util in its
 classpath.
 Let's say I want to register RemoteEventListener that wraps a Javaspace
 proxy to write events in a space. Does that mean the service event source
 has to be aware of Javaspace interface??? That would be absurd...
 
 It all does not have anything to do with codebase services.
 
 Thanks,
 Michal
 4 Mar 2014 00:09 Peter j...@zeus.net.au wrote:
 
 The Util interface should not be preferred.  Implementations of Util can
 be preferred but not Util itself.
 
 Services need a common api that all implementations and clients can use
 to interract, even if this is a kind of codebase service.
 
 Modifying an interface is generally considered bad practise but now Java
 8 makes it possible to add default methods for added functionality, that
 line blurs somewhat.  What can you do if an earlier interface is loaded by
 a parent ClassLoader and you need a later version, make it preferred?
 
 My thoughts are that interfaces should never be preferred and all classes
 defined in their methods shouldn't be preferred either.
 
 It would be relatively easy to write a new implementation that ensures
 that interfaces are loaded into their own ProtectionDomain in a parent
 ClassLoader.  But that would be confusing as dynamic policy grants are made
 to ClassLoader's not ProtectionDomains.
 
 But using ProtectionDomains in this manner, preserves security, ensures
 maximum visibility and avoids codebase annotation loss, if we ask the
 ProtectionDomain for the annotation, instead of the ClassLoader.  But this
 is not how we do things presently.
 
 Cheers,
 
 Peter.
 
 - Original message

Re: River-436 - need some explanation of preferred class provider

2014-03-04 Thread Gregg Wonderly


 On Mar 4, 2014, at 12:02 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 The real problem is that Util interface is in two codebases. It should be
 in a single codebase shared between UtilProxy and WrapperProxy.
 But to make it possible we would need to have peer class loading like in
 ClassWorlds or OSGI.
 It is not solvable in a standard hierarchical class loading scheme.

This is one of the good examples of where hierarchical loading can present 
challenges.

But the question really is, can an arbitrary client really expect arbitrary 
services to interact correctly?  If you want them to do this, it has been shown 
over and over that global types are the best, least troublesome choice.  

If you want ubiquitous interactions why not use string based values such as XML 
or better yet, JSON?

Then code and data is immune to class loading snafus and not bound to a 
container or hosting standard!
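
For example, a service interface that only trades strings never needs to pull 
implementation or data classes across the wire.  The interface below is just an 
illustration of the style, not anything in River:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Only String crosses the proxy boundary, so there are no downloaded data
// classes to resolve on either side; the JSON payload layout is a convention
// agreed between client and service.
public interface OrderService extends Remote {
    String placeOrder(String orderJson) throws RemoteException;
    String findOrders(String queryJson) throws RemoteException;
}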

Gregg

 Anyway... It is not really River-436 problem so my patch proposal is going
 to have the same issue since it is just a replacement for String
 annotations and not change in class loading scheme.
 
 Thanks,
 Michal
 4 Mar 2014 06:38 Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 1. The problem is there is no such thing as the service interface. It is
 context dependent. What is the service interface for service browser?
 
 2. In this particular case Util interface is an implementation detail of
 WrapperProxy. It is Wrapper interface the client is interested in. So I
 would say it should be preferred in WrapperProxy codebase.
 
 3. Even if Util is not preferred in WrapperProxy codebase we still have
 ClassCastException if the client does not have Util in its classpath. Why
 should it? it is interested in Wrapper not in Util. So either
 a. We always get ClassCastException if Util is preferred in WrapperProxy
 codebase, or
 b. We get ClassCastException anyway if a client does not have Util in its
 classpath.
 Let's say I want to register RemoteEventListener that wraps a Javaspace
 proxy to write events in a space. Does that mean the service event source
 has to be aware of Javaspace interface??? That would be absurd...
 
 It all does not have anything to do with codebase services.
 
 Thanks,
 Michal
 4 Mar 2014 00:09 Peter j...@zeus.net.au wrote:
 
 The Util interface should not be preferred.  Implementations of Util can
 be preferred but not Util itself.
 
 Services need a common api that all implementations and clients can use
 to interract, even if this is a kind of codebase service.
 
 Modifying an interface is generally considered bad practise but now Java
 8 makes it possible to add default methods for added functionality, that
 line blurs somewhat.  What can you do if an earlier interface is loaded by
 a parent ClassLoader and you need a later version, make it preferred?
 
 My thoughts are that interfaces should never be preferred and all classes
 defined in their methods shouldn't be preferred either.
 
 It would be relatively easy to write a new implementation that ensures
 that interfaces are loaded into their own ProtectionDomain in a parent
 ClassLoader.  But that would be confusing as dynamic policy grants are made
 to ClassLoader's not ProtectionDomains.
 
 But using ProtectionDomains in this manner, preserves security, ensures
 maximum visibility and avoids codebase annotation loss, if we ask the
 ProtectionDomain for the annotation, instead of the ClassLoader.  But this
 is not how we do things presently.
 
 Cheers,
 
 Peter.
 
 - Original message -
 But it will also be loaded by WrapperProxy ClassLoader, since it is
 preferred there. So it will end up with ClassCastException, right?
 
 Regards,
 Michal
 
 If Util is installed locally, it will only be loaded by the application
 ClassLoader, since it isn't preferred.
 
 Peter.
 
 - Original message -
 Folks,
 while woking on the River-436 patch proposal I've came across the
 scenario that I am not sure how to handle:
 
 Utility service:
 //inteface is NOT preferred
 interface Util {...}
 //class IS preferred
 class UtilProxy implements Util {}
 
 Wrapper service:
 //NOT preferred
 interface Wrapper {}
 //preferred
 class WrapperProxy implements Serializable {
 //initialized with Util impl from a lookup service
 private Util util;
 }
 
 Wrapper service codebase includes Util interface but it is
 _preferred_.
 
 Would deserialization of WrapperProxy end with ClassCastException?
 From what I understand UtilProxy is annotated with its codebase. When
 deserializing UtilProxy a ClassLoader is going to be created with
 parent set to TCCL. It means Util interface is going to be loaded
 twice by two ClassLoaders - one for WrapperProxy codebase and another
 for UtilProxy codebase.
 
 Am I correct?
 And if so: is it desired behavior?
 
 Regards,
 
 --
 Michał Kłeczek
 XPro Quality Matters
 http://www.xpro.biz
 
 


Re: River-436 - need some explanation of preferred class provider

2014-03-04 Thread Gregg Wonderly
One of the greatest things about Java is serialization and mobile code!  One of 
the most limiting aspects of any language is Serialization!

If you have an interface or data class that two classes need to access, there 
is no choice but to have a common parent class loader.  Your client can 
institute such as class loading scheme completely independently of Jini’s use 
of some other class loading scheme, provided that you at least allow the 
“parent load this please” mechanism of hierarchical class loading to occur.

My changes in River-336 to remove the explicit reliance/dependence on 
RMIClassLoader, and to instead let you plug in how the “parent load this please” 
call-out works, are how you can solve this so that it actually works for your 
client’s special needs.  The best thing is that this mechanism can be plugged 
into at runtime, whereas RMIClassLoaderSpi is a one-time thing that requires 
access to the app class loader, which might not be possible in some clients.  
All that is required is a security grant that allows a particular codebase to 
plug in a specific class name. 
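
Purely as an illustration of the shape of that plug-in point (these names are 
made up for this email, they are not the actual River-336 API):

import java.net.URL;

// The client registers one of these; a PreferredClassLoader style loader calls it
// whenever it would normally delegate to its parent, so the client decides where
// the class really comes from (app loader, OSGi bundle, NetBeans module, ...).
interface ParentDelegationHandler {
    Class<?> loadFromParent(String name, ClassLoader child, URL[] codebase)
        throws ClassNotFoundException;
}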

Gregg Wonderly

On Mar 4, 2014, at 7:39 PM, Gregg Wonderly ge...@cox.net wrote:

 
 
 On Mar 4, 2014, at 12:02 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 The real problem is that Util interface is in two codebases. It should be
 in a single codebase shared between UtilProxy and WrapperProxy.
 But to make it possible we would need to have peer class loading like in
 ClassWorlds or OSGI.
 It is not solvable in a standard hierarchical class loading scheme.
 
 This is one of the good examples of where hierarchical loading can present 
 challenges.
 
 But the question really is, can an arbitrary client really expect 
 arbitrary services to interact correctly?  If you want them to do this, it 
 has been shown over and over that global types are the best, least 
 troublesome choice.  
 
 If you want ubiquitous interactions why not use string based values such as 
 XML or better yet, JSON?
 
 Then code and data is immune to class loading snafus and not bound to a 
 container or hosting standard!
 
 Gregg
 
 Anyway... It is not really River-436 problem so my patch proposal is going
 to have the same issue since it is just a replacement for String
 annotations and not change in class loading scheme.
 
 Thanks,
 Michal
 4 Mar 2014 06:38 Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 1. The problem is there is no such thing as the service interface. It is
 context dependent. What is the service interface for service browser?
 
 2. In this particular case Util interface is an implementation detail of
 WrapperProxy. It is Wrapper interface the client is interested in. So I
 would say it should be preferred in WrapperProxy codebase.
 
 3. Even if Util is not preferred in WrapperProxy codebase we still have
 ClassCastException if the client does not have Util in its classpath. Why
 should it? it is interested in Wrapper not in Util. So either
 a. We always get ClassCastException if Util is preferred in WrapperProxy
 codebase, or
 b. We get ClassCastException anyway if a client does not have Util in its
 classpath.
 Let's say I want to register RemoteEventListener that wraps a Javaspace
 proxy to write events in a space. Does that mean the service event source
 has to be aware of Javaspace interface??? That would be absurd...
 
 It all does not have anything to do with codebase services.
 
 Thanks,
 Michal
 4 Mar 2014 00:09 Peter j...@zeus.net.au wrote:
 
 The Util interface should not be preferred.  Implementations of Util can
 be preferred but not Util itself.
 
 Services need a common api that all implementations and clients can use
 to interract, even if this is a kind of codebase service.
 
 Modifying an interface is generally considered bad practise but now Java
 8 makes it possible to add default methods for added functionality, that
 line blurs somewhat.  What can you do if an earlier interface is loaded by
 a parent ClassLoader and you need a later version, make it preferred?
 
 My thoughts are that interfaces should never be preferred and all classes
 defined in their methods shouldn't be preferred either.
 
 It would be relatively easy to write a new implementation that ensures
 that interfaces are loaded into their own ProtectionDomain in a parent
 ClassLoader.  But that would be confusing as dynamic policy grants are made
 to ClassLoader's not ProtectionDomains.
 
 But using ProtectionDomains in this manner, preserves security, ensures
 maximum visibility and avoids codebase annotation loss, if we ask the
 ProtectionDomain for the annotation, instead of the ClassLoader.  But this
 is not how we do things presently.
 
 Cheers,
 
 Peter.
 
 - Original message -
 But it will also be loaded by WrapperProxy ClassLoader, since it is
 preferred there. So it will end up with ClassCastException, right?
 
 Regards,
 Michal
 
 If Util is installed locally, it will only be loaded by the application

Re: River-436 - need some explanation of preferred class provider

2014-03-04 Thread Gregg Wonderly


 On Mar 4, 2014, at 11:18 AM, Michał Kłeczek michal.klec...@xpro.biz wrote:
 
 ClassLoader complexity and dependency management is something that was 
 understood a long time ago and solved several times (OSGI or NetBeans module 
 system come to mind first).

First thing to understand is that these two along with others each have a 
specific focused problem that they are addressing.

Netbeans is about isolation, not sharing.

OSGi is about versioning and isolation based on versioning.

OSGi supporters, proponents and developers have fought a long time to try and create 
a remoting specification which worked with what they already had.

 To be honest I really do not understand why River community does not make use 
 of existing solutions and tries to either reinvent the wheel or live with 
 broken architecture and develop ad-hoc fixes not really addressing the root 
 causes.
 
 It may sound harsh but for me it looks like NIH syndrome.

It's certainly possible to feel like that's the way it is, but remember that   
RMI and JRMP existed before these did.

Is it a great and perfect thing?  By no means would I say that.  

But, there are good things amongst all of these and rather than mash together 
all of it into a new thing, why can't we just provide pluggable interfaces and 
implementations that let them all coexist?

There are, in my mind, some pretty powerful abstractions that should make it 
possible.

Gregg

 
 We are discussing service packaging and container implementation while we 
 have 
 _basic_ stuff not working:
 a) we have a huge security issue with allowing untrusted code to execute
 b) we have a class loading issue that makes it impossible to use River for 
 simple service composition - as I've shown in my example.
 c) we have a whole lot of concurrency issues in the implementation from which 
 a lot were fixed by Peter and it is still not decided whether it is worth 
 incorporating those fixes!!!.
 
 On the other hand I hear voices that we should drop/deprecate some of River 
 components:
 a) Phoenix
 IMHO _any_ River service container should be _based_ on Phoenix. It has 
 capabilities not available in most other containers (JEE containers 
 specifically):
 - ability to deploy/execute components in _isolation_ (different virtual 
 machines - instantiated on demand)
 - watchdog functionality
 - easily accessible API (since Phoenix _is_ a River service)
 - on-demand component activation (which I think is underestimated - see below)
 
 b) Norm/Mercury/Fiddler
 These are crucial for creating activatable sevices.
 Activation is important in service composition scenarios where a service is 
 actually implemented only as a client proxy wrapping other services (no 
 server 
 side logic). I would even say we are missing some (simple but important) 
 services to make it possible:
 - CodeRepository service (I remember one implementation created some time ago)
 - ProxyTrustDelegate service (that would allow a service to delegate proxy 
 verification logic to another service)
 
 Just from the top of my head... :-)
 
 Regards,
 
 On Tuesday, March 04, 2014 09:17:12 PM Peter wrote:
 Thanks Michal,
 
 Welcome to ClassLoader complexity.
 
 We've more recently encouraged the separation of Service API, from
 implementation at the development stage, instead of relying on tools like
 classdep.  Rio, uses these conventions.
 
 This is an important first step.
 
 Basically Service API, are interfaces and classes that implementations use
 to communicate with each other.  In this case, because your Util interface
 needs to be shared, it correlates to service api.
 
 If all application code was loaded into URLClassLoader instances as
 suggested previously some time ago by Nic on this list, then we could
 ensure that all Service API is loaded into it's own ProtectionDomain in the
 main application ClassLoader (a URLClassLoader instance that proxies use as
 their parent loader)
 
 To do so however requires new conventions for codebase annotations.
 
 One restriction is that service api cannot be changed after deployment.
 
 We could allow Service API to be loaded on demand after deployment, if it
 doesn't already exist at the client, but again it cannot be changed after
 deployment, only added to.
 
 Cheers,
 
 Peter.
 
 -- 
 Michał Kłeczek
 XPro Sp. z o. o.
 ul. Borowskiego 2
 03-475 Warszawa
 Polska
 Michał Kłeczek (XPro).vcf


Re: [jira] [Commented] (RIVER-435) Proposed Standard for Single-Archive Service Deployment Packaging

2014-02-25 Thread Gregg Wonderly

On Feb 25, 2014, at 12:18 PM, Dennis Reedy dennis.re...@gmail.com wrote:

 On Tue, Feb 25, 2014 at 12:36 PM, Michal Kleczek 
 michal.klec...@xpro.biz wrote:
 
 Hmm... I don't think it is an implementation detail - codebase annotations
 must be understood by every client - so the format becomes a part of the
 spec.
 
 
 Fair enough, it does need to be part of a specification.
 
 
 
 For example Maven based naming 
 (groupId:artifactId:version:classifier:version)
 is incompatible with Eclipse p2 (MANIFEST.MF OSGI metadata - in practice I
 would say it would be bundleId:version or bundleId:version-range).
 Additionally - just name/version based artifact identification is not
 enough - I would much rather see something like strong names from .NET
 where signature is part of the identifier.
 
 Besides... Maven based provisioning requires every party to agree on a set
 of common repositories.
 
 
 Not necessarily true. If there is no context for a repository in the
 annotation then yes. An example where there is context is in the following
 URL:
 
 artifact:groupId/artifactId/version[/type[/classifier]][;
 repository[@repositoryId]]
 
 
 Ideal solution would be to decouple the client from the how code is
 downloaded. Not having this is one of the problems with current River
 architecture - all have to have http and httpmd URL handlers installed.
 This decoupling could be achieved if codebase annotations were objects -
 that was my proposal discussed some time ago. It allows a service to
 provide clients with code downloaders as annotations.
 
 
 This. I like this. How would this work, would it be an Entry, an attribute
 of the service (perhaps similar to the ServiceUI factory?).

What I think would be good would be to make the annotation itself be used to 
download the details.  That is, today it is the client code that does this.  What 
if it was actually running code which created child class loaders that would be 
attached to the parent class loader that the annotation URL was associated with?

Today, you would do this by using a delegate pattern on the service interface, 
so that the first use of any method on the interface would “get” the actual 
code, and then delegate method calls into it.

With a “tool” (named ccd for createCodeDelegate) one could say something like:

ccd -t maven -r repository-spec --classpath code.jar --class 
serviceClassName --interfaces interface.name1 interface.name2

which would know about ‘-t’ argument types and pass the -r details to them.  It 
would use code.jar to resolve the interface names and the ‘-t’ details to 
create delegating implementations of the specified interfaces.  Now, ideally, 
this would be a “proxy” whose invoke() method would check if code has been 
resolved, and do that once, before delegating to the proxy’s delegate.
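
A minimal sketch of that generated delegate, assuming a resolveDelegate() helper 
that does the ‘-t’/‘-r’ specific download; none of this is existing River code:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

// The first call resolves the real code once; every later call goes straight to it.
class LazyCodeHandler implements InvocationHandler {
    private final String codebaseSpec;   // e.g. a maven coordinate plus repository
    private volatile Object delegate;

    LazyCodeHandler(String codebaseSpec) { this.codebaseSpec = codebaseSpec; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (delegate == null) {
            synchronized (this) {
                if (delegate == null) {
                    // Hypothetical: fetch the jars named by codebaseSpec, build a
                    // class loader, and instantiate the real proxy class from it.
                    delegate = resolveDelegate(codebaseSpec);
                }
            }
        }
        return method.invoke(delegate, args);
    }

    private Object resolveDelegate(String spec) throws Exception {
        throw new UnsupportedOperationException("resolver depends on the -t type");
    }
}

The tool would then emit something like Proxy.newProxyInstance(ifaceLoader, 
listedInterfaces, new LazyCodeHandler(spec)) as the object that actually gets 
registered and serialized.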

Gregg Wonderly

Re: [jira] [Commented] (RIVER-435) Proposed Standard for Single-Archive Service Deployment Packaging

2014-02-20 Thread Gregg Wonderly
Part of the decision logic for class loader structure involves proxies which 
could interact.  In the past we've required the client to put such shared types 
into the application classloader.  There is always the chance that there are 
abstractions and implementations that might be shared between proxies yet 
unknown to the application/client.

So, having some layers on the classloader is sometimes a necessity, and even a 
super proxy or middleman proxy can solve the problem but become a pain.

Gregg

Sent from my iPhone

 On Feb 20, 2014, at 8:08 AM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 On Feb 19, 2014, at 11:23 PM, Peter Firmstone j...@zeus.net.au wrote:
 
 You could adopt the directory conventions api, impl and proxy, instead of 
 lib and lib-dl?  That way you could make sure the api is loaded into the 
 application class loader, while the implementation can be loaded into a 
 child ClassLoader for maximum cooperation (in case the service 
 implementation also uses other remote services) while avoiding name space 
 visibility issues.
 
 
 I’m not sure what that would accomplish.  As it stands now, the application 
 has one class loader which is effectively a child of the system class loader. 
  If the app is going to act as a consumer, it will unmarshall the service 
 provider’s proxy in the usual way, which will end up with a 
 PreferredClassLoader that is a child of the application’s class loader 
 (standard Jini stuff - nothing to do with the container), so proxies from 
 other providers are effectively in different class loaders.  What would be 
 the advantage of separating the API classes from the implementation classes?  
 It’s only the lib-dl jars that are available to outsiders, so there’s no 
 chance of leaking the implementation classes to consumers, assuming the jars 
 files are created correctly, which is the service author’s responsibility.
 
 So, I would not be in favour of separating out the class loaders in that way. 
  It adds complexity and imposes a structure on service authors for no reason. 
  The only fundamental question that service writers need to answer now is 
 “should this class be available for download to remote clients?”  If so, it 
 goes into a jar file that’s in the ‘lib-dl’ folder (that folder would 
 typically include ‘hello-api.jar’ and ‘hello-proxy.jar’ (assuming the naming 
 conventions are followed for the ‘hello’ service).  If not, it goes into 
 ‘lib’ (that folder would contain ‘hello-api.jar’, ‘hello-proxy.jar’ and 
 ‘hello-impl.jar’.
 
 Or perhaps I misunderstand your suggestion…Please elaborate if that’s the 
 case.
 
 I should also note that as currently implemented and written in the proposed 
 spec, the container _does not_ share the Jini platform libraries (jsk-lib, 
 jsk-platform, etc) between applications.  Each application class loader 
 includes the Jini libraries separately.  I just couldn’t think of a case 
 where sharing made very much sense, plus eventually it would make sense to 
 have separate thread pools per application (1), which would be complicated if 
 the platform jars were shared.
 
 (1) Each application will have separate threads as it is, but the threads are 
 created inside the JERI framework, so they’re not under the container’s 
 control, i.e. it’s not possible for the container to setup prioritization 
 between apps, until the threading system is updated.
 
 Cheers,
 
 Greg Trasuk
 
 
 Regards,
 
 Peter.
 
 On 20/02/2014 12:58 PM, Greg Trasuk (JIRA) wrote:
[ 
 https://issues.apache.org/jira/browse/RIVER-435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13906531#comment-13906531
  ]
 
 Greg Trasuk commented on RIVER-435:
 ---
 
 Comments from the mailing list discussion...
 
 Greg Trasuk
 ==
 OK, so on the topic of the jar file naming conventions (hello-api.jar, 
 hello-proxy.jar, hello-impl.jar, etc), I thought we had already adopted 
 that as a recommended convention.  It follows common “good practices” that 
 most of us have used for a long time, and it allows you to build without 
 ‘classdepandjar’.  As well, it happens to dovetail nicely with a Maven 
 build.
 
 Having said that, I don’t believe that convention needs to be mentioned in 
 the single-archive packaging spec (or at least not required - I suppose it 
 could be referenced as good practice).
 
 The spec differentiates between “class path” and “codebase” jars by having 
 them in different folders inside the deployment archive (lib and lib-dl).  
 So, while the build that creates the archive may very well use the 
 conventions to determine which dependent files go into which folder, from 
 the container’s point of view, it doesn’t care about the naming 
 conventions.  Basically, everything in the ‘lib’ dir gets included in the 
 service’s class path, and everything in the ‘lib-dl’ dir gets published 
 through the codebase server and included in the service’s codebase 
 

Re: [Discuss] Please have a look at the River Container

2014-02-18 Thread Gregg Wonderly
I’ll offer my observation from overheard discussions over the years, from a 
few, but varied Jini community members.  But first, let me state that I am a 
pro Rio person (and Dennis I must apologize again for leaving it off of my 
slide at the Jini Community meeting in Europe).

I’ve never used Rio in a deployment, but I’ve looked into it for a couple of 
different projects. My primary issue in my River deployments has always been 
delayed codebase downloads and proxy unmarshalling were needed because of 
network bandwidth restrictions, computer resource limitations and user 
interface speed to get my ServiceUI desktop to “display” all the icons.  The 
large number of services that I deployed onto multiple machines, verses the few 
that anyone person would use. Would require deserialization of hundreds of 
proxies that would never be used.  Windows restrictions on a handful of active 
sockets, max, would cause endpoints to “fail” to connect.  There were all kinds 
of issues and I needed delayed unmarshalling to solve those issues.  So, the 
solutions that I rolled into Jini 2.0/2.1 to solve these problems for me, 
provided some isolation from other things available in the community.

Ultimately, I’ve been trying to push for a “container” specification for some 
time. My simple “startnow” project on java.net is where I’ve put most of the 
things that I’ve done to put things on top of Jini.   The simple interface that 
Seven provides is something that I think is a good start. 

My observation is that the community has stated in various conversations, that 
Rio was just an awfully large and complicated bit of code to “start” with.  It 
is very powerful and very much an end to end solution to a lot of things, and 
that is what I understand people in the community to not want to “include” in 
their simple Jini services.

Some of that probably comes from JavaEE experience or “knowledge” which makes 
them feel that Rio might just take them down the path of not being in control 
of much of anything and having to always have “the same” container for all 
their services when that might not be required.

I am all about fixing things that need to be fixed, and standardizing things 
that as standards, don’t limit choices on evolving to better standards.

That’s what we need to focus on.  Because of the flexibility of River with so 
many endpoint implementations, flexible implementation details, etc., it is 
really an unfinished platform.  There need to be fewer “free” choices, and a 
lot more “refinement” of interfaces so that very specific issues are fixed for 
specific releases, but we can still evolve to create better and better 
experiences.

These things have all been said before by members of this community.  There are 
lots of experienced people here, and lots of people who have found “easier” 
ways to do things, because of the unfinished nature of the beast.

We now really need to start working on finishing things with solid 
limitations on choices where more choices just don’t make anything easier or 
more possible.

Gregg

On Feb 18, 2014, at 11:50 AM, Dennis Reedy dennis.re...@gmail.com wrote:

 
 On Feb 18, 2014, at 12:36 PM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 Hi Dennis:
 
 Discussion intertwined…
 
 Cheers,
 
 Greg.
 
 On Feb 18, 2014, at 11:45 AM, Dennis Reedy dennis.re...@gmail.com wrote:
 
 
 On Feb 18, 2014, at 11:13 AM, Greg Trasuk tras...@stratuscom.com wrote:
 
 
 Hi Dennis:
 
 I’ll bite twice:
 
 - Your offer to contribute Rio may have been before my time as a 
 committer, because I don’t recall the discussion (mind you I’m also at a 
 loss to recall what I had for dinner last night ;-).  
 
 November 28th, 2013. Email thread entitled River Container (was surrogate 
 container). You responded asking questions about code provenance. Snippet 
 from the thread:
 
 I see it’s Apache licensed.  Ideally we’d have a CCLA in place from all the 
 corporate contributors, but I personally don’t know if that’s required if 
 the contributed code is ASL2.  We might have to consult more experienced 
 Apache people.
 
 Greg.
 
 I'd like to find out what would need to be done here. If anyone could help, 
 that would be great. I have no problems donating Rio to the River project. 
 River would get a mature project, with tons of real-world application of 
 River put into it. I think it would do River good, and also Rio.
 
 
 If not part of the project I think River should at least reference it as a 
 notable project that can really speed developer adoption of River.
 
 
 OK, let’s assume that you’re willing to contribute Rio, and that the River 
 community is in favour.  I’ll start a separate thread to discuss the steps.
 
 And we should go ahead and add a reference to Rio on the River site in the 
 meantime.  While we’re at it, any other projects that should be referenced?  
 The “notable projects” idea is a very good one.
 
 Great!
 
 
 
 How was River unwelcoming, and do you feel the same situation 
