Re: [DISCUSS] Moving Apache River to the Attic
Thanks Roy, I'll keep that in mind.

On 16/02/2022 3:15 am, Roy T. Fielding wrote:

On Feb 15, 2022, at 4:05 AM, Peter Firmstone wrote:

I think the PMC has already decided River's fate, and I tend to agree with their decision. The problem is that, historically, it hasn't been possible to innovate inside the Apache River project; innovation has been forced to happen elsewhere. It wasn't just what I was doing: there was an attempt to do some container work in River, but that also got shut down. People had trouble in the past agreeing on River's direction, and there are no currently active developers. It is still possible to get a group of people together to create an Apache project, but I don't think the code needs it. GitHub and other sites like it are better for loose collaboration, where developers can feed off each other's ideas and innovations and the best solutions survive.

The PMC has decided to move Apache River to the Attic, and the Board is likely to approve that tomorrow. This is often what triggers people "in waiting" to do something on their own rather than wait on a perceived consensus. That is a very good thing. It's hard to encourage contributions outside the box of an existing design, since there is no clear path to a release. Removing the project will remove that perceived barrier to development. (Though anyone could have started up a sandbox within the original project and done the same. It's just a matter of will.)

In any case, good luck with your efforts. I strongly encourage you to select a good project name that reflects your individual goals, rather than continue to use River or Jini (which I think is still trademarked by Oracle, but maybe they have abandoned it now). Likewise, if you start moving towards a larger collaboration and need a safe place to do that, the ASF will still be here and able to help provide legal oversight for your project without managing it for you.
That is particularly recommended for anything that crosses Java with code execution in a remote environment. Cheers, and thanks for all the +1s Roy
Re: [DISCUSS] Moving Apache River to the Attic
Some final thoughts on using OSGi with Jini. https://www.artima.com/weblogs/viewpost.jsp?thread=202304

This was always a contentious issue historically on the Jini lists, and I expect there will be vastly differing opinions on the subject, so I haven't made any decisions that would constrain developers' options to any particular framework. While I respectfully disagree with most of Michał's assessments and opinions regarding JGDMS on this occasion, I haven't made any changes that would prevent him from marshaling Objects, ClassLoaders or bytecode to replace codebase annotations in streams. I still respect that Michał has an inquiring mind and is willing to experiment with some very complex issues. Work to support OSGi in JGDMS is ongoing and doesn't require developers to install or use OSGi.

To dispel any confusion that might have arisen as a result of this discussion, and for anyone reading the email archives: JGDMS avoids annotating marshaling streams with codebase strings and uses a CodebaseAccessor service instead, which also provides signer certificates, not just space-separated URI strings. These certificates may be self-signed; they are dynamically granted permission because they have come from an authenticated service via a secure connection. CodebaseAccessor is also used to authenticate the service prior to unmarshaling its proxy. ProxyCodebaseSpi allows you to use the modular framework of your choice for the provisioning of ClassLoaders and the wiring of their dependencies; CodebaseAccessor provides information about the proxy's codebase requirements prior to its unmarshaling. Atomic marshaling streams make no assumptions about the module structures of systems at either endpoint. The atomic marshaling stream only requires that identical proxy codebases are loaded at either endpoint, so that all classes the service uses will be resolved by the ClassLoader at the remote endpoint.
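As a rough illustration of the bootstrap flow described above, here is a minimal sketch of a client consuming a CodebaseAccessor-style service: fetch the space-separated URI codebase string, parse it, and provision an isolated ClassLoader for the proxy. The single-method interface matches the conceptual form quoted later in this thread; the real net.jini.export.CodebaseAccessor also exposes signer certificates and related methods, and JGDMS authenticates the service before any of this happens. All other names here are invented for the sketch.

```java
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical stand-in for net.jini.export.CodebaseAccessor; the real
// interface also provides signer certificates and digest information.
interface CodebaseAccessor {
    String getClassAnnotation() throws IOException;
}

public class CodebaseBootstrapSketch {

    // Parse a space-separated URI codebase string into URLs, as a client
    // might before provisioning a proxy ClassLoader.
    static URL[] parseCodebase(String annotation) throws MalformedURLException {
        String[] parts = annotation.trim().split("\\s+");
        URL[] urls = new URL[parts.length];
        for (int i = 0; i < parts.length; i++) urls[i] = new URL(parts[i]);
        return urls;
    }

    // Provision an isolated ClassLoader for the proxy codebase; in JGDMS this
    // occurs only after the service has been authenticated over a secure
    // connection, which this sketch omits.
    static ClassLoader provision(CodebaseAccessor bootstrap, ClassLoader parent)
            throws IOException {
        return new URLClassLoader(parseCodebase(bootstrap.getClassAnnotation()), parent);
    }

    public static void main(String[] args) throws IOException {
        CodebaseAccessor stub = () -> "https://example.org/proxy.jar https://example.org/api.jar";
        URL[] urls = parseCodebase(stub.getClassAnnotation());
        System.out.println(urls.length);       // 2
        System.out.println(urls[0].getHost()); // example.org
        ClassLoader cl = provision(stub, CodebaseBootstrapSketch.class.getClassLoader());
        System.out.println(cl != null);        // true
    }
}
```

The point of the indirection is that the codebase information arrives from an authenticated bootstrap proxy rather than from annotations embedded in the serialization stream.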
This is why all services are given their own independent marshaling streams: to manage class visibility and avoid incorrect class resolution, which results in codebase annotation loss. This is why I am not using the atomic marshaling framework to control or dictate ClassLoader hierarchies, and instead allow it to focus only on marshaling and unmarshaling object bytes securely. This also solves a long-standing issue with unmarshaling in OSGi. Now OSGi (or other frameworks) can manage class resolution, rather than the object marshaling framework taking on this responsibility and trying to replicate the functionality of each, or forcing developers to choose. Marshaling objects has been uncoupled from class resolution in JGDMS. The ClassLoader is only assigned once to the Endpoint; from that point on, all class resolution decisions are made by the ClassLoader. This is unlike Jini and RMI marshaling of codebase annotations, where an attempt is made for each class to find its ClassLoader using stream annotations, if present, during unmarshaling. This work has taken many years to complete, and a lot of research has gone into the decision-making process.

JGDMS is modular, and all jar file artifacts are also OSGi bundles. AtomicILFactory uses ClassLoaders at Endpoints and doesn't utilise the thread context ClassLoader, making it more compatible with modular frameworks. JGDMS provides an RFC 3986 compliant URI that supports RFC 5952 normalization of IPv6 addresses; this provides a huge performance benefit, as DNS calls are not required to check for address equality. I am still using space-separated URI strings for codebases. This URI class is not serializable; parsing URI strings instead is good security practice.
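To make the RFC 5952 point concrete, here is a minimal sketch of the normalization idea: lowercase hex, leading zeros stripped, and the longest run of zero groups compressed to "::", so two textually different literals for the same address compare equal as strings with no DNS lookup. This toy version handles only the full eight-group form (no "::" input, no embedded IPv4); the Uri class in JGDMS handles the general case.

```java
public class Ipv6Normalizer {
    // Normalize a full (non-compressed) 8-group IPv6 literal per the main
    // RFC 5952 rules: lowercase hex digits, no leading zeros, and the longest
    // run of two or more zero groups collapsed to "::" (first run wins ties).
    static String normalize(String literal) {
        String[] g = literal.toLowerCase().split(":", -1);
        int[] v = new int[8];
        for (int i = 0; i < 8; i++) v[i] = Integer.parseInt(g[i], 16);
        // Find the longest zero run of length >= 2.
        int best = -1, bestLen = 0;
        for (int i = 0; i < 8; ) {
            int j = i;
            while (j < 8 && v[j] == 0) j++;
            if (j - i >= 2 && j - i > bestLen) { best = i; bestLen = j - i; }
            i = (j == i) ? i + 1 : j;
        }
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 8; i++) {
            if (i == best) { sb.append("::"); i += bestLen - 1; continue; }
            // "::" already supplies the separator for the group after the run.
            if (i > 0 && (best < 0 || i != best + bestLen)) sb.append(':');
            sb.append(Integer.toHexString(v[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Two different spellings of the same address normalize identically,
        // so equality checks need no DNS resolution.
        System.out.println(normalize("2001:0DB8:0000:0000:0000:0000:0000:0001")); // 2001:db8::1
        System.out.println(normalize("2001:db8:0:0:0:0:0:1"));                    // 2001:db8::1
    }
}
```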
https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/org/apache/river/api/net/Uri.java

One of the problems that I am currently working on in the support of OSGi is that of proxy identity: there are many proxies that use the same bundle version, e.g. all common services, and these have static fields that shouldn't be shared among global providers. For example, different lookup services from different providers on the internet shouldn't share implementation classes. I am currently investigating using Apache Aries and OSGi subsystems, so that service proxies remain independent and isolated from each other when utilizing OSGi as the underlying framework, cooperating only on service API. It is a non-goal to make incompatible versions of service APIs interoperate; for example, if someone changes an Entry in a way that breaks compatibility, then I expect that it will use a different, non-compatible service version, and these will be considered different services. Any assistance from anyone who is familiar with OSGi and Jini would be much appreciated. Regards, Peter.

On 16/02/2022 9:32 pm, Peter Firmstone wrote: Hi Michał, I didn't take it personally, and don't expect you to take it personally either when I say
Re: [DISCUSS] Moving Apache River to the Attic
Hi Michał, I didn't take it personally, and don't expect you to take it personally either when I say: I'm pretty sure everyone here is aware of River / Jini limitations. Anyway, River has been a lot of fun. All the best for the future everyone, hope to see you around from time to time. Cheers, Peter.

On 16/02/2022 7:53 pm, Michał Kłeczek wrote: Hi Peter, On 16 Feb 2022, at 10:01, Peter Firmstone wrote: Inline below. On 16/02/2022 5:24 pm, Michał Kłeczek wrote: On 16 Feb 2022, at 04:25, Peter Firmstone wrote:

From the CodebaseAccessor service. The CodebaseAccessor proxy (local code) is passed as a parameter, along with a MarshalledInstance of the proxy, by ProxySerializer to ProxyCodebaseSpi. https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/net/jini/export/CodebaseAccessor.java

Ok, so you have introduced a level of indirection to retrieve String codebase annotations. Why not go one step further and instead of: interface CodebaseAccessor { String getClassAnnotation() throws IOException; } have something along the lines of (conceptually): interface Codebase extends RemoteMethodControl { ClassLoader getClassLoader() throws IOException; }

I personally wouldn't take this step, because CodebaseAccessor is a Remote interface and ClassLoader is a class of high privilege, so it presents a security risk.

Not really, as the implementation is constrained by the same security rules as any other code: you can constrain it via policy so that only specific implementations are granted the create ClassLoader permission.

See above: you can move this code from the client to the service and let it provide the implementation of the class resolution algorithm.

I would advise against that; the remote service JVM knows little about the client JVM, and both are likely to have completely different ClassLoader hierarchies.
To be able to communicate, they have to understand each other's class loading mechanism anyway. In particular, the class annotation String syntax and semantics have to be the same for both. [...]

If using OSGi, OSGi will resolve the required dependencies and download them if not already present on the client; OSGi will give preference to a compatible version of the bundle dependencies if already loaded at the client. If using preferred classes, the preferred class list will determine the order of preference: whether classes are loaded from the proxy codebase or the client ClassLoader (the parent loader of the proxy ClassLoader) first.

I also tried this route and it is a dead end because:

* it is not possible to statically (ie. as part of the software package, like an OSGi manifest) provide dependency resolution constraints to be able to exchange arbitrarily constructed object graphs

This is a limitation and compromise I have accepted. JGDMS doesn't attempt to load arbitrarily constructed object graphs; instead it ensures that both endpoints of a Service have the same class resolution view: the same proxy bundle version is used at the server and client.

This breaks once there are multiple parties (services) involved, because it means _all_ of them have to have their software versions synchronised in advance - which in turn makes the exercise moot: if you have to do that, there is no need for mobile code anymore. What is really IMHO needed is a practical way of independent evolution of system components.

At the server, the proxy bundle is depended upon by the service implementation, but nothing at the client depends upon the proxy bundle loaded there; instead the proxy bundle depends on the api loaded by the client. That is why JGDMS discourages marshaling of client subclasses that override service method parameter classes: they cannot be exported as remote objects and are subject to codebase annotation loss and class resolution problems.
There is no subclassing in my example of RemoteEventSpacePublisher.

Sometimes, less is more. I've chosen this compromise in this instance to avoid complexity. I saw little to be gained from the added complexity; it can be worked around by improving service API design and practices.

It cannot. How would you improve the RemoteEventListener or JavaSpace API to avoid these issues?

Instead, the client can export method parameter interface types as remote objects, with independent ClassLoader visibility, that will resolve to common super interface types. If there is a versioning problem at the client, where it uses an API that is incompatible with the service, ServiceDiscoveryManager will recognise the service is the incorrect type and discard it.

The whole point of my example is that this is not an issue of compatibility between client and service interfaces, but is inherent to the existing class loading mechanism. I recognise my own limitation
Re: [DISCUSS] Moving Apache River to the Attic
Hi Michał, Inline below. On 16/02/2022 5:24 pm, Michał Kłeczek wrote: On 16 Feb 2022, at 04:25, Peter Firmstone wrote:

From the CodebaseAccessor service. The CodebaseAccessor proxy (local code) is passed as a parameter, along with a MarshalledInstance of the proxy, by ProxySerializer to ProxyCodebaseSpi. https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/net/jini/export/CodebaseAccessor.java

Ok, so you have introduced a level of indirection to retrieve String codebase annotations. Why not go one step further and instead of: interface CodebaseAccessor { String getClassAnnotation() throws IOException; } have something along the lines of (conceptually): interface Codebase extends RemoteMethodControl { ClassLoader getClassLoader() throws IOException; }

I personally wouldn't take this step, because CodebaseAccessor is a Remote interface and ClassLoader is a class of high privilege, so it presents a security risk. It is also used in IPv6 multicast lookup discovery, where I want to avoid sending any objects over the network. If you want to send Java bytecode, you could do that by converting the byte[] array to a String, then converting it back to byte[]; that way you can leverage the existing API. The problem is that you are still constrained by the limitations I have chosen; that is, to use CodebaseAccessor requires the export of a remote object. I think you wish to be able to send arbitrary graphs, so you will need to append these bytes in the stream, rather than use CodebaseAccessor.

The reasoning is: in principle, a class annotation is a program in an (unspecified) language that is executed by the client to create a ClassLoader instance. You have to make sure all participants share this language and can execute it in the same way.
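The "convert the byte[] to a String and back" idea mentioned above can be sketched in a few lines. The thread does not prescribe an encoding, so Base64 here is purely an assumed choice, and the class and method names are invented for the illustration; the point is only that arbitrary bytes (such as class file bytes) round-trip losslessly through a String-typed annotation API.

```java
import java.util.Arrays;
import java.util.Base64;

public class BytecodeAsStringSketch {
    // Tunnel arbitrary bytes through a String-typed annotation, e.g. the
    // getClassAnnotation() method discussed in the thread. Base64 is an
    // assumed encoding, not something JGDMS specifies.
    static String encode(byte[] bytecode) {
        return Base64.getEncoder().encodeToString(bytecode);
    }

    static byte[] decode(String annotation) {
        return Base64.getDecoder().decode(annotation);
    }

    public static void main(String[] args) {
        // Pretend class file bytes (0xCAFEBABE is the class file magic number).
        byte[] original = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 55};
        String asString = encode(original); // safe wherever a String codebase is expected
        byte[] roundTripped = decode(asString);
        System.out.println(Arrays.equals(original, roundTripped)); // true
    }
}
```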
If the type of the class annotation is String, it only means this is some kind of obscure interpreted scripting language. The insight here is that we _already have_ a language that we _know_ all participants can execute: Java bytecode. There is no reason not to use it.

Ok, so the client has to know in advance how to load the service code?

Yes, the client knows how to load the service code dynamically just prior to proxy unmarshalling; a default ProxyCodebaseSpi is provided for that purpose and it is used by default.

See above: you can move this code from the client to the service and let it provide the implementation of the class resolution algorithm.

I would advise against that; the remote service JVM knows little about the client JVM, and both are likely to have completely different ClassLoader hierarchies. What I try to do is have identical bundles loaded by the ClassLoaders at the ServerEndpoint and (client) Endpoint and let OSGi resolve the classes, so they both have an absolutely identical view of the classes. At the very least, the resolved dependency bundles need to be version-compatible for serialized form. The same goes for other modular frameworks. If you are using Maven, e.g. the Rio Resolver, or OSGi, then a ProxyCodebaseSpi implementation specific to one of these should be used. If you have a mixed environment, then you can use the codebase string to determine which to use.

See above: you can also abstract it away behind an interface.

Does it require to have it installed in advance? If so - how?

Only the JGDMS platform and JERI.

How are service proxy classes loaded then?

ProxyCodebaseSpi::resolve does this by provisioning a ClassLoader, then unmarshalling the proxy into it.
OSGi: https://github.com/pfirmstone/JGDMS/blob/a774a9141e6571f1d7f9771f74b714850d447d3e/JGDMS/jgdms-osgi-proxy-bundle-provider/src/main/java/org/apache/river/osgi/ProxyBundleProvider.java#L131 [...]

I am asking about something different - the smart proxy class depends on _two_ interfaces: RemoteEventListener <—— proxy class ——> JavaSpace RemoteEventListener is its service interface. But it is not known in advance what interfaces the client already has: 1) Just RemoteEventListener 2) Both RemoteEventListener and JavaSpace 3) None How is class resolution implemented in JGDMS so that it works properly in _all_ of the above cases?

This is a responsibility of the underlying platform used for class resolution or modularity. If using OSGi, OSGi will resolve the required dependencies and download them if not already present on the client; OSGi will give preference to a compatible version of the bundle dependencies if already loaded at the client. If using preferred classes, the preferred class list will determine the order of preference: whether classes are loaded from the proxy codebase or the c
Re: [DISCUSS] Moving Apache River to the Attic
Hi Michał, responses inline below. On 15/02/2022 10:22 pm, Michał Kłeczek wrote: On 15 Feb 2022, at 13:05, Peter Firmstone wrote:

How does the client know the code needed to deserialise?

The service provides this information, typically in a service's configuration.

How is this configuration provided to the client and when?

In the service configuration file, with AtomicILFactory, you specify a class whose ClassLoader will perform class resolution for the ServerEndpoint. You should also use a configuration entry for your service codebase string, as well as keystores for certificates. Then implement the CodebaseAccessor service; this is not implemented by the smart proxy, just the service at the server. A separate CodebaseAccessor proxy is created during service export; this is marshalled, prior to the unmarshalling of the service proxy, using only local code, to allow provisioning of the codebase. Here's an example in Reggie: https://github.com/pfirmstone/JGDMS/blob/a774a9141e6571f1d7f9771f74b714850d447d3e/JGDMS/services/reggie/reggie-service/src/main/java/org/apache/river/reggie/RegistrarImpl.java#L659 See below for further info on how the codebase string is obtained from the CodebaseAccessor service proxy (local code). By default this is a space-separated list of URIs, similar to a codebase annotation, but it doesn't have to be.

JERI manages the deserialization of code through a default ProxyCodebaseSpi implementation,

How does the default ProxyCodebaseSpi implementation know where to download code from if there are no annotations?

From the CodebaseAccessor service. The CodebaseAccessor proxy (local code) is passed as a parameter, along with a MarshalledInstance of the proxy, by ProxySerializer to ProxyCodebaseSpi.
https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/net/jini/export/CodebaseAccessor.java https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/org/apache/river/api/io/ProxySerializer.java

Proxies are serialized separately from the current marshalling stream by ProxySerializer. It identifies proxies in the stream and packages them for independent unmarshalling. It does this because it is unlikely that the current stream's Endpoint ClassLoader will be able to resolve the classes; it is treated as a separate concern, or independent entity.

the client applies constraints, to ensure that input validation is used, as well as any other constraints, such as principals or encryption strength. ProxyCodebaseSpi can be customized by the client, so the client may implement ProxyCodebaseSpi if it wants to do something different, e.g. use OSGi or Maven to manage dependency resolution.

Ok, so the client has to know in advance how to load the service code?

Yes, the client knows how to load the service code dynamically just prior to proxy unmarshalling; a default ProxyCodebaseSpi is provided for that purpose and it is used by default. If you are using Maven, e.g. the Rio Resolver, or OSGi, then a ProxyCodebaseSpi implementation specific to one of these should be used. If you have a mixed environment, then you can use the codebase string to determine which to use.

Does it require to have it installed in advance? If so - how?

Only the JGDMS platform and JERI.

How are service proxy classes loaded then?

ProxyCodebaseSpi::resolve does this by provisioning a ClassLoader, then unmarshalling the proxy into it.
OSGi: https://github.com/pfirmstone/JGDMS/blob/a774a9141e6571f1d7f9771f74b714850d447d3e/JGDMS/jgdms-osgi-proxy-bundle-provider/src/main/java/org/apache/river/osgi/ProxyBundleProvider.java#L131 Preferred classes: https://github.com/pfirmstone/JGDMS/blob/a774a9141e6571f1d7f9771f74b714850d447d3e/JGDMS/jgdms-pref-class-loader/src/main/java/net/jini/loader/pref/PreferredProxyCodebaseProvider.java#L106 Basically there are different options for loading proxy classes, depending on the underlying platform used for class resolution, JGDMS is agnostic toward this. Future work includes a Rio Resolver provider. How is the following scenario handled: class JavaSpaceEventPublisher implements RemoteEventListener, Serializable { private final JavaSpace space; //… publish event in JavaSpace implementation } The smart proxy class has dependencies on RemoteEventListener and on JavaSpace. How do you properly resolve classes in this case? Typically the client has the ServiceAPI it needs already installed locally, however this may not always be the case, depending on how you want to resolve the proxy classes and how much you want to share with the client, you can include additional jar files in the annotation, and use preferred.list or you can use Maven or OSGi to resolve dependencies and provision the ClassLoader used for proxy deserialization. I am asking about something different - the smart proxy class depends on _two_ interfaces: RemoteEventListener <—— proxy class ——> Jav
Re: [DISCUSS] Moving Apache River to the Attic
On 15/02/2022 8:29 pm, Michał Kłeczek wrote: Hi Peter,

JGDMS uses a new implementation of a subset of the Java Serialization stream format, with input validation and defenses against malicious data (all connections are first authenticated when using secure endpoints). Codebase annotations are no longer appended in serialization streams; this feature is deprecated, but it can still be enabled.

How does the client know the code needed to deserialise?

The service provides this information, typically in a service's configuration; by default this is a space-separated list of URIs, similar to a codebase annotation, but it doesn't have to be. JERI manages the deserialization of code through a default ProxyCodebaseSpi implementation; the client applies constraints, to ensure that input validation is used, as well as any other constraints, such as principals or encryption strength. ProxyCodebaseSpi can be customized by the client, so the client may implement ProxyCodebaseSpi if it wants to do something different, e.g. use OSGi or Maven to manage dependency resolution.

Does it require to have it installed in advance? If so - how?

Only the JGDMS platform and JERI.

How is the following scenario handled: class JavaSpaceEventPublisher implements RemoteEventListener, Serializable { private final JavaSpace space; //… publish event in JavaSpace implementation } The smart proxy class has dependencies on RemoteEventListener and on JavaSpace. How do you properly resolve classes in this case?

Typically the client has the service API it needs already installed locally; however, this may not always be the case. Depending on how you want to resolve the proxy classes and how much you want to share with the client, you can include additional jar files in the annotation and use a preferred.list, or you can use Maven or OSGi to resolve dependencies and provision the ClassLoader used for proxy deserialization.
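Michał's two-interface scenario can be made concrete with stub interfaces standing in for the Jini API. The stubs below are deliberately simplified (not the real signatures of net.jini.core.event.RemoteEventListener or net.jini.space.JavaSpace), and EventEntry is invented; the point is that the proxy class compiles against both interfaces, so a client holding only RemoteEventListener cannot resolve it without also obtaining the JavaSpace API from somewhere.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Simplified stand-ins for the Jini API, so the example is self-contained.
interface RemoteEvent extends Serializable {}
interface RemoteEventListener extends Remote {
    void notify(RemoteEvent theEvent) throws RemoteException;
}
interface Entry extends Serializable {}
interface JavaSpace extends Remote {
    void write(Entry entry) throws RemoteException;
}

// The smart proxy from the thread: its service interface is
// RemoteEventListener, but its implementation also depends on JavaSpace --
// the dual dependency whose class resolution is under discussion.
final class JavaSpaceEventPublisher implements RemoteEventListener, Serializable {
    private final JavaSpace space;

    JavaSpaceEventPublisher(JavaSpace space) { this.space = space; }

    @Override
    public void notify(RemoteEvent theEvent) throws RemoteException {
        // Republish the received event into the space as an Entry.
        space.write(new EventEntry(theEvent));
    }

    static final class EventEntry implements Entry {
        final RemoteEvent event;
        EventEntry(RemoteEvent event) { this.event = event; }
    }
}

public class DualDependencyDemo {
    public static void main(String[] args) throws RemoteException {
        // In-memory JavaSpace stub that just counts writes.
        final int[] writes = {0};
        JavaSpace space = entry -> writes[0]++;
        RemoteEventListener listener = new JavaSpaceEventPublisher(space);
        listener.notify(new RemoteEvent() {});
        System.out.println(writes[0]); // 1
    }
}
```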
This paper documents the problems with this approach: https://dl.acm.org/doi/pdf/10./1698139

JGDMS provisions a ClassLoader at each Endpoint; the ClassLoader is solely responsible for class resolution, once it has been assigned to the relevant ObjectEndpoint. A provider mechanism allows customization. JGDMS doesn't suffer from codebase annotation loss, nor from class resolution issues. But it did have to give up some functionality: it cannot resolve classes that do not belong to a service proxy or its service API and are not resolvable from the Endpoint ClassLoader, if they are not present on the remote machine. The solution is to always use a service for parameters passed to a service, if they are not part of the service API, e.g. when the client overrides the type of parameter arguments for a service. This means that if the parameter is not an interface, you cannot create a service that implements it and pass it as an argument. That's why it's still possible, but not recommended, to use codebase annotations appended to the serialization stream. The solution is to create a service API that uses only interfaces for parameter arguments. For example, remote events and listeners use this pattern. To prevent unexpected breakages, use either interfaces, or final classes, or both, for service API remote method parameters. Then you won't get into the situation where you need codebase annotations appended in the stream.

I am not sure I follow but...
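The "same ObjectEndpoint, same ClassLoader" rule described above can be sketched as a small cache keyed on endpoint identity. EndpointId and EndpointLoaderCache are invented names for this illustration, not JGDMS classes; the real assignment happens inside JERI after authentication. The sketch only shows the caching discipline: one loader per endpoint identity, isolation between identities.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: proxies unmarshalled from the same endpoint identity share one
// ClassLoader, while different endpoints stay isolated from each other.
public class EndpointLoaderCache {
    // Stand-in for an ObjectEndpoint identity (e.g. host + port + object id);
    // record equality gives us identity-based lookup for free.
    record EndpointId(String host, int port, String objectId) {}

    private final ConcurrentMap<EndpointId, ClassLoader> loaders = new ConcurrentHashMap<>();

    // computeIfAbsent guarantees exactly one loader per endpoint identity,
    // even under concurrent unmarshalling.
    ClassLoader loaderFor(EndpointId id, ClassLoader parent) {
        return loaders.computeIfAbsent(id,
                key -> new ClassLoader("proxy-" + key.objectId(), parent) {});
    }

    public static void main(String[] args) {
        EndpointLoaderCache cache = new EndpointLoaderCache();
        ClassLoader parent = EndpointLoaderCache.class.getClassLoader();
        EndpointId a = new EndpointId("svc.example.org", 4160, "reggie-1");
        EndpointId b = new EndpointId("svc.example.org", 4160, "outrigger-1");
        // Same identity always resolves to the same loader; different
        // identities never share one.
        System.out.println(cache.loaderFor(a, parent) == cache.loaderFor(a, parent)); // true
        System.out.println(cache.loaderFor(a, parent) == cache.loaderFor(b, parent)); // false
    }
}
```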
What I am trying to achieve is exactly the opposite - place as few constraints as possible on service implementors and make the whole thing “magically work” :)

JGDMS only does this with AtomicILFactory by default; you aren't constrained to using that. You can enable codebase annotations in the stream, or override BasicILFactory if you want to do something different, or just use BasicILFactory as is. You can avoid applying this restriction to service parameter arguments, but then you have to accept the compromises that come with that, such as codebase annotation loss, which can spoil the magic. For me it is simpler to use interface types for service method arguments and provide a final implementation class as part of the service API; this allows the client to either use the default service API classes or implement the interface with another service. If you want to use non-final classes for your service method arguments and allow clients to override these classes, then you will need to enable codebase annotations in AtomicILFactory in your configuration. The caveat is that there is no guarantee the service will be able to resolve these classes at the server endpoint, or that codebase annotation loss won't occur; it will try using existing mechanisms, such as RMIClassLoaderSpi, which is probably fine for seasoned Jini vets, but not so user friendly for the newbie, who now has to debug ClassNotFoundExceptions. It's like Java serialization: magic comes with compromises. For example if a service
Re: [DISCUSS] Moving Apache River to the Attic
possible, but not recommended, to use codebase annotations appended to the serialization stream. The solution is to create a service API that uses only interfaces for parameter arguments. For example, remote events and listeners use this pattern. To prevent unexpected breakages, use either interfaces, or final classes, or both, for service API remote method parameters. Then you won't get into the situation where you need codebase annotations appended in the stream.

For example, if a service proxy is serialized within a serialization stream, it will be replaced by a proxy serializer and assigned its own independent stream, with a ClassLoader, independent of the stream in which it was serialized. This is based on the ObjectEndpoint identity, so it will always resolve to the same ClassLoader. Note that ProxyCodebaseSpi can be a provider or OSGi service. Now the proxy serializer is itself a service (bootstrap proxy) that is authenticated when using secure endpoints. You could quite easily add an interface to the proxy serializer to return your object annotation. Note that I use a string, because I also use it in secure multicast discovery protocols (typically IPv6), which don't include objects, for authentication and for provisioning a ClassLoader for a lookup service proxy prior to any Object deserialization. https://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xhtml

Summing up: to simplify JGDMS and solve some very difficult issues, it had to give up:

1. Support for circular references in serialized object graphs was dropped.
2. Extensible classes in service API method parameters are not advised.
3. ProxyTrust - deprecated and replaced with secure authentication and httpmd (SHA-256) or signer certificates using ProxySerializer.
4. Untrusted machines are not allowed in a djinn; some level of trust is required, with authentication and authorisation constraints.
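The design rule repeated above (interfaces or final classes for service API remote method parameters) can be sketched with a hypothetical service API. Every name below is invented for the illustration: the parameter type is an interface, and the default implementation shipped with the API is final, so a client can never pass an overriding subclass that would need a codebase annotation to resolve at the server.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical service API following the rule from the thread.
interface Reading extends Serializable {
    double celsius();
}

// Final default implementation shipped in the service API jar: clients may
// use it as-is, or implement Reading with another (exported) service, but
// they cannot subclass it and smuggle unknown classes into the stream.
final class SimpleReading implements Reading {
    private static final long serialVersionUID = 1L;
    private final double celsius;
    SimpleReading(double celsius) { this.celsius = celsius; }
    public double celsius() { return celsius; }
}

interface ThermometerLog extends Remote {
    // The parameter is the interface type; the server resolves Reading from
    // its own service API ClassLoader, no stream annotation required.
    void record(Reading reading) throws RemoteException;
}

public class ServiceApiDesignSketch {
    public static void main(String[] args) throws RemoteException {
        final double[] last = {Double.NaN};
        // In-memory stand-in for an exported remote service.
        ThermometerLog log = r -> last[0] = r.celsius();
        log.record(new SimpleReading(21.5));
        System.out.println(last[0]); // 21.5
    }
}
```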
What enabled solving these issues was the River community's (and Jini users') ability to identify problems; although they didn't agree on solutions, they identified the problems, and that's the most important step in finding a solution. I'd like to say that all of the problems with Jini 2.1 on the internet were solved, but there is always something left to do, such as supporting a marshaling layer within JERI to allow extensible support for different serialization protocols, or re-implementing access controls following JEP 411. BasicILFactory is still available, should you wish to adopt a more conventional approach, using Java Serialization. Regards, Peter.

/*
 * Copyright 2018 The Apache Software Foundation.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.river.api.io;

import net.jini.loader.ProxyCodebaseSpi;
import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.ObjectInput;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamField;
import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.security.AccessController;
import java.security.Guard;
import java.security.PrivilegedAction;
import java.util.Collection;
import java.util.Iterator;
import java.util.logging.Level;
import java.util.logging.Logger;
import net.jini.core.constraint.RemoteMethodControl;
import net.jini.export.ProxyAccessor;
import net.jini.export.CodebaseAccessor;
import net.jini.export.DynamicProxyCodebaseAccessor;
import net.jini.io.MarshalInputStream;
import net.jini.io.MarshalledInstance;
import org.apache.river.api.io.AtomicSerial.GetArg;
import org.apache.river.api.io.AtomicSerial.PutArg;
import org.apache.river.api.io.AtomicSerial.ReadObject;
import org.apache.river.api.io.AtomicSerial.SerialForm;
import org.apache.river.resource.Service;

/**
 *
 * @author peter
 */
@AtomicSerial
class ProxySerializer implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final String BOOTSTRAP_PROXY = "bootstrapProxy";
    private static final String SERVICE_PROXY = "serviceProxy";

    /**
     * By defining serial persistent fields, we don't need to use transient fields.
     * All fields can be final and this object becomes immutable.
     */
    private static final ObjectStreamField[] serialPersistentFields = serialForm();

    public static SerialForm[] serialForm(){
        return new
Re: [DISCUSS] Moving Apache River to the Attic
Hi Bishnu, Which version of River are you using? River 3.0 broke compatibility with 2.2, with the com.sun.jini to org.apache.river namespace change. Downstream projects I have looked at are still using 2.2, e.g. Rio. The River SVN codebase hasn't received much love for a very long time; none of the existing releases of River are secure if their connections are exposed to the internet, and they are vulnerable to Java deserialization attacks. All supported TLS ciphers are out of date, there's no support for stateless TLS, etc. Basically, River 2.2 and 3.0 need to stay behind the firewall on trusted networks in their current form. These are the strongest ciphers River supports (ConfidentialityStrength STRONG); all are known to be vulnerable to attack: 3DES_EDE_CBC, AES_128_CBC, AES_256_CBC, IDEA_CBC, RC4_128.

River downloads code and deserializes it prior to service authentication, because it was assumed Java Serialization and the Java sandbox were secure (based on the old applet model) during Jini 2.0 development. River also grants RuntimePermission createClassLoader to downloaded proxy code prior to authentication, which can quite easily be used to perform privilege escalation. It also allows an attacker to steal a service proxy's identity, because identity was based only on codebase annotations and the caller's ClassLoader. At least River provided httpmd URLs for codebase annotations to avoid code tampering, but that doesn't prevent an attacker providing a malicious serialization stream to that code, and because the attacker can use it to create a ClassLoader (which the stolen identity has permission to do), they can inject their own code anyway.

The last plan we had agreed on was to make River modular and migrate to Git; then we could integrate external fixes on a module-by-module basis that would make River secure for the internet, but this never eventuated.
I still maintain my own version of River, secured for use over untrusted networks, on GitHub, where it gets some use. I would donate the code should River want it, but it would be a massive undertaking for anyone wishing to reinvigorate River; I certainly wouldn't object to someone trying. We tried really hard to make River work for more than a decade. Issues I needed to address for the internet: https://github.com/pfirmstone/JGDMS/issues?q=is%3Aissue+is%3Aclosed https://github.com/pfirmstone/JGDMS/issues/125 https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-jeri/src/main/java/net/jini/jeri/ssl/ConfidentialityStrength.java According to GitHub traffic stats, approximately 13 people per month clone it, so it is only a small community. There is also an OpenJDK bug relating to its use of TLS on Java 17: https://bugs.openjdk.java.net/browse/JDK-8272340 The codebase is modular, all modules are also OSGi bundles (OSGi is not a requirement), and it has a new object marshaling implementation that is backward compatible with Java Serialization, doesn't use codebase annotations, and solves class resolution issues for OSGi deserialization. A new InvocationLayerFactory has been provided that uses the new marshaling implementation; more information regarding ClassLoader resolution and codebases can be found here: https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-jeri/src/main/java/net/jini/jeri/AtomicILFactory.java It includes support for IPv6 address normalization and IPv6 discovery, uses an announcement protocol for global lookup service discovery, and supports https unicast lookup discovery. https://github.com/pfirmstone/JGDMS/issues/81 It also has a compatibility layer, based on Rio's use of Jini 2.x, to address breakages caused by River 3.0 namespace changes, so it has better backward compatibility with River 2.2 than River 3.0 does.
In any case, people who are still using River are welcome to discuss their needs and ideas here: https://github.com/pfirmstone/JGDMS/discussions Cheers, Peter. On 10/02/2022 10:48 am, Bishnu Gautam wrote: Hi River Team, I am an occasional user/developer of Jini and River. Here are my thoughts on this project. Since last summer I have been using River to teach my seminar students the concepts of distributed computing. A couple of students are already working on integrating River with other technologies to cross NAT and other limitations of private networks. For example, you can use the technique of UDP hole punching, integrated with a STUN server, to address the problem Peter mentioned. We already have a clear solution for the problem River is facing, which we are working on now. I am even thinking of introducing River from next year as a formal course for my seminar students (as a sub-project), seeing the potential of this project. I think it is too early to decide to move it to the Attic, as there are some users such as me and my students. Instead of moving this project to the corner, why not some of us
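For readers unfamiliar with the technique Bishnu mentions: UDP hole punching typically begins with a STUN Binding Request so a client behind NAT can learn its public address and port. A minimal, header-only sketch following RFC 5389 (illustrative only, not part of River or Bishnu's implementation):

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;

// Builds the 20-byte STUN Binding Request header (RFC 5389), the first
// step of NAT traversal via UDP hole punching.
public class StunRequest {

    public static byte[] bindingRequest() {
        ByteBuffer buf = ByteBuffer.allocate(20); // big-endian by default
        buf.putShort((short) 0x0001);  // message type: Binding Request
        buf.putShort((short) 0);       // message length: no attributes
        buf.putInt(0x2112A442);        // fixed magic cookie (RFC 5389)
        byte[] txId = new byte[12];    // random 96-bit transaction ID
        new SecureRandom().nextBytes(txId);
        buf.put(txId);
        return buf.array();
    }
}
```

In practice this datagram would be sent to a public STUN server with a DatagramSocket, and the XOR-MAPPED-ADDRESS attribute of the response parsed to discover the reflexive (public) address.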
Re: [DISCUSS] Moving Apache River to the Attic
Even in the Attic, River will remain a valuable resource of information. When OpenJDK published JEP 411 in April 2021, they believed what we were already doing with River was impossible, which succinctly sums up a number of River's features. https://mail.openjdk.java.net/pipermail/security-dev/2021-May/thread.html

> Let's be clear: Java will no longer be able to finely control access to sensitive data with the removal of SecurityManager. I'm sure it will be a great bonus for OpenJDK devs not to have to think about, but it will impact some developers significantly, who would like to do so with the least suffering possible.

> I wouldn't say Java (or anything else, for that matter) is "able" to do it now, except in the sense that people (scientists) are able (in a billion-dollar particle accelerator) to transmute lead into gold (a few atoms). We've had twenty five years to convince the world this could work, the world isn't buying, and our job isn't to sell ideas but to serve millions of developers by giving them what we believe they need now, not what we wished they wanted.

OpenJDK's arguments around SecurityManager's poor scalability, poor performance, etc. only applied to the unloved provider code shipped with OpenJDK (hardly changed since Java 1.4); they were proven wrong on every count except development cost. Maintaining security had a cost, they were right about that, and OpenJDK no longer wished to bear that cost for a small uptake. For example, we are not vulnerable to the recent Log4j vulnerability, even when using the logger, provided SecurityManager is enabled with tool-generated principle-of-least-privilege policy files. Parts of OpenJDK didn't make use of SecurityManager, e.g. data parsing (deserialization), and OpenJDK's trusted codebase became too large, when it should have been restricted to the Java core language features (too much Java platform code has AllPermission).
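As an illustration of the kind of tool-generated least-privilege policy referred to above, a Java policy fragment restricting a logging library might look like the following. The codeBase path, property names and file paths are invented for the example; a real tool-generated policy would enumerate the exact permissions observed at runtime:

```
// example.policy -- hypothetical least-privilege grant for a logging jar.
// Code from this jar may only touch its own log directory and its own
// configuration properties; it gets no network or reflection permissions.
grant codeBase "file:${app.home}/lib/logging.jar" {
    permission java.io.FilePermission "${app.home}${/}logs${/}-", "read,write";
    permission java.util.PropertyPermission "log.*", "read";
};
```

With a policy like this in force, a logger tricked into loading attacker-controlled classes still cannot open sockets or spawn class loaders, which is why the Log4j-style exploit chain fails under SecurityManager.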
While OpenJDK might have learned from River, they chose not to; perhaps things might have been different had some of the original team remained. Perhaps Jini's original vision might be commercially viable today had Oracle reconsidered it, and the role Java was intended for. The challenges River faced as a project:

* Challenges for new developers:
  o The large monolithic build: new developers struggled to understand how River worked under the hood; they couldn't see the forest for the trees. River / Jini also had many layers of indirection, a result of its well-designed architecture.
  o Classdepandjar: a unique dependency-based build that, while innovative in its time, was confusing to new developers, and modern modular frameworks provided better solutions.
* Technical challenges for users:
  o Codebase annotation loss and ClassLoader resolution problems, relating to flaws in the design of Java Serialization that River / Jini was forced to plaster over.
  o IPv4 network address translation had relegated River / Jini to private networks, limiting its appeal in the age of the internet.
  o TLS, HTTPS & Kerberos transport layers were configurable replacements, but event notifications stopped working. Events are a pretty important feature.
  o How to integrate with modular frameworks, e.g. Maven or OSGi.
  o These historical technical challenges were solved outside of the project.
* Disagreements on technical solutions, no doubt due to different understandings, experiences and complexity (OSGi integration caused a lot of contention).
* Many developers maintained their own forks of Jini / River to solve problems when they needed to make something work, but the changes were never standardized or agreed upon, and it was difficult to merge them back; even when in agreement, it was a big undertaking due to River's use of SVN and its large monolithic build.

River's complexity came from making the impossible, possible.
When your process involves turning lead into gold in your billion dollar particle accelerator, you have to accept there will be some difficulties and disagreements among the boffins. Maybe cat herding might have been easier. But hey, it was fun. Cheers, Peter. On 10/02/2022 1:34 am, Dan Rollo wrote: I agree it is time. Well said Jeremy! Thanks for sharing. I have fond memories of Jini conferences in Chicago and Brussels (even if all I remember is the Delirium Cafe). Dan Rollo On Feb 9, 2022, at 10:03 AM, Jeremy R. Easton-Marks wrote: I, sadly, agree that it is time to move this project to the Attic. While I hoped to work on this as a side project, I have not been able to carve out time for it. While I do think this project has a lot of potential, without some type of sponsorship in time and resources I don't see it moving forward. Thank you Roy for stepping up as the chair as well as the rest of the River team for contributing to this project ov
Re: Patricia Shanahan
I'm so sorry to hear Patricia lost her battle with cancer. She had a stabilizing influence on all of us: especially when there was strong disagreement among developers, she would find a way to rationalize the discussion. She would thoroughly investigate and document problems with the code, like TaskManager. She was an enabling influence who will no doubt be sorely missed by all who knew her; we were very lucky to have such a kind, considerate and knowledgeable developer on the team. -- Regards, Peter On 20/07/2021 2:56 am, Roy T. Fielding wrote: We received the sad news last week that our friend and PMC member, Patricia Shanahan, has passed away peacefully after a long battle with cancer. I have put together a memorial page for her at https://www.apache.org/memorials/patricia_shanahan.html and will eventually update the River site as well. Please let me know if you would like to add anything to that page. Roy
Time to retire from Apache
Hello River folk, Recently the Apache board cancelled a PMC member / founder of another Apache project for posts on Twitter; no doubt this will become public knowledge in the near future. I am personally in favor of free speech, regardless of whether it's offensive or whether I agree or disagree with it, so I have voluntarily cancelled my membership. No doubt this will be of little significance to an organization as large as Apache, but I wouldn't feel comfortable turning a blind eye. Apache is free to make decisions about who it wants participating, and I am not about to argue with that. The person in question appeared to be making a political statement against cancel culture by publishing deliberately offensive speech, and has been cancelled by Apache as a result; I guess that shouldn't be a surprise. We used to have some heated discussions on River's mailing lists; at no time would I have considered cancelling someone with whom I disagreed. I would debate till the cows came home, but never cancel another contributor's voice. I appreciate the contributions and participation of everyone who is part of the River community; however, I think the project is likely long overdue for a new chair to lead, and it is time for people to discuss and nominate someone they think is most suited to take care of the River community. Today I retire from my role as River PMC Chair and from volunteer development work for Apache. I wish you all well and hope to see you in the future. -- Regards, Peter Firmstone.
Re: Git conversion (was Re: Project Health / Interest)
Hi Dennis, Did you want to commit your Gradle build changes to trunk? Cheers, Peter. On 16/02/2021 7:19 am, Peter Firmstone wrote: Hi Dennis, Yes & No, this is a modular Gradle build of trunk; looking forward to some contribution from you on that front. We're currently trying to figure out how to move other components of the project, such as the site, ldj tests and other contributions. Perhaps we should move http://svn.apache.org/viewvc/river/jtsk/ now, so people who want to focus on trunk can proceed (e.g. the gradle build), and then move the remaining parts of http://svn.apache.org/viewvc/river/ later?

artwork/ <http://svn.apache.org/viewvc/river/artwork/> r1069292, 10 years, gmcdonald: River now a TLP
attic/ <http://svn.apache.org/viewvc/river/attic/> r1069292, 10 years, gmcdonald: River now a TLP
extra/ <http://svn.apache.org/viewvc/river/extra/> r1069292, 10 years, gmcdonald: River now a TLP
jtsk/ <http://svn.apache.org/viewvc/river/jtsk/> r1884639, 8 weeks, peter_firmstone: replace trunk with modules branch
ldj-tests/ <http://svn.apache.org/viewvc/river/ldj-tests/> r1234443, 9 years, peter_firmstone: River-32 The Jini spec tests inside the qa suite used to be called tck, so…
permission_delegates/ <http://svn.apache.org/viewvc/river/permission_delegates/> r1394888, 8 years, peter_firmstone: upload delegate implementations
river-examples/ <http://svn.apache.org/viewvc/river/river-examples/> r1694732, 5 years, gtrasuk: [maven-release-plugin] prepare for next development iteration
river-rt-tools/ <http://svn.apache.org/viewvc/river/river-rt-tools/> r1645730, 6 years, gtrasuk: Start package now references the correct message bundle, and can start the brows…
site/ <http://svn.apache.org/viewvc/river/site/> r1827149, 2 years, zkuti: - alerts for helping hands - long due changes in people added - success stories …
doap_river.rdf <http://svn.apache.org/viewvc/river/doap_river.rdf?view=log>

Regards, Peter. On 16/02/2021 6:19 am, Dennis Reedy wrote: I did this conversion a while ago, does this help? https://github.com/dreedyman/apache-river Regards Dennis On Mon, Feb 15, 2021 at 2:44 PM Dan Rollo <mailto:danro...@gmail.com> wrote: Hi Peter, Silly questions to follow. I found the empty repo at: river-ldj-tests.git (https://git.apache.org/repos/asf/river-ldj-tests.git) As a sanity check: Is this the one to migrate from?: http://svn.apache.org/repos/asf/river/jtsk/trunk While I agree refactoring of the project structure into separate, stand-alone git repos would be an improvement, I'm wondering if a safer first step would be to move the svn repo to git as-is? That way, we start from a working state in git, and preserve all the svn history. From there, we can start to "break things" (literally and figuratively) into smaller repos. The underlying assumption is just about everything will be easier to do (merging, etc.) with a git repo. Either way, I'll start looking for lines along which I could break things.
Dan > On Feb 9, 2021, at 3:42 AM, Peter Firmstone <mailto:peter.firmst...@zeus.net.au> wrote: > > Thanks Dan, > > https://infra.apache.org/svn-to-git-migration.html > > https://gitbox.apache.org/setup/newrepo.html > > http://svn.apache.org/viewvc/river/ > > https://gitbox.apache.org/repos/asf#river > > I just created a git repository for the "Apache River Jini(TM) Technology Lookup, Discovery, and Join Compatibility Kit"; I figure it might be easier to start with something that's had relatively few commits, as an experiment. > > But basicall
Re: Git conversion (was Re: Project Health / Interest)
Hi Dennis, Yes & No, this is a modular Gradle build of trunk; looking forward to some contribution from you on that front. We're currently trying to figure out how to move other components of the project, such as the site, ldj tests and other contributions. Perhaps we should move http://svn.apache.org/viewvc/river/jtsk/ now, so people who want to focus on trunk can proceed (e.g. the gradle build), and then move the remaining parts of http://svn.apache.org/viewvc/river/ later?

artwork/ <http://svn.apache.org/viewvc/river/artwork/> r1069292, 10 years, gmcdonald: River now a TLP
attic/ <http://svn.apache.org/viewvc/river/attic/> r1069292, 10 years, gmcdonald: River now a TLP
extra/ <http://svn.apache.org/viewvc/river/extra/> r1069292, 10 years, gmcdonald: River now a TLP
jtsk/ <http://svn.apache.org/viewvc/river/jtsk/> r1884639, 8 weeks, peter_firmstone: replace trunk with modules branch
ldj-tests/ <http://svn.apache.org/viewvc/river/ldj-tests/> r1234443, 9 years, peter_firmstone: River-32 The Jini spec tests inside the qa suite used to be called tck, so…
permission_delegates/ <http://svn.apache.org/viewvc/river/permission_delegates/> r1394888, 8 years, peter_firmstone: upload delegate implementations
river-examples/ <http://svn.apache.org/viewvc/river/river-examples/> r1694732, 5 years, gtrasuk: [maven-release-plugin] prepare for next development iteration
river-rt-tools/ <http://svn.apache.org/viewvc/river/river-rt-tools/> r1645730, 6 years, gtrasuk: Start package now references the correct message bundle, and can start the brows…
site/ <http://svn.apache.org/viewvc/river/site/> r1827149, 2 years, zkuti: - alerts for helping hands - long due changes in people added - success stories …
doap_river.rdf <http://svn.apache.org/viewvc/river/doap_river.rdf?view=log>

Regards, Peter. On 16/02/2021 6:19 am, Dennis Reedy wrote: I did this conversion a while ago, does this help? https://github.com/dreedyman/apache-river Regards Dennis On Mon, Feb 15, 2021 at 2:44 PM Dan Rollo <mailto:danro...@gmail.com> wrote: Hi Peter, Silly questions to follow. I found the empty repo at: river-ldj-tests.git (https://git.apache.org/repos/asf/river-ldj-tests.git) As a sanity check: Is this the one to migrate from?: http://svn.apache.org/repos/asf/river/jtsk/trunk While I agree refactoring of the project structure into separate, stand-alone git repos would be an improvement, I'm wondering if a safer first step would be to move the svn repo to git as-is? That way, we start from a working state in git, and preserve all the svn history. From there, we can start to "break things" (literally and figuratively) into smaller repos. The underlying assumption is just about everything will be easier to do (merging, etc.) with a git repo. Either way, I'll start looking for lines along which I could break things.
Dan > On Feb 9, 2021, at 3:42 AM, Peter Firmstone <mailto:peter.firmst...@zeus.net.au> wrote: > > Thanks Dan, > > https://infra.apache.org/svn-to-git-migration.html > > https://gitbox.apache.org/setup/newrepo.html > > http://svn.apache.org/viewvc/river/ > > https://gitbox.apache.org/repos/asf#river > > I just created a git repository for the "Apache River Jini(TM) Technology Lookup, Discovery, and Join Compatibility Kit"; I figure it might be easier to start with something that's had relatively few commits, as an experiment. > > But basically everything we have on svn needs to be broken up into separate projects, typical of what people are used to seeing on github. > > The QA test suite is
Re: Git conversion (was Re: Project Health / Interest)
Hi Dan, This one: http://svn.apache.org/viewvc/river/ldj-tests/ I considered a single svn-to-git move; it would certainly simplify the process on our part, but the cost is that people looking at the repository on GitHub will need to dig through the directory structure to find the development code, or to build the project. Having said that, I would say that doers decide; we're short on resources, and we could do a single move now and break it up later. Cheers, Peter. On 16/02/2021 5:44 am, Dan Rollo wrote: Hi Peter, Silly questions to follow. I found the empty repo at: river-ldj-tests.git (https://git.apache.org/repos/asf/river-ldj-tests.git) As a sanity check: Is this the one to migrate from?: http://svn.apache.org/repos/asf/river/jtsk/trunk While I agree refactoring of the project structure into separate, stand-alone git repos would be an improvement, I'm wondering if a safer first step would be to move the svn repo to git as-is? That way, we start from a working state in git, and preserve all the svn history. From there, we can start to "break things" (literally and figuratively) into smaller repos. The underlying assumption is just about everything will be easier to do (merging, etc.) with a git repo. Either way, I'll start looking for lines along which I could break things. Dan On Feb 9, 2021, at 3:42 AM, Peter Firmstone <mailto:peter.firmst...@zeus.net.au> wrote: Thanks Dan, https://infra.apache.org/svn-to-git-migration.html https://gitbox.apache.org/setup/newrepo.html http://svn.apache.org/viewvc/river/ https://gitbox.apache.org/repos/asf#river I just created a git repository for the "Apache River Jini(TM) Technology Lookup, Discovery, and Join Compatibility Kit"; I figure it might be easier to start with something that's had relatively few commits, as an experiment.
But basically everything we have on svn needs to be broken up into separate projects, typical of what people are used to seeing on GitHub. The QA test suite is an ant build project; it's currently part of trunk, but I was thinking of separating it out into its own build. On 9/02/2021 12:31 pm, Dan Rollo wrote: Hi Peter, I apologize for not being more help. I haven't had much time of late. Is there a thread I could pull regarding the Git migration? Where to start? The plan you itemize makes sense to me. Dan Rollo On Feb 8, 2021, at 8:35 PM, Peter Firmstone wrote: Hello River folk, There's an upcoming board report due shortly; I wanted to gauge people's interest in the project. We've got two pending tasks: 1. SVN to Git Migration 2. Website migration. Following the Git migration, is the plan to continue with the modular build, using Gradle? Lately we don't have a lot of participation; I was hoping that we would have some more buy-in, especially with the Git migration. I don't want to be making decisions alone. Are there people on this list who have time to assist, are willing to assist, or intend to do so in future when they have time? -- Regards, Peter Firmstone -- Regards, Peter Firmstone
Public Serialization API to support other serialization frameworks.
Hello River Folk, This is a concept test class, for testing a Public Serialization API, for supporting alternative serialization frameworks. Note this doesn't implement Serializable, for clarity. -- Regards, Peter

/*
 * Copyright 2021 The Apache Software Foundation.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package tests.support;

import java.io.IOException;
import java.io.ObjectStreamField;
import java.util.Arrays;
import java.util.Objects;
import org.apache.river.api.io.AtomicSerial;
import org.apache.river.api.io.AtomicSerial.GetArg;
import org.apache.river.api.io.AtomicSerial.PutArg;

/**
 *
 * @author peter
 */
@AtomicSerial
public class SerializableTestObject {

    /**
     * Names of serial fields. Note how these names are unrelated to field
     * names. If we refactor field names, and rename them, the Strings
     * representing serial fields don't change and the serial form of the
     * class is not broken.
     */
    private static final String TEST_STR = "testString";
    private static final String TEST_ARRY = "testArray";
    private static final String TEST_INT = "testInt";

    /**
     * serialPersistentFields.
     *
     * This method will be used by serialization frameworks to get names
     * and types of serial fields. These will ensure type checking occurs
     * during de-serialization; fields will be de-serialized and created
     * prior to the instantiation of the parent object.
     *
     * @return array of ObjectStreamFields
     */
    public static ObjectStreamField[] serialPersistentFields() {
        return new ObjectStreamField[]{
            new ObjectStreamField(TEST_STR, String.class),
            new ObjectStreamField(TEST_ARRY, long[].class),
            new ObjectStreamField(TEST_INT, int.class)
        };
    }

    public static void serialize(PutArg args, SerializableTestObject obj) throws IOException {
        args.put(TEST_STR, obj.str);
        args.put(TEST_ARRY, obj.longs);
        args.put(TEST_INT, obj.integer);
        args.writeFields();
    }

    /**
     * Invariant validation.
     * @param args
     * @return
     * @throws IOException
     * @throws ClassNotFoundException
     */
    private static GetArg check(GetArg args) throws IOException, ClassNotFoundException {
        args.get(TEST_STR, null, String.class);  // check String class type.
        args.get(TEST_ARRY, null, long[].class); // check array class type.
        // We don't need to check the int class type, but if there are other
        // invariants we check them here.
        return args;
    }

    /**
     * AtomicSerial constructor.
     * @param args
     * @throws IOException
     * @throws ClassNotFoundException
     */
    public SerializableTestObject(GetArg args) throws IOException, ClassNotFoundException {
        this(check(args).get(TEST_STR, "default", String.class),
             args.get(TEST_ARRY, new long[0], long[].class),
             args.get(TEST_INT, 0)
        );
    }

    private final String str;
    private final long[] longs;
    private final int integer;

    public SerializableTestObject(String str, long[] longs, int integer) {
        this.str = str;
        this.longs = longs.clone();
        this.integer = integer;
    }

    @Override
    public int hashCode() {
        int hash = 5;
        hash = 67 * hash + Objects.hashCode(this.str);
        hash = 67 * hash + Arrays.hashCode(this.longs);
        hash = 67 * hash + this.integer;
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final SerializableTestObject other = (SerializableTestObject) obj;
        if (this.integer != other.integer) {
            return false;
        }
        if (!Objects.equals(this.str, other.str)) {
            return false;
        }
        return Arrays.equals(this.longs, other.longs);
    }
}
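For comparison, standard Java serialization already offers the same decoupling of serial field names from instance field names via ObjectOutputStream.PutField / ObjectInputStream.GetField, which the PutArg/GetArg API above parallels. A self-contained, runnable sketch (plain JDK only, no River classes):

```java
import java.io.*;

public class SerialFieldNamesDemo {

    // The serial form uses the logical names "x"/"y", decoupled from the
    // actual field names, so fields can be renamed without breaking the form.
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        private static final ObjectStreamField[] serialPersistentFields = {
            new ObjectStreamField("x", int.class),
            new ObjectStreamField("y", int.class),
        };
        int horizontal; // refactor-friendly: not tied to the serial form
        int vertical;

        Point(int h, int v) { horizontal = h; vertical = v; }

        private void writeObject(ObjectOutputStream out) throws IOException {
            ObjectOutputStream.PutField f = out.putFields();
            f.put("x", horizontal);
            f.put("y", vertical);
            out.writeFields();
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            ObjectInputStream.GetField f = in.readFields();
            horizontal = f.get("x", 0);
            vertical = f.get("y", 0);
        }
    }

    static Point roundTrip(Point p) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(p);
        }
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (Point) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Point copy = roundTrip(new Point(3, 4));
        System.out.println(copy.horizontal + "," + copy.vertical); // prints 3,4
    }
}
```

The key difference in the AtomicSerial approach above is that fields can remain final and the object immutable, because validated values are passed into a constructor rather than written into an already-instantiated object.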
Re: Project Health / Interest
Thanks Dan, https://infra.apache.org/svn-to-git-migration.html https://gitbox.apache.org/setup/newrepo.html http://svn.apache.org/viewvc/river/ https://gitbox.apache.org/repos/asf#river I just created a git repository for the "Apache River Jini(TM) Technology Lookup, Discovery, and Join Compatibility Kit"; I figure it might be easier to start with something that's had relatively few commits, as an experiment. But basically everything we have on svn needs to be broken up into separate projects, typical of what people are used to seeing on GitHub. The QA test suite is an ant build project; it's currently part of trunk, but I was thinking of separating it out into its own build. On 9/02/2021 12:31 pm, Dan Rollo wrote: Hi Peter, I apologize for not being more help. I haven't had much time of late. Is there a thread I could pull regarding the Git migration? Where to start? The plan you itemize makes sense to me. Dan Rollo On Feb 8, 2021, at 8:35 PM, Peter Firmstone wrote: Hello River folk, There's an upcoming board report due shortly; I wanted to gauge people's interest in the project. We've got two pending tasks: 1. SVN to Git Migration 2. Website migration. Following the Git migration, is the plan to continue with the modular build, using Gradle? Lately we don't have a lot of participation; I was hoping that we would have some more buy-in, especially with the Git migration. I don't want to be making decisions alone. Are there people on this list who have time to assist, are willing to assist, or intend to do so in future when they have time? -- Regards, Peter Firmstone -- Regards, Peter Firmstone
Project Health / Interest
Hello River folk, There's an upcoming board report due shortly; I wanted to gauge people's interest in the project. We've got two pending tasks: 1. SVN to Git Migration 2. Website migration. Following the Git migration, is the plan to continue with the modular build, using Gradle? Lately we don't have a lot of participation; I was hoping that we would have some more buy-in, especially with the Git migration. I don't want to be making decisions alone. Are there people on this list who have time to assist, are willing to assist, or intend to do so in future when they have time? -- Regards, Peter Firmstone
Any volunteers to assist with SVN to Git migration
Anyone have some cycles to help out with the SVN to Git migration? -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
Re: Your project website
Thanks Andrew, Which option is the fastest path / least work required for the transition? I don't have time to look into each option, so any such advice will be much appreciated. Regards, Peter. On 4/02/2021 10:57 pm, Andrew Wetmore wrote: Hi: We were hoping to have all projects migrated by the end of last year. There are still a number using the Apache CMS, and we have not set a hard deadline for shutting it down. However, the system is becoming less reliable, so moving sooner rather than later is probably a good idea. Andrew On Wed, Feb 3, 2021 at 8:08 PM Peter Firmstone wrote: Thanks Andrew, What's the timeframe for migration? Regards, Peter. On 1/02/2021 11:55 pm, Andrew Wetmore wrote: Hi, and happy New Year! I know you folks are busy with your svn-to-git migration, but I wanted to bring up again the need to migrate your website off the Apache CMS. Please let me know what your plans are, and whether you need help from Infra. Andrew Wetmore On 2020/08/10 10:39:31, Zsolt Kúti wrote: Hi Andrew, As no reaction has arrived until now, I, as probably the last one who dealt with our website, take the liberty of answering. Yes, our project uses the Apache CMS [see here: https://river.apache.org/user-doc/website.html]. I hope somebody is going to step up to be a contact and/or to transfer our website content to one of the alternatives. If not, I'll be around sometime in September, after I'm back from my holidays, and may do what is needed for it. Thanks for contacting us and offering help! Zsolt On Fri, Aug 7, 2020 at 2:51 PM Andrew Wetmore wrote: Hi: I am part of the Infrastructure team, and am writing to ask whether your project is still using the Apache CMS for your project website. As you know, the CMS is reaching end-of-life, and we need projects to move their websites onto a different option within the next few weeks. There are several alternatives available, including those listed on this page [1] on managing project websites.
Infra is assembling a Wiki page [2] on migrating a website from the CMS, and is looking forward to helping projects with this transition. Please let me know whether your site is still on the Apache CMS and, if so, who will be the project point-of-contact with Infra for the migration. Thank you! [1] https://infra.apache.org/project-site.html [2] https://cwiki.apache.org/confluence/display/INFRA/Migrate+your+project+website+from+the+Apache+CMS -- Andrew Wetmore http://cottage14.blogspot.com/ -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
Re: Your project website
Thanks Andrew, What's the timeframe for migration? Regards, Peter. On 1/02/2021 11:55 pm, Andrew Wetmore wrote: Hi, and happy New Year! I know you folks are busy with your svn-to-git migration, but I wanted to bring up again the need to migrate your website off the Apache CMS. Please let me know what your plans are, and whether you need help from Infra. Andrew Wetmore On 2020/08/10 10:39:31, Zsolt Kúti wrote: Hi Andrew, As no reaction has arrived until now, I, as probably the last one who dealt with our website, take the liberty of answering. Yes, our project uses the Apache CMS [see here: https://river.apache.org/user-doc/website.html]. I hope somebody is going to step up to be a contact and/or to transfer our website content to one of the alternatives. If not, I'll be around sometime in September, after I'm back from my holidays, and may do what is needed for it. Thanks for contacting us and offering help! Zsolt On Fri, Aug 7, 2020 at 2:51 PM Andrew Wetmore wrote: Hi: I am part of the Infrastructure team, and am writing to ask whether your project is still using the Apache CMS for your project website. As you know, the CMS is reaching end-of-life, and we need projects to move their websites onto a different option within the next few weeks. There are several alternatives available, including those listed on this page [1] on managing project websites. Infra is assembling a Wiki page [2] on migrating a website from the CMS, and is looking forward to helping projects with this transition. Please let me know whether your site is still on the Apache CMS and, if so, who will be the project point-of-contact with Infra for the migration. Thank you! [1] https://infra.apache.org/project-site.html [2] https://cwiki.apache.org/confluence/display/INFRA/Migrate+your+project+website+from+the+Apache+CMS -- Andrew Wetmore http://cottage14.blogspot.com/ -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
Re: Thinking about Extensible Serialization support.
You're welcome, thanks for asking :) I'm proposing an API that allows support and implementation of any other serialization protocol (or combined serialization transport layer). All existing River Serializable classes would implement it, to allow a standard method of access to internal object state for implementations of serialization, and for re-creation of objects during deserialization. Personally I've found the @AtomicSerial API suitable for defensively recreating objects during deserialization, but no API currently exists for access to internal state that would make it possible to decorate existing serialization implementations, so they are pluggable into River as a configuration concern. I've also been thinking about how to allow these serialization wrappers and implementations to be part of proxy code, that is, for protocols whose code doesn't exist on the client. People are probably wondering: how might that be possible? Cheers, Peter. On 31/01/2021 5:40 am, Gregg Wonderly wrote: Thanks for putting the words here (again) for reference. Java Serialization and the Web with MIME are so interlinked in time that it's hard, sometimes, to think about the larger implications of interchange protocols that are transport and language independent. We still pay a pretty large cost for marshal and unmarshal activities, and edge devices don't always have the available resources for full stacks. Both packaging like JSON and encoding like Sparkplug have an impact on system design! Gregg Sent from my iPhone On Jan 30, 2021, at 12:05 AM, Peter Firmstone wrote: Hi Gregg, Yes, of course, if the service was using Java Serialization, the bytes would be the same, but if a different serialization protocol was used, the bytes would be different, appropriate for the serialization protocol in use. These bytes would be transferred over existing transport layers, such as TCP, TLS, HTTPS etc. (and new transport layers when created, e.g. Bluetooth).
It would be a service implementation choice, via configuration, although a client might reject it using constraints. The implementation would be a subclass that overrides functionality in BasicILFactory. To serialize object state, one must have access to internal object state. Java Serialization is afforded special privileges by the JVM, not afforded to other serialization protocols, that allow it to access private state. Let's say, for example, a service developer wanted to use JSON or protobuf instead of Java Serialization; their reason for doing so might be that their server-side service is written in another language, such as .NET, C++, C, etc. In order to support other languages, other JERI protocol layers would need to be written in those languages as well. Extending BasicILFactory is relatively straightforward; however, methods in BasicInvocationHandler and BasicInvocationDispatcher with parameters and return types using ObjectInputStream and ObjectOutputStream would need to be replaced with ObjectInput and ObjectOutput. This is possible without breaking existing functionality. For simple message-passing style serialization like protobuf, each parameter would simply use the OutputStream and InputStream from the underlying transport layer to send parameters and receive return values. The bytecodes of parameter and return value classes for protobuf are generated from .proto schema definitions. So a simple serialization layer like protobuf doesn't need a serialization API to access internal object state. For more complex object graphs, like those JSON can support, access to internal object state is required, as fields are sent as name-value pairs. Like Java Serialization, JSON can also serialize objects containing object fields.
Java Serialization can of course transmit object graphs containing circular references. While re-implementing Java deserialization (to address security), I chose not to support circular object graphs; the only class this impacted was Throwable, and I didn't find it difficult to work around. This reimplementation of deserialization is called AtomicSerial, after its failure atomicity. Developers who implement @AtomicSerial are at least required to implement a constructor that accepts a single parameter argument called GetArg. GetArg extends java.io.ObjectInputStream.GetField. https://github.com/pfirmstone/JGDMS/wiki https://pfirmstone.github.io/JGDMS/jgdms-platform/apidocs/org/apache/river/api/io/package-summary.html AtomicSerial's public API, as implemented by developers, is suitable for any deserialization framework; in JGDMS all Serializable objects also implement @AtomicSerial. All classes implementing @AtomicSerial are also Serializable and their serial form is unchanged. The constructor argument is caller sensitive; the namespace for each class in an inheritance hierarchy is private, so only the calling class can see its serial fields.
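The GetArg constructor pattern discussed in this thread can be sketched as below. This is a simplified illustration, not the real JGDMS API: GetArg here is a minimal hypothetical stand-in for org.apache.river.api.io.AtomicSerial.GetArg (which really extends java.io.ObjectInputStream.GetField), and Period is an invented example class.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for AtomicSerial's GetArg; the real API differs.
interface GetArg {
    Object get(String name, Object defaultValue) throws IOException;
}

// A class whose invariants are validated BEFORE any instance exists,
// which is the failure atomicity the thread describes.
final class Period {
    private final long start;
    private final long end;

    // Deserialization constructor: reads serial fields from GetArg and
    // delegates to the normal constructor, so validation always runs.
    Period(GetArg arg) throws IOException {
        this(((Number) arg.get("start", 0L)).longValue(),
             ((Number) arg.get("end", 0L)).longValue());
    }

    Period(long start, long end) {
        if (end < start) throw new IllegalArgumentException("end < start");
        this.start = start;
        this.end = end;
    }

    long start() { return start; }
    long end() { return end; }
}

public class AtomicSerialSketch {
    public static void main(String[] args) throws IOException {
        // Simulate a deserializer handing the class its serial fields.
        Map<String, Object> fields = new HashMap<>();
        fields.put("start", 1L);
        fields.put("end", 5L);
        GetArg arg = (name, def) -> fields.getOrDefault(name, def);
        Period p = new Period(arg);
        System.out.println(p.start() + ".." + p.end()); // prints 1..5
    }
}
```

If the stream contained end < start, the IllegalArgumentException would propagate before any Period instance was created, which is the point of validating invariants in the constructor rather than in readObject.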
Re: Thinking about Extensible Serialization support.
Hi Gregg, Yes, of course, if the service was using Java Serialization, the bytes would be the same, but if a different serialization protocol was used, the bytes would be different, appropriate for the serialization protocol in use. These bytes would be transferred over existing transport layers, such as TCP, TLS, HTTPS etc. (and new transport layers when created, e.g. Bluetooth). It would be a service implementation choice, via configuration, although a client might reject it using constraints. The implementation would be a subclass that overrides functionality in BasicILFactory. To serialize object state, one must have access to internal object state. Java Serialization is afforded special privileges by the JVM, not afforded to other serialization protocols, that allow it to access private state. Let's say, for example, a service developer wanted to use JSON or protobuf instead of Java Serialization; their reason for doing so might be that their server-side service is written in another language, such as .NET, C++, C, etc. In order to support other languages, other JERI protocol layers would need to be written in those languages as well. Extending BasicILFactory is relatively straightforward; however, methods in BasicInvocationHandler and BasicInvocationDispatcher with parameters and return types using ObjectInputStream and ObjectOutputStream would need to be replaced with ObjectInput and ObjectOutput. This is possible without breaking existing functionality. For simple message-passing style serialization like protobuf, each parameter would simply use the OutputStream and InputStream from the underlying transport layer to send parameters and receive return values. The bytecodes of parameter and return value classes for protobuf are generated from .proto schema definitions. So a simple serialization layer like protobuf doesn't need a serialization API to access internal object state.
For more complex object graphs, like those JSON can support, access to internal object state is required, as fields are sent as name-value pairs. Like Java Serialization, JSON can also serialize objects containing object fields. Java Serialization can of course transmit object graphs containing circular references. While re-implementing Java deserialization (to address security), I chose not to support circular object graphs; the only class this impacted was Throwable, and I didn't find it difficult to work around. This reimplementation of deserialization is called AtomicSerial, after its failure atomicity. Developers who implement @AtomicSerial are at least required to implement a constructor that accepts a single parameter argument called GetArg. GetArg extends java.io.ObjectInputStream.GetField. https://github.com/pfirmstone/JGDMS/wiki https://pfirmstone.github.io/JGDMS/jgdms-platform/apidocs/org/apache/river/api/io/package-summary.html AtomicSerial's public API, as implemented by developers, is suitable for any deserialization framework; in JGDMS all Serializable objects also implement @AtomicSerial. All classes implementing @AtomicSerial are also Serializable and their serial form is unchanged. The constructor argument is caller sensitive; the namespace for each class in an inheritance hierarchy is private, so only the calling class can see its serial fields. To access the object state of other classes in its own inheritance hierarchy, it is possible to create an instance of that class by calling its constructor and passing the GetArg instance as a parameter; this makes it possible to validate intra-class invariants prior to creating an object instance. I've been thinking that all that would be required to support access to internal object state would be for each class to implement a static method that accepts an instance of its own type as well as a subclass instance of ObjectOutputStream.PutField.
(A subclass of PutField is required to provide some security around creation of this parameter, as well as to discover the calling class, and to provide access to the stream for writing, where optionally supported.) PutField is simply a name -> value list of internal state; however, the PutField parameter would need to be caller sensitive, so that each class in an object's inheritance hierarchy has its own private state namespace. So basically a different serialization protocol layer would have implementations of ObjectInput and ObjectOutput, and would access the objects passed via the invocation layer using the public serialization layer API. Currently I have not implemented any such serialization API. -- Regards, Peter On 30/01/2021 10:25 am, Gregg Wonderly wrote: Can you speak to why it would be different than the stream of bytes that existing serialization creates through Object methods, to help clarify? Gregg Sent from my iPhone On Jan 29, 2021, at 3:46 PM, Peter Firmstone wrote: A question came
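The static-method idea described in the message above might be sketched as follows. This is a hypothetical illustration of a proposal, not implemented code: PutArg stands in for the proposed caller-sensitive subclass of java.io.ObjectOutputStream.PutField, and Location is an invented example class.

```java
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for the proposed caller-sensitive PutField
// subclass: conceptually just a name -> value list of serial fields.
class PutArg {
    private final Map<String, Object> fields = new LinkedHashMap<>();
    void put(String name, Object value) { fields.put(name, value); }
    Map<String, Object> fields() { return fields; }
}

final class Location {
    private final double lat;
    private final double lon;

    Location(double lat, double lon) { this.lat = lat; this.lon = lon; }

    // The proposed accessor: a static method that hands this class's
    // private serial state to a pluggable serialization protocol
    // (a JSON or protobuf wrapper, say) without needing the JVM-level
    // privileges Java Serialization enjoys. IOException is declared
    // for the optional stream access the proposal mentions.
    static void serialize(Location loc, PutArg arg) throws IOException {
        arg.put("lat", loc.lat);
        arg.put("lon", loc.lon);
    }
}

public class PutFieldSketch {
    public static void main(String[] args) throws IOException {
        PutArg arg = new PutArg();
        Location.serialize(new Location(-27.5, 153.0), arg);
        System.out.println(arg.fields()); // prints {lat=-27.5, lon=153.0}
    }
}
```

A real implementation would, as the message notes, need the PutArg equivalent to discover its caller so each class in an inheritance hierarchy writes into its own private namespace; that machinery is omitted here.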
Thinking about Extensible Serialization support.
A question came up recently about supporting other serialization protocols. JERI currently has three layers to its protocol stack: an invocation layer, an object identification layer, and a transport layer. Java Serialization doesn't have a public API; I think this would be one reason there is no serialization layer in JERI. One might wonder why JERI needs a serialization layer, when people can implement an Exporter, similar to IIOP and RMI. Well, the answer is quite simple: it allows separation of the serialization layer from the transport layer, e.g. TLS, TCP, Kerberos or any other transport layer people may wish to implement. Currently someone implementing an Exporter would also require a transport layer, which may or may not already exist. In recent years I re-implemented deserialization for security reasons; while doing so, I created a public and explicit deserialization API. I have not implemented an explicit serialization API, but it, or something similar, could easily be used as a serialization provider interface, which would allow wrappers for various serialization protocols to be implemented. -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
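A serialization provider interface of the kind floated above might look like the following sketch. SerializationProvider and JavaSerializationProvider are invented names, not part of JERI; the point is that such a layer would deal only in the java.io.ObjectInput/ObjectOutput interfaces, so a JSON or protobuf wrapper could be plugged in behind the same API, independent of the transport.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

// Hypothetical provider interface: the invocation layer asks a provider
// for ObjectInput/ObjectOutput over whatever transport stream it holds.
interface SerializationProvider {
    ObjectOutput newOutput(OutputStream out) throws IOException;
    ObjectInput newInput(InputStream in) throws IOException;
}

// A provider backed by standard Java Serialization; a wrapper for
// another wire format would implement the same two methods.
class JavaSerializationProvider implements SerializationProvider {
    public ObjectOutput newOutput(OutputStream out) throws IOException {
        return new ObjectOutputStream(out);
    }
    public ObjectInput newInput(InputStream in) throws IOException {
        return new ObjectInputStream(in);
    }
}

public class ProviderSketch {
    public static void main(String[] args) throws Exception {
        SerializationProvider provider = new JavaSerializationProvider();
        // Round-trip through an in-memory "transport".
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutput out = provider.newOutput(buf);
        out.writeObject("marshalled over any transport");
        out.flush();
        ObjectInput in = provider.newInput(
                new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(in.readObject());
    }
}
```

Because the caller only ever sees ObjectInput and ObjectOutput, the same invocation-layer code works whether the bytes travel over TCP, TLS, or anything else.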
Re: Git migration
Thanks Gregg, Something that's been in the back of my mind also is the qa test suite, and whether this should be a separate binary build, so that binary compatibility can also be tested between releases. I agree with your sentiment here; it would be nice to streamline and make what we have easier to understand. Information on the migration is pretty light at the moment; there's a tool to get the branches structured properly. https://infra.apache.org/svn-to-git-migration.html Cheers, Peter. On 19/01/2021 2:10 pm, Gregg Wonderly wrote: I think that separate repositories is a good idea. It might be interesting for one of those repositories to require a specific layout of the repositories and provide a script to "pull" all the correlated versions etc. I sometimes struggle with all the variations on how this gets done. At some place we need to pull all the details into view in a way that is also "easy" to consume. Gregg On Jan 18, 2021, at 4:44 PM, Peter Firmstone wrote: Hello River folk, Just an update on progress: the git mirror was out of date, so it has been deleted to clear the way for copying our current SVN. https://issues.apache.org/jira/browse/INFRA-21216?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel Also, I think it would be cleaner to have separate git repositories for separate components, such as the ldj test suite or other contributions that aren't part of the main release, so that River is easier for new users to become familiar with, rather than having a super repository that contains all components as SVN does currently. I welcome suggestions as to how the git repositories should be structured.
Git migration
Hello River folk, Just an update on progress: the git mirror was out of date, so it has been deleted to clear the way for copying our current SVN. https://issues.apache.org/jira/browse/INFRA-21216?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel Also, I think it would be cleaner to have separate git repositories for separate components, such as the ldj test suite or other contributions that aren't part of the main release, so that River is easier for new users to become familiar with, rather than having a super repository that contains all components as SVN does currently. I welcome suggestions as to how the git repositories should be structured. -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
SVN to Git migration
https://infra.apache.org/svn-to-git-migration.html I've raised an issue on JIRA, as it appears I am unable to create a git repository because something has been created already. https://issues.apache.org/jira/browse/INFRA-21216 https://gitbox.apache.org/repos/asf#river I have requested assistance from INFRA. -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
Re: Next steps
No pressure if you're not comfortable, I should have some time in a fortnight. On 11/22/2020 4:21 PM, Peter Firmstone wrote: If you feel confident enough to have a go at the SVN move. :-) On 11/22/2020 11:03 AM, Dan Rollo wrote: Hi Peter, Sounds good to me. If you have any idiot-proof tasks, let me know, I’d be happy to help. Dan On Nov 21, 2020, at 6:16 PM, Peter Firmstone wrote: Hello River folk, What I had in mind next was to SVN move the existing trunk out of the way, then SVN move the modular branch to trunk, then SVN move other relevant code branches we wanted to keep into trunk as well. Then finally we can ask INFRA to migrate us to git. Please feel free to discuss or mention any ideas you have, or if you think it should be done a little differently. I'd like to retain our SVN history when making the git transition, so we can track everything back to the original Sun Microsystems contribution in 2007. For the next fortnight, I'll be in remote Queensland, so hoping to get some time over Christmas to get this done, and all help will be gladly welcomed. -- Regards, Peter -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
Re: Next steps
If you feel confident enough to have a go at the SVN move. :-) On 11/22/2020 11:03 AM, Dan Rollo wrote: Hi Peter, Sounds good to me. If you have any idiot-proof tasks, let me know, I’d be happy to help. Dan On Nov 21, 2020, at 6:16 PM, Peter Firmstone wrote: Hello River folk, What I had in mind next was to SVN move the existing trunk out of the way, then SVN move the modular branch to trunk, then SVN move other relevant code branches we wanted to keep into trunk as well. Then finally we can ask INFRA to migrate us to git. Please feel free to discuss or mention any ideas you have, or if you think it should be done a little differently. I'd like to retain our SVN history when making the git transition, so we can track everything back to the original Sun Microsystems contribution in 2007. For the next fortnight, I'll be in remote Queensland, so hoping to get some time over Christmas to get this done, and all help will be gladly welcomed. -- Regards, Peter -- Regards, Peter Firmstone 0498 286 363 Zeus Project Services Pty Ltd.
Next steps
Hello River folk, What I had in mind next was to SVN move the existing trunk out of the way, then SVN move the modular branch to trunk, then SVN move other relevant code branches we wanted to keep into trunk as well. Then finally we can ask INFRA to migrate us to git. Please feel free to discuss or mention any ideas you have, or if you think it should be done a little differently. I'd like to retain our SVN history when making the git transition, so we can track everything back to the original Sun Microsystems contribution in 2007. For the next fortnight, I'll be in remote Queensland, so hoping to get some time over Christmas to get this done, and all help will be gladly welcomed. -- Regards, Peter
November Board Report [DRAFT]
Hello River folk, Please review or make any suggestions you'd like to communicate to the board. Regards, Peter. The River project typically operates in maintenance mode; however, there is an ongoing long-term undertaking to make River's monolithic codebase modular. The project has voted to move from SVN to Git. The modules branch will become the trunk branch after the next release; this modular build has been a significant undertaking, hence the long time since the project's last release. ## Description: The mission of River is the creation and maintenance of software related to the Jini service oriented architecture. ## Issues: No issues warranting attention at this time. ## Membership Data: Apache River was founded 2011-01-19 (10 years ago) There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3. Community changes, past quarter: - No new PMC members. Last addition was Dan Rollo on 2017-12-01. - No new committers. Last addition was Dan Rollo on 2017-11-02. - Recently we have received new contributions and we are likely to see new additions in the near future. ## Project Activity: River-3.0.0 was released on 2016-10-06. river-jtsk-2.2.3 was released on 2016-02-21. river-examples-1.0 was released on 2015-08-10. ## Community Health: dev@river.apache.org had a 75% decrease in traffic in the past quarter (24 emails compared to 96) ## Busiest email threads: * dev@river.apache.org/[VOTE]: make trunk an unstable development branch./(8 emails) * dev@river.apache.org/Example Gradle Buuild/(4 emails) * dev@river.apache.org/August Board Report [DRAFT]/(3 emails) * dev@river.apache.org/Your project website/(2 emails) * dev@river.apache.org/Why jtreg/(2 emails) * dev@river.apache.org/Git repository/(2 emails) * dev@river.apache.org/Serialization and serial form/(1 email) * dev@river.apache.org/Java Deserialization CVE's/(1 email) * dev@river.apache.org/Problems starting Infra services with new build/(1 email)
[RESULT] Re: [VOTE]: make trunk an unstable development branch.
The vote passes with 4 binding votes and 1 non-binding: +1 Dan Rollo +1 Dennis Reedy +1 Phillip Rhodes (non-binding) +1 Bryan Thompson +1 Peter Firmstone Regards, Peter. On 10/15/2020 12:12 AM, danro...@gmail.com wrote: +1 On Oct 14, 2020, at 8:57 AM, Dennis Reedy wrote: +1 On Oct 12, 2020, at 10:23 PM, Phillip Rhodes wrote: On Fri, Oct 9, 2020 at 7:03 PM Peter Firmstone wrote: Currently the trunk branch is a stable branch; it is not for development code. Let's make it so we can develop in trunk. The vote concludes in two weeks. +1 (non-binding) from me Phil
Serialization and serial form
The following is an interesting slide: https://speakerdeck.com/pwntester/surviving-the-java-deserialization-apocalypse?slide=31 Oracle has stated they will not fix these security issues with Collection classes for deserialization. RIVER-49 also identifies serial form issues with Collections. https://issues.apache.org/jira/projects/RIVER/issues/RIVER-49?filter=allopenissues Cheers, Peter.
Development Environment
Thanks Bryan, On 10/11/2020 8:18 AM, Bryan Thompson wrote: +1. It sounds like this addresses concerns for moving to GitHub. Question: will this help people to find the right development environment, or do we need to do something more to enable that? It will help; however, to achieve that goal we need to document the new practices of modular development with River. Not only have the build tools changed, but development has too, for example exporting services: the old way a service was exported was during construction, which lets the service's "this" reference escape before the constructor has completed and any final fields are frozen. There are a lot of Jini books out there where the standard way of doing things contains bad practices. While it's been discussed on this list, there are no easily accessible documents containing best practices. We also need to show people how to code their services so that, should they wish to secure them at a later date, they only need to change their configuration; this is actually very simple, but I don't think many people are aware of the details. When people think of secure services they remember the complexity of proxy trust, which, thanks to the insecurity of Java Serialization (now clearly demonstrated and beyond question), turns out to be both completely useless and unnecessary. To be fair, the developers of Jini, who also created Java Serialization, must have assumed that it would be maintained in such a way as to address security issues as they arose. Of course River is not yet secure, but the security issues have now been solved outside the project, in ways that also reduce complexity and improve performance, which River can use as a prototype to create an even better implementation.
Without diverging too far down a tangent: the last time I had a discussion with Java's developers, they were not ready to give up deserialization of object graphs with circular references, which prevents validation of invariants during deserialization prior to object construction. This discussion was prior to the current serial filters implementation. An example of an object graph that contains a circular reference is Throwable: it contains a reference to itself. But I've found it possible to wire up circular references without requiring the serialization framework to support them. The advantage that Java serialization has over other frameworks is the transmission of object graphs, that is, objects, not just primitive types, and that is the basis of River. As a result people are turning to serialization frameworks that only support serialization of primitive values. However, serialization of object graphs can be secure if we give up transmission of circular references, leaving it up to the objects themselves to wire up the circular links afterwards. But for now, Java serialization is best thought of as a tool that allows the originator of the serialized data to create any object they like, using any parameters they like. The way I solved Java serialization's known insecurities was to take the serialization framework from Apache Harmony and re-implement deserialization after studying known vulnerabilities. Cheers, Peter. On Sat, Oct 10, 2020 at 15:10 Peter Firmstone wrote: Some additional rationale: The original reason some people wanted a stable trunk branch was that they were concerned about the pace of development moving too fast; clearly that's no longer a problem. Now the stable development branch has become an impediment, as people on github can't see the development we are doing. Even Apache board members looked at github and thought we had no commits for 3 years.
We need people to see the ongoing work on River, so they have confidence in the project's future and its continued development and support. Cheers, Peter. On 10/10/2020 9:03 AM, Peter Firmstone wrote: Currently the trunk branch is a stable branch; it is not for development code. Let's make it so we can develop in trunk. The vote concludes in two weeks. +1 Peter. Rationale: The project needs to migrate from SVN to Git. The trunk branch is the Git branch; currently it's read only, but we can make it a live writable git repository simply with an INFRA JIRA ticket. https://github.com/apache/river If we allow the trunk branch to become a development branch, then we can move the current modular development branch into trunk, and migrate other components not currently in the trunk branch, like the ldj-tests, surrogate and other bits and pieces, which are also in a development state not ready for release. Note that these should probably go under their own directory in trunk. Doing this will preserve the commit history of Apache River. Are there any git experts on the list? If this is not the right way to go about the migration to git, please give us your thoughts. Regards, Peter.
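The construction-time export problem raised in this thread ("this" escaping before final fields are frozen) is commonly avoided with a static factory that exports only after the constructor returns. A minimal sketch, with Exporter as a simplified stand-in for the real net.jini.export.Exporter interface and MyService an invented example:

```java
// Simplified stand-in for net.jini.export.Exporter: takes a remote
// object implementation and returns a proxy for it.
interface Exporter {
    Object export(Object impl);
}

final class MyService {
    private final String name; // final field, frozen when ctor completes

    private MyService(String name) {
        this.name = name;
        // BAD (old style): calling exporter.export(this) here would leak
        // a partially constructed object to other threads before the
        // final-field freeze guaranteed by the Java memory model.
    }

    // GOOD: construct fully first, then export via a static factory,
    // so "this" never escapes the constructor.
    static Object create(String name, Exporter exporter) {
        MyService impl = new MyService(name);
        return exporter.export(impl);
    }

    public String toString() { return "service:" + name; }
}

public class SafeExportSketch {
    public static void main(String[] args) {
        // Identity "exporter" for the demo; a real one returns a proxy.
        Exporter exporter = impl -> impl;
        Object proxy = MyService.create("lookup", exporter);
        System.out.println(proxy); // prints service:lookup
    }
}
```

The private constructor forces all callers through the factory, so the unsafe publication path cannot be reached by accident; securing the service later then becomes purely a configuration change, as the message argues.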
Re: [VOTE]: make trunk an unstable development branch.
Some additional rationale: The original reason some people wanted a stable trunk branch was that they were concerned about the pace of development moving too fast; clearly that's no longer a problem. Now the stable development branch has become an impediment, as people on github can't see the development we are doing. Even Apache board members looked at github and thought we had no commits for 3 years. We need people to see the ongoing work on River, so they have confidence in the project's future and its continued development and support. Cheers, Peter. On 10/10/2020 9:03 AM, Peter Firmstone wrote: Currently the trunk branch is a stable branch; it is not for development code. Let's make it so we can develop in trunk. The vote concludes in two weeks. +1 Peter. Rationale: The project needs to migrate from SVN to Git. The trunk branch is the Git branch; currently it's read only, but we can make it a live writable git repository simply with an INFRA JIRA ticket. https://github.com/apache/river If we allow the trunk branch to become a development branch, then we can move the current modular development branch into trunk, and migrate other components not currently in the trunk branch, like the ldj-tests, surrogate and other bits and pieces, which are also in a development state not ready for release. Note that these should probably go under their own directory in trunk. Doing this will preserve the commit history of Apache River. Are there any git experts on the list? If this is not the right way to go about the migration to git, please give us your thoughts. Regards, Peter.
Java Deserialization CVE's
A good summary of all known Java deserialization vulnerabilities. https://github.com/PalindromeLabs/Java-Deserialization-CVEs Cheers, Peter.
Re: [VOTE]: make trunk an unstable development branch.
If this vote fails, I will instead raise a Jira INFRA ticket to base the root of the git repository on: https://svn.apache.org/repos/asf/river/ Actually, is that a better idea? The negative is that there are 11 forks of Apache River on github, and I don't know what impact it will have. Regards, Peter. On 10/10/2020 9:03 AM, Peter Firmstone wrote: Currently the trunk branch is a stable branch; it is not for development code. Let's make it so we can develop in trunk. The vote concludes in two weeks. +1 Peter. Rationale: The project needs to migrate from SVN to Git. The trunk branch is the Git branch; currently it's read only, but we can make it a live writable git repository simply with an INFRA JIRA ticket. https://github.com/apache/river If we allow the trunk branch to become a development branch, then we can move the current modular development branch into trunk, and migrate other components not currently in the trunk branch, like the ldj-tests, surrogate and other bits and pieces, which are also in a development state not ready for release. Note that these should probably go under their own directory in trunk. Doing this will preserve the commit history of Apache River. Are there any git experts on the list? If this is not the right way to go about the migration to git, please give us your thoughts. Regards, Peter.
[VOTE]: make trunk an unstable development branch.
Currently the trunk branch is a stable branch; it is not for development code. Let's make it so we can develop in trunk. The vote concludes in two weeks. +1 Peter. Rationale: The project needs to migrate from SVN to Git. The trunk branch is the Git branch; currently it's read only, but we can make it a live writable git repository simply with an INFRA JIRA ticket. https://github.com/apache/river If we allow the trunk branch to become a development branch, then we can move the current modular development branch into trunk, and migrate other components not currently in the trunk branch, like the ldj-tests, surrogate and other bits and pieces, which are also in a development state not ready for release. Note that these should probably go under their own directory in trunk. Doing this will preserve the commit history of Apache River. Are there any git experts on the list? If this is not the right way to go about the migration to git, please give us your thoughts. Regards, Peter.
Re: Example Gradle Build
Hi Phil, My thoughts inline below: On 10/9/2020 9:05 AM, Phillip Rhodes wrote: Two things 1. What needs to happen for us to move forward with merging in the Gradle build to the official repo? Dennis, what are your thoughts? 2. Regarding the modular build in general: Not sure how much testing has been done with any of the artifacts from any of these builds, so what I'm doing tonight is grabbing the 3.0-SNAPSHOT artifacts from the maven build and trying to build a simple River service with them, just as a quick "sanity check" that things work. No qa or jtreg tests have been run against the modular build; the junit tests that have made the transition have, but these don't provide much coverage. In the original jar files many class files were duplicated; now class files are no longer duplicated, having been replaced by modular dependencies. There are very minimal changes to source code, just enough to get it to compile. However, the names of many of the original jar files are hard coded into the qa suite; these need to be cleaned up before the suite will run. I have done this work previously; the artifact names are variables in JGDMS' qa suite: https://github.com/pfirmstone/JGDMS/tree/trunk/qa We can take the structure of JGDMS' qa test suite without changing River's test source code, except where absolutely necessary. Note the qa suite is still an ant build; it hasn't been modularized. JGDMS is a fork of River, but there are a lot of code changes; the amount of change, as well as the pace of change, was concerning for some, so it didn't make it into River, hence the fork. So we don't want to go changing River's source code unless we absolutely have to for the modular build. Once we have a modular build, changes can be made at the module level, which will be easier to review, understand and digest. If folks are still interested in moving forward with the Gradle approach, I'd love to see us go ahead and get that stuff merged and commit to it as The Path Forward. Thoughts?
+1 Peter. Phil On Mon, Jul 13, 2020 at 4:55 PM Dennis Reedy wrote: Hi all, I've updated the Gradle build project over here <https://github.com/dreedyman/apache-river>. It reflects the latest from the SVN version here <http://svn.apache.org/repos/asf/river/jtsk/modules>. If we'd like to move forward with this, I'd like to see us do that, and do it with the approved move to a Git repository. Regards Dennis On Sat, Jul 11, 2020 at 8:32 PM Peter Firmstone wrote: Hi Dennis, Yes definitely, if you're ok with that. The qa test suite could potentially be modularized as well; I'm guessing it would be easier to run these tests with a gradle build. Cheers, Peter. On 7/11/2020 11:12 PM, Dennis Reedy wrote: Hi Peter, We could just fold what you’ve done into the project. I merged the modules for expediency. I’ll spend some time next week doing that if we’d like to move it forward. Regards Dennis On Jul 11, 2020, at 5:04 AM, Peter Firmstone < peter.firmst...@zeus.net.au> wrote: Hi Dennis, Had a quick look just now, I can see why gradle is attractive. I'm not a big fan of the larger modules, but you have demonstrated it can work. I guess it's a trade-off between maintainability and avoiding the need to untangle the circular links. Have you had a look at the code changes I made to remove the circular links? Cheers, Peter. On 7/11/2020 5:50 AM, Dennis Reedy wrote: Curious as to whether anyone has looked at this. Regards Dennis On Tue, Jul 7, 2020 at 1:30 PM Dennis Reedy wrote: To demonstrate what a modular Gradle build would look like, I put together a clone of the Apache River subversion branch http://svn.apache.org/repos/asf/river/jtsk/modules, created as a Git repository, and built with Gradle here: https://github.com/dreedyman/apache-river. This is not to take away from the Maven effort by any means; that work was the baseline for creating this effort last night.
This is by no means complete, or an accepted way of building Apache River, but serves as a means to demonstrate how a modular version of Apache River can be built with Gradle.
- Besides using Gradle, there are differences in this project's structure. The river-jeri, river-jrmp, river-iiop and river-pref-loader modules have been merged into river-platform to avoid circular dependencies.
- The groovy-config module has also been enabled.
- The OSGi configurations have not been enabled.
- There were issues with the Velocity work; it was removed.
Regards Dennis Reedy
Re: Git repository
Hi Phil, We haven't made the switch yet. https://gitbox.apache.org/ Hadoop's infra ticket: https://issues.apache.org/jira/browse/INFRA-8195 Apache River's github site: https://github.com/apache/river I just checked this; someone tried to contribute back in 2018, as there's a pull request there. SVN has more software than just what's in trunk, such as the site, the ldj tests and other contributions: http://svn.apache.org/viewvc/river/ Not sure how we proceed, any ideas? Cheers, Peter.

On 10/9/2020 9:07 AM, Phillip Rhodes wrote: Any update on this? Are we switching to Git as the primary repo? Or have we already? Phil

On Wed, Jun 17, 2020 at 12:58 AM Peter Firmstone wrote: We seem to have a few smaller projects outside of river trunk, including the website and the modular build. http://svn.apache.org/viewvc/river/ It may be easier to replace trunk with the modular build in svn, as this includes all work over the last three years, then make git primary. Where are other projects maintaining their websites? As for some of the smaller related works that aren't forks of trunk, such as the lookup discovery and join test kit, I wonder if these should have their own git repos? Perhaps creation of those should be left for a future effort; as no modifications have been made for a long time, they could remain available read only in svn. Regards, Peter.

On 6/16/2020 1:28 AM, Dennis Reedy wrote: I see there is https://github.com/apache/river. Can this be moved to be a primary and not a mirror? The link referenced from Peter (this one https://cwiki.apache.org/confluence/display/commons/MovingToGit) contains stale references; how best to move forward with this? Regards Dennis
August Board Report [DRAFT]
Hello River folk, Please review or make any suggestions you'd like to communicate to the board. Regards, Peter.

As per the board's request we discussed the Attic, and there was strong support for continuing the River project. Despite activity being relatively quiet, it does appear that the board was not aware of our commit history, due to the project's use of svn and a stable trunk branch. Development is currently performed in the modules branch, which doesn't appear on github or show up in commit statistics. Since the board's request to consider the attic, the project team has voted to change from svn to git; however, due to the number of branches and separate components, we are still trying to figure out how to execute the change. Additionally the project has also voted to change the modular build from Maven to Gradle. The River project typically operates in maintenance mode; however, there is an ongoing long term undertaking to make River's monolithic codebase modular. The modules branch will become the stable trunk branch after the next release. This modular build has been a significant undertaking, hence the long time since the project's last release.

## Commit statistics (from comm...@river.apache.org):
July 2020 64 commits
May 2020 12 commits
Sept 2019 1 commit
Aug 2019 32 commits
June 2019 8 commits
May 2019 71 commits
Dec 2018 3 commits
Nov 2018 2 commits
May 2018 3 commits
Apr 2018 2 commits
Mar 2018 8 commits
Feb 2018 48 commits

## Description: The mission of River is the creation and maintenance of software related to the Jini service oriented architecture

## Issues: No issues warranting attention at this time.

## Membership Data: Apache River was founded 2011-01-19 (10 years ago) There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3. Community changes, past quarter:
- No new PMC members. Last addition was Dan Rollo on 2017-12-01.
- No new committers. Last addition was Dan Rollo on 2017-11-02.
- Recently we have received new contributions and we are likely to see new additions in the near future.

## Project Activity:
River-3.0.0 was released on 2016-10-06.
river-jtsk-2.2.3 was released on 2016-02-21.
river-examples-1.0 was released on 2015-08-10.

## Community Health: dev@river.apache.org had a 1516% increase in traffic in the past quarter (97 emails compared to 6): 2 issues opened in JIRA, past quarter (200% increase)

## Busiest email threads:
* dev@river.apache.org/Board feedback - Request discuss attic for River/(17 emails)
* dev@river.apache.org/Maven build/(14 emails)
* dev@river.apache.org/Example Gradle Buuild/(11 emails)
* dev@river.apache.org/Vote: Change from subversion to git/(7 emails)
* dev@river.apache.org/Workaround for JDK 14.0.1 and TLS: -Djdk.tls.server.enableSessionTicketExtension=false/(6 emails)
* dev@river.apache.org/Gradle Build [PREVIOUSLY] Re: Board feedback - Request discuss attic for River/(5 emails)
* dev@river.apache.org/Further update regarding firewall and NAT issues in River/(4 emails)
* dev@river.apache.org/svn commit: r1879695 - in /river/jtsk/modules/modularize/apache-river: ./ browser/ dist/ extra/ phoenix-activation/phoenix-common/ phoenix-activation/phoenix-dl/ phoenix-activation/phoenix-group/ phoenix-activation/phoenix/ phoenix-activation/phoenix/s.../(4 emails)
* dev@river.apache.org/Proxy identity behaves unexpectedly for secure services./(3 emails)
* dev@river.apache.org/Draft Report River - May 2020/(3 emails)

## Busiest JIRA tickets:
* RIVER-471 <https://issues.apache.org/jira/browse/RIVER-471>/Untangle circular links between modules/(0 comments)
* RIVER-472 <https://issues.apache.org/jira/browse/RIVER-472>/Gradle build/(0 comments)
Re: Why jtreg
Why not indeed :) I'd be in favour of moving "unit tests" out of the jtreg and qa test suites into junit first. I'd suggest leaving the jtreg bug regression tests where they are, for now at least, as far more work is required and they may not all be relevant; we no longer have bug descriptions or information on these bugs, as Oracle has removed them from the Sun bug database. I have looked into the undocumented regression tests previously; some are very difficult to figure out. I have made some attempts at recovering information on older bugs for these regression tests from release notes of earlier versions of Jini, but haven't had much luck; requests for information from Oracle go unanswered.

The tests can be broken down into:
1. Unit tests - simple tests that don't need more than one running JVM, eg a lookup service, or activation, and don't require a SecurityManager (maybe junit is ok with a security manager? Just need appropriate policy files?).
2. Integration tests - require multiple JVMs, eg testing network functionality (typically in the qa suite).
3. Bug regression tests - testing a known, or often in our case, an undocumented bug. River never received an upload of the Sun bug database relating to Jini; Oracle has long since made it inaccessible. Many of the bug regression tests in jtreg lack documentation; I guess there might be some information in the Jini users mail list archives.

I'd suggest grabbing the low hanging fruit first. Cheers, Peter.

On 7/14/2020 6:58 AM, Dennis Reedy wrote: As the title says, why use jtreg? We have modern test frameworks (JUnit, Spock, etc...). As we move forward with River, why not migrate tests to use these? Regards Dennis Reedy
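For the "low hanging fruit", the migration is mostly mechanical, since a jtreg test is typically a class whose main method throws on failure. A framework-free sketch of that shape (the test body and class name here are invented, not one of River's actual tests):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Sketch of a single-JVM, jtreg-style "unit test": a main method that
// throws AssertionError on failure. The same body would drop straight
// into a JUnit @Test method. The serialization round trip below is an
// invented example, not a real River test.
public class RoundTripCheck {

    static Object roundTrip(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(o);
            return new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        if (!"hello".equals(roundTrip("hello")))
            throw new AssertionError("round trip failed");
        System.out.println("PASS");
    }
}
```

Tests of this shape are exactly category 1 above: one JVM, no second process, and the pass/fail signal is just "did main throw", which is why they port to JUnit so cheaply.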
Re: Example Gradle Buuild
On 7/13/2020 6:14 AM, Zsolt Kúti wrote: On Sat, Jul 11, 2020 at 3:16 PM Dennis Reedy wrote: Hi Zsolt, There are a few tests in there, most are in the qa directory in the main svn repository. I think it would be great if we could find a way to merge them into the modules and follow conventions. I'll take a look at them, do not promise anything though. Any jtreg version preferred from here?: https://ci.adoptopenjdk.net/view/Dependencies/job/jtreg/

Yes, the latest. For some reason the html reporter is no longer working, so best to redirect the output to a text file.

As far as the gradle version, did gradlew not work for you? It's just that I had JDK 11 set system-wide; gradle picked it up and missed rmi and corba classes. Same goes for running from the IDE. Zsolt

River only builds on Java 8 presently. We will need to come up with solutions to the missing classes. There are changes made in JGDMS that allow it to build on Java 11 and run on 14, but there are a lot of changes (historically this has been controversial) and some work still remains, such as replacing the IIOP implementation with something else. I was thinking https://www.jacorb.org/ as well as adding support for IIOP over TLS. I initially tried using GlassFish, but there were security vulnerabilities present. My notes in the relevant pom:

    <dependency>
        <groupId>org.jboss.openjdk-orb</groupId>
        <artifactId>openjdk-orb</artifactId>
        <version>8.1.4.Final</version>
    </dependency>

Cheers, Peter.
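One way to stop a system-wide JDK 11 from being picked up is to pin compilation to Java 8 in build.gradle. A sketch only, assuming a Gradle version with toolchain support (6.7+), which may postdate the build discussed in this thread:

```groovy
// Hypothetical sketch: pin the build to Java 8 so gradle does not compile
// against a system-wide JDK 11, where the rmi/corba classes are missing.
// Requires Gradle 6.7+ toolchain support; older builds would instead set
// sourceCompatibility/targetCompatibility and a matching JAVA_HOME.
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(8)
    }
}
```

The difference from plain sourceCompatibility is that a toolchain actually compiles against a Java 8 JDK, so classes removed in Java 11 remain on the compile classpath rather than merely targeting bytecode level 8.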
Re: Example Gradle Buuild
Hi Dennis, Yes definitely, if you're ok with that. The qa test suite could potentially be modularized as well; I'm guessing it would be easier to run these tests with a gradle build. Cheers, Peter.

On 7/11/2020 11:12 PM, Dennis Reedy wrote: Hi Peter, We could just fold what you’ve done into the project. I merged the modules for expediency. I’ll spend some time next week doing that if we’d like to move it forward. Regards Dennis

On Jul 11, 2020, at 5:04 AM, Peter Firmstone <peter.firmst...@zeus.net.au> wrote: Hi Dennis, Had a quick look just now, I can see why gradle is attractive. I'm not a big fan of the larger modules, but you have demonstrated it can work. I guess it's a trade off between maintainability and avoiding the need to untangle the circular links. Have you had a look at the code changes I made to remove the circular links? Cheers, Peter.

On 7/11/2020 5:50 AM, Dennis Reedy wrote: Curious as to whether anyone has looked at this. Regards Dennis

On Tue, Jul 7, 2020 at 1:30 PM Dennis Reedy wrote: To demonstrate how a modular Gradle build would look, I put together a clone of the Apache River subversion branch http://svn.apache.org/repos/asf/river/jtsk/modules, created as a Git repository, and built with Gradle here: https://github.com/dreedyman/apache-river. This is not to take away from the Maven effort by any means; that work was the baseline for creating this effort last night. This is by no means complete, or an accepted way of building Apache River, but serves as a means to demonstrate how a modular version of Apache River can be built with Gradle.
- Besides using Gradle, there are differences in this project's structure. The river-jeri, river-jrmp, river-iiop and river-pref-loader modules have been merged into river-platform to avoid circular dependencies.
- The groovy-config module has also been enabled.
- The OSGi configurations have not been enabled.
- There were issues with the Velocity work; it was removed.
Regards Dennis Reedy
Re: svn commit: r1879695 - in /river/jtsk/modules/modularize/apache-river: ./ browser/ dist/ extra/ phoenix-activation/phoenix-common/ phoenix-activation/phoenix-dl/ phoenix-activation/phoenix-group/
Thanks Phil, Definitely a bonus. I guess now we need to get the qa suite of tests running against the new module jars. There will be a lot of test failures due to the security policy file changes that will be required; this usually happens with every major Java version change in any case. There's the jtreg test suite as well, which contains unit tests and bug regression tests. I'd be in favour of moving the unit tests to junit, but it hasn't happened yet; it's a lot of work I guess. The jtreg tests tend to get neglected. Cheers, Peter.

On 7/10/2020 9:50 AM, Phillip Rhodes wrote: Everything builds cleanly here, at that revision. Haven't tried doing anything with the resulting artifacts yet, but at least it builds. That's an important step.. :-) Phil
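The kind of policy-file change described here usually amounts to re-granting permissions against the new jar names. A hypothetical fragment, with the path, property name and permission invented for illustration (not taken from River's actual qa policy files):

```
// Hypothetical qa policy fragment: the codeBase URLs must track the new
// modular jar names, which is where many of the expected failures come from.
grant codeBase "file:${river.home}${/}lib${/}river-platform-3.0-SNAPSHOT.jar" {
    permission java.security.AllPermission;
};
```

Every grant keyed to an old monolithic jar name fails silently (the code simply gets no permissions), which is why a jar rename tends to surface as a wave of AccessControlExceptions in the suite rather than one obvious error.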
Re: Example Gradle Buuild
Hi Dennis, Had a quick look just now, I can see why gradle is attractive. I'm not a big fan of the larger modules, but you have demonstrated it can work. I guess it's a trade off between maintainability and avoiding the need to untangle the circular links. Have you had a look at the code changes I made to remove the circular links? Cheers, Peter.

On 7/11/2020 5:50 AM, Dennis Reedy wrote: Curious as to whether anyone has looked at this. Regards Dennis

On Tue, Jul 7, 2020 at 1:30 PM Dennis Reedy wrote: To demonstrate how a modular Gradle build would look, I put together a clone of the Apache River subversion branch http://svn.apache.org/repos/asf/river/jtsk/modules, created as a Git repository, and built with Gradle here: https://github.com/dreedyman/apache-river. This is not to take away from the Maven effort by any means; that work was the baseline for creating this effort last night. This is by no means complete, or an accepted way of building Apache River, but serves as a means to demonstrate how a modular version of Apache River can be built with Gradle.
- Besides using Gradle, there are differences in this project's structure. The river-jeri, river-jrmp, river-iiop and river-pref-loader modules have been merged into river-platform to avoid circular dependencies.
- The groovy-config module has also been enabled.
- The OSGi configurations have not been enabled.
- There were issues with the Velocity work; it was removed.
Regards Dennis Reedy
Re: svn commit: r1879695 - in /river/jtsk/modules/modularize/apache-river: ./ browser/ dist/ extra/ phoenix-activation/phoenix-common/ phoenix-activation/phoenix-dl/ phoenix-activation/phoenix-group/
Oh, it builds now. I think there are some remaining junit tests that need relocating too. Cheers, Peter.

On 7/9/2020 8:17 PM, Peter Firmstone wrote: I've created issue RIVER-471; all commits to this issue are untangling circular links. Please review.

On 7/9/2020 8:10 PM, peter_firmst...@apache.org wrote:
Author: peter_firmstone
Date: Thu Jul 9 10:10:53 2020
New Revision: 1879695
URL: http://svn.apache.org/viewvc?rev=1879695&view=rev
Log: RIVER-471 Moved classes between modules to break circular links and fixed dependencies in pom files.
Added: river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/test/ river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/test/java/ river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/test/ river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/test/java/ river/jtsk/modules/modularize/apache-river/river-pref-loader/src/test/ river/jtsk/modules/modularize/apache-river/river-pref-loader/src/test/java/ river/jtsk/modules/modularize/apache-river/river-start/src/test/ river/jtsk/modules/modularize/apache-river/river-start/src/test/java/
Modified: river/jtsk/modules/modularize/apache-river/browser/ (props changed) river/jtsk/modules/modularize/apache-river/dist/ (props changed) river/jtsk/modules/modularize/apache-river/extra/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-common/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-dl/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-dl/pom.xml river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-group/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-group/pom.xml river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/pom.xml
river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/main/java/org/apache/river/phoenix/AbstractSystem.java river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/main/java/org/apache/river/phoenix/PhoenixStarter.java river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/main/java/org/apache/river/phoenix/SystemAccessExporter.java river/jtsk/modules/modularize/apache-river/pom.xml river/jtsk/modules/modularize/apache-river/river-activation/ (props changed) river/jtsk/modules/modularize/apache-river/river-collections/ (props changed) river/jtsk/modules/modularize/apache-river/river-destroy/ (props changed) river/jtsk/modules/modularize/apache-river/river-destroy/src/main/java/org/apache/river/start/destroy/DestroySharedGroup.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/ (props changed) river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/EndpointBasedClient.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/EndpointBasedProvider.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/EndpointBasedServer.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/X500Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/X500Provider.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/X500Server.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/kerberos/Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/kerberos/Server.java 
river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/plaintext/Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/plaintext/Server.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/ssl/Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/ssl/Server.java river/jtsk/modules/modularize/apache-river/river-dl/ (props changed) river/jtsk/modules/modularize/apache-river/river-jeri/ (props changed) river/jtsk/modules/modularize/apache-river/river-jeri/pom.xml river/jtsk/modules/modularize/apache-river/river-jeri/src/main/java/net/jini/jeri/kerberos/KerberosEndpoint.java river/jtsk/modules/modularize/apache-river/river-jeri/src/main/java/net/jini/jeri/ssl/SslEndpoint.java river/jtsk/modules/modularize/apache
Re: svn commit: r1879695 - in /river/jtsk/modules/modularize/apache-river: ./ browser/ dist/ extra/ phoenix-activation/phoenix-common/ phoenix-activation/phoenix-dl/ phoenix-activation/phoenix-group/
I've created issue RIVER-471; all commits to this issue are untangling circular links. Please review.

On 7/9/2020 8:10 PM, peter_firmst...@apache.org wrote:
Author: peter_firmstone
Date: Thu Jul 9 10:10:53 2020
New Revision: 1879695
URL: http://svn.apache.org/viewvc?rev=1879695&view=rev
Log: RIVER-471 Moved classes between modules to break circular links and fixed dependencies in pom files.
Added: river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/test/ river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/test/java/ river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/test/ river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/test/java/ river/jtsk/modules/modularize/apache-river/river-pref-loader/src/test/ river/jtsk/modules/modularize/apache-river/river-pref-loader/src/test/java/ river/jtsk/modules/modularize/apache-river/river-start/src/test/ river/jtsk/modules/modularize/apache-river/river-start/src/test/java/
Modified: river/jtsk/modules/modularize/apache-river/browser/ (props changed) river/jtsk/modules/modularize/apache-river/dist/ (props changed) river/jtsk/modules/modularize/apache-river/extra/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-common/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-dl/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-dl/pom.xml river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-group/ (props changed) river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix-group/pom.xml river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/pom.xml river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/main/java/org/apache/river/phoenix/AbstractSystem.java
river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/main/java/org/apache/river/phoenix/PhoenixStarter.java river/jtsk/modules/modularize/apache-river/phoenix-activation/phoenix/src/main/java/org/apache/river/phoenix/SystemAccessExporter.java river/jtsk/modules/modularize/apache-river/pom.xml river/jtsk/modules/modularize/apache-river/river-activation/ (props changed) river/jtsk/modules/modularize/apache-river/river-collections/ (props changed) river/jtsk/modules/modularize/apache-river/river-destroy/ (props changed) river/jtsk/modules/modularize/apache-river/river-destroy/src/main/java/org/apache/river/start/destroy/DestroySharedGroup.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/ (props changed) river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/EndpointBasedClient.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/EndpointBasedProvider.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/EndpointBasedServer.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/X500Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/X500Provider.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/internal/X500Server.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/kerberos/Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/kerberos/Server.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/plaintext/Client.java 
river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/plaintext/Server.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/ssl/Client.java river/jtsk/modules/modularize/apache-river/river-discovery-providers/src/main/java/org/apache/river/discovery/ssl/Server.java river/jtsk/modules/modularize/apache-river/river-dl/ (props changed) river/jtsk/modules/modularize/apache-river/river-jeri/ (props changed) river/jtsk/modules/modularize/apache-river/river-jeri/pom.xml river/jtsk/modules/modularize/apache-river/river-jeri/src/main/java/net/jini/jeri/kerberos/KerberosEndpoint.java
Re: Question on module breakouts
Perhaps we could agglomerate the modules; in the case below, however, this would make river-platform depend on river-lib, which depends on river-dl, due to other dependencies, and we don't really want that either. In practice I was able to eliminate the circular dependencies in JGDMS without much difficulty. It was some time ago now, so I'm a little rusty on the details, but I'll go over my commit history and find out. At some point we'll want to upgrade river-iiop to depend on external modules so it can be supported in later versions of Java, so there's an argument to keep it out of the platform module, to reduce platform dependencies on external libs. One benefit of smaller modules is that developers can focus on a smaller amount of code to digest how a module works, without needing to understand code in other modules. The benefit of the larger module is the ability to absorb the circular dependencies without having to untangle them, but this tends towards a monolith again. Cheers, Pete.

On 7/7/2020 6:04 AM, Dennis Reedy wrote: Hi all, I thought I'd take a look at the work going on with the modularization effort, and aside from just trying to get the project built (it still doesn't), I have some questions on the rationale for how it's been broken into its constituent modules (sub-projects). Some of the breakup causes circular dependencies (river-platform <-> river-jeri), and some seem questionable. What I'd like to suggest is a slight re-organization. For the bulleted lists below, the indented project should be added to the enclosing project (example: add the code from river-jeri into the river-platform project and remove river-jeri). This simplifies the project and removes circular dependencies.
- river-platform
  - river-jeri
  - river-iiop
  - river-url-integrity
  - river-pref-loader
- river-lib
  - river-destroy
  - river-collections
- river-phoenix
  - river-start
  - river-activation
Thoughts? Regards Dennis Reedy
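The untangling Peter describes usually comes down to moving an abstraction "down" so the lower module no longer names a concrete class in the higher one. A self-contained toy illustration (the types are invented; only the module names come from the thread, and nested classes stand in for code that would live in separate modules):

```java
// Toy illustration of breaking a circular dependency: if river-platform
// and river-jeri each referenced a concrete class in the other, extracting
// an interface into the lower module leaves a one-way dependency instead.
public class UntangleDemo {

    // would live in the lower module (river-platform): the abstraction
    interface EndpointListener {
        void endpointReady(String name);
    }

    // also river-platform: knows only the interface, never river-jeri
    static class Platform {
        void register(EndpointListener l) {
            l.endpointReady("jeri");
        }
    }

    // would live in the higher module (river-jeri): depends on platform only
    static class JeriEndpoint implements EndpointListener {
        String last;
        public void endpointReady(String name) {
            last = name;
        }
    }

    static String demo() {
        JeriEndpoint endpoint = new JeriEndpoint();
        new Platform().register(endpoint);
        return "endpoint ready: " + endpoint.last;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

This is the scalpel alternative to agglomeration: the cycle disappears without merging the modules, at the cost of one extra interface in the lower layer.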
Re: Maven build
That's correct, it's not at a stage where it's building yet.

On 7/7/2020 1:50 AM, Dennis Reedy wrote: I'm wondering if I'm missing a step here. This is what I've done:
1. svn checkout http://svn.apache.org/repos/asf/river/jtsk/modules
2. cd modules/modularize/apache-river
3. mvn package
[INFO] Scanning for projects...
[ERROR]
[ERROR] Some problems were encountered while processing the POMs:
[WARNING] The expression ${pom.version} is deprecated. Please use ${project.version} instead. @
[ERROR] Child module /Users/dreedy/projects/apache-river/modules/modularize/apache-river/phoenix-activation of /Users/dreedy/projects/apache-river/modules/modularize/apache-river/pom.xml does not exist @
[ERROR] Child module /Users/dreedy/projects/apache-river/modules/modularize/apache-river/river-logging of /Users/dreedy/projects/apache-river/modules/modularize/apache-river/pom.xml does not exist @ @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project org.apache:river:3.0-SNAPSHOT (/Users/dreedy/projects/apache-river/modules/modularize/apache-river/pom.xml) has 2 errors
[ERROR] Child module /Users/dreedy/projects/apache-river/modules/modularize/apache-river/phoenix-activation of /Users/dreedy/projects/apache-river/modules/modularize/apache-river/pom.xml does not exist
[ERROR] Child module /Users/dreedy/projects/apache-river/modules/modularize/apache-river/river-logging of /Users/dreedy/projects/apache-river/modules/modularize/apache-river/pom.xml does not exist
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
Is this what I should expect?
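The two "Child module ... does not exist" errors mean the parent pom's <modules> section lists directories that aren't in the checkout. A sketch of the kind of edit that gets the Maven reactor past this (the missing module names come from the error output above; the surviving entry is illustrative only):

```xml
<!-- Hypothetical sketch: until the missing directories are committed, the
     dangling entries can be removed or commented out of the parent pom. -->
<modules>
  <!-- <module>phoenix-activation</module>  directory not in checkout -->
  <!-- <module>river-logging</module>       directory not in checkout -->
  <module>river-platform</module> <!-- modules that do exist stay listed -->
</modules>
```

Maven fails the whole reactor on a single dangling <module> entry, so the errors above appear before any compilation is attempted; this is consistent with Peter's reply that the build isn't at a buildable stage yet.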
On Sun, Jul 5, 2020 at 2:41 PM Phillip Rhodes <motley.crue@gmail.com> wrote: On Sun, Jul 5, 2020 at 8:07 AM Peter Firmstone <peter.firmst...@zeus.net.au> wrote:
>
> Hi Phil,
>
> I've been going through your patch, you've got a lot of work done in a
> short time. :)
>
> I've just committed your changes.
>
> I'll have a look at the circular dependencies and see what I can do in
> the coming week.

Sounds good. I may also be able to free up some time to work on that some more. I thought I'd try slowly moving classes back to their original locations, from where I moved them to river-lib, and try to isolate the absolute smallest number of classes that are circularly dependent. Although it may turn out that you, or somebody else who knows this code better, may be able to just jump in and quickly work it all out. Hopefully that will be the case, and will moot the need for the exercise described above. :-) In any case, all of this is good for me as far as getting familiar with the River code in general. Phil
Re: Maven build
Hi Phil, I've solved this circular dependency problem once before with JGDMS; there were also circular package dependencies I wanted to avoid for OSGi users, so it will be a good opportunity to take another look at this problem, so we've got the best possible solution. I'm familiar with the code, but I prefer to work with more than one set of eyes. ;) Note that packages in org.apache.river are implementation classes, which can break backward compatibility if necessary, apart from org.apache.river.api and net.jini, which are the public api and need to remain backward compatible. River 3.0 broke backwards compatibility with the com.sun.jini package namespace by renaming it to org.apache.river. I've created a compatibility layer module to provide a migration path for client code still using River 2.x; I've used Rio as my guideline for which com.sun.jini.* packages client code depended on, and others can be added as required. Using a compatibility layer module for breaking changes appears to be a good approach. Cheers, Peter.

On 7/6/2020 4:41 AM, Phillip Rhodes wrote: On Sun, Jul 5, 2020 at 8:07 AM Peter Firmstone wrote: Hi Phil, I've been going through your patch, you've got a lot of work done in a short time. :) I've just committed your changes. I'll have a look at the circular dependencies and see what I can do in the coming week.

Sounds good. I may also be able to free up some time to work on that some more. I thought I'd try slowly moving classes back to their original locations, from where I moved them to river-lib, and try to isolate the absolute smallest number of classes that are circularly dependent. Although it may turn out that you, or somebody else who knows this code better, may be able to just jump in and quickly work it all out. Hopefully that will be the case, and will moot the need for the exercise described above. :-) In any case, all of this is good for me as far as getting familiar with the River code in general. Phil
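The compatibility-layer idea can be pictured as shim types in the old namespace that extend or delegate to the renamed ones. A fragment for illustration only: the interface pairing below is a plausible example of the com.sun.jini to org.apache.river rename, but it is not JGDMS's actual compatibility code and is not runnable on its own.

```java
// Illustrative fragment (not self-contained): a bridge type in the old
// com.sun.jini namespace lets River 2.x client code keep compiling while
// the implementation lives under the renamed org.apache.river packages.
package com.sun.jini.admin;

/** @deprecated retained only as a migration bridge for River 2.x clients. */
@Deprecated
public interface DestroyAdmin extends org.apache.river.admin.DestroyAdmin {
    // no members: the old name simply re-exports the renamed interface
}
```

For interfaces the bridge is free; for concrete classes the shim would delegate instead, which is why a dedicated compatibility module works well for breaking renames: it can be dropped from the classpath once clients migrate.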
Re: Maven build
Hi Phil, I've been going through your patch, you've got a lot of work done in a short time. :) I've just committed your changes. I'll have a look at the circular dependencies and see what I can do in the coming week. Cheers, Peter.

On 7/5/2020 3:46 AM, Phillip Rhodes wrote: Hi Phil, Wow, you're really getting into the code, thank you. It's late here; I'll post again in the morning. Just some quick clarifications: the discovery providers depend on the platform, but the platform shouldn't depend on the discovery providers; try removing that dependency from the platform pom. There will be some classes that need untangling.

Yeah, that's the problem. There's definitely two-way coupling between the classes in those modules. I'm not qualified to sort it out though, as I really don't know much about this code-base at the moment. Which is why I refer to what I did as a "brute force" approach. I used a machete and a backhoe to move stuff around until I had something that would compile... somebody needs to go in with a surgeon's scalpel and do a neater job. If I find some time I'll go back and try to recreate the intermediate state I was at where *almost* everything was compiling, with the problems being the coupled classes between those modules, and post some notes on where the coupling is. In the meantime, the patch to get from a raw checkout of http://svn.apache.org/repos/asf/river/jtsk/modules to the compilable setup I hacked up is attached to the RIVER-300 ticket in Jira. Phil
Re: Maven build
Hi Phil, Wow, you're really getting into the code, thank you. It's late here; I'll post again in the morning. Just some quick clarifications: the discovery providers depend on the platform, but the platform shouldn't depend on the discovery providers; try removing that dependency from the platform pom. There will be some classes that need untangling.

The modular build is based on a fork of River that uses Maven, called JGDMS. All of the code in the fork is available to the project; however, the intent is to do the modular build without that code if possible. The reasons for this are historical: changes in the fork are intended to be brought in gradually, and it's possible not all changes will be accepted by the community. The River modular build doesn't need to replicate the fork, but please feel free to use it for guidance. The classes you've found with incorrect package declarations have been moved from their original package; however, the code itself has not been updated to reflect the change. Also, moving these classes hasn't been reviewed or accepted by the River community at this stage. The fork is available here: https://github.com/pfirmstone/JGDMS/tree/trunk/JGDMS Cheers, Peter.

On 7/4/2020 3:53 PM, Phillip Rhodes wrote: OK, for the sake of my own edification if nothing else, I've been plowing ahead with making the various changes needed to get all of this stuff to compile. I'm pretty close (in relative terms anyway) to having everything compiling, but now I've hit something that I think I need to get everyone else's thoughts on. In the module river-discovery-providers, we have at least one class (one example, X500Provider) which depends on two classes that are in module river-platform.
import org.apache.river.discovery.DatagramBufferFactory; import org.apache.river.logging.Levels; The problem is, you can't declare a dependency on river-platform from river-discovery-providers, because there is apparently something(s) in river-platform that depends on river-discovery-providers. So if you try this, Maven errors out with a "circular dependency" error. I think we're going to need to shuffle a class or two around to avoid this, but I don't know enough about the semantics and intent of any of this code to have a good feel for the best way to approach that. I mean, I could probably futz around with it and come up with something that will compile, but it probably wouldn't make sense. Any thoughts on how to deal with this? Phil On Fri, Jul 3, 2020 at 6:26 PM Phillip Rhodes wrote: OK, sounds good. Another question: it looks like some package renaming / code restructuring has gone on as part of this effort, unless I'm missing something. I see things like class DestroySharedGroup having a package statement at the top like package org.apache.river.start; but its physical location is now org/apache/river/start/destroy, and so the compiler complains that the declared and actual packages don't match. There are quite a few examples of this. Is there any simple heuristic to know which one is right? E.g., a rule saying "the declared package in the code is right" OR "the actual physical location is right and the package declaration should change." Just for grins and giggles I started down the path of doing the latter to see if I could get things to compile, and I find that that also causes issues due to things like package visibility. Some classes depend on references to fields in other classes that are not visible if you use the "altered" package declaration. For example, the aforementioned DestroySharedGroup references fields on ServiceStarter and ServiceDescriptor that were "package private". For example, the ServiceStarter.logger field.
Any advice on the best way to resolve these things? Phil
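A common way out of the circular dependency discussed in this thread is to extract the shared classes into a third module that both river-platform and river-discovery-providers can depend on. A minimal sketch of such a pom follows; the river-common module name and coordinates are illustrative assumptions, not a committed layout:

```xml
<!-- Hypothetical river-common module that would hold shared classes such as
     org.apache.river.discovery.DatagramBufferFactory and
     org.apache.river.logging.Levels, breaking the platform <-> providers cycle. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.river</groupId>
    <artifactId>river</artifactId>
    <version>3.0-SNAPSHOT</version>
  </parent>
  <artifactId>river-common</artifactId>

  <!-- river-platform and river-discovery-providers would then both declare:
       <dependency>
         <groupId>org.apache.river</groupId>
         <artifactId>river-common</artifactId>
         <version>3.0-SNAPSHOT</version>
       </dependency> -->
</project>
```

With the shared classes hoisted out, neither module depends on the other, so the Maven reactor no longer reports a cycle.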
Re: Maven build
Hi Phil, Yes, we'd like your patches :) You can upload them here: https://issues.apache.org/jira/projects/RIVER/issues/RIVER-300 It depends on river-dl, this module was renamed from river-lib-dl. Cheers, Pete. On 7/4/2020 7:10 AM, Phillip Rhodes wrote: Moving this to a new thread. I was working on the wrong branch before, so that explains some of the issues I was seeing. I think. :-) I checked out http://svn.apache.org/repos/asf/river/jtsk/modules and am trying to make the Maven build work. I found a few small issues, for which I can submit patches if you folks would like. But now I'm stuck here: 1. Compilation fails with this error. [ERROR] Failed to execute goal on project river-discovery-providers: Could not resolve dependencies for project org.apache.river:river-discovery-providers:jar:3.0-SNAPSHOT: Could not find artifact org.apache.river:river-lib-dl:jar:3.0-SNAPSHOT 2. There does not appear to be a module named org.apache.river:river-lib-dl at all. There is river-lib, and river-dl, but not river-lib-dl. Not sure if river-discovery-providers just needs to be changed to depend on one or the other of river-lib or river-dl, or if there is actually a module named river-lib-dl that I'm just not seeing. Thoughts? Phil This message optimized for indexing by NSA PRISM
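Given Peter's note that river-lib-dl was renamed to river-dl, the fix in the river-discovery-providers pom is presumably just pointing the dependency at the new artifactId. A sketch (the version is taken from the error message above; exact coordinates unverified):

```xml
<!-- was: <artifactId>river-lib-dl</artifactId>, which no longer exists -->
<dependency>
  <groupId>org.apache.river</groupId>
  <artifactId>river-dl</artifactId>
  <version>3.0-SNAPSHOT</version>
</dependency>
```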
Re: Gradle Build [PREVIOUSLY] Re: Board feedback - Request discuss attic for River
Hi Philip, The most recent modular build attempt is here: http://svn.apache.org/viewvc/river/jtsk/modules/ Cheers, Peter. On 7/3/2020 2:19 PM, Phillip Rhodes wrote: Aaah, I may not be using the latest code then. For me, the Maven build is failing right now due to missing dependencies on classes from the river-policy module, and that module doesn't even have a pom.xml in it. Which branch is everybody working on? And is work still going on through the svn repo at the moment? I haven't had time to catch up on all the email threads... I saw some reference to switching to Git (a move I endorse) but I am not sure if it's time to switch yet. Any insight would be much appreciated. Phil
Re: Gradle Build [PREVIOUSLY] Re: Board feedback - Request discuss attic for River
Hi Phil, It's great to have your help. :) The Maven build structure is almost complete; there are some JUnit tests that need to be moved over to their relevant modules from the old ant build. After that there will be some minor implementation dependencies between the modules that need to be broken, as well as some issues with the pom files themselves. I believe that the Gradle build will utilise the same module layout, so any work getting a Maven build working will not go to waste. It's not at a stage where it builds yet, there is some work remaining. It's probably best to stick with Java 8, as there will be some additional problems building with later Java versions. Cheers, Peter. On 7/2/2020 12:10 PM, Phillip Rhodes wrote: A Gradle build would be nice. I'm willing to invest some time trying to help make it happen if need be. But I am curious... it looks like someone started a Maven build a while back. From what I can see it seems to maybe be incomplete, or just bit-rotted. But depending on the details of the state of that work, would there be any reason to prefer sticking with Maven? (FSM help me, I can't believe I just said that in a public forum). I'm not the biggest Maven fan in the world, so I only raise this issue from the "can we use existing work instead of starting from scratch" perspective. Phil
Re: Git repository
We seem to have a few smaller projects outside of river trunk, including the website and the modular build. http://svn.apache.org/viewvc/river/ It may be easier to replace trunk with the modular build in svn, as this includes all work over the last three years, then make git primary. Where are other projects maintaining their websites? As for some of the smaller related works that aren't forks of trunk, such as the lookup discovery and join test kit, I wonder if these should have their own git repos? Perhaps creation of those should be left for a future effort; as no modifications have been made for a long time, they could remain available read-only in svn. Regards, Peter. On 6/16/2020 1:28 AM, Dennis Reedy wrote: I see there is https://github.com/apache/river. Can this be moved to be a primary and not a mirror? The link referenced from Peter (this one https://cwiki.apache.org/confluence/display/commons/MovingToGit) contains stale references, how best to move forward with this? Regards Dennis
Re: Pack200 and Deflate / Zip compression
Well after many years of thinking about it, it didn't take long to implement; below is a configuration file entry for deflate compression. Performance impact is not noticeable; I'm not seeing any hotspots for compression / decompression. It would probably make sense for a registrar, but not so much for a service that doesn't return a large number of results or is unlikely to receive large parameters. Cheers, Peter.

/* the exporter for test listeners */
integrityExporter = new BasicJeriExporter(
    SslServerEndpoint.getInstance(0),
    new AtomicILFactory(
        new StringMethodConstraints(
            new InvocationConstraints(
                new InvocationConstraint[]{
                    Integrity.YES,
                    AtomicInputValidation.YES},
                null
            )
        ),
        AccessPermission.class,
        org.apache.river.test.share.BaseQATest.class,
        Compression.DEFLATE
    )
);

On 6/8/2020 2:58 PM, Peter Firmstone wrote: Hello River folk, A couple of years or so ago I was working on using Pack200 for compression of proxy codebases, then it was deprecated and more recently removed from Java 14. Initially I took Pack200 from Harmony and started working on that; at the time I thought the JDK version was written in C, but then it turned out there was a Java version of Pack200 in the OpenJDK. I haven't focused on this recently, however I registered the pack200.net domain, so that I could release on Maven Central. The OpenJDK version supports Java 8, while the Harmony version is Java 5. It also seems that not a lot of work is required to get this up to date for the latest bytecode. https://github.com/pfirmstone/pack200 https://github.com/pfirmstone/Pack200-ex-openjdk The other thing I've long considered using is deflate, gzip or zip compression of marshalled streams. This is actually very easy to code up into a JERI InvocationLayerFactory implementation, however I've had other priorities and never gotten around to it. I wanted to determine whether there is interest in improving performance using compression? Regards, Peter.
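For anyone curious how little code deflate compression of a marshalled stream takes, here is a self-contained round-trip sketch using only java.util.zip. The class and method names are illustrative, not River's InvocationLayerFactory API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

/** Sketch: deflate-compressing bytes as a marshal layer might. */
class DeflateRoundTrip {

    /** Compresses data with the deflate algorithm. */
    static byte[] compress(byte[] data) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(bos)) {
            dos.write(data); // close() finishes the deflate stream
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    /** Inflates deflate-compressed data back to the original bytes. */
    static byte[] decompress(byte[] compressed) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (InflaterInputStream iis =
                 new InflaterInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = iis.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) {
        byte[] original = "marshalled call data".getBytes(StandardCharsets.UTF_8);
        byte[] restored = decompress(compress(original));
        System.out.println(Arrays.equals(original, restored)); // prints true
    }
}
```

In an invocation layer, the same wrapping would be applied to the marshal output stream on one side of the call and the corresponding input stream on the other.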
Pack200 and Deflate / Zip compression
Hello River folk, A couple of years or so ago I was working on using Pack200 for compression of proxy codebases, then it was deprecated and more recently removed from Java 14. Initially I took Pack200 from Harmony and started working on that; at the time I thought the JDK version was written in C, but then it turned out there was a Java version of Pack200 in the OpenJDK. I haven't focused on this recently, however I registered the pack200.net domain, so that I could release on Maven Central. The OpenJDK version supports Java 8, while the Harmony version is Java 5. It also seems that not a lot of work is required to get this up to date for the latest bytecode. https://github.com/pfirmstone/pack200 https://github.com/pfirmstone/Pack200-ex-openjdk The other thing I've long considered using is deflate, gzip or zip compression of marshalled streams. This is actually very easy to code up into a JERI InvocationLayerFactory implementation, however I've had other priorities and never gotten around to it. I wanted to determine whether there is interest in improving performance using compression? Regards, Peter.
Re: Proxy identity behaves unexpectedly for secure services.
Thanks Michał, I did consider that briefly, but then I realised there was no way to determine through equality what constraints had been applied, and if unsure, the developer can apply constraints again. It's a much bigger advantage to have equals working as expected. Existing service utilities such as SDM apply the constraints for users, who can set them in their configuration. Regards, Peter. On 6/6/2020 3:08 AM, Kłeczek, Michał wrote: Hi Peter, I think we need to be careful here - basically the semantics should be:

MyServiceProxy originalProxy = ...
MethodConstraints localClientConstraints = ...
// THIS IS IMPORTANT!!!
((RemoteMethodControl) originalProxy).setConstraints(localClientConstraints).equals(originalProxy) == false

If you break this contract the client might be vulnerable, because it could easily confuse constrained and unconstrained proxies. The solution is to have two objects with different identities: - the proxy - service identity (not to be confused with Registrar ServiceID) The identity of the service could only be verified using the service identity object:

// hypothetical API
((RemoteMethodControl) originalProxy).getServiceIdentity().equals(((RemoteMethodControl) originalProxy).setConstraints(localClientConstraints).getServiceIdentity()) == true

My 2 cents :) Thanks, Michal On 26/05/2020 10:10:50, "Peter Firmstone" wrote: Hello River folk. As you are probably aware, I have an interest in security and have been focused on simplifying the use of secure services. In JGDMS the qa suite runs in jsse mode, which means the majority of tests are run with a login Subject and services use SSL/TLS Endpoints. A number of tests that passed with non-secure Endpoints failed with SSL/TLS Endpoints. Activation also failed; like the failing tests, it made assumptions about proxy identity. One of the problems I faced was that proxy identity is defined by the underlying InvocationHandler's equals method, namely that of BasicObjectEndpoint.
BasicObjectEndpoint was including the client's MethodConstraints in the Proxy's identity. This meant the service proxy's identity changed after the client applied constraints, and as a result, the tests weren't passing because the proxy's identity wasn't as expected. Also Activation would fail, as the ActivationID would be different. A code comment placed in the ActivatableInvocationHandler.activate0() method describes the workaround for this issue:

/* Equality of ActivationID instances is influenced by the
 * equality of their Activator proxy's InvocationHandler;
 * when client constraints differ, and everything else is
 * identical, they will not be equal. Proxies deserialized
 * by atomic input streams will inherit the constraints of
 * the stream from which they were deserialized.
 */
// if (!id.equals(handler.id)) {
if (id.hashCode() != handler.getActivationID().hashCode()) { // Same UID.
    StringBuilder sb = new StringBuilder(128);
    sb.append("unexpected activation id: ")
        .append(handler.getActivationID())
        .append(" expected: ")
        .append(id);
    throw new ActivateFailedException(sb.toString());
}

It got worse when I implemented AtomicILFactory: in atomic streams, when proxies are unmarshalled, they inherit any client constraints applied to the stream, to prevent elevation-of-privilege gadget attacks, where a third-party proxy might bypass an integrity or privacy constraint, for instance allowing a connection that wasn't encrypted, or not authenticated. Again, this feature changed the identity of the proxy, and tests failed because the proxy the test used for confirmation didn't have client constraints applied. So it appears to me that the client's MethodConstraints shouldn't be part of proxy identity. Does anyone have a good reason why client MethodConstraints should be part of proxy identity? It doesn't seem right to me that the client is able to change the identity of a proxy just by applying constraints. This seems to have been overlooked in the implementation.
Also as a side note, I needed to make a lot of changes to existing services to support secure endpoints, as the servers often didn't reply to callback proxies using their Subject; for example, EventListeners didn't work in a secure environment. I fixed that of course. These are changes I'd like to make to River, to make secure services behave like insecure services, but with security. :) So that security can be a configuration concern, and new developers can develop and test their application, then later configure it to be secure without it breaking. Thanks, Peter.
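The identity question debated in this thread can be illustrated with a minimal sketch. The class below is hypothetical (it is not River's BasicObjectEndpoint and the fields stand in for the real ObjectEndpoint and MethodConstraints); its equals deliberately ignores client constraints, which is the behaviour Peter argues for, while Michał's counter-proposal would keep constrained and unconstrained proxies unequal and add a separate service-identity object:

```java
import java.util.Objects;

/** Hypothetical sketch: proxy identity that excludes client constraints. */
final class SketchHandler {
    final String endpointId;        // stands in for the ObjectEndpoint
    final String clientConstraints; // stands in for MethodConstraints (may be null)

    SketchHandler(String endpointId, String clientConstraints) {
        this.endpointId = endpointId;
        this.clientConstraints = clientConstraints;
    }

    /** Returns a copy with new constraints, as RemoteMethodControl.setConstraints would. */
    SketchHandler withConstraints(String constraints) {
        return new SketchHandler(endpointId, constraints);
    }

    @Override public boolean equals(Object o) {
        return o instanceof SketchHandler
            && endpointId.equals(((SketchHandler) o).endpointId); // constraints ignored
    }

    @Override public int hashCode() {
        return Objects.hash(endpointId); // must match equals: constraints excluded
    }
}
```

With this definition, applying constraints no longer changes the proxy's identity: `p.withConstraints(c).equals(p)` holds, so tests and Activation comparisons that predate the constraints are unaffected.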
Re: [RESULT] Vote: Change from subversion to git
No worries, happy to help. Cheers, Peter. On 6/3/2020 9:50 AM, Dennis Reedy wrote: Hey Peter, Thanks for the reference. I think it would be great to get this done this month, might need some assistance. Regards Dennis On Jun 2, 2020, at 5:35 PM, Peter Firmstone wrote: Dennis, How soon did you want to migrate to git? The following seems like a good guide: https://cwiki.apache.org/confluence/display/commons/MovingToGit Regards, Peter. On 5/30/2020 12:11 AM, Dennis Reedy wrote: With 4 in favor, 0 against, the vote to change from subversion to Git is approved.
Re: [RESULT] Vote: Change from subversion to git
Dennis, How soon did you want to migrate to git? The following seems like a good guide: https://cwiki.apache.org/confluence/display/commons/MovingToGit Regards, Peter. On 5/30/2020 12:11 AM, Dennis Reedy wrote: With 4 in favor, 0 against, the vote to change from subversion to Git is approved.
Re: Workaround for JDK 14.0.1 and TLS: -Djdk.tls.server.enableSessionTicketExtension=false
Thanks Shawn, There's some other new stuff too; you may have also noticed the endpoints in the tests are IPv6, and the use of Atomic Invocation constraints: * IPv6 X500 Multicast (includes global announcement), or secure end-to-end discovery and connectivity; it makes a lot of sense now with global deployment of IPv6 at 30%. * Failure-atomic object serialization / marshalling, a security-enhanced re-implementation of Java Serialization; gadget attacks and serialization attack vectors, such as billion laughs attacks, simply don't work. There's no whitelist profiling or filtering mechanism required; instead it uses object encapsulation principles and constructors. There's no implicit object creation or circular object graphs, and deserialization permission is granted dynamically after authentication. One need only implement a single-argument constructor, which is passed a caller-sensitive parameter that can be passed on to superclass constructors. Each class has its own private scope in an object's serial form, and doesn't have access to parent or child class serial form. Each class validates each of its serial fields before the object is constructed. Child classes can even create a superclass instance and check superclass invariants, calling various methods on the superclass instance, prior to the child class creating an instance of itself. Implementations are expected to defensively copy any mutable shared state. Serializers are provided for collections and other Java library classes. Utility methods are provided to assist validating invariants as well. The stream length is limited, and periodic resets must be sent by the remote end or the stream will throw an Exception and return control to the caller. When reading an Object, the stream reads ahead to check that the type of the object in the stream matches.
* AtomicILFactory is a JERI implementation that utilises atomic serialization and also provides a new way to manage codebase annotations. Rather than appending annotations in the stream, the service is consulted for the codebase annotation and any codebase signer certificates; these are used specifically for the service and to dynamically grant permissions. ClassLoaders are assigned at each Endpoint to provide class resolution for the service and its proxy. (The ServerEndpoint's ClassLoader is a configuration concern; a class is provided in configuration and its ClassLoader is used for unmarshalling.) The Endpoints created are service specific. If another service proxy (e.g. a Listener) is passed to a service, it will be marshalled separately and have its own ClassLoaders assigned at each Endpoint with its own unique marshalling streams; its service proxy will only be unmarshalled after its service has been authenticated, the codebase annotation received, permission to deserialize granted and a ClassLoader assigned, then the proxy is unmarshalled into its assigned ClassLoader. Codebase annotations become a configuration concern for each service, and each service maintains separate class resolution visibility. This addresses codebase annotation loss issues: every time a proxy is remarshalled, its object graph is marshalled independently and the service is consulted for its codebase annotation. Additionally, any constraints applied to a service proxy will be applied to any other proxy passed to it as a parameter, for example other services such as Listeners, at least until the remote endpoint applies a new set of constraints. A provider interface allows customization of codebase annotation and ClassLoader provisioning.
* Codebase signer certificates can be self-signed; they are granted permission dynamically. This is a convenient way for the service to allow the client to ensure it's interacting using its intended codebase, provided it trusts the client (and authenticated it). * New constraints to ensure a proxy uses failure-atomic serialization. * ProxyPreparer is still used, however ProxyTrust is no longer required (it didn't work anyway; it checked too late). The aim is to allow services to go global and be pervasive. Some more info about discovery (note the codebase and certificates included for the registrar proxy); the proxy is also unmarshalled using failure atomicity, although it is not mentioned in the documentation: https://pfirmstone.github.io/JGDMS/jgdms-discovery-providers/apidocs/org/apache/river/discovery/x500/sha512withecdsa/package-summary.html https://pfirmstone.github.io/JGDMS/jgdms-discovery-providers/apidocs/org/apache/river/discovery/ssl/sha512/package-summary.html https://pfirmstone.github.io/JGDMS/jgdms-discovery-providers/apidocs/org/apache/river/discovery/ssl/sha224/package-summary.html Cheers, Peter. On 6/2/2020 2:58 PM, Shawn Ellis wrote: Nice! I wasn’t expecting
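The constructor-based validation described for failure-atomic serialization above follows a familiar Java pattern: check invariants and defensively copy mutable state before the object exists, so no partially-valid instance can ever be observed. A generic sketch of the principle (plain Java, not JGDMS's actual atomic-serialization API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Sketch: invariants checked in the constructor before any state is published. */
final class Period {
    private final long start;
    private final long end;
    private final List<String> tags; // mutable input, defensively copied

    Period(long start, long end, List<String> tags) {
        // validate every field before construction completes;
        // a deserializer built this way can never yield an invalid object
        if (end < start) {
            throw new IllegalArgumentException("end < start");
        }
        this.start = start;
        this.end = end;
        // defensive copy so the caller can't mutate our state later
        this.tags = Collections.unmodifiableList(new ArrayList<>(tags));
    }

    long start() { return start; }
    long end() { return end; }
    List<String> tags() { return tags; }
}
```

In the atomic scheme described above, the same idea applies per class in the hierarchy: each class validates only its own serial fields, and invalid input fails in the constructor rather than producing a half-built object for a gadget chain to exploit.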
Re: Workaround for JDK 14.0.1 and TLS: -Djdk.tls.server.enableSessionTicketExtension=false
Just confirming I've found failing tests, still working on it. On 6/1/2020 10:12 PM, Peter Firmstone wrote: Thanks Shawn, I've been testing on JDK 11 and 13 recently; I've just downloaded JDK 14.0.1. I ran the qa suite lookupservice tests with JSSE enabled (using the qa suite on JGDMS, which is working with JSSE). Confirmed the tests hang and fail. Looked at JDK-8242008. Notably: "SSLSessions obtained after an initial connection may return a null value when its getSessionContext() method is called." Worked around this below by obtaining the local certificate for a valid session. Ran tests again. Confirming tests are now passing... Will run some more. Regards, Peter.

/**
 * Returns the principal that the server used to authenticate for the
 * specified session. Returns null if the session is not found or if the
 * server did not authenticate itself.
 */
X509Certificate getServerCertificate(SSLSession session) {
    X509Certificate cert = null;
    synchronized (credentialCache) {
        if (sslSessionContext.getSession(session.getId()) != null) {
            Object val = credentialCache.get(
                Utilities.getKeyAlgorithm(session.getCipherSuite()));
            if (val instanceof X500PrivateCredential) {
                X500PrivateCredential cred = (X500PrivateCredential) val;
                if (!cred.isDestroyed()) {
                    cert = cred.getCertificate();
                }
            }
        }
    }
    if (cert == null && session.isValid()) { // Stateless connection.
        Certificate[] certs = session.getLocalCertificates();
        if (certs[0] instanceof X509Certificate) {
            cert = (X509Certificate) certs[0];
        }
    }
    return cert;
}

On 6/1/2020 5:41 PM, Shawn Ellis wrote: Hello, I've seen a TLS problem with JDK 14.0.1 and Apache River 3.0 that I want to share in case someone else runs into the same issue. A client will receive a "Contraints are not supported” error when attempting to perform a reggie lookup when using TLS. The call to ServerAuthManager.getServerCertificate() returns null instead of the server certificate because the sslSessionContext doesn't have any session ids.
ServerAuthManager.java:113

if (sslSessionContext.getSession(session.getId()) != null) {
    // sslSessionContext.getSession() returns null with JDK 14.0.1
    // returns the server certificate
}

The workaround is to use -Djdk.tls.server.enableSessionTicketExtension=false on the server, or use JDK 15, according to the OpenJDK bug report: https://bugs.openjdk.java.net/browse/JDK-8242008
Re: Workaround for JDK 14.0.1 and TLS: -Djdk.tls.server.enableSessionTicketExtension=false
Thanks Shawn, I've been testing on JDK 11 and 13 recently; I've just downloaded JDK 14.0.1. I ran the qa suite lookupservice tests with JSSE enabled (using the qa suite on JGDMS, which is working with JSSE). Confirmed the tests hang and fail. Looked at JDK-8242008. Notably: "SSLSessions obtained after an initial connection may return a null value when its getSessionContext() method is called." Worked around this below by obtaining the local certificate for a valid session. Ran tests again. Confirming tests are now passing... Will run some more. Regards, Peter.

/**
 * Returns the principal that the server used to authenticate for the
 * specified session. Returns null if the session is not found or if the
 * server did not authenticate itself.
 */
X509Certificate getServerCertificate(SSLSession session) {
    X509Certificate cert = null;
    synchronized (credentialCache) {
        if (sslSessionContext.getSession(session.getId()) != null) {
            Object val = credentialCache.get(
                Utilities.getKeyAlgorithm(session.getCipherSuite()));
            if (val instanceof X500PrivateCredential) {
                X500PrivateCredential cred = (X500PrivateCredential) val;
                if (!cred.isDestroyed()) {
                    cert = cred.getCertificate();
                }
            }
        }
    }
    if (cert == null && session.isValid()) { // Stateless connection.
        Certificate[] certs = session.getLocalCertificates();
        if (certs[0] instanceof X509Certificate) {
            cert = (X509Certificate) certs[0];
        }
    }
    return cert;
}

On 6/1/2020 5:41 PM, Shawn Ellis wrote: Hello, I've seen a TLS problem with JDK 14.0.1 and Apache River 3.0 that I want to share in case someone else runs into the same issue. A client will receive a "Contraints are not supported” error when attempting to perform a reggie lookup when using TLS. The call to ServerAuthManager.getServerCertificate() returns null instead of the server certificate because the sslSessionContext doesn't have any session ids.
ServerAuthManager.java:113

if (sslSessionContext.getSession(session.getId()) != null) {
    // sslSessionContext.getSession() returns null with JDK 14.0.1
    // returns the server certificate
}

The workaround is to use -Djdk.tls.server.enableSessionTicketExtension=false on the server, or use JDK 15, according to the OpenJDK bug report: https://bugs.openjdk.java.net/browse/JDK-8242008
Re: Vote: Change from subversion to git
Hi Dennis, Yes, the vote passed (minimum 3 committers), you can post a [RESULT] email with the voting results of all voters. I found the following: https://cwiki.apache.org/confluence/display/commons/MovingToGit The Maven project's progress: https://cwiki.apache.org/confluence/display/MAVEN/Git+Migration#GitMigration-ThingstodiscusswithINFRA Regards, Peter. On 5/29/2020 4:27 AM, Michael Sobolewski wrote: +1 Mike On May 28, 2020, at 1:23 PM, Dennis Reedy wrote: Is 3 enough to carry the vote for success? If so, what are the next steps? Do we need to contact infrastructure? Regards Dennis On Wed, May 27, 2020 at 9:04 PM Norman Kabir wrote: Another vote for Git. On Wed, May 27, 2020 at 6:26 PM Dennis Reedy wrote: Git provides greater flexibility for distributed development, feature branches, pull requests, etc... this is a vote to move River from subversion to git
Re: Gradle Build [PREVIOUSLY] Re: Board feedback - Request discuss attic for River
Excellent, thanks Mike, Good to have you with us. Cheers, Peter. On 5/28/2020 12:40 AM, Michael Sobolewski wrote: Hi Peter, I am still interested. Dennis worked with me on the SORCER/Rio integration for a couple years at AFRL/WPAFB. He helped us to integrate all projects at the Multidisciplinary Science and Technology Center with git/gradle uniform build automation, distributions and testing. I do not see a better alternative for River than git/gradle. If Dennis needs my help I am available. Regards Mike On May 27, 2020, at 3:33 AM, Peter Firmstone wrote: Thanks Dan, Hi Dennis, I recall Michael from Sorcer Soft (cc'd) also showed interest in a Gradle build. The modular build is here: https://svn.apache.org/viewvc/river/jtsk/modules/ svn checkout http://svn.apache.org/repos/asf/river/jtsk/modules Do you still have svn access? It's a development build, I think people would be pleased to see some development action. The qa test suite is currently an ant build. Regards, Peter. On 5/27/2020 3:40 AM, Dan Rollo wrote: Regarding a gradle build: I’m not against a gradle build, but I’m by no means a gradle expert. For the initial modular build, I think the “opinionated” nature of maven is helpful in providing some guard rails, but that could just be a function of me having more familiarity with maven. Dan
Re: Vote: Change from subversion to git
+1 Peter On 5/28/2020 9:26 AM, Dennis Reedy wrote: Git provides greater flexibility for distributed development, feature branches, pull requests, etc... this is a vote to move River from subversion to git
Re: Gradle Build [PREVIOUSLY] Re: Board feedback - Request discuss attic for River
River's still using SVN. Feel free to bring that up for discussion or a vote if you like, I don't think there will be any resistance to change. On 5/27/2020 11:01 PM, Dennis Reedy wrote: Peter, I’ll try checking that out. One thing, I had thought River switched to git? Or is River still using subversion? Dennis On May 27, 2020, at 4:33 AM, Peter Firmstone wrote: Thanks Dan, Hi Dennis, I recall Michael from Sorcer Soft (cc'd) also showed interest in a Gradle build. The modular build is here: https://svn.apache.org/viewvc/river/jtsk/modules/ svn checkout http://svn.apache.org/repos/asf/river/jtsk/modules Do you still have svn access? It's a development build, I think people would be pleased to see some development action. The qa test suite is currently an ant build. Regards, Peter. On 5/27/2020 3:40 AM, Dan Rollo wrote: Regarding a gradle build: I’m not against a gradle build, but I’m by no means a gradle expert. For the initial modular build, I think the “opinionated” nature of maven is helpful in providing some guard rails, but that could just be a function of me having more familiarity with maven. Dan
Re: Proxy identity behaves unexpectedly for secure services.
Thanks Dan, I'll raise a bug on JIRA. Cheers, Peter. On 5/27/2020 3:45 AM, Dan Rollo wrote: I don’t know the historical reason for the proxy identity changing behavior you describe. Assuming no good reason to keep it, I’m +1 to remove MethodConstraints from the proxy identity logic. Dan From: Peter Firmstone Subject: Proxy identity behaves unexpectedly for secure services. Date: May 26, 2020 at 4:10:50 AM EDT To: dev@river.apache.org Hello River folk. As you are probably aware, I have an interest in security and have been focused on simplifying the use of secure services. In JGDMS the qa suite runs in jsse mode, which means the majority of tests are run with a login Subject and services use SSL/TLS Endpoints. A number of tests that passed with non-secure Endpoints failed with SSL/TLS Endpoints. Activation also failed; like the failing tests, it made assumptions about proxy identity. One of the problems I faced was that proxy identity is defined by the underlying InvocationHandler's equals method, namely that of BasicObjectEndpoint. BasicObjectEndpoint was including the client's MethodConstraints in the Proxy's identity. This meant the service proxy's identity changed after the client applied constraints, and as a result, the tests weren't passing because the proxy's identity wasn't as expected. Also Activation would fail, as the ActivationID would be different. A code comment placed in the ActivatableInvocationHandler.activate0() method describes the workaround for this issue:

/* Equality of ActivationID instances is influenced by the
 * equality of their Activator proxy's InvocationHandler;
 * when client constraints differ, and everything else is
 * identical, they will not be equal. Proxies deserialized
 * by atomic input streams will inherit the constraints of
 * the stream from which they were deserialized.
 */
// if (!id.equals(handler.id)) {
if (id.hashCode() != handler.getActivationID().hashCode()) { // Same UID.
    StringBuilder sb = new StringBuilder(128);
    sb.append("unexpected activation id: ")
        .append(handler.getActivationID())
        .append(" expected: ")
        .append(id);
    throw new ActivateFailedException(sb.toString());
}

It got worse when I implemented AtomicILFactory: in atomic streams, when proxies are unmarshalled, they inherit any client constraints applied to the stream, to prevent elevation-of-privilege gadget attacks, where a third-party proxy might bypass an integrity or privacy constraint, for instance allowing a connection that wasn't encrypted, or not authenticated. Again, this feature changed the identity of the proxy, and tests failed because the proxy the test used for confirmation didn't have client constraints applied. So it appears to me that the client's MethodConstraints shouldn't be part of proxy identity. Does anyone have a good reason why client MethodConstraints should be part of proxy identity? It doesn't seem right to me that the client is able to change the identity of a proxy just by applying constraints. This seems to have been overlooked in the implementation. Also as a side note, I needed to make a lot of changes to existing services to support secure endpoints, as the servers often didn't reply to callback proxies using their Subject; for example, EventListeners didn't work in a secure environment. I fixed that of course. These are changes I'd like to make to River, to make secure services behave like insecure services, but with security. :) So that security can be a configuration concern, and new developers can develop and test their application, then later configure it to be secure without it breaking. Thanks, Peter.
Gradle Build [PREVIOUSLY] Re: Board feedback - Request discuss attic for River
Thanks Dan, Hi Dennis, I recall Michael from Sorcer Soft (cc'd) also showed interest in a Gradle build. The modular build is here: https://svn.apache.org/viewvc/river/jtsk/modules/ svn checkout http://svn.apache.org/repos/asf/river/jtsk/modules Do you still have svn access? It's a development build, I think people would be pleased to see some development action. The qa test suite is currently an ant build. Regards, Peter. On 5/27/2020 3:40 AM, Dan Rollo wrote: Regarding a gradle build: I’m not against a gradle build, but I’m by no means a gradle expert. For the initial modular build, I think the “opinionated” nature of maven is helpful in providing some guard rails, but that could just be a function of me having more familiarity with maven. Dan
Re: Board feedback - Request discuss attic for River
Hi Dennis, Replies inline below. On 5/22/2020 2:56 PM, Dennis Reedy wrote: Hi Peter, Some quick late night thoughts ... - Time depending, helping with a Gradle build for River is something that I'd be glad to participate in. I think one of the biggest challenges will be getting the testing framework working, or choosing a modern equivalent. I would definitely appreciate your participation :) - As it relates to combining Rio and River, I'd vote no. River has enough complexities/challenges, no reason to muddy the waters even more with Rio. Perhaps looking at how to create River components within a Spring Boot app might be more attractive, and open up an entire open-source eco-system. Ok, interested to hear your thoughts on that, I'm not a Spring Boot dev. - I have not seen the tide turning for dynamic code, my experience has been the opposite. Due to complexity no doubt, and additional testing requirements. I was in my own bubble, thinking about how the complexities of dynamic code can be made simpler and how this might affect its adoption in the future. - Something else to consider is to look at the level of effort for moving River to a more current Java version though, at least Java 8 if not 11. JGDMS is Java 8 source and tested on Java 11; actually it has some minor changes to support Java 11 specifically, but I can't remember them off hand. So if this code is brought in after the modular build, we'll pick up these changes. Regards Dennis On Fri, May 22, 2020 at 12:07 AM Peter Firmstone < peter.firmst...@zeus.net.au> wrote: Hi Dennis, Good to hear from you. The truth is, I'm no build expert, Dan is much more capable than I am, so I'm waiting to hear Dan's opinion on Gradle; it certainly looks good. Would you be interested in helping to create a Gradle build? Something else I've been thinking about: you once proposed to integrate Rio with River; by the time I saw the proposal it had unfortunately already been shot down.
I was wondering if you were still interested in doing that? If so, what do others on the list think about it? I think these are good suggestions, but we'll need some help with presentation and communication. I'm low-level infrastructure focused, not so much at the application level. River's potential future strong points with IPv6 will be secure end-to-end dynamic discovery. The tide seems to be turning for dynamic code, which was both a strength and an Achilles heel; it is now looking a lot less like an Achilles heel and more like a strength again, as originally envisioned. Unfortunately today Android and Apple iOS are not really supportable as Jini clients, so I have doubts about consumer IoT; I think we might be better suited to industrial IoT, time will tell. We will have the only pure Java Object-based remote invocation framework that can integrate properly with OSGi, when support for it is added. I think we've had much longer to understand problems with distributed computing and are overcoming them now. I'll reply some more later, given time to think some more... Regards, Peter. On 5/22/2020 3:20 AM, Dennis Reedy wrote: I think showing/explaining how River can fit into a larger eco-system of existing applications would certainly help. How could River augment Spring Boot? What would it look like to combine River and Kafka? A discussion of what it would mean to deploy micro-services built with River in the cloud? What are the set of problems that River solves in 2020 that are not solvable by other technologies? Is this about IoT? If so then the project page should reflect that. As far as the modular build is concerned, sure it may help, but if it is done, let's please use Gradle. A few years ago I put this example <https://github.com/dreedyman/apache-river-example> together, it may help with that. On Thu, May 21, 2020 at 9:54 AM Bryan Thompson wrote: I think engagement requires a few things. +1 on the technical points that you have called out Peter.
But we also need to engage developers beyond those who are traditional users of River/Jini to foster new development with River, which can then lead to new development of River by a broader base of developers. The modular build can definitely help create opportunities for developers to get engaged. But having examples of what they can achieve, and how easily, using modern security and IPv6 is important for there to be any new developers using River. My question would be where are we on that roadmap. I know that we have been making progress towards this. When will the project reach a state where it will engage a new group of developers, and what actions will it take to help make that happen beyond the technology development? Bryan On Thu, May 21, 2020 at 01:46 Peter Firmstone < peter.firmst...@zeus.net.au wrote: Thanks Patricia, I'm not sure we're going to see a significant number of developers contributing...
Re: Board feedback - Request discuss attic for River
Hi Dennis, Good to hear from you. The truth is, I'm no build expert, Dan is much more capable than I am, so I'm waiting to hear Dan's opinion on Gradle; it certainly looks good. Would you be interested in helping to create a Gradle build? Something else I've been thinking about: you once proposed to integrate Rio with River; by the time I saw the proposal it had unfortunately already been shot down. I was wondering if you were still interested in doing that? If so, what do others on the list think about it? I think these are good suggestions, but we'll need some help with presentation and communication. I'm low-level infrastructure focused, not so much at the application level. River's potential future strong points with IPv6 will be secure end-to-end dynamic discovery. The tide seems to be turning for dynamic code, which was both a strength and an Achilles heel; it is now looking a lot less like an Achilles heel and more like a strength again, as originally envisioned. Unfortunately today Android and Apple iOS are not really supportable as Jini clients, so I have doubts about consumer IoT; I think we might be better suited to industrial IoT, time will tell. We will have the only pure Java Object-based remote invocation framework that can integrate properly with OSGi, when support for it is added. I think we've had much longer to understand problems with distributed computing and are overcoming them now. I'll reply some more later, given time to think some more... Regards, Peter. On 5/22/2020 3:20 AM, Dennis Reedy wrote: I think showing/explaining how River can fit into a larger eco-system of existing applications would certainly help. How could River augment Spring Boot? What would it look like to combine River and Kafka? A discussion of what it would mean to deploy micro-services built with River in the cloud? What are the set of problems that River solves in 2020 that are not solvable by other technologies? Is this about IoT?
If so then the project page should reflect that. As far as the modular build is concerned, sure it may help, but if it is done, let's please use Gradle. A few years ago I put this example <https://github.com/dreedyman/apache-river-example> together, it may help with that. On Thu, May 21, 2020 at 9:54 AM Bryan Thompson wrote: I think engagement requires a few things. +1 on the technical points that you have called out Peter. But we also need to engage developers beyond those who are traditional users of River/Jini to foster new development with River, which can then lead to new development of River by a broader base of developers. The modular build can definitely help create opportunities for developers to get engaged. But having examples of what they can achieve, and how easily, using modern security and IPv6 is important for there to be any new developers using River. My question would be where are we on that roadmap. I know that we have been making progress towards this. When will the project reach a state where it will engage a new group of developers, and what actions will it take to help make that happen beyond the technology development? Bryan On Thu, May 21, 2020 at 01:46 Peter Firmstone Thanks Patricia, I'm not sure we're going to see a significant number of developers contributing in the near future. Unfortunately the technical matters, which are complex and difficult, have caused a lot of argument in the past and have been the cause of a high barrier to entry for new developers. At least that's what I think, feel free to provide your perspective. I have been working on solutions to address some of the complexity. I noticed that the board thought we hadn't had any commits for 3+ years; I'll be sure to add some commit statistics to the next board report. I counted 191 commits over the last three years: not big, but not nothing either.
I think it's fair to say that we need Apache's infrastructure: our web site, Jira bug reporting system, repository and mailing lists. The old 2.2 series code suffered from fragility due to race conditions and other bugs. Understandably this made some existing developers, who had probably suffered from this fragility in the past, very fearful of change, and the pace at which development was occurring frightened some, which impacted our ability to work on the code. For this reason I don't talk too much about the code, as I fear the return of arguments of old; you might say I'm a bit shy. In fact I am hoping that slowing down the pace of development, as was requested, has given people time to accept and adapt to the 3.0 release series. I have been pinning my hopes on the modular build to allow new developers to focus on smaller components without having to understand the whole project. Also I think that River is constrained by the limitations of IPv4; I mean, who only codes for the intranet these days? Once we integrate multicast IPv6 support (I have been using this for at least two years now), we can communicate easily over the internet.
Re: Board feedback - Request discuss attic for River
Thanks Patricia, I'm not sure we're going to see a significant number of developers contributing in the near future. Unfortunately the technical matters, which are complex and difficult, have caused a lot of argument in the past and have been the cause of a high barrier to entry for new developers. At least that's what I think, feel free to provide your perspective. I have been working on solutions to address some of the complexity. I noticed that the board thought we hadn't had any commits for 3+ years; I'll be sure to add some commit statistics to the next board report. I counted 191 commits over the last three years: not big, but not nothing either. I think it's fair to say that we need Apache's infrastructure: our web site, Jira bug reporting system, repository and mailing lists. The old 2.2 series code suffered from fragility due to race conditions and other bugs. Understandably this made some existing developers, who had probably suffered from this fragility in the past, very fearful of change, and the pace at which development was occurring frightened some, which impacted our ability to work on the code. For this reason I don't talk too much about the code, as I fear the return of arguments of old; you might say I'm a bit shy. In fact I am hoping that slowing down the pace of development, as was requested, has given people time to accept and adapt to the 3.0 release series. I have been pinning my hopes on the modular build to allow new developers to focus on smaller components without having to understand the whole project. Also I think that River is constrained by the limitations of IPv4; I mean, who only codes for the intranet these days? Once we integrate multicast IPv6 support (I have been using this for at least two years now), we can communicate easily over the internet.
Another concern I have is security. We should have fixed security issues a long time ago, however the mood of development at the time didn't foster that; I think it was 2010 when I first flagged security issues with Serialization. That's another reason why I've been hesitant to create bug fix releases: I don't think the code is good enough for a release without addressing some significant security issues first. But clearly people are using the security features of River like SSL/TLS, which indicates people are using River over the internet already, in spite of limitations with IPv4 NAT. I have other fixes that address TLS security issues in River and bring ciphers and constraints into 2020, rather than 2004-era ciphers. Regards, Peter. On 5/21/2020 10:54 AM, Patricia Shanahan wrote: The board tends to be more concerned about an active community than technical matters. We need to discuss whether there is a pool of potential contributors who are likely to become active. On 5/20/2020 5:51 PM, Peter Firmstone wrote: Hello River Folk, I've received feedback from the Board this morning, they are requesting that we discuss the Attic for River. Personally I think the project still has a lot of potential to pick up again once the modular build is complete, and it is a useful place to send patches and discuss changes. A very important patch was sent in June last year by Shawn Ellis for Java 11.0.3 and later for services using SSL/TLS. Another important change to the JERI protocol was discussed in September last year. http://mail-archives.apache.org/mod_mbox/river-dev/201909.mbox/browser What are your thoughts? I don't think the board is asking that we send River to the attic, just that we discuss it. Regards, Peter.
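[Editor's note] As an aside on the cipher modernisation mentioned at the top of this message: in plain JSSE, pinning an endpoint to TLSv1.2 with modern suites looks like the sketch below. This is standard `javax.net.ssl` usage, not River's constraint mechanism, and the chosen suite names are examples only.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Plain JSSE sketch: pin the protocol to TLSv1.2 and drop legacy suites.
// River expresses the same intent through constraints on its SSL/TLS
// endpoints rather than raw SSLParameters; this is for illustration only.
class TlsParamsSketch {
    static SSLParameters modernParameters() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default key/trust managers, default RNG
        SSLParameters params = ctx.getDefaultSSLParameters();
        params.setProtocols(new String[] { "TLSv1.2" });
        // Forward-secret AEAD suites instead of 2004-era CBC/RC4 suites.
        params.setCipherSuites(new String[] {
            "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
            "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
        });
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(String.join(",", modernParameters().getProtocols()));
    }
}
```

The resulting SSLParameters would be applied to an SSLSocket or SSLEngine before the handshake.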
Re: Board feedback - Request discuss attic for River
Thanks Dan. :) On 5/21/2020 12:20 PM, Dan Rollo wrote: Hi Peter, I acknowledge the contributions are slow to come, but I agree with your observation that contributions are still coming. I would prefer the River project not be moved to the attic just yet. (“I don’t want to go on the cart. I feel better”… too soon? ;) Dan From: Peter Firmstone Subject: Board feedback - Request discuss attic for River Date: May 20, 2020 at 8:51:42 PM EDT To: dev@river.apache.org Hello River Folk, I've received feedback from the Board this morning, they are requesting that we discuss the Attic for River. Personally I think the project still has a lot of potential to pick up again once the modular build is complete, and it is a useful place to send patches and discuss changes. A very important patch was sent in June last year by Shawn Ellis for Java 11.0.3 and later for services using SSL/TLS. Another important change to the JERI protocol was discussed in September last year. http://mail-archives.apache.org/mod_mbox/river-dev/201909.mbox/browser What are your thoughts? I don't think the board is asking that we send River to the attic, just that we discuss it. Regards, Peter.
Complexity and Codebase annotations
visible to each client's bundle, as the client bundle ClassLoader will be the parent of the service proxy ClassLoader. In this case service codebase annotations would include the proxy and the service API jar URLs, in case some interfaces were not imported by the client's OSGi bundle (not resolvable by local code). Code that is resolved locally and reserialized across the marshalling stream will not lose its identity, as it is resolved by the ClassLoader at the remote Endpoint; therefore codebase annotation loss issues don't occur. The challenge for modular environments is creating a ClassLoader that is unique to the service while also wiring dependencies for maximum compatibility. Once the ClassLoader has been assigned to a client Endpoint, it will continue to be used without consulting codebase annotations again. With this model, there are no codebase annotations to lose, the server is always consulted for the configured codebase annotation whenever a service proxy is serialized, and the use of ClassLoaders is compatible with modular environments like OSGi and Maven / Plexus Classworlds. This also allows for evolution of serialized proxy state, to a newer versioned codebase for example, rather than transferring the old codebase annotation, which was subject to codebase annotation loss from node to node. I think this has considerably simplified class resolution for services by handing back responsibility to ClassLoaders. Note that it's complementary to existing infrastructure, without requiring anyone to change if they don't want to. This will also eliminate the many problems that codebase annotation loss causes for new developers. Regards, Peter Firmstone.
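[Editor's note] The per-service ClassLoader idea above can be sketched in a few lines. This is an illustrative sketch only, not River's actual API: each proxy gets its own named ClassLoader (named after the service's identity), with the client's loader as parent so interfaces resolvable locally are shared with the client, as in the OSGi arrangement described above.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch of ClassLoader-per-service wiring (not River's API).
class ProxyLoaderSketch {

    // serviceIdentity stands in for the server's Principal; in OSGi the
    // parent would be the client bundle's ClassLoader.
    static ClassLoader loaderForService(String serviceIdentity,
                                        URL[] codebase,
                                        ClassLoader clientLoader) {
        // Named-ClassLoader constructor requires Java 9+.
        return new URLClassLoader(serviceIdentity, codebase, clientLoader);
    }

    public static void main(String[] args) {
        ClassLoader loader = loaderForService(
            "service-principal-example", new URL[0],
            ProxyLoaderSketch.class.getClassLoader());
        System.out.println(loader.getName()); // prints service-principal-example
    }
}
```

Because the loader is keyed by service identity rather than by codebase annotation, two services sharing an annotation would still get distinct loaders, and a refreshed annotation simply produces a new loader on the next deserialization.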
Board feedback - Request discuss attic for River
Hello River Folk, I've received feedback from the Board this morning, they are requesting that we discuss the Attic for River. Personally I think the project still has a lot of potential to pick up again once the modular build is complete, and it is a useful place to send patches and discuss changes. A very important patch was sent in June last year by Shawn Ellis for Java 11.0.3 and later for services using SSL/TLS. Another important change to the JERI protocol was discussed in September last year. http://mail-archives.apache.org/mod_mbox/river-dev/201909.mbox/browser What are your thoughts? I don't think the board is asking that we send River to the attic, just that we discuss it. Regards, Peter.
Re: Further update regarding firewall and NAT issues in River
Hi Bishnu, The plan is to donate this code back to River after completing River's modular build (on a per-module basis; hopefully it will be more digestible for community review that way), and it may be subject to change following community review. If people find it useful, that should improve the likelihood of its acceptance; I would appreciate any feedback. LookupLocator can be used for https unicast lookup as documented: https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-platform/src/main/java/net/jini/core/discovery/LookupLocator.java Reggie needs to be configured to support https unicast; it is not documented at this time as it is still experimental. This is the Reggie code that reads in the configuration:

this.httpsUnicastPort = Config.getIntEntry(
    config, COMPONENT, "httpsUnicastDiscoveryPort", 443, 0, 0x);
this.enableHttpsUnicast = config.getEntry(
    COMPONENT, "enableHttpsUnicast", Boolean.class, Boolean.FALSE);

Using IPv6 multicast requires the following properties to be set; I've been using IPv6 for some time now. Note only the announcement protocol can be global. :)

java.net.preferIPv6Addresses: interpreted as a boolean value. If true, the jini-announcement <https://pfirmstone.github.io/JGDMS/old-static-site/doc/specs/html/discovery-spec.html#19194> and jini-request <https://pfirmstone.github.io/JGDMS/old-static-site/doc/specs/html/discovery-spec.html#40029> protocols will use IPv6 multicast addresses: IANA IPv6 Multicast Addresses <http://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xhtml>

net.jini.discovery.GLOBAL_ANNOUNCE: interpreted as a boolean value. If true, jini-announcement <https://pfirmstone.github.io/JGDMS/old-static-site/doc/specs/html/discovery-spec.html#19194> will join the global multicast address group FF0X::155. If false, the jini-announcement protocol will join the site-local multicast address group FF05::155.
As defined in RFC 4291, IPv6 multicast addresses which differ only in scope represent different groups. Clients joining the global group will not receive site-local announcement packets, and vice versa. You're probably best downloading and building from the latest source as it contains improvements; however it's also available at Maven Central. https://mvnrepository.com/artifact/au.net.zeus.jgdms Regards, Peter. On 5/16/2020 7:36 PM, Bishnu Gautam wrote: Hi Peter Thanks for your response. It would be great to have the Unicast https implementation, or the IPv6 multicast discovery will also work for me. Could you share the code and the documents about it? I would really appreciate that. Sincerely Yours Bishnu Prasad Gautam From: Peter Firmstone Sent: Wednesday, May 13, 2020 7:52 AM To: dev@river.apache.org Subject: Re: Further update regarding firewall and NAT issues in River Hi Bishnu, Can you use IPv6? I have a Unicast https implementation (to get through https firewalls) and IPv6 multicast discovery (can be configured to be global or limited to your intranet). There were a number of different types of IPv4 NAT firewalls / routers, so it was never going to be reliable; I kinda figured it would impact negatively if users had to debug it. Cheers, Peter. On 5/5/2020 10:32 PM, Bishnu Gautam wrote: Hello All Are there any updates regarding firewall bypassing in Apache River? I have been around Jini Technology a decade ago and was stuck due to its inability to punch through firewall and NAT. However, there used to be some threads about work on it by some of the developers. It would be great if there is any work going on about it. I came to know that there is a technique of UDP hole punching by which you can overcome this constraint. Please let me know if anybody is working on anything that can address the firewall issue. It would be a great tool, especially in IoT networks, if we can overcome the firewall and NAT issue. Bishnu Prasad Gautam
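[Editor's note] Putting the two system properties described earlier in this message together, a service launch line might look like the following. This is a config sketch only: the jar and configuration file names (start.jar, reggie.config) are placeholders, not the project's actual artifacts.

```shell
# Hypothetical launch line; start.jar and reggie.config are placeholders.
# Prefer IPv6, and keep announcements site-local (group FF05::155):
java -Djava.net.preferIPv6Addresses=true \
     -Dnet.jini.discovery.GLOBAL_ANNOUNCE=false \
     -jar start.jar reggie.config
```

Setting GLOBAL_ANNOUNCE=true would instead join the global announcement group, and, per the RFC 4291 scoping rules above, site-local clients would then no longer see the announcements.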
Project Loom and other stuff
Project Loom looks very promising: http://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part1.html River uses a lot of threads, many of which block waiting for network responses... Virtual Threads could help significantly with scalability. Something else I've thought about in the past is Pack200; it was introduced in Java 5 but was never part of Jini. Is anyone aware of it having been used to reduce downloads? Incidentally it's been removed from Java since 14, I think. While focused on using River on the Internet, one of the considerations I had was remote endpoint identity. I had realised that multiple parties would be involved and it could cause problems if different parties shared the same identity locally; incidentally River grants permission based on a ClassLoader for a service proxy, so a second party might be able to obtain access to private state, as well as the permissions and identity of another service, simply by using the same codebase annotation. When I wrote AtomicILFactory, I made sure that ObjectEndpoint was part of the identity in addition to the codebase annotation. Of course it works a little differently from other AbstractILFactory instances: rather than encoding the codebase annotation into the stream, the ServerEndpoint is contacted and authenticated, the codebase annotation is requested from the server, and a ClassLoader is created specifically for the service's identity (it can be effectively treated as the server's Principal); then the service proxy is deserialized into it. This addresses some codebase annotation loss issues identified by Warres et al, and a service can change its codebase annotation; the next time a service is deserialized, it will use the latest codebase annotation. N.B. How a ClassLoader is assigned is extensible if required, e.g. OSGi. Cheers, Peter.
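[Editor's note] The thread-per-blocking-call pattern Loom enables can be sketched with the API that eventually shipped in Java 21 (at the time of this email Loom was still a preview). The sketch below is illustrative, not River code: each simulated blocking network wait gets its own cheap virtual thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only (requires Java 21+): virtual threads make one-thread-per-
// blocking-call cheap, which suits code that blocks on network responses.
class VirtualThreadSketch {
    public static void main(String[] args) throws Exception {
        AtomicInteger completed = new AtomicInteger();
        // One virtual thread per task; creating thousands is inexpensive,
        // unlike platform threads.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(5); // stands in for a blocking network wait
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // ExecutorService.close() waits for all tasks to finish
        System.out.println(completed.get()); // prints 1000
    }
}
```

While a virtual thread sleeps (or blocks on a socket), its carrier platform thread is freed to run other virtual threads, which is the scalability gain referred to above.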
Re: Further update regarding firewall and NAT issues in River
Hi Bishnu, Can you use IPv6? I have a Unicast https implementation (to get through https firewalls) and IPv6 multicast discovery (can be configured to be global or limited to your intranet). There were a number of different types of IPv4 NAT firewalls / routers, so it was never going to be reliable; I kinda figured it would impact negatively if users had to debug it. Cheers, Peter. On 5/5/2020 10:32 PM, Bishnu Gautam wrote: Hello All Are there any updates regarding firewall bypassing in Apache River? I have been around Jini Technology a decade ago and was stuck due to its inability to punch through firewall and NAT. However, there used to be some threads about work on it by some of the developers. It would be great if there is any work going on about it. I came to know that there is a technique of UDP hole punching by which you can overcome this constraint. Please let me know if anybody is working on anything that can address the firewall issue. It would be a great tool, especially in IoT networks, if we can overcome the firewall and NAT issue. Bishnu Prasad Gautam
Re: dev Digest 7 May 2020 15:27:04 -0000 Issue 1659
I'm a bit of a Hermit myself now too. Looking forward to getting out again when this is all over. Good to hear you're all well. Cheers, Peter. On 5/8/2020 4:45 AM, Dan Rollo wrote: +1 Oddly enough, I work remotely, and it seems things are busier than pre-pandemic. Thankfully, healthy so far. Happy hermit life. Dan On May 7, 2020, at 11:27 AM, dev-digest-h...@river.apache.org <mailto:dev-digest-h...@river.apache.org> wrote: *From:*Peter Firmstone <mailto:peter.firmst...@zeus.net.au>> *Subject:**Draft Report River - May 2020* *Date:*May 7, 2020 at 3:31:09 AM EDT *To:*dev@river.apache.org <mailto:dev@river.apache.org> Hello River Folk, Please review the May report draft below. With work starting to slow down, I should have some time to complete the modular build soon. How are you being impacted by Covid-19? Regards, Peter Firmstone. ## Description: - Apache River provides a platform for dynamic discovery and lookup search of network services. Services may be implemented in a number of languages, while clients are required to be JVM-based (presently at least), to allow proxy JVM byte code to be provisioned dynamically. ## Issues: - There are no issues requiring board attention at this time. ## Activity: - Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s. - The monolithic code has been svn-moved into modules in an initial maven build structure; the next step is to move the junit tests to each module.
- Until the monolithic build has been broken up into maven modules, we are likely to have difficulty attracting new contributors due to the appearance of complexity. Release roadmap: River 3.1 - Modular build restructure (& binary release); River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service; River 3.3 - OSGi support ## Health report: - River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks, however the use of IPv6 on public networks removes these limitations. Web services evolved with the publish-subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and avoid any dependency on the proprietary cloud for "things". - Future Direction: * Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement) * Input validation for java deserialization - prevents DOS and gadget attacks. * IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery). * Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance under some use cases. * Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers. * Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing ant build that uses classdepandjar, a bytecode dependency analysis build tool. * Updating the Jini specifications. ## Project Composition: There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3. ## Community changes, past quarter: No new PMC members. Last addition was Dan Rollo on 2017-12-01. No new committers. Last addition was Dan Rollo on 2017-11-02. ## Project Release Activity: - Recent releases: River-3.0.0 was released on 2016-10-06. river-jtsk-2.2.3 was released on 2016-02-21. river-examples-1.0 was released on 2015-08-10.
Draft Report River - May 2020
Hello River Folk, Please review the May report draft below. With work starting to slow down, I should have some time to complete the modular build soon. How are you being impacted by Covid-19? Regards, Peter Firmstone. ## Description: - Apache River provides a platform for dynamic discovery and lookup search of network services. Services may be implemented in a number of languages, while clients are required to be JVM-based (presently at least), to allow proxy JVM byte code to be provisioned dynamically. ## Issues: - There are no issues requiring board attention at this time. ## Activity: - Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s. - The monolithic code has been svn-moved into modules in an initial maven build structure; the next step is to move the junit tests to each module. - Until the monolithic build has been broken up into maven modules, we are likely to have difficulty attracting new contributors due to the appearance of complexity. Release roadmap: River 3.1 - Modular build restructure (& binary release); River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service; River 3.3 - OSGi support ## Health report: - River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks.
IPv4 NAT limitations historically prevented the use of River on public networks, however the use of IPv6 on public networks removes these limitations. Web services evolved with the publish-subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and avoid any dependency on the proprietary cloud for "things". - Future Direction: * Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement) * Input validation for java deserialization - prevents DOS and gadget attacks. * IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery). * Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance under some use cases. * Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers. * Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it. * Maven build to replace the existing ant build that uses classdepandjar, a bytecode dependency analysis build tool. * Updating the Jini specifications. ## Project Composition: There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3. ## Community changes, past quarter: No new PMC members. Last addition was Dan Rollo on 2017-12-01. No new committers. Last addition was Dan Rollo on 2017-11-02. ## Project Release Activity: - Recent releases: River-3.0.0 was released on 2016-10-06.
river-jtsk-2.2.3 was released on 2016-02-21. river-examples-1.0 was released on 2015-08-10.
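The roadmap's "input validation for Serialization" goal can be illustrated with the JDK's standard `ObjectInputFilter` (Java 9+). This is a generic sketch of allowlist-style deserialization filtering, not River's actual validating input stream; the limits and the String-only allowlist are arbitrary choices for the example:

```java
import java.io.*;

public class FilterDemo {
    // Reject deep graphs, huge arrays, excessive references, and unexpected classes.
    static final ObjectInputFilter FILTER = info -> {
        if (info.depth() > 10 || info.references() > 1000
                || info.arrayLength() > 10_000) {
            return ObjectInputFilter.Status.REJECTED; // resource limits defeat DoS payloads
        }
        Class<?> c = info.serialClass();
        if (c != null && c != String.class) {
            return ObjectInputFilter.Status.REJECTED; // unknown class => gadget-chain risk
        }
        return ObjectInputFilter.Status.UNDECIDED;
    };

    static Object roundTrip(Object o) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        ois.setObjectInputFilter(FILTER); // classes are vetted before instantiation
        return ois.readObject();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));  // String is on the allowlist
        try {
            roundTrip(new java.util.Date());     // any other class is rejected
        } catch (InvalidClassException expected) {
            System.out.println("rejected");
        }
    }
}
```

The key property, which River's delayed-unmarshalling work also aims at, is that validation happens before untrusted bytes are turned into live objects.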
Re: February Board Report Draft
I didn't make the February deadline, so I'll post the report in time for March. +1 Peter. Please vote at your convenience. Regards, Peter.

On 2/20/2020 6:29 AM, Dan Rollo wrote: Looks good to me. +1 Dan Rollo

From: Peter Firmstone
Subject: February Board Report Draft
Date: February 18, 2020 at 10:10:04 PM EST
To: dev@river.apache.org

Hello River folk, please review / comment / suggest changes for the draft board report for February below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- There are no issues requiring board attention at this time.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s.
- The monolithic code has been svn moved into modules in an initial Maven build structure; the next step is to move the JUnit tests to each module.
- Until the monolithic build has been broken up into Maven modules, we are likely to have difficulty attracting new contributors due to the appearance of complexity.
Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations. Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.
## Project Composition: There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3. ## Community changes, past quarter: No new PMC members. Last addition was Dan Rollo on 2017-12-01. No new committers. Last addition was Dan Rollo on 2017-11-02. ## Project Release Activity: - Recent releases: River-3.0.0 was released on 2016-10-06. river-jtsk-2.2.3 was released on 2016-02-21. river-examples-1.0 was released on 2015-08-10.
February Board Report Draft
Hello River folk, please review / comment / suggest changes for the draft board report for February below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- There are no issues requiring board attention at this time.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s.
- The monolithic code has been svn moved into modules in an initial Maven build structure; the next step is to move the JUnit tests to each module.
- Until the monolithic build has been broken up into Maven modules, we are likely to have difficulty attracting new contributors due to the appearance of complexity.

Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations.
Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.

## Project Composition:
There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3.

## Community changes, past quarter:
No new PMC members. Last addition was Dan Rollo on 2017-12-01.
No new committers. Last addition was Dan Rollo on 2017-11-02.

## Project Release Activity:
- Recent releases:
River-3.0.0 was released on 2016-10-06.
river-jtsk-2.2.3 was released on 2016-02-21.
river-examples-1.0 was released on 2015-08-10.
November Board Report Draft
Hello River folk, please review / comment / suggest changes for the draft board report for November below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- There are no issues requiring board attention at this time.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s.
- The monolithic code has been svn moved into modules in an initial Maven build structure; the next step is to move the JUnit tests to each module.

Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations.
Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.

## Project Composition:
There are currently 16 committers and 12 PMC members in this project. The Committer-to-PMC ratio is 4:3.

## Community changes, past quarter:
No new PMC members. Last addition was Dan Rollo on 2017-12-01.
No new committers. Last addition was Dan Rollo on 2017-11-02.

## Project Release Activity:
- Recent releases:
River-3.0.0 was released on 2016-10-06.
river-jtsk-2.2.3 was released on 2016-02-21.
river-examples-1.0 was released on 2015-08-10.
## JIRA activity: 1 issue opened in JIRA, past quarter (no change) 0 issues closed in JIRA, past quarter (-100% decrease)
Re: JERI Multiplexing protocol increasing the number of sessions.
Thanks Gregg, It works well in testing and allows a maximum of 256 remote objects between two nodes over a JERI endpoint. Is anyone approaching 128 remote objects in deployment? If no one objects in the next week, I'll assume lazy consensus. Regards, Peter.

On 3/09/2019 1:31 AM, Gregg Wonderly wrote: I think considering an unsigned byte value should be a reasonable step. Gregg Sent from my iPhone

On Sep 1, 2019, at 9:41 PM, Peter Firmstone wrote: Hello, The JERI multiplexing protocol allows 128 sessions between two nodes; when this value is exceeded, an exception is thrown and a connection cannot be made. I have run into some situations during stress testing where 128 sessions isn't enough. The JERI multiplexing protocol sends a signed byte, the allowable range being from 0 to 127 (inclusive), and consumes it at the remote end. However, I have noticed that in the implementation the checks for maximum and minimum sessions occur while the number is a 32 bit integer, before being cast to byte, so basically we can change this to an unsigned byte without breaking compatibility with existing implementations (until we exceed 128 sessions). Using an unsigned byte would allow a maximum of 255 sessions. As both endpoints have to consume a byte, increasing this value further would break the protocol. Existing connections already break when the number of sessions exceeds 128, so this will not cause any unexpected additional breakage. I'm not aware of any additional third party implementations of the JERI protocol. It's also worth noting that the JERI implementation of today is much faster and more efficient than the JERI released in Jini 2.1; 128 connections would have suffered from contention then, but today this isn't an issue. Regards, Peter.
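The signed-versus-unsigned point above can be sketched in a few lines. The helper names here are hypothetical (the real JERI mux code is structured differently); what matters is that the wire bits for a session id are unchanged, only the Java-side interpretation widens from 0..127 to 0..255:

```java
public class SessionIdSketch {
    // Encode a session id for the wire. Values 128..255 appear negative as a
    // Java byte, but the transmitted bits are identical either way.
    static byte encode(int sessionId) {
        if (sessionId < 0 || sessionId > 255) {
            throw new IllegalArgumentException("session id out of range: " + sessionId);
        }
        return (byte) sessionId;
    }

    // Read the byte back as unsigned, restoring the full 0..255 range.
    static int decode(byte wire) {
        return wire & 0xFF;
    }

    public static void main(String[] args) {
        // ids 0..127 round-trip identically under both interpretations,
        // so old peers interoperate as long as fewer than 128 sessions are open
        System.out.println(decode(encode(127))); // 127
        // ids above 127 survive the round trip only with the unsigned read
        System.out.println(decode(encode(200))); // 200
    }
}
```

A signed read of the second value would yield -56, which is why the old implementation had to cap sessions at 128.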
Maven Build and OSGi Platform Support
/pfirmstone/JGDMS/wiki/OSGi-and-JGDMS Note this is separate from River; the integration of these features into River is dependent on community review and acceptance. Regards, Peter.
August Board Report - Draft
Hello River folk, please review / comment / suggest changes for the draft board report for August below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- There are no issues requiring board attention at this time.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s.

Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations. Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.

## PMC changes:
- Currently 12 PMC members.
- No new PMC members added in the last 3 months
- Last PMC addition was Dan Rollo on Fri Dec 01 2017

## Committer base changes:
- Currently 16 committers.
- No new committers added in the last 3 months
- Last committer addition was Dan Rollo on Thu Nov 02 2017

## Releases:
- Last release was River-3.0.0 on Thu Oct 06 2016

## Mailing list activity:
- dev@river.apache.org:
- 90 subscribers (up 0 in the last 3 months):
- 4 emails sent to list (4 in previous quarter)
- u...@river.apache.org:
- 90 subscribers (up 0 in the last 3 months):
- 1 email sent to list (1 in previous quarter)

## JIRA activity:
- 1 JIRA ticket created in the last 3 months
- 0 JIRA tickets closed/resolved in the last 3 months
River Board Report
Hello River folk, please review / comment / suggest changes for the draft board report for June below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- No significant issues requiring board attention at this time.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s.

Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations. Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.

## PMC changes:
- Currently 12 PMC members.
- No new PMC members added in the last 3 months
- Last PMC addition was Dan Rollo on Fri Dec 01 2017

## Committer base changes:
- Currently 16 committers.
- No new committers added in the last 3 months
- Last committer addition was Dan Rollo on Thu Nov 02 2017

## Releases:
- Last release was River-3.0.0 on Thu Oct 06 2016

## Mailing list activity:
- dev@river.apache.org:
- 90 subscribers (up 1 in the last 3 months):
- 4 emails sent to list (5 in previous quarter)
- u...@river.apache.org:
- 90 subscribers (down 2 in the last 3 months):
- 1 email sent to list (0 in previous quarter)
River Board Report
Hello River folk, please review / comment / suggest changes for the draft board report for March below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- Answers to board questions:
idf: It's been a year since the last committer addition. Are there any new prospects? - Not at present, due to low activity and the complexity of the unique monolithic build system. We are working to resolve this with a Maven modular build structure.
rs: Given 12 vs 16 members of the PMC and committership roster, is there anything preventing the remaining 4 committers from joining the PMC? - There are no blockers; I will ask them to join the PMC.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced. The current monolithic build is complex, with its own build tool, classdepandjar, which adds complexity for new developers. In recent months I have had work commitments that have limited my ability to integrate the modular build. The other committers are waiting for the modular build and I have done a lot of work on this locally; it has been a significant undertaking integrating the works of Dennis Reedy, Dan Rollo and myself. This is also a mature codebase, having been in development since the late 1990s.

Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks.
IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations. Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.

## PMC changes:
- Currently 12 PMC members.
- No new PMC members added in the last 3 months
- Last PMC addition was Dan Rollo on Fri Dec 01 2017

## Committer base changes:
- Currently 16 committers.
- No new committers added in the last 3 months
- Last committer addition was Dan Rollo on Thu Nov 02 2017

## Releases:
- Last release was River-3.0.0 on Thu Oct 06 2016

## /dist/ errors: 4
- TODO - Developer certificates expired; investigate a solution. I created new certificates prior to the expiry of my old certificates; should I re-sign the release artifacts with the new certificates?

## Mailing list activity:
- Relatively quiet
- dev@river.apache.org:
- 89 subscribers (down 1 in the last 3 months):
- 5 emails sent to list (9 in previous quarter)
- u...@river.apache.org:
- 92 subscribers (up 0 in the last 3 months):
- 1 email sent to list (0 in previous quarter)
November Board Report
Hello River folk, please review / comment / suggest changes for the draft board report for November below. Regards, Peter.

## Description:
- Apache River provides a platform for dynamic discovery and lookup of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically.

## Issues:
- No significant issues requiring board attention at this time.

## Activity:
- Minimal activity at present; initial work on the modular build structure has commenced, awaiting population with the River 3.0 code.

Release roadmap:
River 3.1 - Modular build restructure (& binary release)
River 3.2 - Input validation for Serialization, delayed unmarshalling & safe ServiceRegistrar lookup service.
River 3.3 - OSGi support

## Health report:
- River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks; however, the use of IPv6 on public networks removes these limitations. Web services evolved with the publish/subscribe model of today's internet. River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server; it has the potential to address many of the security issues currently experienced with IoT and to avoid any dependency on the proprietary cloud for "things".
- Future Direction:
* Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement)
* Input validation for Java deserialization - prevents DoS and gadget attacks.
* IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery).
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases.
* Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers.
* Secure TLS SocketFactories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it.
* Maven build to replace the existing Ant build that uses classdepandjar, a bytecode dependency analysis build tool.
* Updating the Jini specifications.

## PMC changes:
- Currently 12 PMC members.
- No new PMC members added in the last 3 months
- Last PMC addition was Dan Rollo on Fri Dec 01 2017

## Committer base changes:
- Currently 16 committers.
- No new committers added in the last 3 months
- Last committer addition was Dan Rollo on Thu Nov 02 2017

## Releases:
- Last release was River-3.0.0 on Thu Oct 06 2016

## Mailing list activity:
- Relatively quiet.
- dev@river.apache.org:
- 91 subscribers (down 3 in the last 3 months):
- 7 emails sent to list (6 in previous quarter)
- u...@river.apache.org:
- 92 subscribers (up 0 in the last 3 months):
- 1 email sent to list (3 in previous quarter)

## JIRA activity:
- 1 JIRA ticket created in the last 3 months
- 0 JIRA tickets closed/resolved in the last 3 months
Re: [jira] [Updated] (RIVER-467) NullPointerException when JoinManager is terminating
Thanks Shawn, Well spotted and sorted. Regards, Peter. On 5/11/2018 6:46 AM, Shawn Ellis (JIRA) wrote: [ https://issues.apache.org/jira/browse/RIVER-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Ellis updated RIVER-467: -- Attachment: LeaseRenewalManager-NPE-1.patch NullPointerException when JoinManager is terminating Key: RIVER-467 URL: https://issues.apache.org/jira/browse/RIVER-467 Project: River Issue Type: Bug Components: com_sun_jini_lookup Affects Versions: River_3.0.0 Reporter: Shawn Ellis Priority: Minor Attachments: LeaseRenewalManager-NPE-1.patch Every now and then I would encounter a NullPointerException with the JoinManager. The way that I was able to reproduce this problem was to have a service up and running and then switch the network on my laptop. For example, I would switch from one wifi network to another. I've attached a patch that protects against the NullPointerException and seems to work fine when switching networks. [^LeaseRenewalManager-NPE-1.patch] {code:java} Oct 7 14:8:28 CDT SEVERE:thr 652:Exception in thread "LeaseRenewalManager_thread-3" Oct 7 14:8:28 CDT SEVERE:thr 652:java.lang.NullPointerException Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lookup.JoinManager$ProxyReg.terminate(JoinManager.java:1266) Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lookup.JoinManager.removeTasks(JoinManager.java:2774) Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lookup.JoinManager.access$400(JoinManager.java:456) Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lookup.JoinManager$ProxyReg$DiscLeaseListener.notify(JoinManager.java:1188) Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lease.LeaseRenewalManager.tell(LeaseRenewalManager.java:1412) Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lease.LeaseRenewalManager.access$500(LeaseRenewalManager.java:322) Oct 7 14:8:28 CDT SEVERE:thr 652: at net.jini.lease.LeaseRenewalManager$RenewTask.run(LeaseRenewalManager.java:451) Oct 7 14:8:28 CDT SEVERE:thr 652: at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) Oct 7 14:8:28 CDT SEVERE:thr 652: at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) Oct 7 14:8:28 CDT SEVERE:thr 652: at java.base/java.lang.Thread.run(Thread.java:844) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
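The usual fix for this class of race, where a concurrent network or discovery event nulls a field between a check and its use, is to read the field once into a local. This is a generic sketch with hypothetical names, not the actual `JoinManager`/`LeaseRenewalManager` patch:

```java
public class TerminateGuard {
    interface Lease { void cancel(); }

    // A field that a concurrent discovery/network event may null out at any time.
    private volatile Lease serviceLease;

    TerminateGuard(Lease lease) { this.serviceLease = lease; }

    void clear() { serviceLease = null; }

    // Read the volatile field once into a local so a racing clear() between the
    // null check and the use cannot produce a NullPointerException.
    boolean terminate() {
        Lease lease = serviceLease;
        if (lease != null) {
            lease.cancel();
            return true;
        }
        return false; // already cleared: safe no-op instead of an NPE
    }

    public static void main(String[] args) {
        TerminateGuard g = new TerminateGuard(() -> {});
        System.out.println(g.terminate()); // true: lease cancelled
        g.clear();
        System.out.println(g.terminate()); // false: nothing left to cancel
    }
}
```

Checking `this.serviceLease != null` and then dereferencing `this.serviceLease` again would reintroduce the window the stack trace above fell into.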
Re: ConstrainableLookupLocator identity
Hmm, I should have said constraints shouldn't be part of LookupLocator identity below, my bad. Perhaps if ConstrainableLookupLocator used decoration, rather than inheritance, the design intent would have been clearer. I don't like the fact that a ConstrainableLookupLocator can be considered equal to a LookupLocator, but including constraints in equals breaks the following tests, so I'm going to lodge an issue and back it out. I think the design intent is like decorating a LookupLocator with constraints; it was never intended to change its identity.

org/apache/river/test/impl/mahalo/AdminIFTest.td
org/apache/river/test/impl/mahalo/AdminIFShutdownTest.td

N.B. This only affects people using secure services. Regards, Peter.

On 18/10/2018 6:40 AM, Bryan Thompson wrote: I've never used that aspect. Nothing to offer. B

On Wed, Oct 17, 2018 at 3:57 AM Peter Firmstone wrote: LookupLocator's identity contract:
/**
 * Two locators are equal if they have the same host and
 * port fields. The case of the host is ignored.
 * Alternative forms of the same IPv6 addresses for the host
 * value are treated as being unequal.
 */
At some point in history, here, http://svn.apache.org/viewvc/river/jtsk/trunk/src/net/jini/discovery/ConstrainableLookupLocator.java?r1=1034266=1034267; FindBugs identified that ConstrainableLookupLocator didn't override equals, so I implemented it. However, while it seemed to make sense at the time to include constraints, I'm finding that it's causing problems for discovery management, and now I'm thinking that the constraints, probably shouldn't be part of constraints. What are your thoughts? Regards, Peter.
ConstrainableLookupLocator identity
LookupLocator's identity contract: /** * Two locators are equal if they have the same host and * port fields. The case of the host is ignored. * Alternative forms of the same IPv6 addresses for the host * value are treated as being unequal. */ At some point in history, here, http://svn.apache.org/viewvc/river/jtsk/trunk/src/net/jini/discovery/ConstrainableLookupLocator.java?r1=1034266&r2=1034267 FindBugs identified that ConstrainableLookupLocator didn't override equals, so I implemented it. While it seemed to make sense at the time to include constraints, I'm finding that it's causing problems for discovery management, and now I'm thinking that constraints probably shouldn't be part of identity. What are your thoughts? Regards, Peter.
Testing progress
Hello River Folk, My focus in recent years has been addressing security concerns, such as deserialization and TLSv1.2 secure endpoints. During the River project's existence, we've only tested the TCP endpoints with the qa suite (the majority of tests), apart from some jtreg tests that exercise secure JERI endpoints. I don't talk about security a lot on this list, for two reasons: the first is that it's generally not a good idea to publicly discuss the details of security issues, at least until they're fixed; the second is that historically security has been a sensitive topic. When I first started running the qa test suite with secure endpoints, they didn't work at all, as the test suite was lacking the appropriate configuration and certificates, so I've added certificate, keystore and truststore generation. I've been running the qa tests using secure endpoints; initially there were a lot of test failures, but I've been working through them one by one. Some failures were simply missing permissions in policy files, or minor configuration file changes (note that configuration files for secure endpoints are independent of TCP configuration files and haven't changed since 2004). Some of the more difficult failures were remote callbacks being made by our service implementations without running in the service's logged-in Subject context, which is necessary to establish a connection using the service's certificates (services act as a client when making a remote callback). Basically we need our users to be able to utilise secure endpoints almost as easily as TCP endpoints, with simple configuration changes; we don't want them having to debug. Anyway, so you know work is happening: I'm making progress, and once all qa suite tests are passing using secure endpoints, I'll resume work on River's modular build, so we can, module by module, reintegrate my work. The following are tests that were failing, but are now running and passing with secure JERI endpoints.
### FIXED ### #org/apache/river/test/impl/outrigger/admin/DestroyTestMahalo.td,\ #org/apache/river/test/impl/outrigger/admin/LookupGroupAdminTestTxnMgr.td,\ #org/apache/river/test/impl/outrigger/leasing/TxnMgrLeaseGrantTest.td,\ #org/apache/river/test/impl/outrigger/leasing/TxnMgrLeaseGrantTestAnyLength.td,\ #org/apache/river/test/impl/outrigger/leasing/TxnMgrLeaseGrantTestForever.td,\ #org/apache/river/test/impl/outrigger/leasing/UseNotifyLeaseTestShutdown.td,\ #org/apache/river/test/impl/outrigger/leasing/UseTxnMgrLeaseTest.td,\ #org/apache/river/test/impl/outrigger/leasing/UseTxnMgrLeaseTestCancel.td,\ #org/apache/river/test/impl/outrigger/leasing/UseTxnMgrLeaseTestRenew.td,\ #org/apache/river/test/impl/outrigger/leasing/UseTxnMgrLeaseTestRenewCancel.td,\ #org/apache/river/test/impl/outrigger/leasing/UserSpaceLeaseTest.td,\ #org/apache/river/test/impl/mahalo/ServerTransactionEqualityTest.td,\ #org/apache/river/test/impl/mahalo/ServerTransactionToStringTest.td,\ #org/apache/river/test/impl/mahalo/TxnMgrImplNullActivationConfigEntries.td,\ #org/apache/river/test/impl/mahalo/TxnMgrImplNullConfigEntries.td,\ #org/apache/river/test/impl/mahalo/TxnMgrImplNullRecoveredLocators.td,\ #org/apache/river/test/impl/mahalo/TxnMgrProxyEqualityTest.td,\ #org/apache/river/test/impl/mahalo/LeaseExpireCancelTest.td,\ #org/apache/river/test/impl/mahalo/LeaseExpireRenewTest.td,\ #org/apache/river/test/impl/mahalo/LeaseMapTest.td,\ #org/apache/river/test/impl/mahalo/LeaseTest.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscardUnreachable.td,\ #org/apache/river/test/spec/lookupdiscovery/MulticastMonitorStop.td,\ #org/apache/river/test/spec/lookupdiscovery/MulticastMonitorStopReplace.td,\ #org/apache/river/test/spec/lookupdiscovery/MulticastMonitorTerminate.td,\ #org/apache/river/test/spec/lookupdiscovery/AddGroups.td,\ #org/apache/river/test/spec/lookupdiscovery/AddGroupsDups.td,\ #org/apache/river/test/spec/lookupdiscovery/AddNewDiscoveryChangeListener.td,\ 
#org/apache/river/test/spec/lookupdiscovery/AddNewDiscoveryListener.td,\ #org/apache/river/test/spec/lookupdiscovery/ConstructorAllGroups.td,\ #org/apache/river/test/spec/lookupdiscovery/ConstructorDups.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscardUnreachable.td,\ #org/apache/river/test/spec/lookupdiscovery/Discovered.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscoveredDelay.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscoveredStagger.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscoveryBeginsOnAddGroupsAfterEmpty.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscoveryBeginsOnSetGroupsAfterEmpty.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscoveryBeginsOnSetGroupsAllAfterEmpty.td,\ #org/apache/river/test/spec/lookupdiscovery/DiscoveryEndsOnTerminate.td,\ #org/apache/river/test/spec/lookupdiscovery/GetRegistrars.td,\ #org/apache/river/test/spec/lookupdiscovery/GetRegistrarsNew.td,\ #org/apache/river/test/spec/lookupdiscovery/MulticastMonitorAllChange.td,\
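The remote-callback fix mentioned above, running the callback in the service's logged-in Subject context so the secure endpoint can pick up the service's credentials, can be sketched with stdlib classes only. CallbackRunner and notifyListener are hypothetical names for illustration, not River code:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

// Hypothetical sketch: a service making a remote callback while acting
// as a client. Wrapping the call in Subject.doAs associates the
// service's Subject with the access-control context, so a secure
// endpoint created inside the action can authenticate as the service.
final class CallbackRunner {
    static String notifyListener(Subject serviceSubject, final String event) {
        return Subject.doAs(serviceSubject, new PrivilegedAction<String>() {
            public String run() {
                // In a real service this would be listener.notify(event)
                // over a secure JERI endpoint; here we just return a marker.
                return "delivered:" + event;
            }
        });
    }
}
```

Without the Subject.doAs wrapper, the callback runs with no credentials and a mutually authenticated TLS connection cannot be established, which matches the failures described above.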
Draft August Board Report
Hello River folk, please review / comment / suggest changes for the draft board report for August below. Regards, Peter. ## Description: - Apache River provides a platform for dynamic discovery and lookup search of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically. ## Issues: - No significant issues requiring board attention at this time. ## Activity: - Minimal activity at present; initial work on the modular build structure has commenced, awaiting population with River 3.0 code. Release roadmap: River 3.1 - Modular build restructure (& binary release). River 3.2 - Input validation for serialization, delayed unmarshalling & safe ServiceRegistrar lookup service. River 3.3 - OSGi support. ## Health report: - River is a mature codebase with existing deployments; it was primarily designed for dynamic discovery of services on private networks. IPv4 NAT limitations historically prevented the use of River on public networks, however the use of IPv6 on public networks removes these limitations. Web services evolved with the publish-subscribe model of today's internet; River has the potential to dynamically discover services on IPv6 networks, peer to peer, blurring current distinctions between client and server. It has the potential to address many of the security issues currently experienced with IoT and avoid any dependency on the proprietary cloud for "things". - Future Direction: * Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement) * Input validation for Java deserialization - prevents DOS and gadget attacks. * IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery). 
* Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases. * Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers. * Secure TLS socket factories for the RMI Registry, using the currently logged-in Subject for authentication. The RMI Registry still plays a minor role in service activation; this allows those who still use the Registry to secure it. * Maven build to replace the existing ant build that uses classdepandjar, a bytecode dependency analysis build tool. * Updating the Jini specifications. ## PMC changes: - Currently 12 PMC members. - No new PMC members added in the last 3 months. - Last PMC addition was Dan Rollo on Fri Dec 01 2017. ## Committer base changes: - Currently 16 committers. - No new committers added in the last 3 months. - Last committer addition was Dan Rollo on Thu Nov 02 2017. ## Releases: - Last release was River-3.0.0 on Thu Oct 06 2016. ## Mailing list activity: - Relatively quiet. - dev@river.apache.org: - 94 subscribers (up 0 in the last 3 months): - 10 emails sent to list (39 in previous quarter) - u...@river.apache.org: - 92 subscribers (up 0 in the last 3 months): - 3 emails sent to list (3 in previous quarter) ## JIRA activity: - 1 JIRA ticket created in the last 3 months - 0 JIRA tickets closed/resolved in the last 3 months
Progress
I've been a little quiet, but have been busy testing, testing, testing. I will get back to work on River's modular build in the near future. AtomicILFactory is a JERI invocation layer factory alternative to BasicILFactory, which uses atomic input validation during deserialization. Unlike BasicILFactory, proxies are serialized independently of other objects in the stream (no shared state), service proxies have their codebases provisioned after they're authenticated, and proxies aren't deserialized, nor are their codebases provisioned, until they're authenticated. Currently the only time a codebase annotation is utilised is during provisioning; the stream uses ClassLoaders at each endpoint. In a modular environment, such as Maven or OSGi, the ClassLoaders at each endpoint (local and remote) use the same codebase (that of the proxy), so all class dependencies are resolved at each endpoint. There are cases where some classes cannot be resolved, such as when a client passes a parameter object whose class is not resolvable from the proxy's dependencies (the proxy's ClassLoader). I'm currently working on making it an option to use codebase annotations, but only when classes cannot be resolved from the proxy's ClassLoader. In these cases the codebase must resolve dependencies to the same service api version (service api is the public api the client and proxy use to interact) at the remote server endpoint. Regards, Peter.
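The "no shared state" marshalling described above can be illustrated with stdlib serialization alone: the proxy is written into its own self-contained byte stream rather than into the enclosing stream, so it cannot reference previously written objects. IndependentMarshalling is a hypothetical stand-in, not River's ProxySerializer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative sketch (stdlib only) of serializing an object
// independently of any enclosing stream, mirroring the "no shared
// state" idea above; this is not River's actual implementation.
final class IndependentMarshalling {

    // Serialize obj into its own stream; no handles are shared with any
    // enclosing stream, so deserialization needs nothing but these bytes.
    static byte[] marshal(Serializable obj) throws IOException {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bout)) {
            out.writeObject(obj);
        }
        return bout.toByteArray();
    }

    // Deserialize from the self-contained byte form. A real
    // implementation would first authenticate the service and select a
    // ClassLoader before this step.
    static Object unmarshal(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }
}
```

Because the byte form is self-contained, deserialization can be deferred until after authentication and the ClassLoader choice, which is the ordering the paragraph above describes.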
Re: [jira] [Updated] (RIVER-466) ServiceDiscoveryManager not exiting lookup loop when serviceItems.length >= minMatches
Thanks Shawn, A deceptively simple fix; it must have taken time and investigation to work out why it was getting unnecessarily delayed. Interesting given that it has gone on for so long too. Cheers, Peter. On 14/05/2018 5:57 AM, Shawn Ellis (JIRA) wrote: [ https://issues.apache.org/jira/browse/RIVER-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Ellis updated RIVER-466: -- Description: The problem occurs when a lookup for only one service is required, but more than one is found. Currently, the lookup loop only exits if the number of services found is equivalent to the minMatches or the timeout has expired. How to Reproduce: 1. Have multiple instances of a service registered with reggie. 2. Have a client call that performs a lookup with a constraint of only one service {code:java} lookup(serviceTemplate, 1, 1, null, 30 * 100){code} 3. The lookup loop will not exit until the timeout has expired even though more than minMatches were found. The attached patch causes the lookup loop to be exited which results in less time for service lookups. was: The problem occurs when a lookup for only one service is required, but more than one is found. Currently, the lookup loop only exits if the number of services found is equivalent to the minMatches or the timeout has expired. How to Reproduce: 1. Have multiple instances of a service registered with reggie. 2. Have a client call that performs a lookup with a constraint of only one service {code:java} lookup(serviceTemplate, 1, 1, null, 30 * 100){code} 3. The lookup loop will not exit until the timeout has expired even though more than minMatches were found. The attached patch causes the lookup loop to be exited which results in less time to service lookups. 
ServiceDiscoveryManager not exiting lookup loop when serviceItems.length >= minMatches -- Key: RIVER-466 URL: https://issues.apache.org/jira/browse/RIVER-466 Project: River Issue Type: Bug Components: net_jini_lookup Affects Versions: River_3.0.0 Reporter: Shawn Ellis Priority: Minor Attachments: MinMatches.patch The problem occurs when a lookup for only one service is required, but more than one is found. Currently, the lookup loop only exits if the number of services found is equivalent to the minMatches or the timeout has expired. How to Reproduce: 1. Have multiple instances of a service registered with reggie. 2. Have a client call that performs a lookup with a constraint of only one service {code:java} lookup(serviceTemplate, 1, 1, null, 30 * 100){code} 3. The lookup loop will not exit until the timeout has expired even though more than minMatches were found. The attached patch causes the lookup loop to be exited which results in less time for service lookups.
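The essence of the RIVER-466 fix, exiting as soon as the number of matches reaches minMatches rather than waiting out the full timeout, can be sketched as follows. LookupLoop is an illustrative stand-in, not the actual ServiceDiscoveryManager code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of the lookup-loop fix: exit when matches.size()
// >= minMatches, instead of only when it exactly equals minMatches or
// the timeout expires. Not the actual ServiceDiscoveryManager code.
final class LookupLoop {
    static List<String> lookup(Supplier<List<String>> discover,
                               int minMatches, long waitDur) {
        long end = System.currentTimeMillis() + waitDur;
        List<String> matches = new ArrayList<>();
        while (System.currentTimeMillis() < end) {
            matches = discover.get();
            // Fixed condition: >= rather than ==, so discovering more
            // services than requested no longer blocks until timeout.
            if (matches.size() >= minMatches) {
                break;
            }
            // real code would wait on a discovery event here
        }
        return matches;
    }
}
```

With the original `==` comparison, two registered instances of a service and minMatches of 1 would skip the exit check forever, which is exactly the reported symptom.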
Re: May Board Report
Actually we can still submit up to 24 hours prior to the meeting. Does anyone have any thoughts or anything they want to add? Regards, Peter. On 12/05/2018 8:41 AM, Peter wrote: This board report is due; however, it will need to be delayed until June. I have been aware since last month that it needed to be done. ## Description: - Apache River provides a platform for dynamic discovery and lookup search of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically. ## Issues: No significant issues requiring board attention at this time. ## Activity: Interest in making Jini specifications programming language agnostic. Release roadmap: River 3.1 - Modular build restructure (& binary release). River 3.2 - Input validation for serialization, delayed unmarshalling & safe ServiceRegistrar lookup service. River 3.3 - OSGi support. ## Health report: - Minimal activity at present on dev. - Some recent commit activity, around the modular build. - Future Direction: * Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement) * Input validation for Java deserialization - prevents DOS and gadget attacks. * IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery). * Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases. * Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers. * Maven build to replace the existing ant build that uses classdepandjar, a bytecode dependency analysis build tool. * Updating the Jini specifications. ## PMC changes: - Currently 12 PMC members. 
- Last PMC addition was Dan Rollo on Fri Dec 01 2017. ## Committer base changes: - Currently 16 committers. ## Releases: - River-3.0.0 was released on Wed Oct 05 2016. ## Mailing list activity: - Relatively quiet. ## JIRA activity: - Activity around making Jini specifications programming language agnostic. - Some additional bug reports.
May Board Report
This board report is due; however, it will need to be delayed until June. I have been aware since last month that it needed to be done. ## Description: - Apache River provides a platform for dynamic discovery and lookup search of network services. Services may be implemented in a number of languages, while clients are required to be JVM based (presently at least), to allow proxy JVM byte code to be provisioned dynamically. ## Issues: No significant issues requiring board attention at this time. ## Activity: Interest in making Jini specifications programming language agnostic. Release roadmap: River 3.1 - Modular build restructure (& binary release). River 3.2 - Input validation for serialization, delayed unmarshalling & safe ServiceRegistrar lookup service. River 3.3 - OSGi support. ## Health report: - Minimal activity at present on dev. - Some recent commit activity, around the modular build. - Future Direction: * Target the IoT space with support for OSGi and IPv6 (security fixes required prior to announcement) * Input validation for Java deserialization - prevents DOS and gadget attacks. * IPv6 Multicast Service Discovery (River currently only supports IPv4 multicast discovery). * Delayed unmarshalling for Service Lookup and Discovery (includes SafeServiceRegistrar mentioned in the release roadmap), so authentication can occur prior to downloading service proxies; this addresses a long-standing security issue with service lookup while significantly improving performance in some use cases. * Security fixes for SSL endpoints, updated to TLS v1.2 with removal of support for insecure ciphers. * Maven build to replace the existing ant build that uses classdepandjar, a bytecode dependency analysis build tool. * Updating the Jini specifications. ## PMC changes: - Currently 12 PMC members. - Last PMC addition was Dan Rollo on Fri Dec 01 2017. ## Committer base changes: - Currently 16 committers. 
## Releases: - River-3.0.0 was released on Wed Oct 05 2016. ## Mailing list activity: - Relatively quiet. ## JIRA activity: - Activity around making Jini specifications programming language agnostic. - Some additional bug reports.
A little more background on AtomicILFactory
JERI is of course extensible; there are a number of layers: 1. Invocation layer. 2. Object identification layer. 3. Transport layer. All proxies that use JERI contain a java.lang.reflect.Proxy instance that uses an InvocationHandler from an invocation layer factory. Currently we have BasicILFactory; this uses standard Java serialization and marshal streams from the net.jini.io package, which annotate streams with codebase annotations. The invocation layer provided by BasicILFactory allows you to download any class from anywhere. Serialization was considered secure at the time it was written. To be fair, serialization should have been maintained secure. In this model, after you deserialized a proxy into its downloaded code, you ask the remote end to check it, and then you apply constraints. The problem today is that serialization is not secure, and while we can use a secure transport layer, a proxy can download another proxy that doesn't use a secure transport layer, and the constraints won't be applied to it. For example, Reggie provides a lookup service; you can apply constraints against it, but it can still download proxies from other services, and the constraints aren't applied to those. Enter AtomicILFactory: it still utilises codebase annotations, but in a limited form. Instead of using codebase annotations for every class, each endpoint is assigned a default ClassLoader that determines class visibility and resolution. The service's server endpoint is assigned a ClassLoader by AtomicILFactory, but how is its proxy ClassLoader determined, you ask? Ok, so we need to go back one step: proxies are marshalled independently of the stream. This means, unlike BasicILFactory, Reggie cannot download proxies from other services, because their proxy classes won't be available via the default ClassLoader. Instead, proxies are marshalled by a ProxySerializer, which contains a MarshalledInstance and a CodebaseAccessor bootstrap proxy that only utilises local classes. 
There's a new provider, net.jini.loader.ProxyCodebaseSpi, which the ProxySerializer passes the MarshalledInstance and CodebaseAccessor to as arguments. The bootstrap proxy is used for authentication and codebase provisioning; the provisioned ClassLoader is then used by the MarshalledInstance to deserialize the proxy. So this is how the ClassLoader is established for the proxy. Now, I was applying constraints to the proxy before it returns, but this breaks a number of InvocationHandlers that are expecting a proxy without constraints, so for now, constraints are only applied to the bootstrap proxy. There are two ProxyCodebaseSpi implementations: one for preferred classes, the other for OSGi. Note the parent ClassLoader is the loader of the stream that deserialized the proxy in its marshalled form, at least for preferred class loading, but not for OSGi. The reason is, a service proxy may already contain proxies for other services, which it utilises privately, so it needs to be able to control the visibility of classes using preferred class loading, exposing only the interfaces shared by the service proxy and the other proxies it contains. This gives the client total control over who can download classes, and constraints are enforced before deserialization occurs. I haven't made this code publicly available yet. Regards, Peter.
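The two-phase flow described above, vet a small bootstrap object made only of local classes before deserializing the proxy payload, can be sketched with stdlib classes. TwoPhase and its members are hypothetical names for illustration, not River's ProxySerializer/CodebaseAccessor API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

// Hypothetical sketch of the two-phase ordering: phase 1 reads a small
// descriptor of local classes only; phase 2 deserializes the proxy
// payload only after the caller has decided to trust the service.
final class TwoPhase {

    // Phase 1 artifact: local-classes-only descriptor plus the opaque,
    // not-yet-deserialized proxy bytes (MarshalledInstance-like).
    static final class Bootstrap implements Serializable {
        final String principal;   // who the service claims to be
        final byte[] proxyBytes;  // marshalled proxy, untouched so far
        Bootstrap(String principal, byte[] proxyBytes) {
            this.principal = principal;
            this.proxyBytes = proxyBytes;
        }
    }

    // Phase 2: runs only after vetting. In the real design this is
    // where authentication and codebase provisioning with a chosen
    // ClassLoader would happen before readObject is ever called.
    static Object unmarshalIfTrusted(Bootstrap b, String trustedPrincipal)
            throws IOException, ClassNotFoundException {
        if (!trustedPrincipal.equals(b.principal)) {
            throw new SecurityException("untrusted service: " + b.principal);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(b.proxyBytes))) {
            return in.readObject();
        }
    }
}
```

The point of the ordering is that untrusted bytes are rejected before deserialization ever begins, so gadget-style attacks in the proxy payload never get a chance to run.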