Hi Michał, thank you for the compliment; I can assure you I have had a lot of help and guidance from others ;-).

You are right, people have different visions and needs; on GitHub you can fork and collaborate, or fork and do your own thing.  The vision and the implementation are two different things: an implementation is often designed or conceived, then changes and evolves in response to the challenges it meets, because it is a process of discovery and learning.  Because River had an existing user base, the experimentation required to solve difficult problems was significantly constrained by fears of breaking backward compatibility and by differing visions and opinions of what River should be.

Here is what is possible, but not generally done, or perhaps even thought of, because it is considered too difficult:

Peer-to-peer connectivity using services secured over untrusted networks (the internet), with authentication and access controls, to establish trust among separate parties cooperating dynamically to exchange information and logic: basically a distributed, dynamic operating system built on Java.  You could argue that JavaScript is also mobile logic, but it flows one way, publish-subscribe; the client has to remain connected for the server to push notifications, and there is always a one-to-many relationship between the server and its clients.  In our case the client is likely also a server, and the result can be much more dynamic, more easily constructed, more reliable and more easily upgradable.

JINI ORIGINS
Pre-1994, Bill Joy presented a proposal to Sun Labs comprising three main
concepts:
1) a language that would run on all platforms,
2) a virtual machine to run this language, and
3) a networked system to allow the distributed virtual machines to work
as a singular system.
In 1995, the language and virtual machine were introduced to the market
as the Java programming language and the Java Virtual Machine. The system
context, however, was kept within Sun R&D for continued research and
development. That system context is Jini.

Jini 2.x, although originally intended for untrusted networks, was trapped behind NAT and IPv4 in local networks, limiting its appeal.  NAT destroyed end-to-end connectivity, changing the internet into a publish-subscribe model, which didn't fit well with Jini’s peer-to-peer architecture.  Some have incorrectly claimed that Jini isn't peer to peer because of the lookup service, but failed to recognise that it was possible to have multiple lookup services and different groups.

There are now sufficient IPv6 network providers for peer-to-peer networking.

JGDMS is a modular, Maven-based fork of River (forked prior to the release of River 3.0, from code that was originally intended for River); feel free to fork it as a base for JPMS modules.  I think the OSGi designers have some valid criticisms of JPMS, as well as advice for working around some of its pitfalls: https://www.infoq.com/articles/java9-osgi-future-modularity/ JGDMS modules are also OSGi bundles, and service providers are also registered and consumed as OSGi services when OSGi is present.

The qa test suite in JGDMS now runs tests with JSSE enabled; if you do this in River, many tests fail, as Sun’s Jini 2.1 didn't completely support secure services.

The point of Jini 2.1 was to run on untrusted networks (the internet).  I remember listening to one of Jim Waldo’s talks at JCM10 on the work that was still required or outstanding; unfortunately the video is no longer available.  Perhaps someone archived it? A summary can be found here: https://wstrange.wordpress.com/2006/10/05/summary-jim-waldos-keynote-at-the-10th-jini-community-meeting/

JGDMS uses a new implementation of a subset of Java Serialization’s stream format, with input validation and defenses against malicious data (all connections are first authenticated when using secure endpoints).  Codebase annotations are no longer appended to serialization streams; this feature is deprecated, but it can still be enabled.  This paper documents the problems with that approach: https://dl.acm.org/doi/pdf/10.5555/1698139
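
As a minimal sketch of what that validating stream implies for a service implementor (illustrative only: the class and field names are mine, and the exact @AtomicSerial requirements are best taken from the ProxySerializer source included below, which uses the same GetArg and Valid calls):

import java.io.IOException;
import java.io.Serializable;
import org.apache.river.api.io.AtomicSerial;
import org.apache.river.api.io.AtomicSerial.GetArg;
import org.apache.river.api.io.Valid;

/**
 * Hypothetical immutable value class showing the validating
 * de-serialization constructor pattern used by atomic serialization.
 */
@AtomicSerial
public final class ServiceName implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String name;

    public ServiceName(String name) {
        this.name = name;
    }

    /** Every field is read and checked before the object becomes reachable. */
    ServiceName(GetArg arg) throws IOException, ClassNotFoundException {
        this(Valid.notNull(arg.get("name", null, String.class),
                           "name cannot be null"));
    }

    public String getName() {
        return name;
    }
}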

JGDMS provisions a ClassLoader at each Endpoint; once assigned to the relevant ObjectEndpoint, that ClassLoader is solely responsible for class resolution.  A provider mechanism allows customization.
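
A rough sketch of such a provider, written against the ProxyCodebaseSpi interface that the ProxySerializer source below consumes (the class name and logging are mine, and registration through the usual META-INF/services provider mechanism is assumed):

import java.io.IOException;
import java.util.Collection;
import net.jini.export.CodebaseAccessor;
import net.jini.io.MarshalledInstance;
import net.jini.loader.ProxyCodebaseSpi;

/** Hypothetical provider that inspects the bootstrap proxy, then delegates. */
public class LoggingProxyCodebaseSpi implements ProxyCodebaseSpi {

    @Override
    public Object resolve(CodebaseAccessor bootstrapProxy,
                          MarshalledInstance smartProxy,
                          ClassLoader parentLoader,
                          ClassLoader verifierLoader,
                          Collection context)
            throws IOException, ClassNotFoundException {
        // The bootstrap proxy has already been authenticated by the endpoint.
        System.out.println("resolving proxy, codebase: "
                + bootstrapProxy.getClassAnnotation());
        // Same default behaviour as the fallback implementation in ProxySerializer.
        return smartProxy.get(parentLoader, true, verifierLoader, context);
    }

    @Override
    public boolean substitute(Class serviceClass, ClassLoader streamLoader) {
        return true; // allow the service proxy to be substituted in the stream
    }
}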

JGDMS doesn't suffer from codebase annotation loss, nor from class resolution issues.  But it did have to give up some functionality: it cannot resolve classes that do not belong to a service proxy or its service API, are not resolvable from the Endpoint ClassLoader, and are not present on the remote machine.  The solution is to always use a service for parameters passed to a service when they are not part of the service API, e.g. when the client would otherwise override the type of a service's parameter arguments.  This means that if the parameter is not an interface, you cannot create a service that implements it and pass it as an argument.  That’s why it's still possible, though not recommended, to use codebase annotations appended to the serialization stream.  The better solution is to create a service API that uses only interfaces for parameter arguments; remote events and listeners, for example, use this pattern.  To prevent unexpected breakages, use interfaces, or final classes, or both, for service API remote method parameters; then you won’t get into the situation where you need codebase annotations appended to the stream.
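
For instance, a service API written to that advice might look like the following sketch (the interface and method names are made up; each interface would normally live in its own source file):

import java.io.IOException;
import java.rmi.Remote;

// Every remote method parameter is itself a service interface, so a client
// passes a proxy to its own service rather than an arbitrary subclass that
// would need a codebase annotation appended to the stream.
public interface TemperatureMonitor extends Remote {
    void subscribe(TemperatureListener listener) throws IOException;
}

interface TemperatureListener extends Remote {
    void temperatureChanged(double celsius) throws IOException;
}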

For example, if a service proxy is serialized within a serialization stream, it is replaced by a proxy serializer and assigned its own stream and ClassLoader, independent of the stream in which it was serialized.  This is based on the ObjectEndpoint identity, so it will always resolve to the same ClassLoader.  Note that ProxyCodebaseSpi can be a provider or an OSGi service.

Now the proxy serializer is itself a service (a bootstrap proxy) that is authenticated when using secure endpoints.  You could quite easily add an interface to the proxy serializer to return your object annotation.
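
Purely as an illustration of that remark, such an interface might look like this (hypothetical, not part of JGDMS):

import java.io.IOException;
import net.jini.export.CodebaseAccessor;

// Hypothetical extension of the bootstrap proxy's interface that returns
// the codebase annotation as an object rather than a String.
public interface ObjectCodebaseAccessor extends CodebaseAccessor {
    Object getObjectAnnotation() throws IOException;
}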

Note that I use a string, because I also use it in secure multicast discovery protocols (typically IPv6), which don't include objects, for authentication and for provisioning a ClassLoader for a lookup service proxy prior to any Object de-serialization.

https://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xhtml

Summing up: to simplify JGDMS and solve some very difficult issues, it had to give up the following:

1. Support for circular references in serialized object graphs.
2. Extensible classes in service api method parameters are not advised.
3. ProxyTrust - deprecated and replaced with secure authentication and
   httpmd (SHA-256) or signer certificates using ProxySerializer.
4. Untrusted machines are not allowed in a djinn; some level of trust
   is required, with authentication and authorisation constraints.

What enabled solving these issues was the River community’s (and Jini users’) ability to identify problems.  Although they didn't agree on solutions, they identified the problems, and that’s the most important step in finding a solution.

I’d like to say that all of the problems with running Jini 2.1 on the internet have been solved, but there is always something left to do, such as supporting a marshaling layer within JERI to allow extensible support for different serialization protocols, or re-implementing access controls following JEP 411.

BasicILFactory is still available, should you wish to adopt a more conventional approach, using Java Serialization.
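
For example, a conventional JERI export looks roughly like this (a sketch only; the Echo service and its implementation are placeholders):

import java.rmi.Remote;
import java.rmi.RemoteException;
import net.jini.export.Exporter;
import net.jini.jeri.BasicILFactory;
import net.jini.jeri.BasicJeriExporter;
import net.jini.jeri.tcp.TcpServerEndpoint;

interface Echo extends Remote {
    String echo(String msg) throws RemoteException;
}

class EchoImpl implements Echo {
    public String echo(String msg) { return msg; }
}

public class ConventionalExportExample {
    public static void main(String[] args) throws Exception {
        Echo impl = new EchoImpl();
        Exporter exporter = new BasicJeriExporter(
                TcpServerEndpoint.getInstance(0), // any free TCP port
                new BasicILFactory());            // conventional Java Serialization
        Remote proxy = exporter.export(impl);
        // register 'proxy' with a lookup service, hand it to clients, etc.
        // call exporter.unexport(true) when the service shuts down.
    }
}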

Regards,

Peter.

/*
 * Copyright 2018 The Apache Software Foundation.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.river.api.io;

import net.jini.loader.ProxyCodebaseSpi;
import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.ObjectInput;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamField;
import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.security.AccessController;
import java.security.Guard;
import java.security.PrivilegedAction;
import java.util.Collection;
import java.util.Iterator;
import java.util.logging.Level;
import java.util.logging.Logger;
import net.jini.core.constraint.RemoteMethodControl;
import net.jini.export.ProxyAccessor;
import net.jini.export.CodebaseAccessor;
import net.jini.export.DynamicProxyCodebaseAccessor;
import net.jini.io.MarshalInputStream;
import net.jini.io.MarshalledInstance;
import org.apache.river.api.io.AtomicSerial.GetArg;
import org.apache.river.api.io.AtomicSerial.PutArg;
import org.apache.river.api.io.AtomicSerial.ReadObject;
import org.apache.river.api.io.AtomicSerial.SerialForm;
import org.apache.river.resource.Service;

/**
 *
 * @author peter
 */
@AtomicSerial
class ProxySerializer implements Serializable {
    private static final long serialVersionUID = 1L;

    private static final String BOOTSTRAP_PROXY = "bootstrapProxy";
    private static final String SERVICE_PROXY = "serviceProxy";

    /**
     * By defining serial persistent fields, we don't need to use transient fields.
     * All fields can be final and this object becomes immutable.
     */
    private static final ObjectStreamField[] serialPersistentFields =
    serialForm();

    public static SerialForm[] serialForm(){
        return new SerialForm[]{
            new SerialForm(BOOTSTRAP_PROXY, CodebaseAccessor.class),
        new SerialForm(SERVICE_PROXY, MarshalledInstance.class)
        };
    }

    public static void serialize(PutArg arg, ProxySerializer ps) throws IOException{
        arg.put(BOOTSTRAP_PROXY, ps.bootstrapProxy);
        arg.put(SERVICE_PROXY, ps.serviceProxy);
        arg.writeArgs();
    }
    /**
     * The bootstrap proxy must be limited to the following interfaces, in case
     * additional interfaces implemented by the proxy aren't available remotely.
     */
    private static final Class[] BOOTSTRAP_PROXY_INTERFACES =
    {
        CodebaseAccessor.class,
        RemoteMethodControl.class
    };

    private static final Logger LOGGER = Logger.getLogger("org.apache.river.api.io");


    private static final Guard CLASSLOADER_GUARD = new RuntimePermission("getClassLoader");
    /**
     * Returns the class loader for the specified proxy class.
     */
    private static ClassLoader getProxyLoader(final Class proxyClass) {
    return (ClassLoader)
        AccessController.doPrivileged(new PrivilegedAction() {
        public Object run() {
            return proxyClass.getClassLoader();
        }
        });
    }

    private static ProxyCodebaseSpi getProvider(final ClassLoader loader){
    ProxyCodebaseSpi result =
        AccessController.doPrivileged(new PrivilegedAction<ProxyCodebaseSpi>(){
        public ProxyCodebaseSpi run(){
            Iterator<ProxyCodebaseSpi> spit =
                Service.providers(
                ProxyCodebaseSpi.class,
                loader
                );
            CLASSLOADER_GUARD.checkGuard(null);
            while (spit.hasNext()){
            return spit.next();
            }
            return null;
        }
        }
    );
    if (result != null) return result;
    // By default, if no provider is available, doesn't attempt to
    // download codebase or substitute.
    return new ProxyCodebaseSpi(){

        public Object resolve(
            CodebaseAccessor bootstrapProxy,
            MarshalledInstance smartProxy,
            ClassLoader parentLoader,
            ClassLoader verifierLoader,
            Collection context) throws IOException, ClassNotFoundException
        {
        return smartProxy.get(parentLoader, true, verifierLoader, context);
        }

        public boolean substitute(
            Class serviceClass,
            ClassLoader streamLoader)
        {
        return false;
        }

    };
    }

    public static Object create(DynamicProxyCodebaseAccessor proxy, ClassLoader streamLoader, Collection context) throws IOException {
    Class proxyClass = proxy.getClass();
    if (proxy instanceof RemoteMethodControl //JERI
        && Proxy.isProxyClass(proxyClass)
        && getProvider(streamLoader).substitute(proxyClass, streamLoader)
        )
    {
        // REMIND: InvocationHandler must be available locally, for now
        // it must be an instance of BasicInvocationHandler.
        InvocationHandler h = Proxy.getInvocationHandler(proxy);
        return new ProxySerializer(
        (CodebaseAccessor) Proxy.newProxyInstance(getProxyLoader(proxyClass),
            BOOTSTRAP_PROXY_INTERFACES,
            h
        ),
        proxy,
        context
        );

    }
    return proxy;
    }

    public static Object create(ProxyAccessor svc, ClassLoader streamLoader, Collection context) throws IOException{
    Object proxy = svc.getProxy();
    Class proxyClass = proxy != null ? proxy.getClass() : null;
    if (proxyClass == null ) LOGGER.log(Level.FINE, "Warning Proxy was null for {0}", svc.getClass());
    if (proxy instanceof RemoteMethodControl //JERI
        && proxy instanceof CodebaseAccessor
        && getProvider(streamLoader).substitute(proxyClass, streamLoader)
        )
    {
        // REMIND: InvocationHandler must be available locally, for now
        // it must be an instance of BasicInvocationHandler.
        InvocationHandler h = Proxy.getInvocationHandler(proxy); // throws IllegalArgumentException if not a proxy.
        return new ProxySerializer(
        (CodebaseAccessor) Proxy.newProxyInstance(getProxyLoader(proxyClass),
            BOOTSTRAP_PROXY_INTERFACES,
            h
        ),
        svc,
        context
        );

    }
    return svc;
    }

    private final CodebaseAccessor bootstrapProxy;
    private final MarshalledInstance serviceProxy;
    private final /*transient*/ Collection context;
    private final /*transient*/ RO read;

    ProxySerializer(CodebaseAccessor p, DynamicProxyCodebaseAccessor a, Collection context) throws IOException{
    this(p, new AtomicMarshalledInstance(a, context, false), null, null);

    }

    ProxySerializer(CodebaseAccessor p, ProxyAccessor a, Collection context) throws IOException{
    this(p, new AtomicMarshalledInstance(a, context, false), null, null);

    }

    ProxySerializer(CodebaseAccessor p, MarshalledInstance m, Collection context, RO read){
    bootstrapProxy = p;
    serviceProxy = m;
    this.context = context;
    this.read = read;
    }

    private static CodebaseAccessor check(CodebaseAccessor c) throws InvalidObjectException{
    if (Proxy.isProxyClass(c.getClass())) return c;
    throw new InvalidObjectException(
        "bootstrap proxy must be a dynamically generated instance of java.lang.reflect.Proxy");
    }

    ProxySerializer(GetArg arg) throws IOException, ClassNotFoundException{
    this(check(Valid.notNull(
        arg.get(BOOTSTRAP_PROXY, null, CodebaseAccessor.class),
        "bootstrapProxy cannot be null")),
        Valid.notNull(
            arg.get(SERVICE_PROXY, null, MarshalledInstance.class),
            "serviceProxy cannot be null"),
        arg.getObjectStreamContext(),
        (RO) arg.getReader()
    );
    }

    Object readResolve() throws IOException, ClassNotFoundException {
    return getProvider(read.defaultLoader).resolve(bootstrapProxy, serviceProxy, read.defaultLoader,
        read.verifierLoader, context);
    }

    // So we can implement ReadObject
    private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    }

    @AtomicSerial.ReadInput
    static ReadObject getReader(){
    return new RO();
    }

    private static class RO implements ReadObject {

    private ClassLoader defaultLoader = null;
    private ClassLoader verifierLoader = null;

    public void read(final ObjectInput input) throws IOException, ClassNotFoundException {
        if (input instanceof MarshalInputStream){
        defaultLoader = AccessController.doPrivileged(new PrivilegedAction<ClassLoader>(){
            public ClassLoader run() {
            return ((MarshalInputStream) input).getDefaultClassLoader();
            }
        });
        verifierLoader = AccessController.doPrivileged(new PrivilegedAction<ClassLoader>(){
            public ClassLoader run() {
            return ((MarshalInputStream) input).getVerifierClassLoader();
            }
        });
        }
    }

    }
}


/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package net.jini.export;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.rmi.Remote;
import java.security.cert.CertPath;
import java.security.cert.CertificateFactory;
import org.apache.river.api.security.PermissionGrant;
import org.apache.river.api.security.RevocablePolicy;

/**
 * After authenticating a bootstrap token proxy, the ProxyPreparer can
 * dynamically grant DownloadPermission and DeSerializationPermission
 * as required using the information provided, to allow downloading
 * of a smart proxy.
 *
 * To make a ProtectionDomain or CodeSource based grant requires a
 * {@link RevocablePolicy#grant(PermissionGrant) }
 *
 * A service needn't implement this if a proxy doesn't require a codebase
 * download.
 *
 * Certificates et al are sent in encoded format.  The choice was made not
 * to Serialize Certificate or CodeSigner in case the CertificateFactory
 * provider isn't installed and to also allow low level {@link java.io.DataInput} and
 * {@link java.io.DataOutput} based communication.
 *
 * @see RevocablePolicy
 * @see PermissionGrant
 */
public interface CodebaseAccessor extends Remote {

    /**
     * Obtains the service class annotation as defined in
     * <code> ClassLoading.getClassAnnotation(Class)</code>.
     *
     * @return the codebase annotation.
     * @throws IOException if a connection problem occurs.
     */
    public String getClassAnnotation() throws IOException;

    /**
     * Get the CertificateFactory type.
     *
     * @return CertificateFactory type or null.
     * @throws IOException if a connection problem occurs.
     * @see CertificateFactory#getInstance(java.lang.String)
     */
    public String getCertFactoryType() throws IOException;

    /**
     * Get the CertPath encoding.
     * @return CertPath encoding or null.
     * @throws IOException if a connection problem occurs.
     * @see CertPath#CertPath(java.lang.String)
     */
    public String getCertPathEncoding() throws IOException;

    /**
     * The byte array can be passed to a ByteArrayInputStream, which can be
     * passed to a CertificateFactory to generate a Collection of Certificates,
     * or CertPath.
     *
     * @return a byte array containing certificates or null.
     * @throws IOException if a connection problem occurs.
     * @see ByteArrayInputStream
     * @see CertificateFactory#generateCertPath(java.io.InputStream)
     * @see CertificateFactory#generateCertificates(java.io.InputStream)
     */
    public byte [] getEncodedCerts() throws IOException;

}
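
A small usage sketch for the methods above, following the Javadoc (the helper class name is mine; the accessor is assumed to be an already-authenticated bootstrap proxy obtained elsewhere):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.security.cert.CertPath;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import net.jini.export.CodebaseAccessor;

class CodebaseCertDecoder {

    /** Decodes the signer CertPath advertised by a bootstrap proxy, or returns null. */
    static CertPath decodeSigners(CodebaseAccessor accessor)
            throws IOException, CertificateException {
        byte[] encoded = accessor.getEncodedCerts();
        if (encoded == null || encoded.length == 0) return null; // no signers advertised
        CertificateFactory factory =
                CertificateFactory.getInstance(accessor.getCertFactoryType());
        return factory.generateCertPath(
                new ByteArrayInputStream(encoded), accessor.getCertPathEncoding());
    }
}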


On 15/02/2022 1:07 am, Michał Kłeczek wrote:
Hi All,

Based on the excellent work of (mainly) Peter Firmstone (et al.) I am personally 
working on “River 4.0” right now and was planning to share my work with the 
community soon(tm).

1. Implementation of my old idea of making codebase annotations objects 
implementing a specific interface (instead of Strings containing space 
delimited list of URLs). One might think of it as “smart codebase annotations” 
being installers of software necessary to deserialise an object.

2. Refactoring of River codebase to make it modular (in terms of JPMS modules).

3. Some cleanup to modernise the codebase (ie. use generics instead of raw 
types, enums, records where possible/useful)

What it gives us is:
a) uniform representation of code and data: there is no need to have separate 
specifications for code metadata languages (manifests etc.). Examples:
- module dependency graph is represented as serialised Java objects
- Expressions specifying module compatibility (such as semantic version ranges) 
are provided as objects implementing a specific interface
- in particular it is possible not to rely on version numbers but have “animal 
sniffers” that decide if local code is compatible with downloaded classes
- installation packages can be provided as serialised objects (either with code 
serialised in-band or out-of-band depending on the needs)

In other words: “everything is an object”.

b) security
To instantiate any class and deserialise an object - the installer first makes sure 
code is verified before execution (the verification algorithm is pluggable as 
the installer can be any object implementing an interface).
Installers are treated the same way - the code of the installer has to be 
verified first.
The installer itself grants permissions to the code it installs and is 
constrained by any GrantPermissions it has.

So no more unverified code execution - contrary to the old Jini way of 
instantiating an unknown class to ask for a verifier.

c) great flexibility of the way how code is downloaded and installed:
- in particular it is possible to express a module dependency graph that can be 
instantiated in the client VM
- since codebase annotations are objects that are also annotated - it is now 
possible to provide new protocols to download and verify software.

d) since we can now express non-hierarchical module dependencies it is possible 
to implement scenarios not possible (or really cumbersome) to implement 
previously:
- “adapter/wrapper services” - ie. smart proxies that wrap other services 
proxies. For example RemoteEventListener that pushes events to a JavaSpace.



I am only doing it for fun and out of sentiment for the Jini idea of mobile code.
From reading the mailing list and taking part in some discussions here I can 
see there is almost no interest in moving River forward.
And even if there is - there is no clear idea of what direction it should take.

My own opinion on the matter is that the single DEFINING idea of Jini/River is 
mobile code. Without mobile code there is no reason for Jini to exist:
- it is 2022 - there are so many excellent networking libraries on the market 
that River does not bring anything to the table
- web protocols and new developments such as QUIC make River as a networking 
solution (even if excellent) less and less relevant.
- River codebase is outdated and does not play well with the broad Java 
ecosystem (DI frameworks, pluggable logging, JFR etc.)
- looks like “static linking” is the current buzz in the industry (either as 
executables or container images).

On the other hand - having worked quite extensively with 
k8s/Helm/containers/Docker/microservices and what not - I still think there is 
a (big) case for Jini style mobile code. But only if it is re-designed to 
be secure, robust and flexible.
There is also Project Loom - it will make River scalable and - hopefully - 
relevant again.


Michal


On 8 Feb 2022, at 01:41, Roy T. Fielding <field...@gbiv.com> wrote:

Hello everyone,

It's that time of year when I try to figure out what I am doing and
what I am not, and try to cut back on the stuff that seems unlikely
to succeed. I suspect the same is true of others.

I had hoped that more new people at River would result in more activity,
but that hasn't occurred over the past 9 months and doesn't seem likely
in the future. Aside from ASF echoes and Infra-driven website replacement,
there has been nothing to report about River the entire time that I have
been the chair pro tem, and there hasn't been any chatter by users either.

Please feel free to let me know if I am missing something.

If not, I'd like us to accept the reality of this situation and move the
River project to the Attic. The code will still be available there, and
folks are welcome to copy it under the license, move it to Github, or
otherwise seek to re-mold it into a collaborative project wherever is
most convenient for them.

Cheers,

....Roy
