I want to reiterate what Peter is saying with perhaps some different words and 
experiences.  When Java 5 came out, JDK/JRE development started, for the first 
time, focusing on removing “locks”.  That is, improving scaling by changing 
libraries and the runtime environment to not assert “fences” or other cache 
synchronization any more than necessary, given the “view” of the associated 
data that those libraries had, and advertised as being “side effects”.

As I’ve discussed here before, one of the most destructive activities in the 
JDK has been the “enforcement” of “volatile” vs “non-volatile” correctness in 
the activities of the JIT.  The standing example is this:

class foo implements Runnable {
        boolean done;

        public void run() {
                while(!done) {
                        doAllTheWork();
                }
        }
        public void stop() {
                done = true;
        }
}

In JDK 1.4 and before, this class would function correctly on single processor 
machines (which most developers used), and would hardly ever exhibit a “bug” on 
multi-core and multi-processor machines.  In JDK 5 and later, the JIT is free 
to rewrite this class to look like:

class foo implements Runnable {
        boolean done;

        public void run() {
                if(!done) {
                        while( true ) {
                                doAllTheWork();
                        }
                }
        }
        public void stop() {
                done = true;
        }
}

This is all because “done” is not declared volatile.   Volatile was in the 
language before JDK 1.5/5.0, but its semantics were weak and largely 
unenforced, and this kept most people from putting it into their code.  If 
done is declared volatile, then the compiler will not rewrite the loop, 
because it will then “know” that the value of done can change outside of the 
loop’s view.
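A minimal sketch of the fix: with the field declared volatile, the JIT must 
re-read the flag on every iteration instead of hoisting the check out of the 
loop.  (The doAllTheWork body below is a placeholder, since the original 
example doesn’t define it.)

```java
class foo implements Runnable {
        volatile boolean done;

        public void run() {
                while (!done) {
                        doAllTheWork();
                }
        }
        public void stop() {
                done = true;
        }
        private void doAllTheWork() {
                Thread.yield(); // placeholder for the real work
        }
}
```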

There are other places that such “volatile” consideration is happening.

The second point is that Vector and Hashtable used to cause a lot of 
“synchronization” to happen automatically, because their “locks” created 
“happens before” relationships in code such that cache flushes occurred 
regularly.  This kills performance in a multi-processor environment, and thus 
we’ve seen Doug Lea and others’ work removing “as much as possible” of the 
behind-the-scenes cache manipulation and/or fences that would cause internal 
processor state synchronization across cores.
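To make the contrast concrete, here is a minimal sketch (the map contents are 
arbitrary): Hashtable takes a single monitor lock on every call, while its 
java.util.concurrent replacement uses fine-grained internal synchronization so 
readers rarely block writers.

```java
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CollectionContrast {
        public static void main(String[] args) {
                // Legacy: every get/put synchronizes on the same monitor,
                // creating happens-before edges (and contention) for all callers.
                Map<String, Integer> legacy = new Hashtable<>();
                legacy.put("hits", 1);

                // Modern replacement: fine-grained internal synchronization;
                // reads usually proceed without blocking writers.
                Map<String, Integer> modern = new ConcurrentHashMap<>(legacy);
                modern.merge("hits", 1, Integer::sum);
                System.out.println(modern.get("hits")); // prints 2
        }
}
```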

Ultimately, all of this is what Peter has been so dedicated to going after and 
dealing with.  Many of you probably don’t have “problems” with your code that 
you recognize, because you are being “helped” by the use of some “heavy weight” 
collection that is “synchronizing” all of your other activities between cores.

I would encourage you to think long and hard about “odd behaviors” that you see 
in your long running Java applications.  Do you see certain activities 
slightly delayed, yet they occur, near the moment of some other “Collection’s” 
usage?  Are there other things which seem to “wait” longer than you’d expect?  
These delays are the kinds of things which are a result of “delayed” 
synchronization created by another object’s interaction with the memory/cache 
subsystem.

It’s amazingly frustrating and has made Java a very complex language to use, 
because the focus is first on “all out performance” and not on “developer 
success”.

Gregg


On Dec 17, 2013, at 8:12 AM, Peter <j...@zeus.net.au> wrote:

> Remote objects, when exported are exposed to multiple threads.
> 
> Even when a Remote object only has one client, the client's calls will be 
> despatched via a thread pool to the exported object; every method call is 
> likely to be invoked by a different thread.
> 
> If your exported objects aren't thread safe the method despatch threads are 
> not guaranteed to see each others changes to the mutable state of your 
> exported object.
> 
> No Jini releases were designed for the new Java Memory Model that accompanied 
> Java 5, in fact the first release designed for Java 5 was an Apache River 
> release.
> 
> We have been strangely fortunate: in spite of no services in any existing 
> release to date being safely constructed, we've experienced few noticeable 
> consequences, other than some random test failures on Jenkins.   To continue 
> using the existing codebase and do nothing would ensure River's extinction.
> 
> The skunk/qa_refactor branch uses safe construction techniques and export 
> after construction, it's had sweeping changes made internally to classes to 
> significantly improve Java Memory Model compliance.
> 
> If you haven't checked out this branch, now's the time to do so.
> 
> Regards,
> 
> Peter.
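Peter’s “safe construction” and “export after construction” points can be 
sketched without any Jini classes (the names below are hypothetical stand-ins, 
not River APIs): make the service’s state final so it is safely published when 
the constructor returns, and only hand the object to other threads after 
construction completes, never leaking “this” from inside the constructor.

```java
import java.util.concurrent.atomic.AtomicReference;

public class SafeConstruction {
        // Hypothetical service: final fields guarantee the object is safely
        // published once the constructor has returned.
        static class EchoService {
                private final String name;
                EchoService(String name) {
                        this.name = name;
                        // Deliberately no "export" here: leaking `this` from a
                        // constructor lets another thread see a half-built object.
                }
                String echo(String msg) {
                        return name + ": " + msg;
                }
        }

        // Stand-in for an exporter/registry visible to dispatch threads.
        static final AtomicReference<EchoService> EXPORTED = new AtomicReference<>();

        public static void main(String[] args) {
                EchoService impl = new EchoService("svc");       // 1. construct fully
                EXPORTED.set(impl);                              // 2. then publish
                System.out.println(EXPORTED.get().echo("ping")); // prints svc: ping
        }
}
```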
