Looks like we don't need to be concerned; they've fixed it in Java 8.
7093090
*Votes* 0
*Synopsis* Reduce synchronization in java.security.Policy.getPolicyNoCheck
*Category* java:classes_security
*Reported Against*
*Release Fixed* 8(b15)
*State* 10-Fix Delivered, bug
*Priority* 2-High
*Related Bugs*
*Submit Date* 20-SEP-2011
*Description*
java.security.Policy.getPolicyNoCheck() is synchronized which causes some
thread contention.
Posted Date : 2011-09-20 23:44:03.0
*Work Around*
N/A
*Evaluation*
The fix involved adding an initialized flag to indicate when the system-wide
policy has been initialized and storing both the flag and the Policy object in
an AtomicReference. Then, I also used the double-checked locking idiom to avoid
locking the Policy class when the Policy had already been initialized.
Changeset: http://hg.openjdk.java.net/jdk8/tl/jdk/rev/1945abeb82a0
Posted Date : 2011-11-22 15:15:14.0
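The fix described in the evaluation can be sketched roughly as follows. This is not the actual OpenJDK code (the changeset linked above has the real implementation); class and method names here are stand-ins, and a plain `Object` stands in for `java.security.Policy`:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the described fix: the initialized flag and the policy are
// stored together in one immutable object held by an AtomicReference,
// and double-checked locking keeps the common path lock-free.
public final class PolicyHolder {

    // Pairs the system-wide policy with its initialized flag.
    private static final class PolicyInfo {
        final Object policy;          // stands in for java.security.Policy
        final boolean initialized;
        PolicyInfo(Object policy, boolean initialized) {
            this.policy = policy;
            this.initialized = initialized;
        }
    }

    private static final AtomicReference<PolicyInfo> POLICY =
            new AtomicReference<>(new PolicyInfo(null, false));

    static Object getPolicyNoCheck() {
        PolicyInfo pi = POLICY.get();
        if (pi.initialized) {
            return pi.policy;               // fast path: no locking at all
        }
        synchronized (PolicyHolder.class) { // slow path, taken once
            pi = POLICY.get();              // re-check under the lock
            if (!pi.initialized) {
                pi = new PolicyInfo(loadDefaultPolicy(), true);
                POLICY.set(pi);
            }
            return pi.policy;
        }
    }

    private static Object loadDefaultPolicy() {
        return new Object(); // placeholder for real policy loading
    }
}
```

After the first call, every subsequent call returns with a single `AtomicReference.get()` and a field read, which matches the "one-off initialisation cost, then a fast path" characterisation below.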
Dan Creswell wrote:
On 8 January 2012 22:48, Peter Firmstone <[email protected]> wrote:
Dan Creswell wrote:
On 8 January 2012 11:40, Peter Firmstone <[email protected]>
wrote:
How much can this one synchronized method spoil scalability?
Not much as far as I can see - there's going to be a one-off
initialisation cost and after that it's a fast path with a single
reference check and a return. I can't think of much that's less
compute intensive and thus lower contention.
I think you'd have to be running some very trivial code that called
this method many, many times while doing little else for it to turn up
as a high cost.
That's what I thought as well, true on today's hardware.
The other thing that bothers me is that it's synchronized on the
java.security.Policy class monitor: a very effective denial-of-service
attack is to obtain the Policy class lock, which requires no permission, and
then all permission checks block. Still, there are other DoS attacks that can
be performed on the JVM, like memory errors, although I have found it
possible to create an executor that can recover safely from that state.
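The monitor concern can be illustrated with a small self-contained sketch. To keep it runnable it uses a stand-in class rather than the real java.security.Policy (which would need a SecurityManager setup), but the mechanism is the same: any code can enter a public class's monitor without any permission check, stalling every caller of a method synchronized on that class:

```java
// Sketch of the DoS concern: holding a class's monitor blocks all
// callers of its synchronized static methods. FakePolicy is a
// hypothetical stand-in for java.security.Policy.
public class MonitorDosSketch {

    static final class FakePolicy {
        // Stands in for the synchronized getPolicyNoCheck() method.
        static synchronized String getPolicyNoCheck() {
            return "policy";
        }
    }

    // Returns roughly how long a legitimate caller was blocked, in ms.
    static long measureBlockedMillis() throws InterruptedException {
        Thread attacker = new Thread(() -> {
            synchronized (FakePolicy.class) {   // no permission required
                try {
                    Thread.sleep(200);          // hold the lock
                } catch (InterruptedException ignored) {
                }
            }
        });
        attacker.start();
        Thread.sleep(50);                       // let the attacker win the lock
        long start = System.nanoTime();
        FakePolicy.getPolicyNoCheck();          // blocks until lock is released
        attacker.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("caller blocked for ~" + measureBlockedMillis() + " ms");
    }
}
```

The legitimate caller ends up waiting for most of the attacker's sleep, even though it never asked for the lock explicitly.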
I'm not so sure about tomorrow, with future processors exploiting die
shrinks by increasing core count. Most of the concurrent software we write
today is only scalable to about 8 cores.
By removing the cache from the ConcurrentPolicyFile, it becomes almost
entirely immutable.
The trick to scalability is to mutate in a single thread, then publish
immutable objects, creating non-blocking code paths so every thread can
proceed, which is basically how the new policy works.
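The mutate-then-publish pattern described above can be sketched as follows. The names are hypothetical (this is not the actual ConcurrentPolicyFile code), but the shape is the point: a single writer builds a fresh immutable snapshot and publishes it in one atomic swap, so readers never block and never see a half-built state:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of publishing immutable snapshots: writers mutate privately,
// readers only ever see complete, immutable state.
final class GrantSnapshot {
    private final AtomicReference<List<String>> grants =
            new AtomicReference<>(List.of());

    // Called from a single writer thread: build a new immutable list,
    // then publish it with one atomic reference swap.
    void refresh(List<String> newGrants) {
        grants.set(List.copyOf(newGrants));
    }

    // Readers never lock; each call sees some complete snapshot.
    boolean implies(String permission) {
        return grants.get().contains(permission);
    }
}
```

Since every published list is immutable, readers on any number of cores proceed without contention; only the writer pays the cost of building the next snapshot.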
I haven't made any decisions yet; it's just a smell that bothers me.
I think this is lacking context - what kind of service would one write
that needs this many cores and thrashes that particular lock so hard
it matters as compared to all the other compute it's doing?
I'd also observe that running multiple processes gets you out of this
predicament.
In essence, I'm not sure it's a problem worth worrying about until
it's a real-world problem worth worrying about.
Regards,
Peter.