A brief comment on the ECC attack below: the code download can be prevented by granting DownloadPermission only to code signers and not to user principals. In this case the impostor service would only be able to cause a signed code source to class-load. Since Java serialization is disabled, the attacker would probably be unable to penetrate the JVM, but could still potentially steal data.
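For illustration, a hedged sketch of what such a grant might look like in a policy file, assuming the JGDMS/Jini net.jini.loader.DownloadPermission and a hypothetical signer alias; a real deployment's grant may differ:

    // Grant DownloadPermission only to signed code, not to user principals,
    // so an impostor service can only cause a signed code source to class-load.
    grant signedBy "trustedSigner" {
        permission net.jini.loader.DownloadPermission;
    };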

On 28/04/2022 3:25 pm, Peter Firmstone wrote:
Hi Martin,

Your arguments are the reasons why we use the principle of least privilege. It creates a headache for attackers, similar to the headache of a developer who has enabled SM for the first time and must manually add every permission required for their software to function (who thought that was a good idea, lol?). The attacker requires an intimate knowledge of the permissions their attack vectors or gadgets need, including those the thread of execution has already been granted, as well as the features those permissions will give the attacker access to. If the thread of execution doesn't have all the required permissions, a SecurityException is thrown and the attack fails. How does the attacker obtain all the required information? With great difficulty.
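To make that concrete, a minimal sketch of how a missing grant surfaces at runtime, assuming SM is installed; the permission shown is only an arbitrary example:

    public class PermissionCheckExample {
        public static void main(String[] args) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                // Stack-walk check: throws SecurityException if any protection
                // domain on the call stack lacks the permission (unless a
                // doPrivileged frame truncates the walk).
                sm.checkPermission(new java.io.FilePermission("/etc/passwd", "read"));
            }
        }
    }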

As soon as the software does something a generated PoLP policy file hasn't captured, a security exception is almost inevitably thrown; even if it wasn't designed to protect an intended target, it almost inevitably gets in the way. You will find that it's almost impossible to do anything unintended. Once you can impersonate a user or service, say with the recent ECC exploit, you can at least do what that user or service is allowed to do, but it still won't allow the attacker to achieve their intended end goal unless the user or service has all the required permissions. In our case, if the attacker can impersonate a service, then they can load code, and that's a problem, as our software assumes ECDSA provides strong confidentiality. We recognise that once you get to the stage of loading code into the JVM, it's pretty much game over. A PoLP policy file won't defend against the recent ECC exploit.

https://github.com/pfirmstone/JGDMS/blob/trunk/JGDMS/jgdms-jeri/src/main/java/net/jini/jeri/ssl/ConfidentialityStrength.java

What PoLP can protect against, however, is an exploit in a feature that you don't use; for example, it protected against the recent Log4j vulnerability.

An example of a PoLP policy file: https://github.com/pfirmstone/JGDMS/blob/trunk/qa/harness/policy/defaultsecuresharedvm.policy

One of the improvements we can make (when re-implementing access controls) is to reduce the size of the JDK's trusted computing base. Instead of having many trusted protection domains with AllPermission (characterised by a null protection domain), we can give each module a separate protection domain identity and limit each to only the permissions required for our software to function as intended. This means that JVM modules we don't use will have no permissions at all. To work around the large trusted JDK code base, we provided two methods which append a ProtectionDomain containing only the user's required permissions; this also prevents injection of a user Subject's permissions into less privileged service domains. Its use hasn't really caught on though, no doubt due to complexity.

https://github.com/pfirmstone/JGDMS/blob/c1edf5892306f24f8f97f459f499dec54984b08f/JGDMS/jgdms-platform/src/main/java/net/jini/security/Security.java#L590

This is really a hack because the JDK's trusted computing base is too large. Also, user permissions should be granted only to protection domains that have the necessary user principals and code signers, to avoid injecting additional permissions into less privileged service protection domains.
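For readers unfamiliar with the idea, a minimal sketch of the general technique using only the standard java.security API (not the JGDMS Security method linked above); the permission granted is an arbitrary example:

    import java.security.*;

    public class AppendDomainExample {
        public static void main(String[] args) {
            Permissions userPerms = new Permissions();
            userPerms.add(new java.io.FilePermission("/home/user/data/-", "read"));

            // A ProtectionDomain holding only the permissions the user needs.
            ProtectionDomain limited = new ProtectionDomain(null, userPerms);
            AccessControlContext restricted =
                new AccessControlContext(new ProtectionDomain[] { limited });

            // Checks made inside consider the caller's domain AND the appended
            // domain, so effective permissions are the intersection of the two.
            AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
                // ... perform the user's work here ...
                return null;
            }, restricted);
        }
    }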

All data parsing the JVM performs should also be moved into separate modules, so that data parsing access controls and privileges can be managed (this is one of the missing checks you mention). And yes, it did provide a false sense of security for many years that Java serialization was assumed to be secure when it wasn't. I had many difficulties explaining to developers in 2010 that Java serialization wasn't secure; they didn't believe me and it caused problems.

Had the Java 2 SE security infrastructure never been introduced, perhaps something else would have evolved, or at the very least our software wouldn't depend on it. Java's access controls have certainly suffered from a lack of investment.

Unfortunately our software is dependent on it and designed around it at a fundamental level. Even if SM is null (which incidentally I haven't yet tested; I think we have code that checks that SM is enabled), our software is still using AccessControlContexts to establish TLS connections and authenticate users. Personally, I would like to see parts of AccessController and AccessControlContext retained for retrieving the Subject for establishing secure connections in a way that's compatible across all Java versions.
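For context, this is the kind of pattern I mean, sketched with the long-standing javax.security.auth API rather than our actual connection code:

    import java.security.AccessController;
    import javax.security.auth.Subject;

    public class SubjectLookupExample {
        static Subject currentSubject() {
            // Recover the authenticated Subject from the current
            // AccessControlContext; its principals and credentials are then
            // used when establishing a TLS connection.
            return Subject.getSubject(AccessController.getContext());
        }
    }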

After removing access controls, it effectively means AllPermission is granted to every authenticated user (and, in our case, service). Any access controls that we create at a higher level can be circumvented by the lack of lower level access controls, so the only access control left is authentication.
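To illustrate the point with a deliberately simplified example (the names and checks here are hypothetical, not our code):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class ReportService {
        byte[] readReport(String user, Path report) throws IOException {
            // Application-level (high-level) authorization check.
            if (!isAuthorised(user, report)) {
                throw new SecurityException("access denied");
            }
            return Files.readAllBytes(report);
        }

        private boolean isAuthorised(String user, Path report) {
            return false; // placeholder policy
        }
    }

    // Without a lower-level check such as FilePermission, code that skips
    // ReportService and calls Files.readAllBytes(report) directly is subject
    // to no check at all.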

We have no desire to instrument the JVM; we have been advised by OpenJDK that this is the new way to implement access controls. It's simply that we want to continue to support future versions of Java and cannot do so without access controls. Our software has been designed with, and depends on, access controls.

Java has had access controls for 24 years; we couldn't foresee that they would be removed in a breaking manner in such a short time frame. Remember how long Thread.stop and similar bad APIs remained in deprecated form?

https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8204243

This isn't something we wanted or planned to do; it's a task that has been created for us, and we have few resources to address it.

Regards,

Peter.

On 28/04/2022 3:37 am, Martin Balao wrote:
David,

I understand the reasons behind seeing authorization checks at the runtime layer as something that can only add, and does no harm in the worst case (all of this putting the maintenance cost and other arguments aside).

My concern is more about the general security principles underpinning the idea. We will probably agree that half-barriers are not barriers, and might cause a false sense of security. If we have authorization checks at the runtime level, they must be comprehensive, coherent and well-maintained. Their availability suggests that mixing high-level checks with runtime-level ones can be part of a good security design in modern application development. For the reasons we've been discussing, I'm not convinced of that, and even when the suggestion is subtle, I prefer the runtime not to make it. If you still want it, you can go ahead with instrumentation; but it's clear that, for the runtime developers, that is a workaround and not a desirable security design.

What I mean by splitting responsibility is that application developers can use a mix of high and low level checks, at different layers, with more complexity. As Sean said, letting the unauthorized user move towards the edge of the action is more risky. We can lose sight of workarounds and holes with the additional attack surface and complexity that comes at a lower layer. What I want to stress is the value of clarity, simplicity and division of responsibilities as a general security principle.

Martin.-
