OK, interesting.  Your interpretation seems right: getting the MH is a big part of the expense of the LDC approach.  Caching simulates using Constant_MethodHandle_info, so that's a fair comparison.  It's not clear whether there's a performance difference between getting the MH via lookup (as your approach does) and via CP resolution; measuring this is harder because once the CP is resolved, it's already cached.  But it does point at this approach being a tradeoff, where we use a more expensive means to get our hands on the Method, in exchange for hopefully resolving fewer Methods overall (or pushing the cost to a less critical path).

You can replace Class.forName("java.lang.Object") with Object.class (which turns into an LDC) to factor out the cost of that particular reflection.
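For instance, a minimal sketch (method choice arbitrary) showing that the two spellings resolve to the same Method, while `Object.class` compiles down to a single LDC instead of a reflective, exception-throwing name lookup:

```java
import java.lang.reflect.Method;

public class LdcVsForName {
    public static void main(String[] args) throws Exception {
        // Reflective spelling: a run-time lookup by class name
        Method viaForName = Class.forName("java.lang.Object")
                .getMethod("equals", Class.forName("java.lang.Object"));

        // Constant spelling: Object.class becomes an LDC of a CONSTANT_Class entry
        Method viaLdc = Object.class.getMethod("equals", Object.class);

        // Method.equals compares declaring class, name, and parameter types
        System.out.println(viaForName.equals(viaLdc));  // true
    }
}
```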

It's unfortunate that InvocationHandler exposes the Method directly to the Proxy client, since a Method is a heavier object than we necessarily want, but we have limited ability to refactor here.

Another direction here would be as follows: generate code like the following:

    class Proxy$1 {
        private static final Class<?> c1 = Object.class;

        private static Method equals$bootstrap(MethodHandles.Lookup lookup, String name, Class<?> type)
                throws NoSuchMethodException {
            return c1.getMethod("equals", new Class<?>[] { c1 });
        }

        public boolean equals(Object o) {
           return ... __LDC__[equals$bootstrap...] ...
        }
    }

In other words, use condy to delay evaluation, but don't go through the MH machinery.  There's a subtle proof we would have to do to show this is safe; we want reflective lookups to be done under the right access-control context.  This is why I factored out the Class resolution and left it in the <clinit>; since the methods of a proxy are public (we can only proxy interfaces), the interesting access control is done on the class lookup.  This would result in similar costs for looking up the Method, but delays it to first use (still paying the one-by-one condy dispatch vs doing them all as a group in <clinit>).
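A compilable sketch of that bootstrap shape (names hypothetical; in real generated code the Method would be consumed via an LDC of a CONSTANT_Dynamic entry whose bootstrap is this method, and the JVM would cache the result in the constant pool — here the first resolution is simulated with a plain call):

```java
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;

class ProxyCondySketch {
    // Class resolution stays in <clinit>, under the proxy's access-control context
    private static final Class<?> c1 = Object.class;

    // Condy bootstrap shape: (Lookup, String, Class) -> the constant's value.
    // The lookup parameter is unused here because access was already checked on c1.
    private static Method equalsBootstrap(MethodHandles.Lookup lookup, String name, Class<?> type)
            throws NoSuchMethodException {
        return c1.getMethod("equals", c1);
    }

    public static void main(String[] args) throws NoSuchMethodException {
        // Simulate the first (and only) condy resolution
        Method m = equalsBootstrap(MethodHandles.lookup(), "equals", Method.class);
        System.out.println(m.getName());  // equals
    }
}
```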

On 11/23/2019 5:54 PM, Johannes Kuhn wrote:
On 11/23/2019 10:40 PM, Brian Goetz wrote:


Finally, we can benchmark the current approach against the LDC approach on a per-Method basis.  The LDC approach may well be doing more work per Method, so it's a tradeoff to determine whether deferring that work is a win.

By this last bit, I mean JMH'ing:

    Method m1() {
       return Class.forName("java.lang.Object").getMethod("equals", new Class[] { Class.forName("java.lang.Object") });
    }

vs

    Method m2() {
        return bootstrap(... constant bootstrap args for above ...)
    }


Thanks for the pointers.  I did run one round on my machine - I don't have a dedicated one - so maybe take it with a grain of salt.  The test code and results can be found here: https://gist.github.com/DasBrain/501fd5ac1e2ade2e28347ec666299473

    Benchmark                              Mode  Cnt        Score        Error  Units
    ProxyFindMethod.getMethod             thrpt   25  1505594.515 ±  42238.663  ops/s
    ProxyFindMethod.lookupMethod          thrpt   25   760530.602 ±  25074.003  ops/s
    ProxyFindMethod.lookupMethodCachedMH  thrpt   25  4327814.284 ± 103456.616  ops/s

I wrote three tests: the first one (getMethod) uses Class.forName().getMethod(); the second (lookupMethod) uses lookup.findVirtual to get the MethodHandle on every iteration; the third (lookupMethodCachedMH) caches it in a static final field.
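Roughly, the three variants look like this (a reconstructed sketch, not the actual gist code; JMH annotations omitted):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class ProxyFindMethodSketch {
    static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();
    static final MethodHandle CACHED;
    static {
        try {
            CACHED = LOOKUP.findVirtual(Object.class, "equals",
                    MethodType.methodType(boolean.class, Object.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // getMethod: core-reflection lookup on every call
    static Method getMethod() throws ReflectiveOperationException {
        return Class.forName("java.lang.Object").getMethod("equals", Object.class);
    }

    // lookupMethod: MethodHandle lookup on every call
    static MethodHandle lookupMethod() throws ReflectiveOperationException {
        return LOOKUP.findVirtual(Object.class, "equals",
                MethodType.methodType(boolean.class, Object.class));
    }

    // lookupMethodCachedMH: static final MH, which the JIT can treat as a constant
    static MethodHandle lookupMethodCachedMH() {
        return CACHED;
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        System.out.println(getMethod().getName());                    // equals
        System.out.println(lookupMethod().type().parameterCount());  // 2 (receiver + argument)
    }
}
```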

My interpretation: if the method has to be looked up (and the VM has to do that at least once), then the old getMethod call is faster. I also suspect the Class.forName call is optimized - on some old Java versions, IIRC pre-1.5, Foo.class was compiled into a Class.forName("Foo") call, so the JIT might do something there.
So, now I have to learn how to read assembly. Good.

