On Jul 8, 2014, at 9:09 PM, John Rose <john.r.r...@oracle.com> wrote:

> Regarding the extra cast in accessor logic that Paul picked up on:  That may 
> be a left-over from obsolete versions of the code, or it may cover for some 
> corner cases, or it could possibly be a re-assurance to the JIT.
> 
> Generally speaking, we lean heavily on MH types to guarantee a priori 
> correctness of argument types.  Paul is right that the stored value type is 
> already validated and (except for corner cases we may be neglecting) does not 
> need re-validation via a cast.
> 
> It might be an interesting experiment to replace the cast with an assert.
> 

I did briefly look at that a few months ago; I would need to revisit it 
systematically. My memory is hazy, but I seem to recall that removing casts 
perturbed the compilation process more than I expected.


> Sometimes corner cases bite you.  For example, an array store check is 
> necessary, if the type is an interface, because interfaces are weakly checked 
> all the way up to aastore or invokeinterface.
> 
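
For anyone following along, a minimal sketch of that weak interface checking 
(hypothetical class names; the verifier accepts any Serializable flowing into 
the array slot, and the real check is deferred to aastore):

```java
import java.io.Serializable;

public class StoreCheck {
    static String tryStore() {
        // The array's runtime component type is Integer...
        Serializable[] arr = new Integer[1];
        // ...but its static type is Serializable[], so the verifier is
        // satisfied by storing any Serializable, including a String.
        Serializable s = "not an Integer";
        try {
            arr[0] = s; // the aastore instruction performs the actual check
            return "stored";
        } catch (ArrayStoreException e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryStore()); // prints "ArrayStoreException"
    }
}
```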
> Sometimes the JIT cannot see the type assurances implicit in a MH type, and 
> so (when choosing internal MH code shapes) we must choose between handing the 
> JIT code that is not locally verifiable, or adding "reassurance" casts to 
> repair the local verifiability of the code.  If the JIT thinks it sees 
> badly-typed code, it might bail out.  Note that "locality" of verifiability is 
> a fluid concept, depending sometimes on the vagaries of inlining decisions.  This 
> is the reason for some awkward looking "belt and suspenders" MH internals, 
> such as the free use of casts in LF bytecode rendering.
> 
> Usually, redundant type verifications (including casts and signatures of 
> incoming arguments) are eliminated, but they can cause (as noted) an extra 
> null check.  In theory, that should fold up also, if the null value is 
> replaced by "another" null, as (p == null ? null : identityFunction(p)).
> 
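
A tiny sketch of that conditional-null idiom (hypothetical `identityFunction` 
standing in for a cast or other identity transform on the argument):

```java
public class NullFold {
    // Stand-in for a transform whose bytecode would otherwise imply a
    // null check (e.g. a checkcast) on its argument.
    static String identityFunction(String p) {
        return p;
    }

    static String reassure(String p) {
        // Route null around the transform, replacing it with "another"
        // null, so the implicit null check can fold away.
        return p == null ? null : identityFunction(p);
    }

    public static void main(String[] args) {
        System.out.println(reassure(null)); // null
        System.out.println(reassure("x"));  // x
    }
}
```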

I quickly verified that the fold-up works as you expect. Also, if I do the 
following, the null check goes away:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

public class A {

    volatile String a = "A";
    volatile String snull = null;
    public volatile String b;

    static final MethodHandle b_setter;

    static {
        try {
            b_setter = MethodHandles.lookup().findSetter(A.class, "b", String.class);
        }
        catch (Exception e) {
            throw new Error(e);
        }
    }

    public static void main(String[] args) {
        A a = new A();
        a.testLoop();
    }

    void testLoop() {
        for (int i = 0; i < 1000000; i++) {
            testLoopOne(a);
            testLoopOne(snull);
        }
    }

    void testLoopOne(String s) {
        try {
            b_setter.invokeExact(this, s);
        } catch (Throwable t) {
            throw new Error(t);
        }
    }
}

I am probably obsessing too much over micro/nano-benchmarks, but I am wondering 
whether this could cause unwanted de-optimization/recompilation effects: all is 
well while no null values are seen, and then a null suddenly triggers 
de-optimization.

Paul.
