RE: for review: 8236522: "always atomic" modifier for inline classes to enforce atomicity

2020-03-09 Thread Daniel Heidinga
Making it a property of the type makes sense and covers the otherwise missing case of the creator of the type wanting to express that it's never torn.  For inline fields there's already both the LvsQ distinction and the "volatile" keyword to ensure they can't be torn (though it comes with some additional baggage of memory barriers).  Arrays can also use the LvsQ distinction to prevent tearing.
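For concreteness, a hedged sketch of the volatile route mentioned above; the names are made up, and the field type is written as a plain class here so the snippet compiles today, where the prototype would use an inline class:

```
// Hypothetical wide value-like type; in the prototype this would be an inline class.
final class Point128 {
    final long x, y;
    Point128(long x, long y) { this.x = x; this.y = y; }
}

final class Holder {
    // Declaring the field volatile rules out torn reads/writes of it,
    // at the cost of the usual volatile memory barriers.
    volatile Point128 position = new Point128(0L, 0L);
}
```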
 
> I’d like to
> suggest that IdentityObject implements NonTearable, so that
> bounds like Record & NonTearable allow identity and inline
 
Allowing both to implement the interface is fine from our perspective.  Implementation wise it's trivial to convert NonTearable to a bit and inherit it when creating subclasses.  And it makes sense at an API level.
 
The one caution here is in how we present the NonTearable interface when used with identity classes, as the interface won't protect users from data races - they will still need safe publication or they may see inconsistent values - while it will protect inline classes.
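To illustrate that caution, a hedged sketch (NonTearable here is a local stand-in interface, not the real java.lang type): for an identity class the marker says nothing about safe publication of its mutable state.

```
interface NonTearable { }   // stand-in for the proposed marker interface

final class Range implements NonTearable {   // identity class with mutable state
    int lo, hi;

    void set(int lo, int hi) {
        this.lo = lo;   // without synchronization or volatile, a racing reader
        this.hi = hi;   // may still observe lo updated but hi stale
    }
}
```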
 
--Dan
- Original message -
From: John Rose
Sent by: "valhalla-spec-experts"
To: "Rémi Forax"
Cc: valhalla-spec-experts
Subject: [EXTERNAL] Re: for review: 8236522: "always atomic" modifier for inline classes to enforce atomicity
Date: Sat, Mar 7, 2020 6:18 PM

On Mar 7, 2020, at 2:22 PM, fo...@univ-mlv.fr wrote:

 
Marker interfaces are usually problematic because:
- they can be inherited; for inline classes, you can put them on our new kind of abstract class, which will make things just harder to diagnose.
 
As always the flexibility of inheritance cuts both ways.
Suppose I define AbstractSlice with subtypes MemorySlice,
ArraySlice, etc. and I intend it for secure applications.
I then mark AbstractSlice as NonTearable, and all its subs
are therefore also NonTearable.  You cannot do that with
an ad hoc keyword, even if you want to.  You have to make
sure that every concrete subtype mentions the keyword.
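A hedged sketch of that inheritance pattern, reusing the names from the example above; the inline modifier is left as a comment since the syntax is prototype-only:

```
interface NonTearable { }   // stand-in for java.lang.NonTearable

abstract class AbstractSlice implements NonTearable { }

// Every concrete subtype is NonTearable by inheritance; no per-class keyword needed.
/*inline*/ final class MemorySlice extends AbstractSlice {
    final long base, length;
    MemorySlice(long base, long length) { this.base = base; this.length = length; }
}

/*inline*/ final class ArraySlice extends AbstractSlice {
    final int offset, length;
    ArraySlice(int offset, int length) { this.offset = offset; this.length = length; }
}
```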
 
It’s a trade-off, of course, but for me the cost of a new keyword
pushes me towards using types, making the property inherited.
It’s a decision which falls squarely in the center of the language. 

- they can be used as a type, like Serializable is used where it should not be. For example, what does an array of java.lang.NonTearable mean exactly? There is a potential for a lot of confusion. 
 
Again, if I have an algorithm that works for a range of value
types (via an interface or abstract super), I can express the
requirement that the inputs to the algorithm be non-tearable,
using subtypes.  For example, the bound (Record & NonTearable)
expresses and enforces the intention that the algorithm will
operate on non-tearable record values.
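For example, a hedged sketch of such a bound (NonTearable is again a stand-in; the method and its parameters are invented for illustration):

```
interface NonTearable { }   // stand-in, as in the earlier sketches

final class Slots {
    // Only record values that are also non-tearable are accepted, so no reader
    // of slots[i] can ever observe a torn element.
    static <T extends Record & NonTearable> void publishTo(T[] slots, int i, T value) {
        slots[i] = value;
    }
}
```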
  

and in the specific case of NonTearable, a non-inline class can implement it, again creating confusion. 

The confusion comes from the incomplete story here.  I’d like to
suggest that IdentityObject implements NonTearable, so that
bounds like Record & NonTearable allow identity and inline
objects.
 
— John
 



Re: RefObject and ValObject

2019-04-12 Thread Daniel Heidinga
My original thought had been that this would benefit new code that knew it had to do something identity-full (lock, make weak references, etc) and wanted to enforce it would never see a value.
 
The migration path adds a cherry on top.
 
--Dan
 
- Original message -
From: Brian Goetz
To: Daniel Heidinga
Cc: valhalla-spec-experts
Subject: Re: RefObject and ValObject
Date: Fri, Apr 12, 2019 11:51 AM
High-order tradeoffs:

 - Having R/VObject be classes helps from the pedagogical perspective (it paints an accurate map of the object model.)
 - There were some anomalies raised that were the result of rewriting an Object supertype to RefObject, and some concerns about "all our tables got one level deeper."  I don't really have a strong opinion on these.
 - Using interfaces is less intrusive, but less powerful.
 - None of the approaches give an obvious solution for the "make me a lock Object" problem.

I think the useful new observation in this line of discussion is this:

 - The premise of L-World is that legacy Object-consuming code can keep working with values.
 - We think that's a good thing.
 - But we also think there will be some cases where that's not a good thing, and that code will wish it had said `m(RefObject)` instead of `m(Object)`.  [ this is the new thing ]

Combining this with the migration stuff going on in a separate thread, I think what you're saying is you want to be able to take a method:

    m(Object o) { }

and _migrate_ it to be

    m(RefObject o) { }

with a forwarder

    @ForwardTo( m(RefObject) ) m(Object o);

So that code could, eventually, be migrated to RefObject-consuming code, and all is good again.  And the JIT can see that o is a RefObject and credibly fall back to a legacy interpretation of ACMP and locking.

On 4/12/2019 11:16 AM, Daniel Heidinga wrote:
> During the last EG call, I suggested there are benefits to having both RefObject and ValObject be classes rather than interfaces.
>
> Old code should be able work with both values and references (that's the promise of L-World after all!). New code should be able to opt into whether it wants to handle only references or values as there are APIs that may only make sense for one or the other. A good example of this is java.lang.Reference-subtypes which can't reasonably deal with values. Having RefObject in their method signatures would ensure that they're never passed a ValObject. (ie: the ctor becomes WeakReference(RefObject o) {...})
>
> For good or ill, interfaces are not checked by the verifier. They're passed as though they are object and the interface check is delayed until invokeinterface, etc. Using interfaces for Ref/Val Object doesn't provide verifier guarantees that the methods will never be passed the wrong type. Javac may not generate the code but the VM can't count on that being the case due to bytecode instrumentation, other compilers, etc.
>
> Using classes does provide a strong guarantee to the VM which will help to alleviate any costs (acmp, array access) for methods that are declared in terms of RefObject and ensures that the user is getting exactly what they asked for when they declared their method to take RefObject.
>
> It does leave some oddities as you mention:
> * new Object() -> returns a new RefObject
> * getSuperclass() for old code may return a new superclass (though this may be the case already when using instrumentation in the classfile load hook)
> * others?
>
> though adding interfaces does as well:
> * getInterfaces() would return an interface not declared in the source
> * Object would need to implement RefObject for the 'new Object()` case which would mean all values implemented RefObject (yuck!)
>
> Letting users say what they mean and have it strongly enforced by the verifier is preferable in my view, especially as getSuperclass() issue will only apply to old code as newly compiled code will have the correct superclass in its classfile.
>
> --Dan
>
> -"valhalla-spec-experts" wrote: -
>
>> To: valhalla-spec-experts
>> From: Brian Goetz
>> Sent by: "valhalla-spec-experts"
>> Date: 04/08/2019 04:00PM
>> Subject: RefObject and ValObject
>>
>> We never reached consensus on how to surface Ref/ValObject.
>>
>> Here are some places we might want to use these type names:
>>
>> - Parameter types / variables: we might want to restrict the domain of a parameter or variable to only hold a reference, or a value:
>>
>> void m(RefObject ro) { … }
>>
>> - Type bounds: we might want to restrict the instantiation of a generic class to only hold a reference (say, because we’re going to lock on it):
>>
>> class Foo { … }
>>
>> - Dynamic tests: if locking on a value is to throw, there must be a reasonable idiom that users can use to detect l

Re: RefObject and ValObject

2019-04-12 Thread Daniel Heidinga
During the last EG call, I suggested there are benefits to having both 
RefObject and ValObject be classes rather than interfaces.

Old code should be able to work with both values and references (that's the 
promise of L-World after all!). New code should be able to opt into whether it 
wants to handle only references or values as there are APIs that may only make 
sense for one or the other. A good example of this is 
java.lang.Reference-subtypes which can't reasonably deal with values. Having 
RefObject in their method signatures would ensure that they're never passed a 
ValObject. (ie: the ctor becomes WeakReference(RefObject o) {...})
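A hedged sketch of the kind of signature being described; RefObject does not exist today, so a placeholder class stands in for it here:

```
import java.lang.ref.WeakReference;

abstract class RefObject { }   // placeholder for the proposed identity-only top type

// A Reference-like wrapper whose constructor can only ever see identity objects;
// with RefObject as a verifier-checked class, no value instance could reach it.
final class WeakIdentityRef<T extends RefObject> {
    private final WeakReference<T> ref;

    WeakIdentityRef(T referent) { this.ref = new WeakReference<>(referent); }

    T get() { return ref.get(); }
}
```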

For good or ill, interfaces are not checked by the verifier. They're passed as 
though they are Object and the interface check is delayed until 
invokeinterface, etc. Using interfaces for Ref/Val Object doesn't provide 
verifier guarantees that the methods will never be passed the wrong type. Javac 
may not generate the code but the VM can't count on that being the case due to 
bytecode instrumentation, other compilers, etc.

Using classes does provide a strong guarantee to the VM which will help to 
alleviate any costs (acmp, array access) for methods that are declared in terms 
of RefObject and ensures that the user is getting exactly what they asked for 
when they declared their method to take RefObject. 

It does leave some oddities as you mention:
* new Object() -> returns a new RefObject
* getSuperclass() for old code may return a new superclass (though this may be 
the case already when using instrumentation in the classfile load hook)
* others?

though adding interfaces does as well:
* getInterfaces() would return an interface not declared in the source
* Object would need to implement RefObject for the `new Object()` case which 
would mean all values implemented RefObject (yuck!)

Letting users say what they mean and have it strongly enforced by the verifier 
is preferable in my view, especially as the getSuperclass() issue will only apply 
to old code as newly compiled code will have the correct superclass in its 
classfile.

--Dan


-"valhalla-spec-experts"  
wrote: -

>To: valhalla-spec-experts 
>From: Brian Goetz 
>Sent by: "valhalla-spec-experts" 
>Date: 04/08/2019 04:00PM
>Subject: RefObject and ValObject
>
>We never reached consensus on how to surface Ref/ValObject. 
>
>Here are some places we might want to use these type names:
>
> - Parameter types / variables: we might want to restrict the domain
>of a parameter or variable to only hold a reference, or a value: 
>
> void m(RefObject ro) { … }
>
> - Type bounds: we might want to restrict the instantiation of a
>generic class to only hold a reference (say, because we’re going to
>lock on it):
>
> class Foo { … }
>
> - Dynamic tests: if locking on a value is to throw, there must be a
>reasonable idiom that users can use to detect lockability without
>just trying to lock:
>
> if (x instanceof RefObject) {
> synchronized(x) { … }
> }
>
> - Ref- or Val-specific methods. This one is more vague, but its
>conceivable we may want methods on ValObject that are members of all
>values. 
>
>
>There’s been three ways proposed (so far) that we might reflect these
>as top types: 
>
> - RefObject and ValObject are (somewhat special) classes. We spell
>(at least in the class file) “value class” as “class X extends
>ValObject”. We implicitly rewrite reference classes at runtime that
>extend Object to extend RefObject instead. This has obvious
>pedagogical value, but there are some (small) risks of anomalies. 
>
> - RefObject and ValObject are interfaces. We ensure that no class
>can implement both. (Open question whether an interface could extend
>one or the other, acting as an implicit constraint that it only be
>implemented by value classes or reference classes.). Harder to do
>things like put final implementations of wait/notify in ValObject,
>though maybe this isn’t of as much value as it would have been if
>we’d done this 25 years ago. 
>
> - Split the difference; ValObject is a class, RefObject is an
>interface. Sounds weird at first, but acknowledges that we’re
>grafting this on to refs after the fact, and eliminates most of the
>obvious anomalies. 
>
>No matter which way we go, we end up with an odd anomaly: “new
>Object()” should yield an instance of RefObject, but we don’t want
>Object <: RefObject for obvious reasons. Its possible that “new
>Object()” could result in an instance of a _species_ of Object that
>implement RefObject… but our theory of species doesn’t quite go there
>and it seems a little silly to add new requirements just for this. 
>
>
>
>



Re: generic specialization design discussion

2019-04-09 Thread Daniel Heidinga
Riffing on the "inline" term and tying things back to the flattenable discussions - what about using "flat" as the keyword?
 
flat class Foo { }
 
flat record R (int i);
 
--Dan
- Original message -
From: Maurizio Cimadamore
Sent by: "valhalla-spec-experts"
To: Brian Goetz , Doug Lea
Cc: valhalla-spec-experts
Subject: Re: generic specialization design discussion
Date: Tue, Apr 9, 2019 4:10 PM
On 09/04/2019 18:04, Brian Goetz wrote:
> In addition to liking the sound of it, I like that it is more “modifer-y” than “value”, meaning that it could conceivably be applied to other entities:
>
>      inline record R(int a);
>
>      inline enum Foo { A, B };
>
I like it too - especially because in C/C++ "inline" doesn't actually _force_ the compiler to do anything. So, I like the hint-y nature of this keyword and I think it brings front & center what this feature is about in a way that 'value' never really did (users asking about the difference between records and values is, I think, a proof of that particular failure).

Maurizio
 



Re: Finding the spirit of L-World

2019-01-23 Thread Daniel Heidinga
Thanks for writing this up Brian.  It clearly lists a number of the problematic areas that were identified as potentially needing re-validation.
 
The key questions are around the mental model of what we're trying to accomplish and how to make it easy (easier?) for users to migrate to use value types or handle when their pre-value code is passed a valuetype.  There's a cost for some group of users regardless of how we address each of these issues.  Who pays these costs?  Those migrating to use the new value types functionality?  Those needing to address the performance costs of migrating to a values capable runtime (JDK-N?).
 
One concern writ large across our response is performance.  I know we're looking at user model here but performance is part of that model.  Java has a well understood performance model for array access, == (acmp), and it would be unfortunate if we damaged that model significantly when introducing value types.
 
Is this a fair statement of the project's goals: to improve memory locality in Java by introducing flattenable data?  Everything else we've arrived at has come from working the threads of that key desire through the rest of the Java platform.  The L/Q world design came about by starting from a VM perspective, based on what's implementable in ways that allow the JVM to optimize the layout.
 
One of the other driving factors has been the desire to have valuetypes work with existing collections classes.  And a further goal of enabling generic specialization to allow those collections to get the benefits of the flattened data representations (ie: backed by flattened data arrays).
 
You made an important point when talking about the ValObject / RefObject split - "This is probably what we'd have done if values had always been part of the Java platform".  I think we need to ask that same question about some of the other proposals and really look at whether they're the choices we'd be making if values had always been part of the platform.
 
The other goal we discussed in Burlington was that pre-value code should be minimally penalized when values are introduced, especially for code that isn't using them.  Otherwise, it will be a hard sell for users to take a new JDK release that regresses their existing code.
 
Does that accurately sum up the goals we've been aiming for?
 
 
I’ve been processing the discussions at the Burlington meeting.  While I think we made a lot of progress, I think we fell into a few wishful-thinking traps with regard to the object model that we are exposing to users.  What follows is what I think is the natural conclusion of the L-World design — which is a model I think users can love, but requires us to go a little farther in what the VM does to support it.

A sensible rationalization of the object model for L-World would be to have special subclasses of `Object` for references and values:

```
class Object { ... }
class RefObject extends Object { ... }
class ValObject extends Object { ... }
```
Would the intention here be to retcon existing Object subclasses to instead subclass RefObject?  While this is arguably the type hierarchy we'd have if creating Java today, it will require additional speculation from the JIT on all Object references in the bytecode to bias the code one way or the other.  Some extra checks plus a potential performance cliff if the speculation is wrong and a single value type hits a previously RefObject-only callsite.
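An illustrative sketch of the kind of call site in question (the method is hypothetical): today the JIT compiles == as a cheap pointer compare, and it would have to prove or speculate that only identity objects reach the site to keep doing so.

```
final class LegacyCache {
    // A pre-values call site typed in terms of Object.  If profiling shows only
    // identity objects here, the JIT can keep pointer-comparison acmp; the first
    // value instance to arrive invalidates that speculation (deopt / slow path).
    static boolean legacyContains(Object[] entries, Object key) {
        for (Object e : entries) {
            if (e == key) {   // acmp
                return true;
            }
        }
        return false;
    }
}
```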
 
How magic would these classes be in the VM?  Would things like jvmti's classfile load hook be sent for them?  Adding fields to Object or ValObject would grow all the ValueTypes loaded which would be expensive for the sweet spot of small values.
 
We can pull the same move with nullability, by declaring an interface `Nullable`:

```
interface Nullable { }
```

which is implemented by `RefObject`, and, if we support value classes being declared as nullable, would be implemented by those value classes as well.  Again, this allows us to use `Nullable` as a parameter type or field type, or as a type bound (`Nullable>`).
I'm still unclear on the nullability story.  None of the options previously discussed (memcmp to ensure it's all 0, some pivot field, a special bit pattern, did I miss any?) are particularly attractive and all come with different levels of costs. Even the vulls? (convert "null value" to null reference on the stack) story has additional costs and complexity that leak throughout the system - e.g. the GC will need to know about the null check to know whether to mark / update / copy the reference fields of the value type - and has knock-on effects to the equality discussion below.
 
Which model of nullability would map to this interface?
 
Do nullable values help meet the primary (flattenable data) or secondary goals (interop with collections)?  While they may help the second goal, I think they fail the first one.  Gaining collections at the cost of flattenability suggests we've missed our design center here.
 
## Totality

The biggest pain poin

Re: value based class vs nullable value type

2018-08-02 Thread Daniel Heidinga
>Hi all,
>just to write my current state of thinking somewhere,
>
>I see 3 ways to implement value based class which all have their pros
>and cons

Thanks for writing this up, Remi. I've responded with some initial 
questions (and concerns). Hopefully we'll get a chance to discuss
further in person while many of us are here at jvmls / ocw.

>
>- the value based class bit,
> a value based class is tagged with ACC_VALUE_BASED, which means
>that they have no identity. It's not a value class so instances of a
>value based class are not flattened for fields and arrays but the JIT
>can flattened them and re-materialize them.

This sounds similar to the heisenbox proposals where identity was accidental
and could come and go, correct? Or are all identity-ful operations
(acmp, synchronization, etc) on these tagged classes rejected?

If this is the same as the heisenbox proposal, then the previous concerns
about the user model when identity can come and go still apply.
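For a concrete taste of why identity that comes and goes is worrying, the existing value-based wrappers already show it; whether == holds depends on caching, not on the value:

```
public class AccidentalIdentity {
    public static void main(String[] args) {
        Integer a = 127, b = 127;     // both come from the Integer cache
        Integer c = 1000, d = 1000;   // separately boxed
        System.out.println(a == b);   // true: same cached identity
        System.out.println(c == d);   // false (typically): distinct identities
    }
}
```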

>
>- the interface trick,
> a nullable value type is like an interface that contains a value
>class. The idea is to change the semantics of invokevirtual to be
>able to call methods of an interface. 

Would this apply to all interfaces? Would invokevirtual now need to do
both virtual and interface style invokes for all types of classes?

From a VM perspective, I'm not a fan of merging the semantics of 
invokevirtual and invokeinterface. These kinds of merges usually
come with a performance cost which is especially evident at startup.

> This way we can swap the
>implementation of the value based classes to use interfaces instead.
>Like the previous proposal it means that because it's an interface
>there is no flattening for fields and array but the JIT is able to
>remove allocation on stack because the interface implementation is a
>value type.

Others could implement the same interface as well so there would either
need to be two paths or a fallback option when a non-valuetype receiver
was passed.

I'm hesitant about this approach given the extra costs it puts on
invokevirtual. 

>
>- nullable value type,
> as decribed before on this list, the developer that create a value
>based class indicate a field which if is zero encodes null. A
>nullable value type is flattened for fields and arrays. The main
>drawback of this approach is that non nullable value type pay the
>cost of the possibility to having nullable value type on operation
>like acmp. This model is also far more complex to implement because
>it introduce another kind of world, the N-world.

Modifying the descriptors comes at a high cost in terms of determining
overriding, bridges, vtable packing, etc as discovered by the Q world
prototypes.
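A hedged sketch of the "designated field of zero encodes null" idea from the quoted proposal, written as plain Java since the real mechanism would live in the VM; the class and its fields are invented for illustration:

```
final class NullableRange {           // imagine: a flattened value-based class
    final int lo;                     // pivot field: lo == 0 is reserved for "null"
    final int hi;

    private NullableRange(int lo, int hi) { this.lo = lo; this.hi = hi; }

    static NullableRange of(int lo, int hi) {
        if (lo == 0) throw new IllegalArgumentException("lo == 0 is the null encoding");
        return new NullableRange(lo, hi);
    }

    // Roughly what the VM would do when reading a flattened field/array element:
    static NullableRange decode(int lo, int hi) {
        return (lo == 0) ? null : new NullableRange(lo, hi);
    }
}
```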

>
>To summarize, the first two proposals allow a value based class to be
>null with the cost of not having their values being flattened if not
>on stack the last proposal allow value based class to be fully
>flattened with the cost of making the VM more complex and slowing
>Object operations because it introduce a new world. 
>
>Given that we want to support value based class only for retrofitting
>purpose, my money is on 2 actually :)
>
>Rémi
>




Re: Valhalla EG Notes June 20, 2018

2018-07-06 Thread Daniel Heidinga
> AI: Karen - double check potential JVMTI bug
 
I checked our code base for this and we have the same behaviour.  Would be good to get this fixed at the spec level.
 
--Dan
 
- Original message -
From: Karen Kinnear
Sent by: "valhalla-spec-experts"
To: valhalla-spec-experts
Cc:
Subject: Valhalla EG Notes June 20, 2018
Date: Fri, Jun 29, 2018 6:36 PM
NO meeting July 4th, 2018 - US Independence day holiday. Next Meeting July 18th.
Karen will be on vacation week of July 18th - looking for a volunteer to run the meeting please.

AIs:
All: review Nestmates GetNestHost minor rewording of javadoc
All: review Value Type Consistency Checking proposal: http://cr.openjdk.java.net/~acorn/value-types-consistency-checking-details.pdf
All: see follow-up request - please approve LW1 temporary static method consistency checking before preparation, to be revisited: http://mail.openjdk.java.net/pipermail/valhalla-spec-experts/2018-June/000717.html
Karen: update Value Types Consistency Checking proposal with BootStrapMethod info

attendees: John, Dan S, Tobias, Dan H, Frederic, Remi, Karen

I. Nestmates:
Please review GetNestHost minor javadoc request

II. Condy
Remi: when will javac use condy for constant lambdas?
Dan S: some experiments have been done, would like to do this, no timeframe yet
Condy next step: not require Looking and Name&Type argument
Remi: ElasticSearch guy: indy metafactory not do all the needed casting - works for java but not for scala and other languages - will dig and find

III. Value Types
1. Equals/Hashcode/toString
Remi - saw initial prototype implementation
- two different approaches - Records in Amber vs. Valhalla
Remi has a version he could clean up and offer for all us to use - weave custom MethodHandles for each type
John: using loop combinators?
Remi - try not to
John: good - love to see it
** follow-on email (many thanks Remi!)

2. Value Types Consistency Checking proposal
Karen walked through overview
Summary: Two types of checks
1. Value Types attribute vs. reality
2. Value Types attribute of two different classes - e.g. caller-callee
Users of Value Types attribute:
1. verifier (with no loading) - catching mismatched bytecode usage
2. optimizations
Goal: avoid eager loading
Terminology:
pre-load: load before completing load of containing class
   - analogous to supertype handling
   - only proposed for flattenable instance fields, information needed for layout
   - risk of circularity
eager loading: loading at other times - e.g. linking, preparation, etc.
Proposed checking against reality:
1. instance fields (all flattenable in LW1) - pre-loaded: test vs. real
2. flattenable static fields - link phase, prior to preparation (post-LW1): test vs. real
3. local methods: prior to preparation check all (in ValueTypes attribute or not) parameters/return vs. real
4. CONSTANT_Class resolution: for all classes (in ValueTypes attribute or not) test vs. real
Proposed checking inter-class consistency:
5. Preparation (selection cache creation): method declarer vs. method overrider consistency
6. Field or Method Resolution: For all types in signatures, check caller-callee consistency
Note: these checks should essentially match where loader constraint checks are performed today.
Note: all the inter-class consistency checks check all the signature types, whether or not they are in the Value Types attribute
Remi: if a method is never called, why load parameters?
Tobi: why not load on first invocation?
John: if load before call - add a new barrier.
   - challenge with overriding hierarchy - deopt - sudden unpredictable performance drop
   - preparation is better than 1st call
Karen: note: if there is a null on the stack, they might not have loaded a parameter at first call
Frederic: Overriding example
    A.m, B.m, C.m
    if A is correct, B is incorrect, C is maybe wrong
   - body of the local method may be incorrect
Remi: if the super type is correct but the subtype is not
Karen: preparation checks are NOT vs. the real type - they just check overrider/overridden - they could both be wrong and pass that check
Frederic: This is more complex with interfaces
Dan H: if never call method, want to continue to run, throw an exception when realize inconsistency
Dan S: alternative - hotspot implementation could perform the check early and cache and throw the exception at first invocation
AI: Karen - investigate possibilities including either delaying checking or offering the option to check earlier but delay throwing any exception
ed. note - sent follow-up email: started the exploration - too complex for LW1 timeframe - asked for approval to keep proposal for now and revisit after we get early access binaries into people’s hands
John: Constant_Class resolution - need to also check BootStrapMethod evaluation for indy and condy - spec says “as if by ldc”.
Karen: Issue 1: Note that it is possible for class A to declare a field of V, not know it is a value type, and class C to also not know and to store null in the field, because field resolution only checks between the caller-callee, not re

Re: CONSTANT_Dynamic bootstrap signature restriction

2018-03-05 Thread Daniel Heidinga
>
>In discussions about future directions for CONSTANT_Dynamic, we've
>decided it would be helpful to restrict the set of legal bootstrap
>signatures. The first parameter type would be required to be declared
>with type MethodHandles.Lookup.

Dan, can you expand on why this restriction is helpful? It helps when 
evaluating a specification to have the rationale for the changes - both
for the EG and the observers.

Thanks,
--Dan



Re: Valhalla EG minutes Feb 14, 2018

2018-02-28 Thread Daniel Heidinga
>All this is to say, what you are saying sounds like a difficultly
>with one
>or more implementations, and not with the logic of the spec. Am I
>missing
>something?

I haven't seen a rationale for the proposed spec change yet (maybe I 
missed it?) which only leaves implementation costs as a discussion point.  
Either the cost to bring the RI into compliance with the current spec or 
to update other implementations to bless the spec divergence.

If you can share rationale beyond "this is what Hotspot has chosen to do", 
then we can have a spec discussion on that basis.

Regards,
--Dan


>
>— John
>> 
>> Adding static methods, private or not, is less problematic apart
>from interfaces due to the knock-on effects to iTables. 
>> 
>> We can live with the latter (static) though we'd like to avoid the
>former (instance).
>> 
>
>



Re: Valhalla EG minutes Feb 14, 2018

2018-02-21 Thread Daniel Heidinga
Thanks Karen for the link to the bug.
 
By "private methods" does that imply both static and instance methods?  I hope not.  
 
With NestMates, the JVMS has been updated with the (non-normative) text:
---
Because private methods may now be invoked from a nestmate class, it is no longer recommended to compile their invocation to invokespecial. (invokespecial may only be used for methods declared in the current class or a superclass.) A standard usage of invokevirtual or invokeinterface works just fine instead, with no special discussion necessary.
---
This implies that private instance methods will have vtable slots, and that adding a new private instance method will be a heavyweight operation due to modifying a vtable(!).
 
Adding static methods, private or not, is less problematic apart from interfaces due to the knock-on effects to iTables.  
 
We can live with the latter (static) though we'd like to avoid the former (instance).
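A small sketch of the pattern behind that concern: with nestmates, javac compiles a nestmate call to a private instance method as invokevirtual (per the quoted JVMS note) rather than through a synthetic accessor.

```
public class Outer {
    private int secret() { return 42; }   // private instance method

    class Inner {
        int read() {
            // Compiled as invokevirtual Outer.secret()I (no access$NNN bridge),
            // which is why the spec text can be read as implying a vtable entry.
            return Outer.this.secret();
        }
    }
}
```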
 
--Dan
 
- Original message -
From: Karen Kinnear
Sent by: "valhalla-spec-experts"
To: valhalla-spec-experts
Cc:
Subject: Re: Valhalla EG minutes Feb 14, 2018
Date: Wed, Feb 21, 2018 12:52 PM
JVMTI RedefineClasses spec handling of private methods is being tracked via:
https://bugs.openjdk.java.net/browse/JDK-8192936

thanks,
Karen

> On Feb 20, 2018, at 10:52 AM, Karen Kinnear wrote:
>
> attendees: Tobi, Mr Simms, Dan H, Dan S, Frederic, Remi, Karen
>
> I. Condy
>
> 1. Condy reference implementation was pushed last week into JDK 11.
>
> 2. StackOverFlow handling/future LDC early cycle detection
> Dan S walked us through his StackOverFlow JVMS clarification for condy, specifically the ordering of resolution
> prior to throwing StackOverFlowError for JDK11 initial Condy release
>
> http://mail.openjdk.java.net/pipermail/valhalla-spec-experts/2018-February/000560.html
>
> AI: implementors - check if this clarification matches implementable behavior
>
> Dan: also described an incremental ldc early detection circularity proposal
>   - not requiring candy’s to refer to entries earlier in the classfile
>   - not depending on an attribute to keep current during retransformation
>   - assume earlier references are the common case, so that is fastest
>   - still work if not in order - need to do static cycle tracking - so slower
>
> question for ASM users - e.g. JRuby, Groovy - as they add Condy support - how
> often do they need forward references?
>
> AI: all - double-check implementation implications
> Dan S - if you want to ask Charlie Nutter to let us know for JRuby going forward ...
>
> post-meeting Update from Dan Smith:
> http://mail.openjdk.java.net/pipermail/valhalla-spec-experts/2018-February/000570.html
>
> AI: All - check if works for ASM and implementors
>
> 3. Planned uses for condy in jdk?
>   - Nothing in imminent plans
>   - expect longer term constant Lambdas to use condy - lightweight
>   - future: still exploring APIs for constants, switch, pattern match, …
>
>  Remi: Python, JRuby - all lambdas are constant
>  Remi: wants support in javac behind a flag
>  Dan S: it is in Amber
>  Remi: wants a binary :-) - Dan S will pass on that message
>
>
> II. Nestmates
>
> 1. Lookup handling
>    AI: Karen to send email with details
> - here it is: http://mail.openjdk.java.net/pipermail/valhalla-spec-experts/2018-February/000567.html
>
> Note: javac will not be generating bridges for private members when nestmate support goes into JDK 11 (soon)
> protected members will still require bridges
>
> 2. Spec updates to JVM Ti, JDWP, JDI, java.lang.instrument
> http://mail.openjdk.java.net/pipermail/valhalla-spec-experts/2018-February/000541.html
>
> Request to update JVMTI retransformation to describe ability to add private methods. Recognize this
> is independent of Nestmates, but perhaps overdue if we intend this to be supported behavior.
>
> AI: Karen - review with past owners of JVMTI specification changes.
>
> III. Value Types
>
> Latest LWorld 

Re: Final CONSTANT_Dynamic spec

2018-02-14 Thread Daniel Heidinga
Thanks for the clarification on this during the call today.
 
Summarizing for any observers:  The BSMs won't have run yet because the JVM would still be setting up the arguments prior to invoking the BSM.  
To resolve CP entry 10, we would need to
* resolve the static arguments to BSM1 (#1, #2, #11, #3)
* which for CP entry 11, would require resolving the static arguments to BSM2 (#4, #12, #5)
* which for CP entry 12, would require resolving the static arguments to BSM2 (#6, #10)
Which is the loop back to CP entry 10.
 
The stackoverflow in this example occurs in the static argument resolution phase, not the BSM invocation.  
 
BSMs that use condy in their implementation can result in side effects, as expected when running user code.
 
--Dan
 
- Original message -
From: Dan Smith
To: Daniel Heidinga
Cc: valhalla-spec-experts@openjdk.java.net
Subject: Re: Final CONSTANT_Dynamic spec
Date: Tue, Feb 13, 2018 6:55 PM
On Feb 12, 2018, at 10:05 PM, Daniel Heidinga <daniel_heidi...@ca.ibm.com> wrote: 

> That said, what side-effects are you concerned about? My claim is that re-resolution in this scenario would not, for example, trigger any class loading or bootstrap invocations.
 
The simplest implementation strategy for a resolution that is guaranteed to throw SOF is to re-execute it each time the constant needs to be re-resolved and let the SOF reoccur naturally.  This would result in new BSM invocations.
 Nope, it shouldn't.

 
Example:
 
Constant pool:
...
#10 CONSTANT_Dynamic(BSM1, foo:I)
#11 CONSTANT_Dynamic(BSM2, bar:D)
#12 CONSTANT_Dynamic(BSM3, baz:J)
 
BootstrapMethods:
BSM1=invokestatic Bootstraps.m1(#1, #2, #11, #3)
BSM2=invokestatic Bootstraps.m2(#4, #12, #5)
BSM3=invokestatic Bootstraps.m3(#6, #10)
 
The spec says that, at the point where resolution of #10 is required by the program (e.g., an ldc), the following are resolved in order:
#1
#2
#4
#6
 
Then the recursion reaches #10, and a SOE occurs. Rather than throwing immediately, the implementation may choose to begin resolution of #10 again, which will trigger re-resolution of #1, #2, #4, and #6, but all of those have completed previously so they're just cache lookups.
 
None of the bootstrap methods m1, m2, or m3 would be invoked before the SOE, because we're still working on building an argument list. If #1, #2, #4, or #6 have bootstrap methods of their own, those would be invoked the first time and never again.
 
You might be thinking of a different case: a bootstrap method that triggers another call to itself within its own body. That's not addressed at all by this rule. The expected behavior falls out from the rules for bytecode evaluation: either there will be some control flow changes to break the cycle, or the stack will run out and you'll get a vanilla SOE.
 
—Dan
 



Re: Final CONSTANT_Dynamic spec

2018-02-12 Thread Daniel Heidinga
> That said, what side-effects are you concerned about? My claim is that re-resolution in this scenario would not, for example, trigger any class loading or bootstrap invocations.
 
The simplest implementation strategy for a resolution that is guaranteed to throw SOF is to re-execute it each time the constant needs to be re-resolved and let the SOF reoccur naturally.  This would result in new BSM invocations.
 
> My use of "observable to users" was not meant to include things that are implementation details anyway, like stack traces, CPU usage,
 
Ok.  There are a lot of different ways re-resolution can be observed, though - the most obvious would be a static counter updated each time the BSM is invoked.  The counter would be updated by any re-resolutions.
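A minimal sketch of that observation point, assuming a user-written condy bootstrap (the class and method names are invented): every invocation bumps a static counter, so a re-resolution that re-ran the BSM would be user-visible.

```
import java.lang.invoke.MethodHandles;

public class CountingBootstraps {
    static int invocations = 0;   // user-visible side effect

    // Condy bootstrap: (Lookup, String, Class) -> value of the requested type.
    public static int answer(MethodHandles.Lookup lookup, String name, Class<?> type) {
        invocations++;            // a re-resolution that re-invoked this BSM would show up here
        return 42;
    }
}
```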
 
--Dan
 
- Original message -
From: Dan Smith
To: Daniel Heidinga
Cc: valhalla-spec-experts@openjdk.java.net
Subject: Re: Final CONSTANT_Dynamic spec
Date: Wed, Feb 7, 2018 6:55 PM
 
On Feb 4, 2018, at 9:59 AM, Daniel Heidinga <daniel_heidi...@ca.ibm.com> wrote: 

Dan,
 
Can you clarify this sentence:

Note that an implementation is free to try to re-resolve X multiple times and keep looping until the stack actually overflows—once you've reached the nested X the first time, all further computation is not observable by users.

 
In particular this statement "not observable by users" has some unpleasant implications.  Does that mean that side-effects that occur during the re-resolution should be undone? That the StackOverflowError's backtrace should only include a single loop of the re-resolution?
 
There are a lot of ways that users could observe further computation and if the JVM needs to detect and prevent them from seeing the effects, this actually mandates early detection.  Was that the intention or am I being overly pedantic?
 The dashes (-) are to delineate the actual spec. The sentence you're asking about is just me explaining for this mailing list how this rule interacts with our current implementation strategy.

 
So, nothing being mandated here.
 
That said, what side-effects are you concerned about? My claim is that re-resolution in this scenario would not, for example, trigger any class loading or bootstrap invocations.
 
My use of "observable to users" was not meant to include things that are implementation details anyway, like stack traces, CPU usage, debugger interactions, etc.
 
—Dan
 



Re: Final CONSTANT_Dynamic spec

2018-02-04 Thread Daniel Heidinga
Dan,
 
Can you clarify this sentence:

Note that an implementation is free to try to re-resolve X multiple times and keep looping until the stack actually overflows—once you've reached the nested X the first time, all further computation is not observable by users.

 
In particular this statement "not observable by users" has some unpleasant implications.  Does that mean that side-effects that occur during the re-resolution should be undone? That the StackOverflowError's backtrace should only include a single loop of the re-resolution?
 
There are a lot of ways that users could observe further computation and if the JVM needs to detect and prevent them from seeing the effects, this actually mandates early detection.  Was that the intention or am I being overly pedantic?
 
--Dan
- Original message -
From: Dan Smith
Sent by: "valhalla-spec-experts"
To: valhalla-spec-experts
Cc:
Subject: Re: Final CONSTANT_Dynamic spec
Date: Wed, Jan 31, 2018 9:47 PM
> On Jan 18, 2018, at 5:14 PM, Dan Smith wrote:
>
> A proposed final spec for CONSTANT_Dynamic is here:
>
> http://cr.openjdk.java.net/~dlsmith/constant-dynamic.html
>
> There are two significant changes:
>
> 5.4.3: Expanded the rule about concurrent resolution to account for nested resolution in a single thread
>
> 5.4.3.6: Added a resolution-time rule for detecting cycles in static arguments, with some additional discussion about cycles

Discussing this cycle-detection rule with Alex, we agreed that its nondeterminism was not helpful, and that we can make it deterministic without putting undue constraints on implementations. So here's a tweaked version of the rule we'd like to use instead:

-
Let X be the symbolic reference currently being resolved, and let Y be a static argument of X to be resolved as described above. If X and Y are both dynamically-computed constants, and if Y is either the same as X or has a static argument that references X through its static arguments, directly or indirectly, resolution fails with a StackOverflowError **at the point where re-resolution of X would be required**.

~~This rule allows some leeway in the sequencing of the error check: the implementation may resolve some of the static arguments of Y, or may not.~~
-

Note that an implementation is free to try to re-resolve X multiple times and keep looping until the stack actually overflows—once you've reached the nested X the first time, all further computation is not observable by users.

—Dan
 



Re: API Updates: 8191116: [Nestmates] Update core reflection, MethodHandle and varhandle APIs to allow for nestmate access

2018-01-30 Thread Daniel Heidinga
Thanks David.
 
These changes all seem reasonable.  I was going to complain about changing 'will occur' to 'may occur' as 'may' makes it difficult to determine when it will vs. won't occur, but I think it's correct in this case.
 
--Dan
 
- Original message -
From: David Holmes
Sent by: "valhalla-spec-experts"
To: valhalla-spec-experts@openjdk.java.net
Cc:
Subject: API Updates: 8191116: [Nestmates] Update core reflection, MethodHandle and varhandle APIs to allow for nestmate access
Date: Tue, Jan 30, 2018 4:55 AM
I've gone through the API specifications for core reflection, MethodHandles and VarHandles to see what changes are needed to accommodate nestmates and the related invocation rule changes. Turns out there is very little needed and most of what there is is non-normative, just correcting or clarifying particular things. For VarHandle nothing is needed as it defers all access checking to MethodHandle.Lookup.
Bug: https://bugs.openjdk.java.net/browse/JDK-8191116
Webrev: http://cr.openjdk.java.net/~dholmes/8191116/webrev/
Core reflection changes:
These are minimal as core reflection already expresses all access control in terms of "Java language access control" which already allows for nestmate access.
- java/lang/reflect/AccessibleObject.java
   *  Java language access control prevents use of private members outside
-  * their class; package access members outside their package; protected members
+  * their top-level class; package access members outside their package; protected members
This corrects the definition of private access in the Java language.
-  java/lang/reflect/Method.java
   * using dynamic method lookup as documented in The Java Language
-  * Specification, Second Edition, section 15.12.4.4; in particular,
-  * overriding based on the runtime type of the target object will occur.
+  * Specification, section 15.12.4.4; in particular,
+  * overriding based on the runtime type of the target object may occur.
Removed unnecessary reference to "Second Edition". Changed 'will occur' to 'may occur' to account for the different forms of invocation that may apply.
MethodHandle API Changes:
- java/lang/invoke/MethodHandle.java
   * A non-virtual method handle to a specific virtual method implementation
   * can also be created.  These do not perform virtual lookup based on
   * receiver type.  Such a method handle simulates the effect of
-  * an {@code invokespecial} instruction to the same method.
+  * an {@code invokespecial} instruction to the same non-private method;
+  * or an {@code invokevirtual} or {@code invokeinterface} instruction to the
+  * same private method (as applicable).
I tried to clarify that non-virtual invocations are not limited to invokespecial - as private invocations via invokevirtual or invokeinterface are also non-virtual.
- java/lang/invoke/MethodHandles.java
      *
-     * In some cases, access between nested classes is obtained by the Java compiler by creating
-     * an wrapper method to access a private method of another class
-     * in the same top-level declaration.
+     * Since JDK 11 the relationship between nested types can be expressed directly through the
+     * {@code NestHost} and {@code NestMembers} attributes.
+     * (See the Java Virtual Machine Specification, sections 4.7.28 and 4.7.29.)
+     * In that case, the lookup class has direct access to private members of all its nestmates, and
+     * that is true of the associated {@code Lookup} object as well.
+     * Otherwise, access between nested classes is obtained by the Java compiler creating
+     * a wrapper method to access a private method of another class in the same nest.
      * For example, a nested class {@code C.D}
Updated the nested classes description to cover legacy approach and new nestmate approach.
-     * {@code C.E} would be unable to those private members.
+     * {@code C.E} would be unable to access those private members.
Fixed typo: "access" was missing.
      * Discussion of private access:
      * We say that a lookup has private access
      * if its {@linkplain #lookupModes lookup modes}
-     * include the possibility of accessing {@code private} members.
+     * include the possibility of accessing {@code private} members
+     * (which includes the private members of nestmates).
      * As documented in the relevant methods elsewhere,
      * only lookups with private access possess the following capabilities:
      *
-     * access private fields, methods, and constructors of the lookup class
+     * access private fields, methods, and constructors of the lookup class and its nestmates
Clarify that private access includes nestmate access.
-     *  access all members of the caller's class, all public types in the caller's module,
+     *  access all members of the caller's class and nestmates, all public types in the caller's module,
Ditto.
      * When called, the handle will treat the first argument as a receiver
-     * and dispatch on the receiver's type to determine which method
+     * and, for non-pr
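To make the nestmate-aware private access concrete, a small example against the real MethodHandles API, assuming a JDK 11+ runtime with nestmates:

```
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class Host {
    private int secret() { return 42; }

    static class Nested {
        static int readSecret(Host h) throws Throwable {
            // The full-privilege lookup created here has private access, which
            // since JDK 11 extends to private members of all nestmates of Nested.
            MethodHandle mh = MethodHandles.lookup()
                .findVirtual(Host.class, "secret", MethodType.methodType(int.class));
            return (int) mh.invokeExact(h);
        }
    }
}
```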

Re: nestmates JVMTI spec proposal changes

2017-12-20 Thread Daniel Heidinga
We're on board with this proposal.  Any change to the NestMembers, adding or removing members, should fail redefinition.  As should changes to NestHost.
 
Taking this approach now eases the implementation costs and allows us to re-examine this in the future.  
 
--Dan
 
- Original message -
From: Karen Kinnear
Sent by: "valhalla-spec-experts"
To: valhalla-spec-experts
Cc:
Subject: nestmates JVMTI spec proposal changes
Date: Tue, Dec 19, 2017 5:47 PM

I believe we are all in agreement that:
1. Redefinition should NOT be allowed to change the NestHost attribute and
2. Redefinition should NOT be allowed to remove NestMembers (equivalent to reducing access controls)
 
The question was - should we allow Redefinition to add NestMembers or not?
 
Hotspot team would like to propose that at least for the initial nestmates release - we do NOT allow
Redefinition to add NestMembers.
 
In all cases, NOT allow would mean a failure to redefine a class.
Open to suggestions on error that should be returned.
 
This is the least risky approach and allows future reducing restrictions if we find we need to. It is 
extremely difficult to increase restrictions.
 
Discussion:
 
JVMTI redefinition restrictions today:
 
The redefinition may change method bodies, the constant pool and attributes. The redefinition must not add, remove or rename fields or methods, change the signatures of methods, change modifiers, or change inheritance. These restrictions may be lifted in future versions. See the error return description below for information on error codes returned if an unsupported redefinition is attempted. The class file bytes are not verified or installed until they have passed through the chain of ClassFileLoadHookevents, thus the returned error code reflects the result of the transformations applied to the bytes passed into class_definitions. If any error code is returned other than JVMTI_ERROR_NONE, none of the classes to be redefined will have a new definition installed. When this function returns (with the error code of JVMTI_ERROR_NONE) all of the classes to be redefined will have their new definitions installed.
 
Note that redefinition today is not allowed to change modifiers or change inheritance, so no changes that could change access control
behavior.
 
The jigsaw AddExports allows dynamic increasing of access to types.  To ensure that resolution
of a constant pool type entry that fails always fails in the same way, the implementation must cache those
failures.
 
If we were to allow dynamic increases in member access, to ensure that resolution of constant pool member
entries (fields, methods) would fail in the same way, the implementation would need to cache those
resolution failures, which is non-trivial, and would need to ensure that the redefinition implementation
retained errors for resolution from other types.
 
Note that Redefinition is allowed to change inner class attributes today; however those do not
have any effect on the JVM; they just change what reflection returns. Redefinition does not
allow adding trampoline methods.
 
thanks,
Karen
 
 
 
 
 



Re: minutes Valhalla EG June 07, 2017

2017-07-19 Thread Daniel Heidinga
Thanks John.
 
Can you expand on why "int.class.isPrimary() == false"?  Does this depend on being able to retroactively make Integer the box of the int.class value type?
 
> +     * Value type mirrors are never primaries; their corresponding
> +     * box reference types are primaries.
 
Is this an MVT statement or a longer term statement?  With the goal of moving away from MVT's box first model, I would have thought eventually the value type would be the primary and the box would be the secondary.  Is that correct?
 
> +     * TBD:An array type returns the primary class
> +     * of its component type.  
 
This is definitely a useful addition!  We have an equivalent VM-level api for this that makes certain kinds of code easier to write.
 
 
> +     *                                  A primitive type returns the
> +     * corresponding wrapper type.
 
Same as above - does this depend on making Integer <--> int a valuetype relationship?
 
> +     * The user is expected to perform relevant interning,
What action should the VM take if the user doesn't perform interning?  Does the VM need to do any validation of the name / userData pair?  Can the same name be used with different data?
 
makeSecondaryClass(Foo.class, "Foo_A", new Object());
makeSecondaryClass(Foo.class, "Foo_A", new Object());
What's the expected behaviour in this case?
 
From a VM perspective, this api returns new j.l.Class objects with different names and userData, but exactly the same bytecodes, methods, & fields?
 
What name is expected to be printed in a stacktrace?  I don't know about Hotspot but in J9, we record PCs when creating the stacktrace and only decode to names when necessary which would print the Foo.class name, not the user-supplied name.
 
--Dan
 
- Original message -
From: John Rose
To: Daniel Heidinga
Cc: Karen Kinnear , valhalla-spec-experts@openjdk.java.net
Subject: Re: minutes Valhalla EG June 07, 2017
Date: Wed, Jun 21, 2017 11:51 AM

On Jun 21, 2017, at 6:55 AM, Daniel Heidinga <daniel_heidi...@ca.ibm.com> wrote:

 
>  AI: John - send out javadoc to EG
> derived class := Class.derivedClassFactory(Class mainClass, T userData, String name)
 
In the spirit of the usual "5 min before the meeting" ritual action item panic, I'm trying to review the javadoc for this and can't seem to find it.  Can it be sent again?
 
Apologies; here it is, 10 min before the meeting.
 
diff --git a/src/java.base/share/classes/java/lang/Class.java b/src/java.base/share/classes/java/lang/Class.java
--- a/src/java.base/share/classes/java/lang/Class.java
+++ b/src/java.base/share/classes/java/lang/Class.java
@@ -681,6 +681,65 @@
     public native boolean isPrimitive();
 
     /**
+     * Determines if the specified {@code Class} object is the
+     * primary representative of an underlying class file.
+     * Array and primitive classes are never primaries.
+     * Other reference type constants of the form {@code X.class}
+     * are always primaries.
+     * Value type mirrors are never primaries; their corresponding
+     * box reference types are primaries.
+     *
+     * @return true if and only if this class is the primary
+     * representative of its underlying class file
+     */
+    @HotSpotIntrinsicCandidate
+    public native boolean isPrimary();
+
+    /**
+     * Obtains the primary class corresponding to the specified
+     * {@code Class} object, if this class is a secondary class
+     * derived from a primary class
+     * If this class object is a primary class, it returns the
+     * same class object.
+     * TBD:An array type returns the primary class
+     * of its component type.  A primitive type returns the
+     * corresponding wrapper type.  A value type returns the
+     * primary class of its box type.  A specialized generic
+     * returns the primary class of its template type.
+     * OR, an non-total version:Primitive and array classes
+     * do not have associated primary classes; they return
+     * {@code null} for this query.
+     *
+     * @return the primary representative of the underlying class file
+     */
+    @HotSpotIntrinsicCandidate
+    public native Class getPrimaryClass();
+
+    /**
+     * Creates a new non-primary class for the given primary.
+     * This is an internal factory for non-primary classes.
+     * The user is expected to perform relevant interning,
+     * and manage the type of the user-data component.
+     * @param primary the primary class for the new secondary
+     * @param name arbitrary name string, to be the name of the new secondary
+     * @param userData arbitrary reference to associate with the new secondary
+     * @return a fresh secondary class
+     * @throws IllegalArgumentException if the first argument is not a primary class
+     */
+    /*non-public*/
+    @HotSpotIntrinsicCandidate
+    static native Class makeSecondaryClass(Class primary

Re: Valhalla EG minutes 6/21/17

2017-07-19 Thread Daniel Heidinga
There are a couple of different terms that have been used to describe early access features - incubator (jep 11), experiment, or optional.  At least for me, these different terms result in different mental models for how this should work for the VM.  Hopefully, we're all thinking about the same behaviour semantics and are just using different terms but I think it's worth clarifying.
 
1) JEP 11 does a good job of describing incubator and makes it clear that it refers to non-standard modules / java-level code.  Not VM features.
 
2) An optional feature is something you can choose to implement (or not) but which must exist as part of the current JVM spec.  All of its JVM spec changes would need to be in the spec and their enablement / enforcement would depend on whether the VM chose to implement that optional feature.  Removal of the optional feature from the JVM spec would basically be impossible - it may fall into disuse through VMs not implementing it, but it would be hard to remove.
 
3) An experimental feature, on the other hand, is something that would be allowed to have an appendix or set of experimental JVM spec changes that are only enabled by command line option + present in classes with new minor classfile versions.  These features would start life deprecated, and the expectation is that classfiles with this particular minor version would no longer be recognized by future VMs when the experimental features graduate to the real JVM features, providing freedom to experiment without requiring the VM to carry the support burden long term.
 
My understanding has been that with the MVT prototype work, we've been aiming for the 3rd case.  Does this match everyone's expectations?  Anyone think we're aiming for some other point on the spectrum?
 
One of my fears is that we're going to end up with the VM required to support multiple ways of recognizing ValueCapableClasses / ValueTypes, especially if there are spec changes between the different ways (think the mess that is invokespecial and the ACC_SUPER flag) based on attribute vs annotation or classfile version, etc.
 
>> On Jul 5, 2017, at 8:12 AM, Karen Kinnear  wrote:
> > Dan S: class loading in the proposed JVMS: if you see $Value
>>> >    1) first derive the VCC name and see if already resolved
>>> >    2) if not - load the VCC, check properties and derive
>>> >    (ed. note - if see VCC - lazily derive derived value class on touch)
>>>
>>> It's not a requirement that the value class derivation is lazy, correct?
>> Let’s double-check with Dan Smith at today’s meeting. The way I read 5.3 Creation and Loading in
>> http://cr.openjdk.java.net/~dlsmith/values.html
>> it appears to allow lazy derivation as well as eager derivation, which I think is what we both want
>> since it allows implementations to optimize.
>> Our current derivation is also eager.
> > Summary of today's discussion, supplemented with some reflection on what I see as the requirements:
> > - The specification shouldn't care when the class is derived (though it must occur, naturally, no later than resolution of "Foo$Value"); the specification *might* care when error checking occurs, because that's observable.
> > - Current specification draft says error checking occurs when Foo$Value is loaded (5.3), and that "Class or interface creation is triggered by another class or interface D, which references C through its run-time constant pool." So, as a test case: if my program has no references to Foo$Value (direct or reflective), no VCC-related errors would occur.
> > - We could redesign this so that the VCC properties are checked during loading/verification of Foo. I am concerned that, where Foo is version 54.0 and has attribute ValueCapableClass, this sort of error checking will violate the Java SE 10 spec.
 
Doesn't the use of experimental features allow the new classfile versions to define new behaviour?  I would expect the ValueCapableClass attribute to be ignored in a v.54 classfile and only take effect in a v.54.1 classfile so that the semantics can be changed in the future.
> > Elaborating: we're presenting values as an optional feature of Java SE 10. For a JVM that does not implement the optional feature, JVMS 10 says that a ValueCapableClass attribute on a version 54.0 class file will be ignored. JVMS 4.7.1: "any attribute not defined as part of the class file specification must not affect the semantics of the class file. Java Virtual Machine implementations are required to silently ignore attributes they do not recognize." My interpretation of our mission, in adding an optional feature, is to provide new capabilities while having no impact on existing behavior. We can do new things where JVMS 10 specifies an error; we can't generate new errors where JVMS 10 specifies none.
 
New classfile version == new behaviour, right?
> > - We could limit usage/interpretation of "ValueCapableClass" to 54.1 class files. Then eager error checks when loading Foo would be fine. But the ability to work with 54.0 Value
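
As a reading aid for the version-gating idea discussed above, here is a small, self-contained sketch.  The ClassFileInfo record and the attribute spelling are invented for illustration, not the JVMS or prototype API; the point is only that the attribute influences behaviour under the experimental minor version 54.1 and is silently ignored everywhere else (per JVMS 4.7.1), so a 54.0 classfile can never produce new errors.

    // Illustrative only: a toy model of gating an experimental attribute on the
    // classfile minor version.  ClassFileInfo is an invented stand-in, not a real
    // JVM data structure.
    final class VersionGateSketch {
        record ClassFileInfo(int major, int minor, java.util.Set<String> attributes) {}

        static boolean shouldDeriveValueClass(ClassFileInfo cf) {
            boolean experimental = cf.major() == 54 && cf.minor() == 1;
            // For 54.0 (and any other version) the attribute is silently ignored.
            return experimental && cf.attributes().contains("ValueCapableClass");
        }

        public static void main(String[] args) {
            var vcc = java.util.Set.of("ValueCapableClass");
            System.out.println(shouldDeriveValueClass(new ClassFileInfo(54, 0, vcc))); // false
            System.out.println(shouldDeriveValueClass(new ClassFileInfo(54, 1, vcc))); // true
        }
    }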

Re: minutes Valhalla EG June 07, 2017

2017-06-21 Thread Daniel Heidinga
> AI: John - send out javadoc to EG
> derived class := Class.derivedClassFactory(Class mainClass, T userData, String name)
 
In the spirit of the usual "5 min before the meeting" ritual action item panic, I'm trying to review the javadoc for this and can't seem to find it.  Can it be sent again?
 
Thanks,
--Dan
 
- Original message -From: Karen Kinnear Sent by: "valhalla-spec-experts" To: valhalla-spec-experts@openjdk.java.netCc:Subject: minutes Valhalla EG June 07, 2017Date: Wed, Jun 7, 2017 4:28 PM 
Valhalla EG Minutes June 07, 2017
 
attendees: Bjorn, Dan H, Dan S, John, Vlad, Frederic, Lois, Brian, Maurizio, Karen
 
AI ALL:
  Dan Smith sent an initial draft of a JVMS with experimental support for MVT for review.
  Feedback in email requested - sooner rather than later please.
AI ALL:
  Review embedded proposal for issue 1 - John javadoc to avoid exposing internal derived value type name
  Review embedded proposal for EA for handling CONSTANT_Class
 
Timing note:
  Value type exploration is following three timeframes:
     Minimal Value Types Early Access (EA) - goal: ASAP so we can get feedback from initial users
     Minimal Value Types (MVT)   - goal: w/JDK10 for much broader feedback
     Valhalla Value Types - "real" vs. shady values - much richer feature set
  Some of the issues we are exploring - such as type vs. class - will need to evolve, so we need
  to reach decisions on our initial EA stake in the ground ASAP.
  For that - review of and conclusions to JVMS and other open issues is needed.
 
Issue 1: Exposure of mirror and mirror name for the value class
Bjorn: (please correct any inaccuracies)
  IBM implementation does NOT expose the value type mirror name
  ValueType.valueClass is the only way to get the value type mirror
  getClassName returns the same answer
  2 java objects, same underlying data
  no internal derived value type name is exposed
 
John: proposal for breaking the link to the secondary mirror
   Model is that there is one primary mirror and multiple secondary mirrors
   Brian: one nominal class and multiple derived classes analogous to a DirectMethodHandle and derived
MethodHandles
   Later, reflection could add APIs at the Java level to get the secondary mirrors
   - has an initial proposal in which you pass in
       head class, user data (e.g. value type descriptor), user-chosen name
       name is not resolvable, doesn't work for findClass, but visible when reflecting
 
Dan H: do we need to ensure the user name and user data are consistent? That has been an issue in related APIs.
John: no
Karen: assume we can not use this name to look up a class (forName)? just for reflection to print?
John: not for lookup
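
As a reading aid (not from the discussion itself): a user-level toy that mimics the shape of the proposal - one primary mirror plus secondary mirrors carrying user data and a user-chosen name that is printable but never resolvable.  Every name below is invented; the real factory would be a JDK-internal method roughly like the Class.derivedClassFactory signature recorded in the AI below.

    // Illustrative only: a user-level stand-in for the proposed primary/secondary
    // mirror factory.  Nothing here is the real API; it just models the three
    // pieces the proposal passes in (head class, user data, user-chosen name).
    final class SecondaryMirrorSketch {
        static final class Point { int x, y; }   // stand-in for a value-capable class

        record Secondary(Class<?> primary, Object userData, String name) {}

        public static void main(String[] args) {
            Secondary derived = new Secondary(Point.class, "value-type descriptor", "Point$Value");
            System.out.println(derived.name());     // user-chosen name: visible when printing/reflecting
            System.out.println(derived.primary());  // the one resolvable (primary) mirror
            // Class.forName(derived.name()) would fail: the name is not meant for lookup.
        }
    }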
 
Maurizio: this could be useful today (i.e. for EA) for a value class
Issue: Reflection behavior for EA
   Karen: we already agreed reflection will not work - will throw an exception
   Maurizio: it could actually be easier to use John's factory than to throw an exception
 
Timing:
  AI: John - send out javadoc to EG
derived class := Class.derivedClassFactory(Class mainClass, T userData, String name)
  All: evaluate proposal both for doability
       also evaluate for timing: EA vs. MVT?
 
 
Issue 2: Constant Pool representation for derived value type (JVMS term: value class)
 
Goals:
  1. cache point for usage - need separate storage for DVT and VCC
  2. prefer not to do string parsing over and over to get the mode
  3. verifier ensure type safety without additional eager class loading
  4. ensure single resolution of underlying value-capable-class
     (longer-term want single resolution of underlying source classfile)
  5. allow implementations to support older classfiles
  6. tool support - make sure this works for a mix of constant pool changes
     e.g. tools that do not know about new versions still instrument new classfiles
     - need to make sure these still work as much as possible
     - so for these folks we need to not change the meaning of CONSTANT_Class
  7. future - make sure the model works for future derivation from more than one type
     - e.g. Foo
    7a. request that for a Parameterized Type: this_class (name and CONSTANT_Class today)
        allows lazy resolution of the list
    (ed. note: need to discuss details of "lazy" here - loading the class file perhaps,
    but instantiating a type from it will need the parameterizations, so far we have
    conceptually recorded the loaded class file under the "head" type, with default/erased
    parameterizations)
  8. upside opportunity: Constable and pattern matching - helpful if all class objects
     were represented the same way when generating bytecode
     e.g. int.class vs. Integer.class require different handling today (see the short example after this list)
  9. migration: a class should be able to migrate to being a value type
     approach: will require boxing to access, but if you pass for example a boxed value type
     the current client should continue to work
  10. migration: value type to reference?  Open question
 
   11. ed.
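
A short illustration of goal 8 (not from the minutes): the two class literals below compile to different bytecode shapes today, which is the kind of asymmetry the goal wants to smooth over.

    // Illustrative only: Integer.class becomes an ldc of a CONSTANT_Class entry,
    // while int.class becomes a getstatic of java/lang/Integer.TYPE.
    class ClassLiteralSketch {
        public static void main(String[] args) {
            Class<?> a = Integer.class;  // ldc #n          // CONSTANT_Class java/lang/Integer
            Class<?> b = int.class;      // getstatic java/lang/Integer.TYPE
            System.out.println(a + " / " + b);
        }
    }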