On Thu, May 12, 2022 at 5:22 AM Brian Goetz <brian.go...@oracle.com> wrote:

     - there is a nullability-injecting conversion from T! to T? (this
    is a widening conversion)


I think we'd expect full subtyping here, right? It needs to work for covariant arrays, covariant returns, type argument bounds, etc.

There are two questions here, one at the language level and one at the VM level.

At the VM level, `I` is not going to be a subtype of `LInteger`.  At the language level, we have a choice of whether to use subtyping or widening conversions, but given that the VM is expecting a widening conversion, it is probably better to align to that.  (Similarly, the distinction between int and Integer in overload selection is based on the assumption that they are not subtypes, but instead related by conversions.)

So while, abstractly, one value set may be a subset of the other, which says that at least _structurally_ these are subtypes, we try to avoid having subtype relationships between things that use different representations, because such relationships create difficult seams in translation; we lean on the conversion machinery instead.

In practice, the distinction between "int widens to long" and "int <: long" is not particularly visible, except in corner cases like "boxing is allowed in loose invocation contexts but not strict invocation contexts."
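
For concreteness, that corner case can be observed in today's Java (class and method names here are just illustrative):

    public class OverloadPhases {
        static void pick(long x)    { System.out.println("long"); }
        static void pick(Integer x) { System.out.println("Integer"); }

        public static void main(String[] args) {
            // Strict invocation (phase 1 of overload resolution) permits the
            // widening conversion int -> long but not the boxing conversion
            // int -> Integer, so pick(long) wins without boxing ever being
            // considered.
            pick(1);  // prints "long"
        }
    }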


    and then we get to decide: which conversions are allowed in
    assignment context?  Clearly a nullability-injecting conversion is
    OK here (assigning String! to String? is clearly OK, it's a
    widening), so the question is: how do you go from `T?` to `T!`?
    Options include:

     - it's like unboxing, let the assignment through, perhaps with a
    warning, and NPE if it fails
     - require a narrowing cast


Yes, I do think we want a cast there (a special operator for it is very helpful so you don't have to repeat the base type), but as far as I know the case could be made either way for error vs. warning if the cast isn't there.

This is the decision point I want to highlight; while one might at first assume "well obviously you should explicitly convert", there are actually more choices than the obvious one, and it is a decision that should be made deliberately.
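
For reference, here is the unboxing precedent that the first option leans on, shown with today's types (a minimal sketch; `Integer` plays the role of `T?` and `int` the role of `T!`):

    public class UnboxingPrecedent {
        public static void main(String[] args) {
            Integer maybe = null;    // stands in for a T? value
            int definitely = maybe;  // compiles silently today, like option 1;
                                     // throws NullPointerException at run time
            System.out.println(definitely);
        }
    }

Under the second option, the assignment above would be a compile-time error unless the programmer wrote an explicit narrowing cast.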

But suppose the *class* is identifiable in some way as friendly to that default value. I'm still struggling to think through whether we also strictly need something at the use site equivalent to `.val`, or whether just knowing the nullness bit is enough. It may be fundamentally the same question you're asking; I'm not sure.

I think we may be saying the same thing.  It is a declaration-site property as to whether we want to tolerate uninitialized values.  We do for int; we probably also do for Complex, not only because "it's a number and the existing numbers work that way", but because there is a performance tradeoff: being intolerant of uninitialized values has a footprint cost, and effectively doubling the size of a flat `Complex[]` will not be appreciated.
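
To put rough numbers on that footprint cost (illustrative arithmetic only; real layouts are VM- and platform-specific):

    public class FlatFootprint {
        public static void main(String[] args) {
            long n = 1_000_000;
            long zeroTolerant = 2L * Double.BYTES;  // re + im = 16 bytes, fully flat
            // Tolerating "uninitialized" requires an extra null channel; with
            // the power-of-two alignment that flat layouts typically want,
            // 16 bytes plus anything rounds up to 32 -- the array effectively
            // doubles in size.
            long nullTolerant = 2 * zeroTolerant;
            System.out.println("zero-tolerant Complex[] payload: " + n * zeroTolerant + " bytes");
            System.out.println("null-tolerant Complex[] payload: " + n * nullTolerant + " bytes");
        }
    }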

For such a zero-tolerant class, there is still room to make the choice at the use site as to which flavor you want.  One positive consequence of having decomplected atomicity from { nullity, primitive-ness } is that it becomes *possible* to spell this distinction with emotional sigils, rather than some weirder thing (e.g., `.val`).


    What this short discussion has revealed is that there really are
    two interpretations of non-null here:

     - In the traditional cardinality-based interpretation, T! means:
    "a reference, but it definitely holds an instance of T, so you
    better have initialized it properly"
     - In the B3 interpretation, it means: "the zero (uninitialized,
    not-run-through-the-ctor) value is a valid value, so you don't
    need to have initialized it."


I'm not sure these are that different. I think that as types they are the same. It's the conjuring of default values, specifically, that differs: we can do it for B2, B3, and B3!, but we don't know how to find one for B2!. But that's not a complication; it's precisely what we're saying B2 exists for: to stop that from happening.
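
Today's arrays already exhibit this split, which may help ground the terminology (the bucket labels in the comments are just this thread's names mapped onto existing behavior):

    public class ConjuredDefaults {
        public static void main(String[] args) {
            int[] ints = new int[3];        // {0, 0, 0}: zero is a valid int, so a
                                            // default can be conjured (the B3 story)
            String[] strs = new String[3];  // {null, null, null}: null stands in
                                            // for "uninitialized" (the B1/B2 story)
            // What has no analogue today is a non-null String array created
            // without initialization -- the B2! case, where there is no
            // default value to conjure.
            System.out.println(ints[0] + " / " + strs[0]);  // 0 / null
        }
    }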


This question is at the heart of this sub-thread.

I think what you are saying is that for ref-only classes (B1 and B2), `T!` is a _restriction_ type (which we will probably erase to the erasure of T), whereas for zero-capable classes (B3), `T!` is a true projection, which makes the null value *unrepresentable*, and that you're OK with that.
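
An analogy from today's Java may make the restriction/projection split concrete (illustrative only; the `T!` syntax itself remains hypothetical): erased generics behave like restriction types, while `int` behaves like a true projection.

    import java.util.ArrayList;
    import java.util.List;

    public class RestrictionVsProjection {
        public static void main(String[] args) {
            // Restriction-like: List<String> erases to List; the restriction
            // lives only in the static type system, not in the representation.
            List<String> names = new ArrayList<>();
            System.out.println(names.getClass());  // class java.util.ArrayList

            // Projection-like: int has a genuinely different representation
            // from Integer, one in which null is simply unrepresentable.
            int i = 42;
            System.out.println(i);
        }
    }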
