Note that this is really just about primitives, as they are the only ones whose value sets have non-trivial intersection.  Value types, being non-polymorphic and having no non-trivial overlap, won't have this problem.

Arguably, for strongly typed literals ("case 0.0f"), we could allow them against a target type of Object or Number, since there's only one type they could mean, but I don't see the return-on-spec-complexity here.
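
(For the record, what a "case 0.0f" against an Object or Number target would presumably boil down to, written as plain Java -- my reading of the intended semantics, not spec text:)

    static boolean matchesFloatZeroLiteral(Object o) {
        // The literal is strongly typed, so only a Float could ever match.
        return o instanceof Float && (Float) o == 0.0f;
    }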



On 11/3/2017 4:30 PM, Remi Forax wrote:
I'm happy with choice #3 too.

#2 is a sad choice because its semantics is not explicit:
#2 means instanceof + unboxing + widening, but nowhere in the syntax does the wrapper type used for the instanceof and the unboxing appear. Not having the wrapper type mentioned doesn't pass my semantics smell check.

regards,
Rémi

------------------------------------------------------------------------

    *De: *"Brian Goetz" <[email protected]>
    *À: *"Gavin Bierman" <[email protected]>,
    "amber-spec-experts" <[email protected]>
    *Envoyé: *Vendredi 3 Novembre 2017 20:37:20
    *Objet: *Re: Patterns design question: Primitive type tests

    As I outlined in the mail on the survey, I think there are three
    possible ways to treat primitive type test patterns and numeric
    constant patterns (when the target type is a reference type):

    1.  Treat them as if they were synonyms for their box type.
    2.  Treat them as matching a set of values; for example, "int x"
    matches integers in the traditional 32-bit range, unboxing numeric
    targets and comparing their values (see the sketch after this list).
    3.  Outlaw them, to avoid confusion or to preserve the opportunity
    to do either (1) or (2) later.
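
    (To make the value-set reading in #2 concrete, here is roughly what
    "case int x" against an Object target would have to check, written as
    plain Java.  This is only my sketch of the semantics -- in particular,
    whether a Long holding an in-range value participates is an assumption,
    and the floating-point cases are elided:)

        static boolean matchesIntPattern(Object o) {
            // Integer, Short, Byte and Character values are always within
            // the 32-bit int range.
            if (o instanceof Integer || o instanceof Short
                    || o instanceof Byte || o instanceof Character)
                return true;
            // Unbox a wider numeric target and compare its value.
            if (o instanceof Long) {
                long l = (Long) o;
                return Integer.MIN_VALUE <= l && l <= Integer.MAX_VALUE;
            }
            return false;
        }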

    To my mind, I think #2 is the "right" answer; I think #1 would be
    a sad answer.  But there are two additional considerations I'd add:
     - As the survey showed, there would be a significant education
    component to choosing #2; and
     - There isn't really an overwhelming need for being able to say
    "Is this Object a numeric zero" or "Is this object a boxed
    primitive in the range of int."

    Taken together, these lead me to #3 -- rather than choose between
    something sad and something that makes developers' heads explode,
    just do neither.  I don't think this is a bad choice.

    Concretely, what I'd propose is:

    Only allow primitive type test patterns in type-restating
    contexts.  This means that

        switch (anObject) {
            case int x: ...
        }

    is no good -- you'd have to say Integer x or Number x or something
    more specific.  But you could say:

        switch (anObject) {
            case Point(int x, int y): ...
        }

    because the types of the extracted components of Point are int,
    and therefore the type test pattern is type-restating (statically
    provable to match).
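
    (For concreteness, the Point here could be any class whose extracted
    components are statically typed int -- an assumed carrier for the
    example, not part of the proposal:)

        final class Point {
            final int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
            // the components a deconstruction pattern would extract, both int
            int x() { return x; }
            int y() { return y; }
        }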

    Similarly, for numeric constant patterns, only allow them in
    switches where the target type is a primitive or a primitive box.
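
    (For example, ordinary constant cases against a primitive target are
    exactly what we have today -- nothing hypothetical in this one:)

        static String describe(int code) {
            switch (code) {
                case 0:  return "zero";
                case 1:  return "one";
                default: return "other";
            }
        }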

    There are ample workarounds where the user can explicitly say what
    they want, if they need to -- but I don't think it will actually
    come up very often.  And this choice leaves us the option to
    pursue either #1 or #2 later, if it turns out that we
    underestimated how often people want to do this.
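
    (Concretely, the kind of explicit workaround I mean for "is this Object
    a numeric zero" looks like this in today's Java; the helper name and the
    exact set of boxes checked are illustrative, not a prescribed idiom:)

        static boolean isNumericZero(Object o) {
            // Spell out each box explicitly rather than relying on any
            // implicit unboxing in a pattern.
            if (o instanceof Integer)   return (Integer) o == 0;
            if (o instanceof Long)      return (Long) o == 0L;
            if (o instanceof Short)     return (Short) o == 0;
            if (o instanceof Byte)      return (Byte) o == 0;
            if (o instanceof Character) return (Character) o == 0;
            if (o instanceof Double)    return (Double) o == 0.0d;
            if (o instanceof Float)     return (Float) o == 0.0f;
            return false;
        }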

    This also sidesteps the question of dominance, since the confusing
    cases below (like Integer vs int) will not come up except in
    situations where we can prove they are equivalent.


    On 11/3/2017 6:47 AM, Gavin Bierman wrote:


            Primitive type-test patterns

        Given that patterns include constant expressions, and type
        tests possibly including generic types, it seems reasonable to
        consider the possibility of allowing primitive type tests in
        pattern matching. (This answers a sometimes-requested feature:
        can |instanceof| support primitive types?)

        However, it is not wholly obvious what this test might mean.
        One possibility is that a “type-restating” equivalent for
        primitive type-test patterns is assignment conversion; e.g. if
        I have

        |case int x:|

        then a target whose static type is |byte|, |short|, |char|, or
        |int| – or their boxes – will be statically deemed to match.

        A target whose /dynamic/ type can be assigned to the primitive
        type through a combination of unboxing and widening (again,
        assignment conversion) matches a primitive type test. So if we
        have:

            switch (o) {
                case int i: ...
            }

        we have to do |instanceof| tests against
        {|Integer|, |Short|, |Character|, |Byte|} to determine a match.
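
        (Spelled out in today's Java, and assuming the set above, that
        dynamic test amounts to something like the following -- a sketch of
        the semantics, not a proposed translation strategy:)

            static boolean matchesIntViaUnboxWiden(Object o) {
                // unboxing (Integer) or unboxing + widening (Short,
                // Character, Byte) reaches int under assignment conversion
                return o instanceof Integer
                    || o instanceof Short
                    || o instanceof Character
                    || o instanceof Byte;
            }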

        A primitive type test pattern dominates other primitive type
        patterns according to assignment compatibility;
        |int| dominates |byte|/|short|/|char|, |long| dominates
        |int|/|byte|/|short|/|char|, and |double| dominates |float|.

        A primitive type test pattern is inapplicable (dead) if cast
        conversion from the static type of the target fails:

            Map m;
            switch (m) {
                case int x:    // compile error
            }

        The dominance interaction between primitive type-tests and
        reference type-tests for the wrapper types (and their
        supertypes) seems messy. Consider the following combinations:

            case int n: case Integer n:    // dead
            case Integer n: case int n:    // not dead -- still matches Short, Byte
            case Byte b: case byte b:      // dead
            case Number n: case int n:     // dead

        Is there some unifying theory that makes sense here? One
        possibility is to take a more denotational view: a type is a
        set of values, so type restatement is really about semantic
        set inclusion, and dynamic testing is about set membership. Is
        this adding too much complexity? Do developers really care
        about this feature?



