I suspect there are other orderings too, such as "any nulls beat any novels" or vice versa, which would also be deterministic and potentially more natural to the user.  But before we go there, I want to make sure we have something where users can understand the exceptions that are thrown without too much head-scratching.

If a user had:

    case Box(Head)
    case Box(Tail)

and a Box(null) arrived unexpectedly at the switch, would NPE really be what they expect?  An NPE happens when you _dereference_ a null.  But no one is dereferencing anything here; it's just that Box(null) fell into that middle space of "well, you didn't really cover it, but it's such a silly case that I didn't want to make you cover it either, but here we are and we have to do something."  So maybe we want some sort of SillyCaseException (perhaps with a less silly name) for at least the null residue.
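(For what it's worth, here is a sketch of that Box example using sealed types and record patterns as found in recent Java; the Coin/Head/Tail names are my stand-ins.  The null component matches neither type pattern, so the switch has to do _something_ with the remainder:)

```java
public class BoxDemo {
    // Hypothetical hierarchy standing in for the Head/Tail example.
    sealed interface Coin permits Head, Tail {}
    record Head() implements Coin {}
    record Tail() implements Coin {}
    record Box(Coin c) {}

    static String flip(Box b) {
        return switch (b) {
            case Box(Head h) -> "heads";
            case Box(Tail t) -> "tails";
            // No case matches Box(null): a null component matches neither
            // type pattern, so it falls into the remainder and the runtime
            // must throw something when it arrives.
        };
    }

    public static void main(String[] args) {
        System.out.println(flip(new Box(new Head())));
        try {
            flip(new Box(null));
        } catch (RuntimeException e) {
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```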

On the other hand, ICCE for Box(novel) does seem reasonable because the world really has changed in an incompatible way since the user wrote the code, and they probably do want to be alerted to the fact that their code is out of sync with the world.

Separately (but not really separately), I'd like to refine my claim that `switch` is null-hostile.  In reality, `switch` NPEs on null in three cases: a null enum, String, or primitive box.  And, in each of these cases, it NPEs because (the implementation) really does dereference the target!  For a `String`, it calls `hashCode()`.  For an `enum`, it calls `ordinal()`.  And for a box, it calls `xxxValue()`.  It is _those_ methods that NPE, not the switch. (Yes, we could have designed it so that the implementation did a null check before calling those things.)
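To make that concrete, here is a small throwaway demonstration of all three switch shapes (names are mine); passing null to any of them produces an NPE from the underlying dereference:

```java
public class NullHostile {
    // The three classic NPE-on-null switches: String, enum, and a
    // primitive box.  In each case it is the desugaring's dereference
    // (hashCode(), ordinal(), intValue()) that throws, not the switch.
    enum Coin { HEAD, TAIL }

    static String onString(String s) {
        switch (s) {             // desugars to dispatch on s.hashCode()
            case "heads": return "H";
            default:      return "?";
        }
    }

    static String onEnum(Coin c) {
        switch (c) {             // desugars to dispatch on c.ordinal()
            case HEAD: return "H";
            default:   return "?";
        }
    }

    static String onBox(Integer i) {
        switch (i) {             // unboxes via i.intValue()
            case 0:  return "zero";
            default: return "?";
        }
    }

    public static void main(String[] args) {
        for (Runnable r : new Runnable[] {
                () -> onString(null), () -> onEnum(null), () -> onBox(null) }) {
            try { r.run(); }
            catch (NullPointerException e) { System.out.println("NPE"); }
        }
    }
}
```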



This analysis still does not address situations such as D(E(novel, null)); this example is briefly alluded to at the end of Brian’s initial sketch of the formalism, but unfortunately the sketch does not address multi-parameter deconstruction patterns in detail.  So let’s go through this example: suppose that there are explicit cases that are optimistically total (I like the terminology Brian has provided) on D(E(Shape, Coin)), which might look like this:

    D(E(Round, Head))
    D(E(Round, Tail))
    D(E(Rect, Head))
    D(E(Rect, Tail))

Then I think the residue would consist of

    D(null)
    D(novel)
    D(E(null, null))
    D(E(null, Head))
    D(E(null, Tail))
    D(E(null, novel))
    D(E(Round, null))
    D(E(Rect, null))
    D(E(Round, novel))
    D(E(Rect, novel))
    D(E(novel, null))
    D(E(novel, Head))
    D(E(novel, Tail))
    D(E(novel, novel))

The order shown above is permissible, but some pairs may be swapped, subject to the constraint that if two cases differ in one position and one of them has “null” in that position, then that case must come earlier.
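To sanity-check that list, here is a throwaway sketch that enumerates the residue mechanically (the string encoding of patterns is just for illustration): it takes every D(E(x, y)) combination over the subtypes plus null and novel, removes the four covered cases, and adds D(null) and D(novel), yielding the fourteen residue cases above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class Residue {
    // Sketch: the residue of cases optimistically total on D(E(Shape, Coin)).
    static List<String> residue() {
        List<String> shapes = List.of("null", "novel", "Round", "Rect");
        List<String> coins  = List.of("null", "novel", "Head", "Tail");
        Set<String> covered = Set.of(
            "D(E(Round, Head))", "D(E(Round, Tail))",
            "D(E(Rect, Head))",  "D(E(Rect, Tail))");
        // D(null) and D(novel) arise before E is ever deconstructed.
        List<String> out = new ArrayList<>(List.of("D(null)", "D(novel)"));
        for (String s : shapes)
            for (String c : coins) {
                String pat = "D(E(" + s + ", " + c + "))";
                if (!covered.contains(pat)) out.add(pat);
            }
        return out;
    }

    public static void main(String[] args) {
        Residue.residue().forEach(System.out::println);
    }
}
```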

If we wish the behavior to be deterministic, it would be Java-like to insist (1) that the cases be listed in an order consistent with an increasing lexicographic partial order, where null < novel, and (2) that sub-patterns effectively be processed from left to right.  Under these rules, the cases

    D(E(null, null))
    D(E(null, novel))

would raise NPE, and

    D(E(novel, null))
    D(E(novel, novel))

would raise ICCE.
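The left-to-right rule can be sketched mechanically (again just an illustration, with strings standing in for sub-patterns): scanning a residue case's sub-patterns from left to right, the first "null" encountered means NPE, and the first "novel" means ICCE.

```java
public class Classify {
    // Sketch of the left-to-right determinism rule: the leftmost
    // residue-causing sub-pattern decides which exception is thrown.
    static String exceptionFor(String... subpatterns) {
        for (String s : subpatterns) {
            if (s.equals("null"))  return "NPE";
            if (s.equals("novel")) return "ICCE";
        }
        return "no exception";
    }

    public static void main(String[] args) {
        System.out.println(exceptionFor("null", "null"));    // NPE
        System.out.println(exceptionFor("null", "novel"));   // NPE
        System.out.println(exceptionFor("novel", "null"));   // ICCE
        System.out.println(exceptionFor("novel", "novel"));  // ICCE
        System.out.println(exceptionFor("Round", "null"));   // NPE
    }
}
```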


