Just so we don't lose this history, a reminder that back when we settled on the 3 
buckets, we viewed it as a useful simplification from a more general approach with lots 
of "knobs". Instead of asking developers to think about 3-4 mostly-orthogonal 
properties and set them all appropriately, we preferred a model in which *objects* and 
*primitive values* were distinct entities with distinct properties. Atomicity, 
nullability, etc., weren't extra things to have to reason about independently; they were 
natural consequences of what it meant to be (or not) a variable that stores objects.

Indeed; it is often a process of "spiraling", where we seem to return to places we've already been, but perhaps in a lower energy state. We came by the earlier bucket model honestly, as it approximated the use cases we envisioned as most important. I think it's time to rethink the three-bucket model, not because three is too big or too small a number, but because (a) the relationship between the buckets is complex, (b) it forces users into some difficult choices between semantics and performance, and (c) we have real concerns that hiding the permission to tear behind some proxy (e.g., "non-null" or "B3") will be too subtle and potentially astonishing.

That was a while ago, and we may have learned some things since then, but I think there's 
still something to the idea that we can expect everybody to understand the difference 
between objects and primitives, even if they don't totally understand all the 
implications. (When they eventually discover some corner of the implications, we hope 
they'll say, "oh, sure, that makes sense because this is/isn't an object.")

I think this is true for all the aspects _except_ tearing. I tried the argument "it can tear because it's not an object" on for size, and I fully expect people to forget it routinely.
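
[Editor's aside: to pin down the term, the tearing at issue is the same phenomenon the JLS already permits for non-volatile long and double under a data race (JLS 17.7): a racy read may observe half of one write and half of another. A minimal sketch in plain Java, deliberately using an ordinary long rather than anything Valhalla-specific; on typical 64-bit JVMs longs are written atomically in practice, so this will likely print nothing.]

    // Sketch only: JLS 17.7 allows a non-volatile long write to be split into
    // two 32-bit halves under a data race. The reader checks for a value that
    // mixes bits from two different writes, i.e. a torn read.
    public class LongTearingSketch {
        static long shared;   // deliberately neither volatile nor final

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                for (long i = 0; i < 50_000_000L; i++) {
                    shared = ((i & 1) == 0) ? 0L : -1L;   // all-zero bits vs all-one bits
                }
            });
            Thread reader = new Thread(() -> {
                for (long i = 0; i < 50_000_000L; i++) {
                    long v = shared;
                    if (v != 0L && v != -1L) {            // half of one write, half of another
                        System.out.println("torn read: 0x" + Long.toHexString(v));
                    }
                }
            });
            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }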


My inclination would probably be to abandon the object/value dichotomy, revert to "everything is an object", perhaps revisit our ideas about conversions/subtyping between ref and val types, and develop a model that allows tearing of some objects. Probably all do-able, but I'm not sure it's a better model.

I don't think we have to go that far. Just as Valhalla questions the previously universal property of "all objects have identity", we can play the same game with "all objects provide integrity guarantees" (final-field semantics). Some classes can shed identity; some can further shed the integrity requirements. (Both require a judgment on the part of the class author.) We can then optimize accordingly.

By factoring out atomicity/integrity as an orthogonal semantic constraint, we get to a lower energy state for the B2-vs-B3 distinction: it reduces to "does this class have a good zero?" Complex does; LocalDate does not. And we get a simpler performance consequence for B3.ref vs B3.val: at most an extra bit of footprint. Both are easier to understand.
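
[Editor's aside: a minimal sketch of what "has a good zero" means, written with ordinary records rather than any proposed Valhalla syntax, since that is exactly what is in flux; SimpleDate is just an illustrative stand-in for LocalDate.]

    // "Good zero" asks whether the all-fields-zero default is itself a
    // legitimate value of the class.

    // new Complex(0.0, 0.0) is a perfectly meaningful value (the complex
    // number zero), so a default-initialized, flattened Complex field is harmless.
    record Complex(double re, double im) { }

    // Stand-in for LocalDate: the all-zero instance (year 0, month 0, day 0)
    // can never come out of this constructor, so a default-initialized field
    // would expose a value the class forbids -- there is no good zero.
    record SimpleDate(int year, int month, int day) {
        SimpleDate {
            if (month < 1 || month > 12 || day < 1 || day > 31)
                throw new IllegalArgumentException(year + "-" + month + "-" + day);
        }
    }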

