In the general case of user-defined patterns, the compiler may not be able to prove this.  At the other extreme, in the specific case of deconstructors automatically provided for records, it may always be safe to perform such optimizations (we should look at this).  In between, maybe some flow analysis would allow a simple proof of safety for some common cases of user-defined deconstructors or patterns.
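To make the optimization concrete (a minimal sketch in today's record-pattern syntax; Circle and describe are illustrative names, not part of the design), matching the same component in two places naively calls the accessor twice, and folding those calls into one is only sound if extraction is provably side-effect-free:

    record Circle(double radius) { }

    static String describe(Object o) {
        // A naive translation calls Circle::radius in each test; reusing a
        // single extracted value across both tests is only safe if the
        // accessor (or a user-defined deconstructor) is known to be pure.
        if (o instanceof Circle(double r) && r > 1.0) return "big circle";
        if (o instanceof Circle(double r)) return "circle";
        return "other";
    }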

Because of separate compilation, I'm not sure there's even much we can do for records.  Just as we can declare a record without an explicit constructor:

    record R(int a) { }

or with an explicit canonical constructor:

    record R(int a) {
        R(int a) { System.out.println("foo!"); this.a = a; }
    }

and the clients can't tell the difference, the same is true for the deconstructor.  A record declared without a deconstructor will have one provided at no cost, but if you declare a canonical deconstructor, it replaces the effect of the default (whether compiler- or runtime-provided).  Separate compilation means that we have to choose our client code-generation strategy without knowing whether an explicit or implicit deconstruction pattern is in use.
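By analogy with how record patterns translate against accessors today (a sketch; the deconstructor translation would presumably be parallel), the client below compiles the same way whether or not R declares its accessor explicitly:

    record R(int a) { }    // implicit a(); an explicit int a() changes nothing for clients

    static int unwrap(Object o) {
        // Translates to an instanceof test plus a call to R::a; the client
        // classfile cannot (and need not) distinguish an implicit accessor
        // from a hand-written one, and the same should hold for deconstructors.
        if (o instanceof R(int a)) return a;
        return -1;
    }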

Just as with record constructors, we wanted it to be a binary-compatible change to add or remove an explicit canonical ctor; the same should be true for the dtor.
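For constructors this is concretely observable today (a sketch; R and Client are illustrative names, and the two versions of R are meant to be compiled one at a time): both versions expose the same R(int) descriptor, so a Client compiled against one links and runs against the other without recompilation:

    // Version 1: implicit canonical constructor.
    record R(int a) { }

    // Version 2: explicit canonical constructor; same descriptor R(int).
    record R(int a) {
        R(int a) { this.a = Math.max(a, 0); }
    }

    // Compiled against either version, this refers to R.<init>(I)V,
    // which exists in both, so swapping versions is binary compatible.
    class Client {
        static R make() { return new R(42); }
    }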
