On Tuesday, 6 January 2015 at 03:29:39 UTC, Zach the Mystic wrote:
A more likely scenario is that your library starts small enough not to need the @api attribute, then at some point it gets really, really huge. Then in one fell swoop you decide to "@api:" your whole file so that the public interface won't change so often. I'm picking the most extreme case I can think of, in order to argue the point from a different perspective.

Note that if you want auto-inferred attributes during the alpha phase of library development, it's just as trivial to put a general @autoinfer: or @noapi: or whatever you like, and that in turn is a pretty nice signifier to the user "this function's attributes are not guaranteed to be stable".
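
As a rough sketch of what such a module-wide marker might look like in practice (the attribute names @api/@noapi/@autoinfer and their freezing/inference semantics are the proposal's, not existing D, so the snippet below just shows the usage pattern with a placeholder user-defined attribute; real semantics would need compiler support):

module mylib;

enum api;   // placeholder UDA standing in for the proposed marker

@api:       // label syntax applies it to every declaration that follows

int parse(string s)  { return cast(int) s.length; }
int evaluate(int n)  { return n * n; }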

Attribute inference provides convenience, not guarantees.

Indeed. But any publicly-available API is a guarantee of sorts. From the moment people are using something, you can no longer break things with impunity.

If a user was relying on the purity of a function which was never marked 'pure', it's only convenience which allows him to do so: convenience on the user's part, in adding 'pure' to his own function, and on the library writer's part, in *not* adding it.

Nevertheless, if a user relies on that inferred purity (which they will do), and you tweak things so the function is no longer pure, you have broken that downstream user's code. Worse, you'll have done it in a way which flies under the radar until someone actually tries to build against your updated library. You, as the library developer, won't get any automatic warning that you've broken backwards compatibility with your earlier implementation; the downstream user won't get any automatically-documented warnings of this breaking change.
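
To make the failure mode concrete, here is a minimal sketch using a zero-parameter template, since templates are where D already performs attribute inference today; the proposal would extend the same behaviour to ordinary functions, and the names below are invented for the example:

int scale()(int x)          // empty template parameter list, so attributes are inferred
{
    return x * 2;           // v1: the body is pure, so 'pure' is inferred
}

/* A later, seemingly harmless edit -- logging makes the body impure, and the
   inferred 'pure' silently disappears:

int scale()(int x)
{
    import std.stdio : writeln;
    writeln("scale(", x, ")");
    return x * 2;
}
*/

// Downstream code that (knowingly or not) leaned on the inferred attribute:
int twice(int x) pure
{
    return scale(x);        // compiles against v1; a compile error after the edit
}

void main() {}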

If instead you have an explicit "please auto-infer attributes for this function" marker, then at least the user has a very clear warning that any attributes possessed by this function cannot be relied on. (Absence of a guarantee != presence of a statement that there is NO guarantee:-)

Adding @api (or 'extern (noinfer)') cancels that convenience for the sake of modularity. It's a tradeoff. The problem itself is solved either by the library writer marking the function 'pure', or the user removing 'pure' from his own function.

As a library writer, I don't think you can responsibly expect users to bear the burden of fixing undocumented breaking change.

Without @api, the problem only arises when the library writer actually does something impure, which makes perfect sense. It's @api (and D's existing default, by the way) which adds the artificiality to the process, not my suggested default.

I'm not sure exactly what you mean when you talk about D's existing default, but one aspect I think is important: D's default position is that a function has no guarantees, and you _add_ guarantees to it via attributes.

This whole discussion would be quite different if the default was that a function is expected to be @safe, pure, nothrow, etc., and the developer is expected to use attributes to indicate _weakening_ of those guarantees.
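
A rough sketch of that contrast (the "inverted" syntax is purely hypothetical, not real D):

// D today: a bare function promises nothing, and guarantees are opted into.
int f(int x)                    { return x + 1; }   // no guarantees stated
int g(int x) pure @safe nothrow { return x + 1; }   // guarantees added explicitly

// Under the inverted default described above, a bare declaration would already
// mean pure/@safe/nothrow, and the author would opt *out* with something like
// (purely hypothetical syntax):  int h(int x) impure @system throwing { ... }

void main() {}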

It's quite analogous in this respect to the argument about final vs. virtual by default for class methods.

I don't think so, because of so-called covariance. Final and virtual each have their own advantages and disadvantages, whereas inferring attributes only goes one way. There is no cost to inferring in the general case.

I think you have missed the point I was making.

If you have final-by-default for classes, and you accidentally forget to tag a public method as 'virtual', then you can fix that without breaking any downstream user's code. If by contrast you have virtual-by-default and you accidentally forget to tag a public method as 'final', then you can't fix that without the risk of breaking downstream; _someone_ may have relied on that function being virtual.
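
A small D example of that asymmetry (class methods in D are virtual unless marked final; the class names are invented for illustration):

class Widget
{
    void draw() { }              // virtual by default in D
    final void update() { }      // explicitly non-virtual
}

class MyWidget : Widget
{
    override void draw() { }     // downstream code relying on draw() being virtual
}

// If Widget's author later marks draw() 'final', MyWidget stops compiling.
// If update() later loses its 'final', nothing downstream can break --
// overriding it merely becomes possible.

void main() {}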

The situation is very similar here. If your function has no attributes, and then later you add one (say, 'pure'), then you don't do any downstream user any harm. If on the other hand your function _does_ have attributes -- whether explicit or inferred -- and then you remove them, you risk breaking downstream code.
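
The same asymmetry, sketched with an explicit attribute and two library versions selected via -version=V2 purely for illustration (names invented):

version (V2)
{
    double area(double r)              // v2: the 'pure' guarantee has been dropped
    {
        return 3.14159 * r * r;
    }
}
else
{
    double area(double r) pure         // v1: carried the 'pure' guarantee
    {
        return 3.14159 * r * r;
    }
}

// Downstream code that relied on the guarantee:
double diskMass(double r, double density) pure
{
    return area(r) * density;          // fine against v1; an error with -version=V2
}

void main() {}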

If you don't auto-infer, this is not really an issue, because you have to manually add and remove attributes, and so you can never unintentionally or unknowingly remove an attribute. But if you _do_ auto-infer, then it's very easy indeed to accidentally remove attributes that your downstream may have relied on.

My suggestion (I now prefer 'extern(noinfer)') does absolutely nothing except restore D's existing default, for what I think are the rare cases where it is needed. I could be wrong about just how rare using extern(noinfer) will actually be, but consider that phobos, for example, just doesn't need it, because it's too small a library to cause trouble if all of a sudden one of its non-templated functions becomes impure.

I don't think you can reasonably anticipate how much trouble breaking change can cause for your downstreams. At a minimum, _accidental_ and undocumented breaking change is completely unacceptable, and this proposal introduces many easy ways to see it happen.

A quick recompile, a new interface file, and now everyone's using the new thing. Even today, it's not completely marked up with attributes, which indicates that you never even *could* have used it for all it's worth.

Have I convinced you?

I understand why you want it, I just think you underestimate the importance of avoiding accidentally breaking things for downstream.

Note that I'd be much more prepared to be convinced if the proposal was that attributes should be auto-inferred _except_ for public functions, although that has its own range of nastinesses in terms of inconsistent behaviour.
