Emphatically "wearing no hat" here, just speaking as a long-time participant:
On Sat, Apr 8, 2023 at 2:13 PM Mark Alley <mark.alley=40tekmarc....@dmarc.ietf.org> wrote:

> Re-looking at the definition of "SHOULD NOT", I don't see why it can't be
> considered.
>
> "SHOULD NOT - This phrase, or the phrase "NOT RECOMMENDED" mean that there
> may exist valid reasons in particular circumstances when the particular
> behavior is acceptable or even useful, but the full implications should be
> understood and the case carefully weighed before implementing any behavior
> described with this label."
>
> Seems to fit perfectly with how domain owners currently can pick and
> choose interoperability with p=none over more strict protection, or vice
> versa with p=reject, in my opinion. Is that not considered "acceptable" by
> this definition's context?

IMHO, absolutely not. Since one of the IETF's main goals in producing a technical specification is interoperability, and since improperly deployed "p=reject" results in the very essence of non-interoperability in the deployed email base, I'm having trouble imagining why the standard should leave operators with any choice here. That is, in direct reply to the cited definition of "SHOULD NOT", I claim there do not exist valid reasons in particular circumstances when the particular behavior is acceptable, even when the full implications are understood and the case carefully weighed.

(Note, here, that Barry has in his proposed text limited the constraint to those types of deployments where the damage is likely. I concur. DMARC, as currently defined, works just fine when deployed in transactional situations. Or, at least, I haven't seen that identified as a problem case.)

Mike Hammer asks, reasonably, whether an IETF standard containing a "MUST NOT" that we know people will ignore calls into question the IETF's relevance or legitimacy. But I submit that the IETF issuing a standards-track document which fails to take the strongest possible stance against deploying DMARC in a way that knowingly imposes substantial breakage, for any reason, is irresponsible and is the greater threat to our legitimacy.

Keep in mind that improper deployment of DMARC results in damage to innocent third parties: it's not the sender or the MLM that's impacted, it's everyone else on the list. It's breathtaking to me that we can feel comfortable shrugging this off under the banner of "security" or "brand protection".

In a separate email, Doug Foster just said:

> I also have a hard time with the notion that any domain with a potential
> exception becomes a domain that MUST NOT protect itself from impersonation.

But it's not "MUST NOT protect itself from impersonation", it's "MUST NOT use DMARC to protect itself from impersonation" when the use of the domain includes non-transactional operations likely to be disruptive.

Imagine a web server protocol in which, on receiving a proxied connection, the server looks at the request, sees something it doesn't like, and rejects it, but in the process also fouls up some other active, legitimate operation. Imagine further that the only defensive posture against this disruption is a "SHOULD NOT". Whatever benefit such an algorithm might claim, should it be given a place among the other standards the IETF produces? I would hope the answer is obvious. And if we're not willing to tolerate it in the web world, why are we willing to tolerate it for email?
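(For concreteness, the policy settings at issue throughout this thread are published in a DNS TXT record at _dmarc.<domain>. A minimal illustration, using a placeholder domain and report address rather than anyone's real deployment:

  _dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
  _dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

The first asks receivers to take no action on messages that fail DMARC while still returning aggregate reports; the second asks them to reject such messages outright, which is the behavior under discussion when the domain's users also post to mailing lists.)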
The IETF has no illusion that it is the standards or protocol police. It is sufficient, however, to be able to say in the face of such breakage that this is not how the IETF intended DMARC to be deployed. (A similar debate exists already, for what it's worth, in the domain registration space.) That is, if you do "p=reject" when you know that what you're doing is going to clobber other people's legitimate operations, you can't claim to be operating in compliance with the standard. We need to be able to say that, even if the offender doesn't care to listen, and "SHOULD NOT" simply doesn't cut it.

Mike also likes to invoke King Canute, but I think that's a faulty analogy. DMARC does not deserve elevation in our calculus to the equivalent of a force of nature. It was built and deployed by humans, who often make mistakes or have agendas. The same cannot be said of the ocean or the tides.

Finally, and for the only part with my AD hat on, but askew: Scott has proposed a couple of good alternatives to consider, though one of them includes "MUST consider". I have placed a DISCUSS on formulations like this in other documents before, because I don't know how one would evaluate compliance with such a normative assertion. It reduces in my mind to "OK, I've thought about it, thus I have complied", so it doesn't actually say much in defense of interoperability.

-MSK, participating