It is not at all clear that my goals for this effort match those of others, so I will state mine:
My goal is to develop documents that help evaluators make better disposition decisions, to save civilization from as much malicious content as possible. My inference, from my own piece of reality, is that this effort has large upside potential.

Sender authentication is the core of this effort because it is something that we can actually specify. Defining a standard for defenses against malicious content seems infeasible; for all its difficulties, sender authentication is relatively feasible. Verification of RFC5322.From is the linchpin of sender authentication, because the RFC5322.From address links the user-visible content to the invisible message metadata and SMTP protocol parameters.

DMARC defines a formula for declaring the RFC5322.From address verified, thereby separating a general mailstream into two sub-streams: the verified portion and the unverified portion. This group has taken the position that a message is verified only if the domain owner has published a DMARC policy and the message produces DMARC PASS. From my data, that works out to about 40% of the traffic, most of it from Gmail, and my results seem roughly consistent with data from other sources. This means that 60% of the traffic has no defined method for detecting possible impersonation, so network safety depends entirely on the strength of your content filtering.

However, the DMARC concept of aligned SPF PASS or aligned DKIM PASS is applicable to any message, with or without a DMARC policy. There is a minor complication about choosing between relaxed and strict alignment, which is solvable by local policy. Applying the DMARC formula to a general mailstream, the DMARC-equivalent PASS percentage jumps to roughly 85%. The remaining 15% of mail has uncertain impersonation risk, but it is much more manageable than 60%, and it has been a fertile field for investigation.

When I ask, "Why is this message not impersonated?", the investigation produces one of three answers:

- The message stream is acceptable, but needs a local policy so that future messages appear authenticated.
- The message stream is unwanted even though it is not an impersonation, so this and future messages from this sender should be blocked.
- The message stream is from a malicious impersonator, so this and future messages from this sender should be blocked.

Whichever conclusion is reached, the investigation is a one-time effort per message source. So a technical path exists for ensuring 100% authentication of all allowed messages and quarantining the uncertain messages for investigation. In my experience, the middle result dominates. As my recurring, wanted traffic gets an authentication rule and my unwanted traffic gets blocked, the volume of investigations goes down while the percentage of block results goes up.

When I examine the language of RFC 7489 and our proposed documents, I find no path toward 100% authentication. Instead, I see language that drives people to fixate on the 1% of traffic that has a DMARC policy with p=reject. Then we are disappointed if they don't look for exceptions, including mailing list messages, within the Fail subset of that 1%. If we don't change strategy, nothing will change. Desperate evaluators will continue to read our document as a ticket to unconditionally block the tiny portion of their mailstream that produces Fail with p=reject, and, more importantly, they will continue to be vulnerable to impersonation attacks buried in the other 99%.
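To make the evaluation flow I am describing concrete, here is a minimal Python sketch of my reading of it. It is not part of RFC 7489 or any draft; all of the names (Message, dmarc_equivalent_pass, disposition, local_allow, blocked_sources) are hypothetical, and alignment is simplified to a suffix test rather than a proper organizational-domain lookup.

# Sketch only: treat aligned SPF PASS or aligned DKIM PASS as a
# "DMARC-equivalent PASS" for every message, with or without a published
# DMARC policy, then route the remainder through local rules or quarantine.

from dataclasses import dataclass
from enum import Enum, auto


class Disposition(Enum):
    ACCEPT_VERIFIED = auto()   # aligned SPF or DKIM pass: sender authenticated
    ACCEPT_LOCAL = auto()      # allowed by a local authentication rule
    BLOCK = auto()             # source already judged unwanted or impersonating
    QUARANTINE = auto()        # uncertain: hold for a one-time investigation


@dataclass
class Message:
    from_domain: str    # RFC5322.From domain
    spf_domain: str     # RFC5321.MailFrom domain that produced SPF PASS, or ""
    dkim_domain: str    # d= domain of a valid DKIM signature, or ""


def aligned(auth_domain: str, from_domain: str, strict: bool = False) -> bool:
    # Strict alignment: exact match. Relaxed alignment: same organizational
    # domain, simplified here to a suffix test instead of a PSL lookup.
    if not auth_domain:
        return False
    if strict:
        return auth_domain == from_domain
    return (auth_domain == from_domain
            or auth_domain.endswith("." + from_domain)
            or from_domain.endswith("." + auth_domain))


def dmarc_equivalent_pass(msg: Message) -> bool:
    # Aligned SPF PASS or aligned DKIM PASS, applied to every message,
    # whether or not the domain owner has published a DMARC policy.
    return (aligned(msg.spf_domain, msg.from_domain)
            or aligned(msg.dkim_domain, msg.from_domain))


def disposition(msg: Message,
                blocked_sources: set[str],
                local_allow: dict[str, set[str]]) -> Disposition:
    # Block rules first, then the DMARC-equivalent test, then local allow
    # rules; everything left over is quarantined for investigation.
    if {msg.from_domain, msg.spf_domain, msg.dkim_domain} & blocked_sources:
        return Disposition.BLOCK
    if dmarc_equivalent_pass(msg):
        return Disposition.ACCEPT_VERIFIED
    trusted = local_allow.get(msg.from_domain, set())
    if msg.dkim_domain in trusted or msg.spf_domain in trusted:
        return Disposition.ACCEPT_LOCAL
    return Disposition.QUARANTINE

In this framing, the QUARANTINE branch is the roughly 15% slice discussed above, and each investigation result feeds back into either local_allow or blocked_sources, which is why the investigation is a one-time effort per message source.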
Doug Foster

On Thu, Jul 20, 2023 at 3:53 PM Jan Dušátko <jan=40dusatko....@dmarc.ietf.org> wrote:

> On 20 Jul 2023 at 14:46, Barry Leiba wrote:
> >> I think that it shouldn't affect the answer about what to put in the
> >> document. Those of us here are a miniscule slice of the overall user
> >> base for email, I think it's a serious mistake to think peculiarities
> >> of the exact lists we use is relevant to anything.
> > Indeed: I caution everyone about making assumptions based on what we
> > think we know, and extending those assumptions to the entire Internet.
> > There are things we can study (by, for example, doing DNS queries and
> > analyzing results), and there are things about which we just say, "I
> > don't know anyone who does [or doesn't do] this, so that must be the
> > case overall." The latter is hazardous.
> >
> > Barry
> >
> I couldn't agree more. Thinking and knowing are two different things.
> Assuming what others want on the Internet is another thing.
> For that reason, I would also like to ask: what are these technologies
> supposed to be for? The ability to define a domain owner's policy?
> Covering tools to provide protection against counterfeiting? Or a
> meta-tool for authenticity?
> In my opinion, this is an effort to secure what email technology lacks:
> the ability to protect against communication counterfeiting and, under
> tight conditions, to verify the authenticity of the source. The problem
> is the wide mutual possibilities of communication, which have been
> designed with accessibility and flexibility in mind.
> I apologize for such a broad response. I'm trying to understand what
> your goals are to avoid misunderstanding. Sometimes I'm lost in
> translation.
>
> Regards
>
> Jan