After I sent this, I wondered if I needed to add more text to explain that
"manual review" means "manual review, followed by local policy changes, so
that manual review will no longer be necessary on messages with the same
identifiers."  If a reader thinks "manual review" means "look at every
message individually, now and forever," the suggestion will be laughed off.
But an introduction cannot say everything, so I am open to suggestions.

I can say with some confidence that the false-positive problem is not
understood in most of the commercial product market.  I shopped for a
commercial product by asking vendors this question:

"Example.com moved to Office365, but did not update their SPF record.   How
do I treat their Office365 messages as SPF-PASS without enabling
impersonation from other sources?"

I found many that could not answer.  Vendors do not understand that an
"allow" override for sender authentication must mimic the original: start
with a trusted identifier that is also verified, then use it to
authenticate an unverified identifier that has an expected value and is
received in the same message.  A rule of this type has three parts
(independent identifier, verification mechanism, and dependent identifier),
but most of the products that I surveyed could only implement single-part
rules.  Single-part rules work well for message blocking, because a
distrusted identifier generally remains untrusted in all contexts, whether
true or spoofed.  But using single-part rules for allow decisions can
undermine security.
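
To make the three-part structure concrete, here is a minimal Python sketch.
Everything in it is an illustrative assumption (the Message fields, the
rule names, and the Office365 hostname pattern), not any product's actual
API:

    # Hypothetical sketch of a three-part "allow" override.  All names
    # and fields are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Message:
        client_host: str     # independent identifier: connecting hostname
        host_verified: bool  # verification: e.g. forward-confirmed rDNS
        from_domain: str     # dependent identifier: unverified RFC5322.From

    def three_part_allow(msg: Message) -> bool:
        # Parts 1 and 2: a trusted identifier that is also verified.
        trusted_source = (
            msg.host_verified
            and msg.client_host.endswith(".outbound.protection.outlook.com")
        )
        # Part 3: use it to authenticate the expected dependent
        # identifier carried in the same message.
        return trusted_source and msg.from_domain == "example.com"

    def single_part_allow(msg: Message) -> bool:
        # A single-part rule: unsafe as an allow rule, because any
        # source can claim this From domain.
        return msg.from_domain == "example.com"

With the three-part rule, only mail arriving through verified Office365
infrastructure is treated as SPF-PASS for example.com; the single-part
rule would extend the exemption to any source that claims the domain.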

This text introduces the concept that when an automated process applies a
uniform disposition to a mix of wanted and unwanted messages, the results
will always be suboptimal.   The only way to achieve optimal results is to
perform a manual review and classification.

Since unverified messages must also pass filters based on source reputation
and content, the expected consequence of an allowed impersonation is low.
Consequently, this draft assumes that automation will block a message for
sender-authentication failure only when the probability of a true
impersonation is high.  I think this assumption should be made explicit
somewhere in the document.
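
As a sketch of how that assumption might look in filter logic (the
threshold value and the probability input are invented for illustration):

    # Hypothetical sketch: an authentication failure blocks only when
    # the estimated probability of true impersonation is high.
    BLOCK_THRESHOLD = 0.9  # illustrative value, not taken from the draft

    def disposition(auth_pass: bool, p_impersonation: float) -> str:
        if auth_pass:
            # One-tailed: a verified pass is strong evidence of
            # authenticity.
            return "accept, subject to normal content filtering"
        if p_impersonation >= BLOCK_THRESHOLD:
            return "block"
        # Unverified but plausibly legitimate: defer to reputation and
        # content filters rather than blocking on the failure alone.
        return "continue filtering"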

Doug Foster

On Sat, Dec 4, 2021 at 1:52 AM Murray S. Kucherawy <superu...@gmail.com>
wrote:

> On Fri, Dec 3, 2021 at 6:16 PM Douglas Foster <
> dougfoster.emailstanda...@gmail.com> wrote:
>
>> I propose that a paragraph along these lines be inserted into the
>> introduction:
>>
>> The DMARC test is characterized by a one-tailed error distribution:
>> messages that pass verification have a low probability of being actual
>> impersonations.  When a recipient intends to exempt a
>> high-value sender from content filtering, identity verification ensures
>> that such special treatment can be done safely, without concern for
>> impersonation.    However, the same cannot be said about verification
>> failures.  Verification failures can occur for many reasons, and many such
>> messages will be safe and desired by the recipient.   Messages which do not
>> verify are optimally handled with manual review, but this may not be
>> feasible due to message volume.    DMARC provides a way for senders and
>> receivers to safely cooperate to minimize the probability that automated
>> disposition decisions will be suboptimal.
>>
>
> I have no objection to adding text such as this.  It's worth noting,
> though, that DKIM certainly says this already (at least in its Section
> 6.1), SPF probably says something similar, and DMARC clearly depends on
> those.  DMARC alludes to this in RFC 7489, Section 10.5, but it's not as
> explicit as what's proposed here.
>
> -MSK
>