On Tue, Jul 1, 2025 at 12:45 PM Alessandro Vesely <[email protected]> wrote:
> On Tue 01/Jul/2025 12:22:13 +0200 Douglas Foster wrote:
> > We can define a specification for use by others because all
> > evaluators face a similar threat profile. Anyone who has worked the
> > problem, or can reasonably imagine himself working the problem,
> > should be able to draw similar conclusions.
> >
> > We cannot talk about how to specify Failure Reporting until we
> > understand the process of which it will be a part, and that process
> > analysis begins with use cases. I see four:
> >
> > 1. The From address is accurate. The sender and his message are
> > also acceptable.
> > 2. The From address is accurate. The sender and his message are
> > unacceptable.
> > 3. The From address is fraudulent. The From domain is a secondary
> > victim of the attack.
> > 4. The From address is fraudulent. The From domain is also
> > controlled by the attacker.
> >
> > The first and most obvious conclusion is that for use cases #2 and
> > #4, nothing good can come from sending a Failure Report. Reporting
> > can only serve to help the attacker perfect his attack.
> >
> > For use case #1, the goal of a failure report is to help the Domain
> > Owner correct problems that are within his control, or to make him
> > aware that I am working around problems that he cannot correct
> > because they are downstream from his control.
> >
> > For use case #3, the goal of a failure report is to cooperate with
> > the Domain Owner to find ways to minimize the harm that the
> > attacker can cause.
>
>
> I think #1 and #3 are the two cases Michael addressed in his proposed
> change.
>
>
> > If failure reporting does occur, we are imposing on the report
> > sender an obligation to perform record keeping to distinguish
> > between repeat occurrences and to minimize excessive duplication.
> > We need to at least think how we would define the reporting
> > strategy for these two cases if we were creating our own
> > implementation. The current specification does some hand waving on
> > this topic because we have not thought it through.
>
>
> Different strategies are possible, depending also on the amount of
> "intelligence" one is going to put into the effort. The draft
> mentions rate limiting and consolidation of like incidents.
>
>
> > Finally, these use cases create a problem for the recurring
> > fiction that DMARC can produce correct results on a fully automated
> > basis without exerting any human effort to collect additional
> > information. Our document needs to move away from fiction.
>
>
> As far as I can see, the draft does discourage blindly providing
> failure reports for every failure, but doesn't discuss friend or
> foe. Maybe a paragraph like the first one quoted above after the
> list ("The first and most obvious conclusion...") could be inserted
> somewhere.
>
>
> Best
> Ale
I think we shouldn't go too deep down this rabbit hole, because it
quickly gets into local policy. It should be sufficient to have
verbiage saying that failure reports shouldn't be sent to domains that
are potentially abusive, while the determination of which domains
should or shouldn't receive reports is left to local policy. Going
more deeply into this would more appropriately be done in a separate
BCP document.
Michael Hammer
_______________________________________________
dmarc mailing list -- [email protected]
To unsubscribe send an email to [email protected]