Hi,

I agree with your comments.

My input:

Keep in mind that ignorant (non-supportive) ARC nodes will continue to process all DKIM-Signature(s) found, as I see with your message:

Authentication-Results: dkim.winserver.com;
 dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
 adsp=none author.d=gmail.com signer.d=ietf.org;
 dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
 adsp=none author.d=gmail.com signer.d=ietf.org;
 dkim=fail (DKIM_BODY_HASH_MISMATCH) header.d=gmail.com header.s=20161025 header.i=gmail.com;
 adsp=none author.d=gmail.com signer.d=gmail.com;
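
For illustration only, here is roughly what such an ARC-unaware node does: it walks every DKIM-Signature it finds and records one dkim= result per signature, which is what produces the multiple dkim= clauses above. The verify_dkim() callable and the authserv-id are hypothetical stand-ins, not any particular implementation:

from email.message import Message

def dkim_tag(sig_value: str, name: str) -> str:
    """Pull one tag (d=, s=, i=, ...) out of a DKIM-Signature field value."""
    for part in sig_value.split(";"):
        key, _, value = part.strip().partition("=")
        if key == name:
            return value.strip()
    return ""

def authentication_results(msg: Message, verify_dkim,
                           authserv_id="dkim.example.com") -> str:
    """One dkim= clause per DKIM-Signature, pass or fail, no ARC awareness."""
    clauses = []
    for sig in msg.get_all("DKIM-Signature") or []:
        verdict = verify_dkim(msg, sig)   # stand-in: returns "pass" or "fail (...)"
        clauses.append("dkim=%s header.d=%s header.s=%s header.i=%s"
                       % (verdict, dkim_tag(sig, "d"),
                          dkim_tag(sig, "s"), dkim_tag(sig, "i")))
    return authserv_id + ";\n " + ";\n ".join(clauses) + ";"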

That should be the first consideration and point of understanding about the pre-ARC DKIM world, i.e. how things are done now.

The order of the DKIM signatures is generally important here, since the typical scenario (with a list) is:

 1) Good 1st original Author Domain signature is submitted,
 2) 1st signature is invalidated at the middleware (LIST),
 3) Good 2nd signature (re-sign) is added for LIST distribution,
 4) MDA sees a failed original AUTHOR ADSP/DMARC policy.

Overall, the MDA has no explicit 3rd-party trust/policy knowledge of the LIST or of any signatures beyond the 1st Author Domain (now broken) signature. That's the basic issue and the 12+ year old DKIM Policy model protocol dilemma.
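
To make step 4 concrete, a rough sketch (domain names are illustrative, strict DKIM alignment only, SPF ignored) of why the LIST's perfectly valid re-signature does not help the MDA: the policy is evaluated against the author (From) domain, and that is exactly the signature that broke in transit:

def dmarc_dkim_result(author_domain: str, passing_dkim_domains: set) -> str:
    """Strict DKIM alignment only; relaxed alignment would compare
    organizational domains, and SPF is ignored for brevity."""
    return "pass" if author_domain in passing_dkim_domains else "fail"

# The list scenario above: only the LIST's re-signature still verifies.
print(dmarc_dkim_result("gmail.com", {"list.example"}))   # -> "fail"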

I suppose there could be more middleware, nodes, etc. during transport, but the first and final processing nodes should be the essential protocol consideration. If there is a cv=fail at #2, does that mean the re-signing at #3 is invalidated as well? We never really cared what happens in between during transport, but with ARC it appears to be a major consideration to better understand transport paths. How ARC will turn all this around remains to be seen. I wish to understand this complex protocol first before justifying the massive overhead and experimentation cost being asked of SMTP/email developers.
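
My own reading of the #2/#3 question, assuming the rule that a failed chain stays failed (see Seth's point 1 below), sketched with illustrative instance numbers only:

def chain_status(seals) -> str:
    """seals: list of (instance, cv) tuples, cv in {"none", "pass", "fail"}.
    Once any seal in i=1..N order carries cv=fail, later (re-signing) hops
    cannot revive the chain; a real validator also checks AMS/AS signatures."""
    for _instance, cv in sorted(seals):
        if cv == "fail":
            return "fail"
    return "pass"

# cv=fail at hop #2: the re-sign at hop #3 does not rescue the chain.
print(chain_status([(1, "none"), (2, "fail"), (3, "pass")]))   # -> "fail"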

Thanks

Hector Santos, CTO
Santronics Software, Inc.


On 8/20/2018 6:28 AM, Murray S. Kucherawy wrote:
(back from vacation and catching up)

On Tue, Aug 14, 2018 at 8:58 PM, Seth Blank <s...@sethblank.com> wrote:

    There are THREE consumers of ARC data (forgive me for the names,
    they're less specific than I'd like):

    1) The ARC Validator. When the Validator sees a cv=fail,
    processing stops, the chain is dead, and shall never be less dead.
    What is Sealed is irrelevant.


Right.

    2) The Receiver. An initial design decision inherent in the
    protocol is that excess trace information will be collected, as
    it's unclear what will actually be useful to receivers. 11.3.3
    calls this out in detail. Without Sealing the entire chain when
    attaching a cv=fail verdict, none of the trace information is
    authenticatable to a receiver (see earlier message in this thread
    as to why), which is the exact opposite of the design decision the
    entire protocol is built on. To guarantee this trace information
    can be authenticated, the Seal that contains cv=fail must include
    the entire chain in its scope. This is where this thread started.


I see two possible workflows here:

(1) Verifier (to use the DKIM term) detects "cv=fail" and stops,
because there's nothing more to do.  But the Receiver now has no ARC
information except a raw "cv=fail" to relay via A-R or whatever.  But
this, as you point out, flies in the face of the notion of giving
receivers details about the message.  The Receiver now has to
implement ARC to get whatever details might be prudent from the message.

(2) Verifier sees "cv=fail" but will still attempt to verify it and
maybe extract other salient details to add to an A-R.

When you say "you see cv=fail and stop", I think of the first thing,
which is alarming layer mush, and is also ambiguous in that if the
Verifier stops dead at seeing "cv=fail", it doesn't matter at all what
content got sealed.

So if you mean the second thing, part of my issue goes away.
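
To make the distinction concrete (the names and data structures are illustrative only, nothing here is normative ARC behaviour):

from dataclasses import dataclass

@dataclass
class Seal:
    instance: int
    cv: str          # "none", "pass" or "fail"
    verifies: bool   # does the ARC-Seal (and its ARC Set) itself validate?

def workflow_1(seals):
    """Stop dead at cv=fail: the Receiver gets a bare verdict, no trace data."""
    if any(s.cv == "fail" for s in seals):
        return {"arc": "fail"}
    return {"arc": "pass", "trace": [s.instance for s in seals]}

def workflow_2(seals):
    """Note the cv=fail but keep verifying, so whatever trace information
    still authenticates can be copied into A-R for the Receiver."""
    verdict = "fail" if any(s.cv == "fail" for s in seals) else "pass"
    return {"arc": verdict, "trace": [s.instance for s in seals if s.verifies]}

chain = [Seal(1, "none", True), Seal(2, "pass", True), Seal(3, "fail", True)]
print(workflow_1(chain))   # {'arc': 'fail'}
print(workflow_2(chain))   # {'arc': 'fail', 'trace': [1, 2, 3]}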

    3) The receiver of reports that provide ARC data. For a domain
    owner to get a report with ARC information in it, there needs to
    be some level of trust in the information reported back. When a
    Chain passes, all the intermediaries' header field signatures can
    be authenticated, and the mailflow can be cleanly reported back.
    When a Chain fails, that is important information to a domain
    owner (where is my mailflow failing me, how can I figure this out
    so I can fix it?). Again, without Sealing over the entire Chain
    when a failure is detected, this information is unauthenticatable
    (and worse, totally forgeable now without even needing a valid
    Chain to replay), and nothing of substance can be reported back.
    Sealing the Chain when a cv=fail is determined blocks forgery as a
    vector to report bogus information, and allows authenticatable
    information to be reported back.


I think we're talking about distinct failure modes.  I totally agree
with you in the case where the chain has failed because content was
altered.  But doesn't your assertion here presuppose an at least
syntactically intact chain?  If the chain is damaged to the point
where it cannot be deterministically interpreted, the sealer adding
the "cv=fail" might add a seal that a downstream verifier cannot
correctly interpret.

I understand what you're after but I also understand the intent behind
5.1.2, which is to produce something unambiguous.  My problem with
5.1.2 as it stands is that a verifier now has to try verifying the
"cv=fail" two ways (one with everything, one with just the last
instance), and at least one of them has to work.  We've cornered
ourselves here by rejecting "cv=invalid".
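
Roughly (verify_seal_over() is a hypothetical stand-in for the b= signature check, not a real API):

def classify_failing_seal(all_sets, failing_set, verify_seal_over):
    """Under 5.1.2 as written, a verifier that finds cv=fail must try two
    scopes for the failing ARC-Seal's signature and hope one verifies."""
    scopes = {
        "sealed the entire chain": all_sets,
        "sealed only its own ARC Set": [failing_set],
    }
    for description, scope in scopes.items():
        if verify_seal_over(failing_set, scope):
            return description
    return "unverifiable"   # e.g. a mangled chain: neither reading holds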

    And to be even clearer: what is Sealed when cv=fail is reached
    (itself, the entire chain, or nothing at all) DOES NOT AFFECT
    INTEROPERABILITY. But it does affect preserving trace information
    and preventing forged data from being reportable.


I disagree, as stated above; a mangled chain cannot be sealed in a way
guaranteed to interoperate.

    This is my very strong INDIVIDUAL opinion. But I'm fine if the
    group sees differently, as this could be investigated as part of
    the experiment (i.e. do any of the above points matter in the real
    world? I say they do, hence the strong opinion.). As an editor,
    I'll make sure whatever the consensus of the group is is reflected
    in the document.


I've no objection to collecting superfluous trace information to
support the experiment.  What I'm concerned about is the introduction
of weird protocol artifacts or ambiguities that could get baked in and
hard to remove later.

-MSK


