Re: [ietf-dkim] Protocol layering / Software vs. Protocol
On Thu, 05 May 2011 21:24:00 +0100, Barry Leiba barryle...@computer.org wrote:

> Doug says...
>> This can *only* be achieved by some mandatory test within the Verifier.
>
> Not at all; that's exactly Dave's point in discussing the difference
> between the protocol and the software system that wraps around it. The
> Verifier is a component that verifies the signature, and that's all
> we're defining normatively here. Other parts of the system will
> evaluate things: whether the verified signature can be relied upon, and
> what it can be relied upon for; whether the domain that signed it is
> trustworthy; whether a failed signature can nonetheless provide useful
> information; and so on.

Not so. As you should know from off-list discussions, that sentence is actually mine, though used in a marginally different context than Doug used it.

IF there were to be some mandatory test within the Verifier, then that test would be, ipso facto, a part of the protocol and not part of the software system that wraps around it. So your argument was circular :-( .

--
Charles H. Lindsey -At Home, doing my own thing
Tel: +44 161 436 6131
Web: http://www.cs.man.ac.uk/~chl
Email: c...@clerew.man.ac.uk
Snail: 5 Clerewood Ave, CHEADLE, SK8 3JU, U.K.
PGP: 2C15F1A9
Fingerprint: 73 6D C2 51 93 A0 01 E7 65 E8 64 7E 14 A4 AB A5
Re: [ietf-dkim] Protocol layering / Software vs. Protocol
On 5/6/11 6:28 AM, Charles Lindsey wrote:

> On Thu, 05 May 2011 21:24:00 +0100, Barry Leiba barryle...@computer.org wrote:
>> Doug says...
>>> This can *only* be achieved by some mandatory test within the Verifier.
>>
>> Not at all; that's exactly Dave's point in discussing the difference
>> between the protocol and the software system that wraps around it. The
>> Verifier is a component that verifies the signature, and that's all
>> we're defining normatively here. Other parts of the system will
>> evaluate things: whether the verified signature can be relied upon,
>> and what it can be relied upon for; whether the domain that signed it
>> is trustworthy; whether a failed signature can nonetheless provide
>> useful information; and so on.
>
> Not so. As you should know from off-list discussions, that sentence is
> actually mine, though used in a marginally different context than Doug
> used it.
>
> IF there were to be some mandatory test within the Verifier, then that
> test would be, ipso facto, a part of the protocol and not part of the
> software system that wraps around it. So your argument was circular :-( .

This sentence was indeed written by Charles. He is far more eloquent, and I wanted to respond but was rushed by other pressing security matters. While we have collaborated on something that should better clarify DKIM's role, it is my opinion that the verification requirements are better stated as MUST. Unreliable assurances are worse than none.

From the standpoint of proper protocol design and layering, a protocol must not expect consumers of its output to second-guess the validity of the inputs used, especially when those inputs become less clear in the process. SMTP cannot be expected to ensure header field ordering, least of all for the fields DKIM signs. The modern message format specification (RFC 5322) also clearly stipulates which header fields must not repeat.

DKIM's requirement that the From header field be signed rested on the incorrect assumption that including this field in the signature's validation would make the particular instance that was signed obvious to subsequent consumers of its output. This was clearly a mistake that MUST be corrected.

As email transitions to UTF-8, DKIM's role in better securing messages becomes even more significant. As such, it is important to embrace the obligations that come with that role. DKIM must not blame SMTP or DNS for its failings.

-Doug
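[A minimal sketch, in Python, of the kind of form check being argued for here. The RFC 5322 field list is standard, but the check itself is an illustration of the idea, not any existing verifier's behavior:]

    from email import message_from_bytes
    from email.policy import default

    # RFC 5322 section 3.6: these header fields may occur at most once.
    SINGLETON_FIELDS = ("from", "sender", "reply-to", "to", "cc",
                        "subject", "date", "message-id")

    def message_form_is_valid(raw: bytes) -> bool:
        """Check the field-count rules before any PASS is reported."""
        msg = message_from_bytes(raw, policy=default)
        return all(len(msg.get_all(f) or []) <= 1 for f in SINGLETON_FIELDS)

    # A message with two From fields: a verifier that checks only the
    # signature could still report PASS on this deceptive form.
    raw = (b"From: attacker@example.org\r\n"
           b"From: president@example.gov\r\n"
           b"Subject: hi\r\n\r\nbody\r\n")
    assert not message_form_is_valid(raw)

[A verifier wrapper would run this alongside signature verification and downgrade a PASS to a failure result whenever the check fails.]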
Re: [ietf-dkim] Protocol layering / Software vs. Protocol
On 5/4/11 10:01 AM, Dave CROCKER wrote:

> Folks,
>
> In terms of working group process, one line of criticism demands
> re-opening (and, apparently, reversing) the work of the Update (RFC
> 5672). I haven't seen any working group consensus to do this nor any
> industry feedback indicating this is necessary. Consequently, attempts
> to pursue the content of that work are entirely out of scope for the
> current working group effort.
>
> There are two continuing threads of other, technical dissatisfaction
> being expressed that are based on fundamental misunderstandings of
> protocol design concepts. The discussion on Wikipedia looks pretty
> good, for background:
> http://en.wikipedia.org/wiki/Network_protocol#A_basis_for_protocol_design
>
> The easy misunderstanding is about the basic difference between
> software design and protocol design. When a discussion is about a
> protocol specification, reference to the vagaries of software
> implementers' choices means that the discussion is no longer about the
> protocol.
>
> A protocol is a contract between two or more sides of an exchange. The
> contract needs to be extremely precise so that all sides know exactly
> what is required and what is meant. This includes semantics, syntax,
> and rules of exchange. Semantics means all of the meaning, not just
> the meaning of individual fields. And it means inputs and outputs.

DKIM cannot consider _only_ signature confirmation while also expecting existing email agents to retro-actively adopt DKIM's unusual header selection methods. To be compatible with existing email infrastructure and transparent to the fullest extent possible, one cannot expect new supporting infrastructure or modified clients. This can *only* be achieved by some mandatory test within the Verifier.

-Doug
Re: [ietf-dkim] Protocol layering / Software vs. Protocol
Dave says...

> In terms of working group process, one line of criticism demands
> re-opening (and, apparently, reversing) the work of the Update (RFC
> 5672). I haven't seen any working group consensus to do this nor any
> industry feedback indicating this is necessary. Consequently, attempts
> to pursue the content of that work are entirely out of scope for the
> current working group effort.

I'll point out that Dave is offering his *opinion* about whether this is in or out of scope. I'll provide the chair's judgment on that, as the one who has the task of determining scope: it's out of scope. We had this discussion back when we did 5672, and got rough consensus on it. Not unanimity, but rough consensus. We're not going over that again. 4871bis is, and should be, a merging of 4871 and 5672.

Barry, as chair

Doug says...

> This can *only* be achieved by some mandatory test within the Verifier.

Not at all; that's exactly Dave's point in discussing the difference between the protocol and the software system that wraps around it. The Verifier is a component that verifies the signature, and that's all we're defining normatively here. Other parts of the system will evaluate things: whether the verified signature can be relied upon, and what it can be relied upon for; whether the domain that signed it is trustworthy; whether a failed signature can nonetheless provide useful information; and so on.

It's reasonable to give non-normative advice, perhaps strong advice, about what other system components might do in that regard. Most of that advice should be in the other, informational documents, and some might even reasonably be here (and some of it is). But it can't be mandatory. I'll point out what Paul Hoffman has said many times in earlier discussions: you can't control what the receiver [meaning the overall system on the verification side] will do... you can only give it the information.

Barry, as participant
Re: [ietf-dkim] Protocol layering / Software vs. Protocol
On 5/5/11 1:24 PM, Barry Leiba wrote:

> Dave says...
>> In terms of working group process, one line of criticism demands
>> re-opening (and, apparently, reversing) the work of the Update (RFC
>> 5672). I haven't seen any working group consensus to do this nor any
>> industry feedback indicating this is necessary. Consequently, attempts
>> to pursue the content of that work are entirely out of scope for the
>> current working group effort.
>
> I'll point out that Dave is offering his *opinion* about whether this
> is in or out of scope. I'll provide the chair's judgment on that, as
> the one who has the task of determining scope: it's out of scope. We
> had this discussion back when we did 5672, and got rough consensus on
> it. Not unanimity, but rough consensus. We're not going over that
> again. 4871bis is, and should be, a merging of 4871 and 5672.
>
> Barry, as chair
>
> Doug says...
>> This can *only* be achieved by some mandatory test within the Verifier.
>
> Not at all; that's exactly Dave's point in discussing the difference
> between the protocol and the software system that wraps around it. The
> Verifier is a component that verifies the signature, and that's all
> we're defining normatively here. Other parts of the system will
> evaluate things: whether the verified signature can be relied upon,
> and what it can be relied upon for; whether the domain that signed it
> is trustworthy; whether a failed signature can nonetheless provide
> useful information; and so on.

Providing unreliable assurance is worse than none. DKIM cannot expect other agents, especially those predating DKIM, to be aware of DKIM's bad assumptions when their message acceptance is based upon a DKIM PASS from a trusted domain. As it currently stands, DKIM offers PASS for highly deceptive messages having invalid formats. Any view of what constitutes a DKIM-verified message *MUST* include whatever subsequent agents need to safely employ its output. A specification that does not consider likely deceptive forms of a message is _truly_ dangerous.

The current DKIM introduction states that DKIM:

1. is compatible with the existing email infrastructure and transparent to the fullest extent possible;
2. requires minimal new infrastructure;
3. can be implemented independently of clients in order to reduce deployment time;
4. can be deployed incrementally;
5. allows delegation of signing to third parties.

The view expressed as:

> Other parts of the system will evaluate things: whether the verified
> signature can be relied upon, and what it can be relied upon for;
> whether the domain that signed it is trustworthy; whether a failed
> signature can nonetheless provide useful information; and so on.

is clearly at odds with points 1, 2, 3, and 4 when it entails re-evaluation of message elements already examined by DKIM.

> It's reasonable to give non-normative advice, perhaps strong advice,
> about what other system components might do in that regard. Most of
> that advice should be in the other, informational documents, and some
> might even reasonably be here (and some of it is). But it can't be
> mandatory.

For DKIM to be trustworthy and not cause greater harm, it MUST confirm that the form of the message is valid. That is a much simpler task than public-key cryptography, and one easily done at the same time. It would be negligent and unreasonable to blame the transport when there is more than one From header field, or for their ordering, or for which instance subsequent agents will use. Why is it reasonable to expect other protocols to repair DKIM's oversights and bad assumptions? Exploits should be responded to responsibly as they are discovered, irrespective of the specification's status.

> I'll point out what Paul Hoffman has said many times in earlier
> discussions: you can't control what the receiver [meaning the overall
> system on the verification side] will do... you can only give it the
> information.

There is no argument with Paul's statement. Nevertheless, it does not excuse the deceptive nature of the bad advice the DKIM verification process offers when it overlooks invalid and likely highly deceptive messages.

-Doug
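[For reference, the mitigation the DKIM base spec describes for added header fields is "over-signing": listing a field name in the signature's h= tag more times than it occurs, so that a field added after signing changes the hash input and the signature fails. A sketch of the bottom-up h= selection that makes this work; the names are illustrative, not from any real implementation:]

    # Fields named in h= are consumed from the bottom of the header
    # block upward; once real instances run out, an extra listing
    # hashes as the empty string. Listing "from" twice therefore binds
    # "no second From exists" into the signature.
    def select_signed_headers(headers, h_tag):
        # headers: list of (name, value) pairs in top-to-bottom order
        remaining = list(headers)
        selected = []
        for name in h_tag.split(":"):
            name = name.strip().lower()
            for i in range(len(remaining) - 1, -1, -1):
                if remaining[i][0].lower() == name:
                    selected.append(remaining.pop(i))
                    break
            else:
                selected.append((name, ""))  # nonexistent instance: empty
        return selected

    headers = [("From", "a@example.org"), ("Subject", "hi")]
    before = select_signed_headers(headers, "from:from:subject")
    # An added second From is consumed by the extra "from" listing, so
    # the selected fields (and thus the hash input) change.
    after = select_signed_headers(headers + [("From", "evil@example.org")],
                                  "from:from:subject")
    assert before != after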
[ietf-dkim] Protocol layering / Software vs. Protocol
Folks,

In terms of working group process, one line of criticism demands re-opening (and, apparently, reversing) the work of the Update (RFC 5672). I haven't seen any working group consensus to do this nor any industry feedback indicating this is necessary. Consequently, attempts to pursue the content of that work are entirely out of scope for the current working group effort.

There are two continuing threads of other, technical dissatisfaction being expressed that are based on fundamental misunderstandings of protocol design concepts. The discussion on Wikipedia looks pretty good, for background:

http://en.wikipedia.org/wiki/Network_protocol#A_basis_for_protocol_design

The easy misunderstanding is about the basic difference between software design and protocol design. When a discussion is about a protocol specification, reference to the vagaries of software implementers' choices means that the discussion is no longer about the protocol.

A protocol is a contract between two or more sides of an exchange. The contract needs to be extremely precise so that all sides know exactly what is required and what is meant. This includes semantics, syntax, and rules of exchange. Semantics means all of the meaning, not just the meaning of individual fields. And it means inputs and outputs.

That an implementer might choose to use fields creatively is their choice -- and might even be useful -- but it goes beyond the protocol specification. Implementers can choose to combine specifications, implement only parts of a specification, or create additional functionality. This is all well and good, but it is an entirely separate exercise from the details of a specific protocol.

As for protocol layering, this is merely the standard design construct of divide and conquer, with the attendant hiding of information outside of a layer. That software might have access to information hidden from a particular layer is sometimes useful, but it means that the software is going beyond the specification of that layer.

Apparently the distinction between payload and overhead is proving challenging for some folk. The logical extension of the view that the recipient at one layer has access to the information below its layer is that a MIME module automatically and always has access to the IP and ethernet headers of the data that contained the MIME. In terms of formal protocol specification, it doesn't. Payload is what's handed to the next layer up. Everything else is overhead for getting work done at the current layer.

This distinction is basic to the concept of design layering and the attendant information hiding. Without it, there is no information discipline in the design of protocols. Spaghetti protocol design is even worse than spaghetti software.

d/

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
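[A toy sketch of that payload/overhead boundary, purely illustrative and not any real protocol stack: each layer adds its own overhead on the way down and strips exactly that overhead on the way up, so the module above never sees what was consumed below.]

    # Toy encapsulation: overhead is private to its layer; payload is
    # all that crosses the layer boundary upward.
    def wrap(layer, payload):
        return "[" + layer + "]" + payload  # add this layer's overhead

    def unwrap(layer, data):
        prefix = "[" + layer + "]"
        assert data.startswith(prefix), "malformed frame"
        return data[len(prefix):]           # hand only the payload up

    wire = wrap("ethernet", wrap("ip", wrap("tcp", wrap("mime", "body"))))

    # The MIME module receives only what TCP handed up; the ethernet
    # and IP overhead was consumed at the layers below.
    for layer in ("ethernet", "ip", "tcp"):
        wire = unwrap(layer, wire)
    assert unwrap("mime", wire) == "body"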
Re: [ietf-dkim] Protocol layering / Software vs. Protocol
On Wednesday, May 04, 2011 01:01:57 PM Dave CROCKER wrote:

> In terms of working group process, one line of criticism demands
> re-opening (and, apparently, reversing) the work of the Update (RFC
> 5672). I haven't seen any working group consensus to do this nor any
> industry feedback indicating this is necessary. Consequently, attempts
> to pursue the content of that work are entirely out of scope for the
> current working group effort.

If you're looking for consensus, count me in. I don't think it was progress.

Scott K