Re: [websec] DNS publication of HSTS and PKP header data using CAA
On Wed, April 8, 2015 4:40 pm, Phillip Hallam-Baker wrote: Who said anything about DNSSEC being required? If it isn't, then it's not equivalent. HSTS requires an error-free connection - in part to ensure the policy is securely delivered. HPKP requires an error-free connection that is consistent with the policy expressed - in part to ensure the policy is securely delivered and correctly formed. If you don't require secure delivery of that, then you're not developing a secure solution. If you're doing it for out-of-band discovery, then it would help to say that. But I very much doubt you are. Having more than one solution for a problem is usually a good reason to pick one. http://xkcd.com/927/ ___ websec mailing list websec@ietf.org https://www.ietf.org/mailman/listinfo/websec
Re: [websec] Comments on draft-ietf-websec-key-pinning
On Thu, February 19, 2015 11:38 am, Jeffrey Walton wrote: I don't believe this proposal - in its current form - will prove effective in the canonical cases: (1) CA failure like Diginotar; and (2) MitM attacks like Superfish. Are there other obvious cases I missed that this control will be effective? Jeff

You're factually wrong in cases like (1), and woefully misguided in cases like (2) if you believe _any_ software, on a general-purpose operating system with 1-layer security principals (such as Windows and OS X), can defend against a program with same-or-higher privileges. That's like complaining that a program with root access can interfere with your usermode application. Well, yes, that's exactly correct; that's how it's supposed to work. If you can conceive of a model that a super-user process would not be able to circumvent, then congrats, you've solved one of the most vexatious problems in computer security, and I know a dozen anti-virus and anti-malware companies that will let you name your price if only you share your solution with them.

Consider the following ways that an application like Superfish could have dealt with this (courtesy of a colleague far brighter than I):
- PTRACE_POKETEXT a code modification at runtime
- LD_PRELOAD a shim that intercepts strcmp and returns false for Strict-Transport-Security
- Patch GTK+ (or equivalent) so that the lock icon always appears in the omnibox
- Rebuild a version of the browser with the code that enforces such pinning commented out

There is no sane world in which anything in the HPKP spec can or should deal with this. It's doubly true, and hopefully self-evident, that raising the bar is not at all an acceptable, or even reasonable, justification. Malicious actors have every reason to escalate to more nefarious means (whether for profit or interception), while legitimate actors get shut out or, equally, burrow further into internals and cause worse experiences for everyone. I understand that you disagree.
But you're also wrong if you think HPKP can or should have dealt with this. Best regards, Ryan
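The interception techniques listed above all reduce to one point: code running at equal privilege can rewrite the check itself. A minimal Python analogy of the LD_PRELOAD shim idea - the `policy` object and `is_hsts_header` name are invented for illustration, not any real browser API:

```python
import types

# Stand-in for the security-relevant code being subverted (an invented,
# illustrative object; real browsers do this in native code).
policy = types.SimpleNamespace(
    is_hsts_header=lambda name: name.lower() == "strict-transport-security"
)

def neutered(name):
    # Same-privilege "shim": the header-name check now never matches.
    return False

# Code at the same (or higher) privilege simply rebinds the check.
policy.is_hsts_header = neutered
```

An LD_PRELOAD shim does the same thing one layer down, rebinding libc's strcmp underneath the application; no header the server sends can detect or prevent it.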
Re: [websec] Requiring OCSP Stapling as a directive in HSTS
On Mon, January 19, 2015 9:55 am, Tom Ritter wrote: The threat model I'm trying to address is an attacker who can get a valid certificate misissued for a domain. Because of the revocation situation, the attacker can then MITM users with impunity, blocking revocation lookups if they occur. While UAs would receive a CRLSet or other push of revoked certificates, I believe that the UA will still work if that connection is blocked, and it's not clear how long that connection must be blocked before the UA stops working entirely. (If that happens at all.)

In this threat model, where does NTP fit? It seems like you're assuming the attacker can intercept both the target site and the UA's distribution mechanisms, so I would presume that they're also privileged enough to mount NTP attacks?

While I'm not opposed to making the language say Hard Fail Revocation Checking, I would expect UAs to interpret that as OCSP stapling, and would not want to delay implementation to account for rarely used corner cases. And if we say OCSP Stapling, there's no reason that down the road we couldn't add a new directive for 'hard-fail-revocation' and let it encompass many different checks.

To make sure this is clear: you're suggesting hard-fail OCSP checking whenever the OCSP directive is present in the header, and you're just quibbling over the naming of that directive here, right? I'm just making sure, since soft-fail OCSP stapling doesn't seem to make a lot of sense.

As far as 'Where?' and why 'In HSTS'? I see four options: an HTTP header, a TLS extension, a DNSSEC record, and a certificate extension.

You missed a fifth: a .well-known URI (RFC 5785) used to configure security policy at the domain level. I don't suggest this as something that the browser would background-fetch (although it certainly could be, perhaps coupled with a header). More so, I'm suggesting that this be used to build preload lists of security policy so that they can be distributed.
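A rough sketch of the .well-known idea in Python - the `security-policy` path segment and the `ocsp-must-staple` policy key are assumptions for illustration, not registered names:

```python
def wellknown_policy_url(host: str) -> str:
    # Hypothetical well-known URI (RFC 5785 style); the path segment
    # "security-policy" is invented for illustration, not registered.
    return f"https://{host}/.well-known/security-policy"

def build_preload_list(fetched: dict) -> list:
    # Curate a preload list from already-fetched policy documents,
    # keeping only hosts whose policy opts in to hard-fail stapling.
    return sorted(host for host, policy in fetched.items()
                  if policy.get("ocsp-must-staple"))
```

The point of the curation step is that a preload-list maintainer can sanity-check policies over time, which a point-in-time header snapshot cannot express.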
The failure mode of a bad HSTS header is not too bad - you're stuck on TLS. If it's accidentally set / set by an attacker / things really hit the fan, you could stick one of the many CDNs in front of your site to handle the TLS to your unencrypted backend. The failure mode of a bad HPKP header is decidedly worse - total site denial if you do key rotation / change CAs. Preloading is one way to mitigate this (by providing some consistent view of effective policy), although even that can be botched (e.g. CryptoCat recently rotated CAs and thus committed pinning suicide, at least until Chrome 41). The security considerations of hostile pinning alone accounted for quite a bit of discussion.

OCSP stapling, I think, falls somewhere in the middle on the risk proposition - there's still a vast, depressing majority of server software that doesn't support OCSP stapling. What are the implications for server admins who botch it themselves or who are attacked? I'm much more concerned by the former, see it as much more likely, and think it's much more likely to result in self-inflicted DoS. For example, configuring nginx to serve an OCSP response, but then forgetting to set up the cron job to rotate it, means the site becomes inaccessible once the response expires (presuming hard fail, which I think is the sensible path).

The advantage (or disadvantage, depending on your POV) of a .well-known URI plus preloading is that it allows some degree of curation of policy for sanity, which can't be expressed by a point-in-time snapshot of headers (at least for HPKP and for this hypothetical OCSP must-staple).

A certificate extension is being worked on (https://datatracker.ietf.org/doc/draft-hallambaker-tlsfeature/) but I see it as a complement to this. The certificate extension only dictates policy for that certificate. If we assume an attacker can get a misissued certificate for a domain, the cert extension is only useful when it is the default issuing status, so that an attacker must have additional privileges to circumvent it.
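The self-inflicted DoS failure mode can be made concrete with a small sketch, assuming hard-fail semantics and Unix-timestamp validity times:

```python
import time

def staple_is_current(this_update: float, next_update: float,
                      now: float = None) -> bool:
    # A stapled OCSP response is only acceptable inside its validity
    # window. A server that stops refreshing the staple (the forgotten
    # cron job above) fails hard once next_update passes, even though
    # nothing else about its configuration changed.
    if now is None:
        now = time.time()
    return this_update <= now < next_update
```

Under hard fail, a UA would refuse the connection the moment this returns False, which is exactly the self-DoS being described.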
And it doesn't make sense to have a certificate extension dictate policy for a whole domain; that's a very strange location to put this sort of data. To be fair, the certificate extension is dictating policy for the certificate. You're absolutely correct, however, that misissuance without including that extension is indistinguishable from CA rollover.

A TLS extension makes a certain amount of sense. We're trying to dictate a policy for TLS connections, not HTTP. But it's more difficult to deploy TLS extensions than HTTP headers, the TLS working group is tremendously busy, and technically this would fit more with the closed PKIX group. Doesn't this suffer many of the attendant issues you raised with a certificate extension? Namely, that the MITM attacker who has obtained a fraudulent cert (since that would be the only reason I can think of to worry about OCSP stapling) would just not send the extension. Or are you implying that the
Re: [websec] HPKP: The strict directive and TLS proxies
On Fri, November 29, 2013 2:50 pm, Tom Ritter wrote: On 29 November 2013 17:39, Trevor Perrin tr...@trevp.net wrote: On Fri, Nov 29, 2013 at 2:15 PM, Tom Ritter t...@ritter.vg wrote: On 29 November 2013 15:24, Trevor Perrin tr...@trevp.net wrote: * Why is there a Public-Key-Pins-Report-Only header instead of a report-only directive? Most of the document is written as if there were a single PKP header field, so a directive would make more sense. If it becomes a directive, we should be sure that we can still apply two headers, one looser in enforcing mode, one stricter in report-only mode. Would you expect both headers to be noted? The current spec doesn't support that. It specifies two different (and incompatible) ways of handling this case: - 2.1.3: If a Host sets both the Public-Key-Pins header and the Public-Key-Pins-Report-Only header, the UA MUST NOT enforce Pin Validation, and MUST note only the pins and directives given in the Public-Key-Pins-Report-Only header. - 2.3.1: If a UA receives more than one PKP header field in an HTTP response message over secure transport, then the UA MUST process only the first such header field. Oh yeah. Heh. Why is that? CSP supports an enforcing header and a reporting header, and both of them are applied simultaneously. I would expect the same from HPKP. -tom

Spec bug. I'll see about getting that fixed. The PKP and PKP-Report-Only modes are meant to be parallel to their counterparts in CSP. That is, the PKP-Report-Only header is not enforced, but if a PKP header is present, or pins were noted previously, they are still enforced.
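Under the clarified interpretation (the first PKP field and the first PKP-Report-Only field are each processed, independently), header selection might be sketched as:

```python
def select_pkp_fields(headers):
    # Process the first PKP field and the first PKP-Report-Only field,
    # each independently; later duplicates of either name are ignored.
    # `headers` is an ordered list of (name, value) pairs.
    pkp = pkp_ro = None
    for name, value in headers:
        lname = name.lower()
        if lname == "public-key-pins" and pkp is None:
            pkp = value
        elif lname == "public-key-pins-report-only" and pkp_ro is None:
            pkp_ro = value
    return pkp, pkp_ro
```

This mirrors the CSP model: the enforcing and report-only fields are selected and evaluated separately rather than one suppressing the other.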
Re: [websec] HPKP: The strict directive and TLS proxies
On Fri, November 29, 2013 12:24 pm, Trevor Perrin wrote: On Tue, Nov 26, 2013 at 12:14 AM, Yoav Nir syn...@live.com wrote: To summarize, although there has been much discussion since version -06, most of it did not result in massive changes to the document, so IMO we don't need another WGLC. * Weren't we going to discuss the relationship of preloaded to dynamic pins? See email [1]. * The rationale in thread [2] for strict seems different from the rationale in previous list discussions [3]. Ryan now argues that strict is not needed. I think that's worth considering. * I had feedback on an earlier draft which is still relevant [4], see below. [1] http://www.ietf.org/mail-archive/web/websec/current/msg01938.html [2] http://www.ietf.org/mail-archive/web/websec/current/msg01942.html [3] http://www.ietf.org/mail-archive/web/websec/current/msg01484.html [4] http://www.ietf.org/mail-archive/web/websec/current/msg01692.html FEEDBACK ON DRAFT-07, STILL RELEVANT * Interaction with cookies needs discussion. Cookie scoping rules pose a serious problem for pinning, e.g. a pin at example.com could be undermined by a MITM inventing a badguy.example.com and using it to steal or force cookies. I still disagree that the *pin* can be undermined. * Why is there a Public-Key-Pins-Report-Only header instead of a report-only directive? Most of the document is written as if there were a single PKP header field, so a directive would make more sense. Similar to CSP. A PKP might specify a looser policy, while PKP-Report-Only is used to test a more restrictive policy for deployment. * In draft-05, client processing was changed from noting a single expiry value to noting two values: Effective Pin Date and max-age. The previous approach was simpler, stored less data, and was more aligned with HSTS. * Section 2.3.1 fails to update the Effective Date of a noted pin when it is noted again.
* Section 2.6 mandates When a UA connects to a Pinned Host, if the TLS connection has errors, the UA MUST terminate the connection without allowing the user to proceed. HSTS allows the server to specify this, so it seems unnecessary and inflexible to mandate it here. It's also unclear whether a report-only pin counts as a Pinned Host. This is a feature, not a bug. PKP is useful without mandating HSTS. However, the security advantages of PKP can be undermined, much the way HSTS's can, if the UA allows users to bypass. It would be unnecessary and inflexible to mandate HSTS in order to obtain security with PKP. * UA processing rules are confusingly spread across the document. For example, 2.3.1 and 2.5 describe the same process so should be combined. * Section 2.5 - Does a failed validation of a report-only pin count as an error that will inhibit noting of new pins in the connection? No. Happy to clarify that. * Specified UA behavior with multiple header fields is contradictory: - 2.1.3: If a Host sets both the Public-Key-Pins header and the Public-Key-Pins-Report-Only header, the UA MUST NOT enforce Pin Validation, and MUST note only the pins and directives given in the Public-Key-Pins-Report-Only header. - 2.3.1: If a UA receives more than one PKP header field in an HTTP response message over secure transport, then the UA MUST process only the first such header field. Not contradictory, but ambiguous. The presence of one PKP or one PKP-R-O field causes the first (PKP || PKP-R-O) field to be processed, and the rest ignored. However, if both a PKP and a PKP-R-O field are present, they are both processed (independently). The ambiguity is the term PKP header field, which should perhaps be written as more than one PKP header field or more than one PKP-R-O header field, respectively. Does that satisfy your concern? * Section 3 - Should the failure report contain the certificates sent by the server, as well as the validated chain? Could help debugging and attack analysis. * Why is SHA1 supported?
If the reason is size, why not remove it but allow the server to pin to a truncated hash (e.g. pin to the first 16 or 20 bytes of SHA256) I won't attempt to assume any argument for you, but it sounds like you oppose SHA-1. Perhaps you could state that, and your reasons why. * Should there be a list of excluded keys that must not appear in the validated certificate chain? Chrome's preloaded pinning supports this, apparently to exclude 3rd-party subCAs that might have been issued under a pinned CA. This was a temporary experiment that is no longer in use. If such a feature is desirable, it seems like something that could be addressed with additional directives in a later spec. * Would it be better to use the term pin for the entire record associating (hostname, HPKP data), instead of calling each individual hash a pin? The hostname is pinned to the 1-of-n key set as a whole,
Re: [websec] HPKP: The strict directive and TLS proxies
On Fri, November 29, 2013 8:44 pm, Trevor Perrin wrote: On Fri, Nov 29, 2013 at 4:14 PM, Ryan Sleevi ryan-ietfhas...@sleevi.com wrote: On Fri, November 29, 2013 12:24 pm, Trevor Perrin wrote: FEEDBACK ON DRAFT-07, STILL RELEVANT * Interaction with cookies needs discussion. Cookie scoping rules pose a serious problem for pinning, e.g. a pin at example.com could be undermined by a MITM inventing a badguy.example.com and using it to steal or force cookies. I still disagree that the *pin* can be undermined. Semantics aside, there should be Security Considerations about cookies. * Why is there a Public-Key-Pins-Report-Only header instead of a report-only directive? Most of the document is written as if there was a single PKP header field, so a directive would make more sense. Similar to CSP. A PKP might specify a looser-policy, while PKP-Report-Only is used to test a more restrictive policy for deployment. So the UA is going to maintain 2 stores of pins (PKP and PKP-R-O), and process the PKP and PKP-R-O headers only against the appropriate store? That means asserting PKP:new-pin isn't sufficient to overwrite existing pins. You'd also have to assert PKP-R-O:max-age=0. Seems OK, but none of this is in the draft. An artifact of when a PKP header was required (eg: implicit Max-Age). This is because Report-Only doesn't/wouldn't need to be stored, because it is a report-only mechanism. You simply evaluate the R-O on the current connection (if a directive is present) and ping back. As noted during Orlando, this is not a security mechanism, nor intended to be one. It's a reporting/deployment testing mechanism. In that sense, R-O still doesn't need to be stored, it could simply be If present, evaluate the connection against these [potential] pins. If absent, no R-O implied. With the implementer hat on, I can go either way, but with a preference towards the following: To me, R-O makes most sense when treated like CSP - if present, report, otherwise, nothing. 
This is consistent with an implicit Max-Age: 0, because no storage is required or performed. * In draft-05, client processing was changed from noting a single expiry value to noting two values: Effective Pin Date and max-age. The previous approach was simpler, stored less data, and was more aligned with HSTS. Yet it was very much intended to address the same concerns you raised regarding the intersection between 'preloaded' and 'dynamic' pins, and the WG seemed to support it. Do you have a proposal on how to reconcile these two items? Otherwise, it seems like an issue that's impossible to address - on the one hand, an objection to the ways that preloaded and dynamic may intersect, yet on the other, an objection to attempts to clarify the ways that they do. * Section 2.3.1 fails to update the Effective Date of a noted pin when it is noted again. * Section 2.6 mandates When a UA connects to a Pinned Host, if the TLS connection has errors, the UA MUST terminate the connection without allowing the user to proceed. HSTS allows the server to specify this, so it seems unnecessary and inflexible to mandate it here. It's also unclear whether a report-only pin counts as a Pinned Host. This is a feature, not a bug. PKP is useful without mandating HSTS. However, the security advantages of PKP can be undermined, much the way HSTS's can, if the UA allows users to bypass. I think it's a mistake to mandate UI here. Users clicking through legitimate security errors is one risk. Users being denied access to legitimate sites due to pinning failures is another. It's hard to know how to balance these. We should allow UAs latitude. I suggest changing the text from MUST to SHOULD, at minimum. Thanks for your feedback. I'd like to hear from others in the WG if that view is shared. To this point, we haven't heard any concerns about the MUST, and the security properties are seemingly well understood. Certainly, no UA vendor has asked or proposed such latitude.
But then again, we're all individuals here. I do fail to understand your distinction between legitimate security errors and pinning failures. I would certainly count the latter amongst the former, and so I fail to see how the risks differ, beyond being a superset/subset. * UA processing rules are confusingly spread across the document. For example, 2.3.1 and 2.5 describe the same process so should be combined. * Section 2.5 - Does a failed validation of a report-only pin count as an error that will inhibit noting of new pins in the connection? No. Happy to clarify that. Should failed validation of a report-only pin inhibit noting of new report-only pins? See above re: noting for R-O. * Specified UA behavior with multiple header fields is contradictory: - 2.1.3: If a Host sets both the Public-Key-Pins header and the Public
Re: [websec] HPKP: The strict directive and TLS proxies
On Tue, November 19, 2013 3:36 pm, Yoav Nir wrote: Hm. Interesting predicament. Two thoughts: If the goal is to allow clients to note and obey the 'strict' directive even in the face of an SSL interception proxy... what you propose won't work. The proxies will be built to just remove the strict directive, or the header altogether. If the organization running the proxy wanted to block access to the site in question, they would do so. If they want to monitor ongoing access, they will strip the directive/header so the clients continue to connect. If they are ambivalent - 'access it or not, we don't care, but we must monitor you if you do' - the stripping feature may or may not be enabled. What you propose would only help in the latter case; it will not actually provide more security if the website considers the proxy to be an adversary. And if they add the strict header, I would imagine that the website does consider them to be an adversary, or at least that they do not wish the connection to succeed. I agree that it does not help security, but it increases the chances of the server policy being enforced. This is not far-fetched. Most TLS proxies are next-generation firewalls or caching proxies. They won't remove the strict directive. Of course, snooping proxies will act differently. So I don't think what you propose is worthwhile, specifically because it does add risk via sites bricking themselves, while not improving security considerably. As far as the laptop moving between home and work... I get the impression this situation may be regarded as a 'failure' of the protocol. That is, we have unintentionally broken something. I disagree. I think the situation has worked as desired. It's the other way around. The desktop that remains at the office and keeps ignoring the strict pins is the failure. Because if we replace the players with a rogue government performing TLS MitM, with the specific goal of isolating clients from receiving up-to-date security policies...
we would declare this same situation a success. I don't see a way to reconcile the two situations, as they behave almost exactly the same. TLS proxies add a third player into what was previously a (primarily) two-party protocol*. I think the strict directive preserves the reasonable property that any one of the three parties can choose not to participate based on the settings of the protocol. - The TLS proxy can block access or choose not to intercept - The user can not visit the site - The site can declare I don't want anyone else seeing the data I consider to be for the user's eyes only So the benevolent TLS proxy should note the strict directive and block the connection by itself? It makes sense, but that would require an upgrade of the TLS proxies. Changing client behavior would work with the proxies that are deployed now. I expect that Firefox and Google may even continue to preload entries in their browsers that apply the 'strict' directive specifically to provide websites the power to assert their right to a MitM-free connection. I know several websites that would like to exercise that right. What? And have their sites not work in all the places that have newer firewalls? I doubt it. Anyway, I can see how this would allow a buggy proxy to brick the users, so I now agree that this is not worthwhile. Yoav

Apologies that this discussion has happened due to the editors not having refreshed the draft. As I recall (perhaps incorrectly), our takeaway from IETF 86 was the removal of the 'strict' directive, since the notion of having a remote server provide a directive to override local policy is just escalating a policy arms race. One of the key discussions was regarding pinning and how, in current implementations (eg: Chrome), one can only pin to chains that terminate in a 'well-known root'.
The 'strict' directive (or its absence) was an attempt to provide some indication that the site accepts being pinned to an internal trust anchor. I recall (and again, perhaps incorrectly) that it was either Eric Rescorla or Jeff Hodges who suggested an implementation change might better accommodate this: 1) If noting pins, and the chain terminates in a well-known trust anchor, local policy (read: potentially MITM enterprise devices) may be able to override 2) If noting pins, and the chain does not terminate in a well-known trust anchor, effectively note the pins with no override capability (eg: the pins exactly as specified) The goal of this was less about enabling MITM proxies (although they're an unfortunate reality and, to at least some, a perceived necessity), and instead finding a way to balance pinning for both public sites as well as internal sites.
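That two-branch implementation approach might be sketched as follows - a simplification under stated assumptions: real pin validation hashes SPKIs over the whole chain, and the names here are illustrative:

```python
def pin_validation_passes(chain_anchor, observed_spki_hashes, noted_pins,
                          well_known_anchors, local_override):
    # Branch 1: the chain terminates in a well-known trust anchor, so
    # local policy (e.g. a user-installed enterprise MITM root) may
    # override pin validation.
    # Branch 2: privately anchored chain, so the noted pins are
    # enforced exactly as specified, with no override capability.
    matches = bool(noted_pins & observed_spki_hashes)
    if chain_anchor in well_known_anchors:
        return matches or local_override
    return matches
```

The asymmetry is deliberate: a site pinned to an internal anchor opted in to strict enforcement by construction, while publicly anchored pins defer to local privilege, for the reasons given in the thread above.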
Re: [websec] Strict-Transport-Security and mixed-content warnings
On Wed, May 22, 2013 2:56 pm, Daniel Kahn Gillmor wrote: hi websec folks-- I am wondering what people think the proper intersection is between a web browser's mixed-content warnings and HSTS. For example, if https://example.net has asserted Strict-Transport-Security: max-age=15768000 but the homepage at https://example.net/ also contains <img src="http://example.net/example.jpg"/>, should an HSTS-compatible browser show its standard mixed-content warning even though it knows to rewrite that img src to https://example.net/example.jpg ? My intuition is that this shouldn't trigger the browser's mixed-content warning, but chromium 26.0.1410.43 does show it, and https://code.google.com/p/chromium/issues/detail?id=122548#c26 suggests that palmer@ and rsleevi@ think that is the correct behavior because they want: to signal to the site author of an error - for example, if users which did not have HSTS visited. I'm not sure how the author is supposed to get this signal from the site visitor's browser, though -- perhaps the expectation is that site visitors will independently report the broken lock to the site administrator? Yes. The view is that it's still a legitimate error for the site operator, in that any user without HSTS protections (or with expired HSTS) is still at risk. While HSTS may be providing protection to the user, the site itself is still configured to serve resources insecurely, thus we still surface it as user-visible. I also note that firefox 21.0 (when security.mixed_content.block_display_content = true) doesn't show the media at all, and when security.mixed_content.block_display_content = false it shows the image but removes the lock from the address bar (which i think is the equivalent of the mixed content warning these days). I believe the Firefox behaviour is an artifact of their current (partial) implementation of mixed content blocking.
According to https://blog.mozilla.org/security/2013/05/16/mixed-content-blocking-in-firefox-aurora/ , future versions should distinguish between active and passive content, and thus more closely mirror the Chromium behaviour by default (namely, blocking active content, but not passive content). Do other folks have any thoughts about what the right thing to do here is? ISTM that UI behaviours are largely outside the realm of what the IETF standardizes. That said, certainly feedback is welcomed on this, and I would love to see consistent handling (which, AFAIK, we will soon have). I'm just not sure if this is really a spec issue.
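The HSTS rewrite under discussion can be sketched as follows - a simplification of UA behaviour, deliberately keeping the mixed-content signal separate from the upgrade itself, per the Chromium position above:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_if_hsts(url, hsts_hosts):
    # Rewrite an http:// subresource URL to https:// when the host is
    # a known HSTS host. Returns (url, upgraded). Note that even when
    # the fetch is upgraded, the markup still referenced an insecure
    # URL, which is why the mixed-content warning is still surfaced.
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunsplit(("https",) + tuple(parts[1:])), True
    return url, False
```

The design choice the thread debates is precisely whether the second return value (the upgrade happening) should suppress the warning; the upgrade and the warning are independent decisions here.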
Re: [websec] HPKP and Certificate Revocation Check
On Thu, May 2, 2013 2:50 am, vinod kumar wrote: Hi, I would like to discuss a possible vulnerability in the Public Key Pinning scheme (http://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/ ), related to certificate revocation checking. I would appreciate it if anybody can review and verify my observations. Section 2.6 of the draft states that when the UA connects to a pinned host, it must terminate the TLS connection if there are any errors. The referenced RFC, RFC 6797, clarifies that an error could be related to certificate revocation checking or server identity checking. But it is not mandatory that during the TLS handshake the UA performs certificate revocation checking, or that a definitive answer is found about the revocation status of the certificate. The reasons could be: 1. OCSP is disabled by default at the UA and the user has not changed it. 2. OCSP checking is disabled by the user. 3. The OCSP check is performed but no answer is obtained from the OCSP server within the stipulated timeout. So it is very much possible that the UA accepts the PKP header from the host without verifying the revocation status of the host certificate used in the TLS handshake. In the case of the host losing its private key to an attacker, the attacker may be able to block the host permanently from connecting to the UA, as explained below. The host's CA would have revoked the certificate, but the unreliability in revocation checking creates a vulnerability. Suppose an attacker is in possession of the private key of the host and the host certificate is duly revoked by the issuing CA. The attacker may succeed in diverting the UA to his server when the UA tries to connect to the valid host. The attacker may mount a DNS attack or a MITM attack to achieve this. Since he possesses the valid private key, the TLS connection will be successful. The only other thing he needs is that the UA doesn't get to know the revocation status of the certificate. Here, the attacker is able to make the UA accept whatever PKP headers are sent in the HTTP response.
Say the attacker has two key pairs, KeyPair1 and KeyPair2, and holds a valid certificate for KeyPair1. He constructs a PKP header: Public-Key-Pins: max-age=(maximum possible value); pin-sha1=(pin corresponding to compromised key); pin-sha1=(pin corresponding to KeyPair1); When the UA accepts this PKP header, effectively the attacker is able to set a new backup pin. The attacker can set a new PKP header in a subsequent connection. He can use KeyPair1 in the TLS handshake. Public-Key-Pins: max-age=(maximum possible value); pin-sha1=(pin corresponding to KeyPair1); pin-sha1=(pin corresponding to KeyPair2); Now the UA sets both pins to be the attacker's pins and the valid host is permanently blocked from connecting to the UA. To avoid this attack, shouldn't we mandate revocation checking when PKP headers are accepted? Thanks, Vinod.

This attack exists even if you're not in possession of the server's private key - you may have obtained a fraudulently issued certificate for a site that wasn't using HPKP already, an attack that's been discussed at some length so far. If I understand your proposal correctly, you would see UAs establish an SSL/TLS connection, send a request (which may include sensitive data that is otherwise desired to be protected - such as cookies), and then on receipt of the request, when the server sends back an HPKP header, perform a secondary revocation check? 1) What if the user has explicitly disabled revocation checking? 2) I'm assuming you mean revocation checking in hard(est) fail(ure) mode, since anything short of that would be pointless, given all the attacks that trivially exist. If we think about the risks/benefits carefully - from the side of the attacker, the site operator who deploys HPKP, the site operator who doesn't, and the user - I think the solution is much less compelling.
From the attacker:
- Strengths:
  * The attacker can DoS a site by deploying a malicious HPKP header
- Weaknesses:
  * Deploying an HPKP header would force revocation checking, potentially revealing their attack. However, by simply *not deploying an HPKP header*, their attack MAY not be noticed (even by clients who DO revocation checking)
  * It requires some form of privileged connection/MITM to deploy. An attacker with such a position can equally DoS a site at lower layers in the protocol.
- Questions:
  * Whether the window of time that an attacker can DoS a site via HPKP is less than, equal to, or greater than the window of time that an attacker can DoS a site via a lower-level protocol. This is the ongoing max-age discussion.

From the site operator who deploys HPKP:
- Strengths:
  * None in particular over HPKP
- Weaknesses:
  * Users will pay significant additional latency costs when connecting to a site, and a non-trivial
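For concreteness, the pin values sketched in the headers above are computed by hashing the key's SubjectPublicKeyInfo and base64-encoding the digest. A minimal sketch, using sha256 (the thread's examples use pin-sha1, but the algorithm is parametric here); the directive order in the assembled header is an illustrative choice, not normative:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes, alg: str = "sha256") -> str:
    # An HPKP-style pin: base64 of the hash of the key's
    # SubjectPublicKeyInfo DER encoding.
    return base64.b64encode(hashlib.new(alg, spki_der).digest()).decode("ascii")

def pkp_header(pins, max_age):
    # Assemble a Public-Key-Pins header value from precomputed pins.
    directives = [f"max-age={max_age}"]
    directives += [f'pin-sha256="{p}"' for p in pins]
    return "Public-Key-Pins: " + "; ".join(directives)
```

Because the pin covers only the public key, an attacker who can make the UA note a header (as in the scenario above) controls which keys the UA will subsequently accept, independent of any certificate.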
[websec] Issue 53 - Key pinning should clarify status of pin validation with private trust anchors
This was raised during our discussions in Atlanta, and continued on the mailing list under http://trac.tools.ietf.org/wg/websec/trac/ticket/53 As discussed during Atlanta, the way that pinning is currently implemented within Google Chrome, pinning is only enforced as it relates to so-called public trust anchors (eg: those shipped by default as part of a browser or OS installation, not those installed by a user). The motivation for this was and is simple: if you have sufficient local privilege to install additional trust anchors on a device, then it's presumed you have sufficient privilege to take any number of other actions, including disabling pinning enforcement entirely. As such, having the UA disable enforcement selectively is strictly less bad, from a security perspective, than having the UA disable enforcement entirely, and still provides significant benefit by reducing the risk associated with public trust anchors. However, this creates an interesting split between the specification language and implementations. In draft-04, we've tried to clarify this through the text in Section 2.6, http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-2.6 , along with the addition of a strict mode, as described in Section 2.1.4, http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-2.1.4 While there is an open question as to whether or not such user-agent behaviour is appropriate to specify here, does the group feel the proposed text sufficiently addresses the issue as originally raised?
[websec] Issue 55 - Key pinning should clarify that the newest pinning information takes precedence
This was raised with http://trac.tools.ietf.org/wg/websec/trac/ticket/55 , as a concern about the interaction between static/preloaded pin entries and those that were noted later (eg: through observation of the header).

In draft-04, Section 2.7 http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-2.7 addresses this by indicating that UAs MUST use the newest information available. Note: this does not normatively describe how a UA must determine "newest". The assumption here is that this represents the newest observed, but the question is whether or not that should be explicitly specified.

For example, imagine a UA that supports automatic updates to Preloaded Pin Lists. One interpretation of "newest information" would be to look at the most recent update to the preloaded pin list as a whole. Another interpretation of "newest information" may be to look at the date/time that the entry was originally added/updated in the preloaded pin list.

Imagine this UA had a preload for Site 1 to a set of pins, with the preload created at T=0. Later, at T=5, it observes a Max-Age of 0, effectively unpinning the host. At T=10, the UA vendor ships a new update to the preloaded pin list that adds Site 2, but which has not been updated to unpin Site 1. Under the first interpretation of "newest information", Site 1 would be reactivated, by virtue of observing an update to the preloaded pin list. Under the second interpretation, Site 1 would remain unpinned.

The authors' belief is that the issues that arise from either interpretation are artifacts of the implementation and distribution of preloaded pins, rather than an issue intrinsic to this specification. That is, the correct answer is that the preloaded pin list should be updated for Site 1 - however that information is distributed between the site operator and the creator of the preloaded pin list.

Are there concerns with this interpretation, or can we close out Issue 55?
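The T=0/T=5/T=10 scenario can be made concrete under the second interpretation above (each entry carries its own timestamp, and the most recent entry for a host wins). This is a hypothetical sketch; the data shapes and the `effective_pins` helper are invented for illustration, not taken from the draft.

```python
# "Newest information wins" (Section 2.7), under the per-entry-timestamp
# interpretation: preloaded and observed entries are merged, and for each
# host the entry with the latest timestamp governs. Pins of None models an
# observed max-age=0, i.e. "unpinned".

def effective_pins(preloaded, observed):
    """preloaded/observed: dicts of host -> (timestamp, pins-or-None)."""
    result = {}
    for host in set(preloaded) | set(observed):
        candidates = [d[host] for d in (preloaded, observed) if host in d]
        ts, pins = max(candidates, key=lambda c: c[0])  # newest entry wins
        if pins is not None:
            result[host] = pins
    return result

# Site 1 preloaded at T=0, unpinned by an observed max-age=0 at T=5.
# A list update at T=10 adds Site 2 but does not re-date Site 1's entry.
preloaded = {"site1": (0, {"pinA"}), "site2": (10, {"pinB"})}
observed = {"site1": (5, None)}
assert effective_pins(preloaded, observed) == {"site2": {"pinB"}}
```

Under the first interpretation (the whole list re-dated to T=10), Site 1's preload would outrank the T=5 observation and the pin would come back, which is exactly the divergence the issue describes.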
[websec] Issue 56 - specify includeSubDomains for key pinning
With Key Pinning being split out from HTTP Strict Transport Security, one aspect that was lost was the includeSubDomains directive. This was raised as Issue 56 - http://trac.tools.ietf.org/wg/websec/trac/ticket/56 - against draft-03. draft-04 introduces the same directive, with the same semantics, in Section 2.1.2 - http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-2.1.2 Is the added language acceptable? Are there any concerns with the validation/processing model that would prevent us from closing out this issue?
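For readers following along, a Public-Key-Pins header carrying the includeSubDomains directive might look like the illustrative value below; the pin values are placeholders, and this loose parser is a sketch written for this note, not a conforming implementation of the draft's grammar.

```python
# Rough tokenizer for a Public-Key-Pins header value: semicolon-separated
# directives, pin-<alg>="<base64>" entries collected separately, and
# valueless directives (like includeSubDomains) recorded as flags.

def parse_pkp(value):
    directives = {}
    pins = []
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, val = part.partition("=")  # split at first "=" only
        name = name.strip().lower()
        val = val.strip().strip('"')
        if name.startswith("pin-"):
            pins.append((name[4:], val))
        else:
            directives[name] = val if val else True
    return pins, directives

header = ('pin-sha256="base64+primary=="; pin-sha256="base64+backup=="; '
          'max-age=2592000; includeSubDomains')
pins, directives = parse_pkp(header)
assert pins == [("sha256", "base64+primary=="), ("sha256", "base64+backup==")]
assert directives["max-age"] == "2592000"
assert directives["includesubdomains"] is True
```

The semantics inherited from HSTS are the interesting part: when the flag is present, the pinning policy applies to the host and every subdomain beneath it.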
[websec] Issue 54 - Adding a report-only mode
As discussed during Atlanta, and as raised in http://trac.tools.ietf.org/wg/websec/trac/ticket/54 , there's a strong desire for a Content Security Policy-like report and report-only mode. The use of a report mode is not as an attack mitigation, but as a way for sites to be informed of misconfigurations. The use of a report-only mode is as a way to allow sites to experiment with and deploy a pinning policy effectively. Given that pinning is ultimately dependent on client trust and PKI policies, it's important for site operators to be able to ensure their proposed pinning policy will work effectively.

To that end, draft-04 has introduced the report-uri directive, Section 2.1.3, http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-2.1.3 , which allows a site to specify a URL to direct reports to, as described in Section 3 - http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-3

In addition, and in the spirit of CSP, we'd like to propose the addition of a Public-Key-Pins-Report-Only header, as described in Section 2.1 http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04#section-2.1 - as a complement to the Public-Key-Pins header. This header would follow the same syntax and semantics as the Public-Key-Pins header, with the exception of not actually enforcing the pins (as described in Section 2.6).

I'd like to solicit feedback and make sure that both the discussions from Atlanta and from the list have been accurately captured. Are there concerns with a Report-Only mode that have not been accurately captured?
Re: [websec] #54: Specify a report-only mode
On Thu, October 18, 2012 5:17 pm, Chris Palmer wrote: On Thu, Oct 18, 2012 at 4:56 PM, websec issue tracker trac+web...@trac.tools.ietf.org wrote: #54: Specify a report-only mode Should there be a report-only mode, allowing site operators to see how using HPKP would affect their site's operation in browsers supporting HPKP? (Probably.) If so, specify how that mode would work. What are people's thoughts on this?

The motivation for a report-only mode is twofold: (1) site operators want to see what would happen before going live with pinning; and (2) site operators often don't know all their keys, or all their intermediate signers' keys, or all their trust anchors' keys, and a reporting mode could help them find out. (2) implies that the reporting interface would have to allow the UA to tell the site not just that pin validation succeeded/failed, but also why (probably by simply reporting the entire validated certificate chain that the UA computed/observed).

The reporting interface must be one that is easy for site operators to implement - writing code to collect the reports should not be a huge burden for developers. Perhaps a simple JSON blob:

{
  "pin-validation-succeeded": (true|false),
  "expected-pins": [ "sha1/blahblah", "sha256/foobar", ... ],
  "validated-chain": [ PEM blob of EE, PEM blob of intermediate, ..., PEM blob of anchor ]
}

The next issue is: should the site be able to specify a URL to which the UA will POST the JSON blob, or should we specify a single, well-known URL path? Using a well-known path seems simpler and less error-prone generally.

To wit, this was also brought up by several people during the IETF 84 presentation. As an example of how such behaviour might be specified, a similar report-only mode is implemented with Content Security Policy [1]. In CSP, both the Content-Security-Policy and the Content-Security-Policy-Report-Only headers may be present, both are considered, and both may contain independent/conflicting policies.
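Assembling the proposed blob on the UA side would be straightforward. The sketch below uses the field names from the proposal in this message; they were never a finalized format, and `build_report` is an invented helper.

```python
import json

# Sketch of a UA assembling the JSON violation report proposed above.
# Field names follow the in-thread proposal, not a published spec.

def build_report(succeeded, expected_pins, validated_chain_pems):
    return json.dumps({
        "pin-validation-succeeded": succeeded,
        "expected-pins": expected_pins,
        "validated-chain": validated_chain_pems,
    })

report = build_report(
    False,
    ["sha1/blahblah", "sha256/foobar"],
    ["-----BEGIN CERTIFICATE-----\n...EE...\n-----END CERTIFICATE-----",
     "-----BEGIN CERTIFICATE-----\n...anchor...\n-----END CERTIFICATE-----"])
parsed = json.loads(report)
assert parsed["pin-validation-succeeded"] is False
assert parsed["expected-pins"] == ["sha1/blahblah", "sha256/foobar"]
```

Including the full validated chain is what serves motivation (2): the report tells the operator not just that validation failed, but which chain the UA actually saw.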
In CSP, one of the tokens/directives is report-uri [2], which indicates the URI to send violation reports to, along with a defined format for what the contents of the report should contain.

The distinction here between PKP and CSP is that CSP is evaluated per-resource load within a web origin, whereas PKP is enforced at the transport layer. A PKP violation is nominally detected during the TLS handshake, and depending on client implementation, may either be treated as an error during the TLS handshake or immediately following it, prior to using the connection to transmit the (original) HTTPS request.

If a report-only mode is supported, the following considerations should be worked out:
- Should the origin of the report URI be constrained to the origin of the target URI?
- Should the report URI be allowed to specify HTTPS?
- If the report URI specifies HTTPS, and the report URI origin is the same as the target URI, but the report URI violates either the PKP or PKP-Report-Only policy, should the report still be submitted?
- Is there a blacklist or whitelist of headers that should be used to protect against abuse or compromise? For example, presumably including cookies in the report submission for an invalid PKP over the same connection that generated the invalid PKP would be bad, as it may (will) lead to the compromise of the users' data.
- If the report contains validated certificates, what should the format be? draft-josefsson-pkix-textual [3] may be of normative use here.

[1] http://www.w3.org/TR/CSP/#content-security-policy-report-only-header-field , with the expired, less-comprehensive draft submitted to IETF at http://tools.ietf.org/html/draft-gondrom-websec-csp-header-00#section-4.2
[2] http://www.w3.org/TR/CSP/#report-uri
[3] http://tools.ietf.org/html/draft-josefsson-pkix-textual
Re: [websec] Fwd: New Version Notification for draft-ietf-websec-key-pinning-03.txt
On Wed, October 17, 2012 7:23 am, Tom Ritter wrote: Section 2.4: Validating Pinned Connections: "For the purposes of Pin Validation, the UA MUST ignore certificates whose SPKI cannot be taken in isolation and superfluous certificates in the chain that do not form part of the validating chain." I know I just modified this, but the second phrase just hit me. Because path construction is non-standard, could a client wind up in a situation where the site pinned to CA_Alice, with the intended path CA_Alice - CA_Bob - Intermediate - Leaf; but because CA_Bob was trusted, the ultimate validating chain was simply CA_Bob - Intermediate - Leaf? I'm not sure what the right way to counteract that would be...

Yes. This is one of the troubling areas of the current draft, and one that has been particularly challenging when implementing within Chromium.

It's not clear whether your case was demonstrative of an inter-organizational cross-signing relationship (a new CA getting cross-certified by an existing, established CA) or an intra-organizational relationship, such as a SHA-1 root cross-signing the SHA-2 root until such time as the SHA-2 root is included in the trust stores.

In the inter-organizational case, where CA_Alice and CA_Bob are two distinct entities, it's generally seen as unlikely that the site would pin to CA_Alice, since their business relationship is almost certainly with CA_Bob (by virtue of CA_Bob having certified Intermediate). So if the site were to pin to CA_Bob, their issue would be mitigated. However, for the intra-organizational scenario, where intermediates and roots are changed or re-issued more often than site operators may realize or be reconfiguring for, I agree that this is a very real problem.
The mitigation for this has been to make the pins an appropriate cross-section of both roots and intermediates - intermediates in the event of new roots being issued, and roots in the event of the CA issuing new intermediates (and serving them up via AIA, as is the common case).

This leaves the broader question of "How does the site operator know about CA_Alice and CA_Bob to begin with?" One possible solution for this is a report-but-unenforced mode, where user agents could describe their observed chains to the site. As unseemly as this is, it's very likely that many site operators - even Very Large, High Value sites - may not have a full understanding of the PKI that they're a participant in.

Another solution is to rely on policy changes in root stores, such as Mozilla's recently proposed CA Certificate Store requirements change, which would encourage (by requiring, with only one acceptable alternative) the public disclosure of such CA hierarchies. As a result of such changes, there would be knowledge of the relationship between CA_Alice and CA_Bob, which under today's model is actually quite hard for site operators to discover.

Because pinning permits multiple entries, and is considered satisfied so long as at least one SPKI appears in the client-validated chain, the issue here is largely not a technical challenge but an operational one, and that's why this issue is not critical, but remains very important.
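The "at least one SPKI in the client-validated chain" check that makes the cross-section mitigation work can be sketched as follows. This is an illustrative sketch only: the chain is modeled as raw SPKI byte strings rather than parsed certificates, and the fingerprint encoding is an assumption for the example.

```python
import base64
import hashlib

# Sketch of pin validation: a pin set is satisfied if ANY element of the
# client-validated chain has an SPKI whose hash appears among the pins.
# Chains are modeled as lists of DER SPKI byte strings for simplicity.

def spki_fingerprint(spki_der):
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pins_satisfied(pinned_fingerprints, chain_spkis):
    # Pinning a cross-section of roots AND intermediates, as described
    # above, means one match suffices even if path building picks an
    # unexpected (e.g. cross-signed) chain.
    return any(spki_fingerprint(s) in pinned_fingerprints for s in chain_spkis)

chain = [b"leaf-spki", b"intermediate-spki", b"root-spki"]
pins = {spki_fingerprint(b"intermediate-spki"),
        spki_fingerprint(b"retired-root-spki")}
assert pins_satisfied(pins, chain)          # intermediate pin still matches
assert not pins_satisfied({spki_fingerprint(b"unrelated")}, chain)
```

This is why the issue is operational rather than technical: the mechanism tolerates chain variation, provided the operator knew enough of the hierarchy to pin the right cross-section in the first place.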
Re: [websec] Fwd: New Version Notification for draft-ietf-websec-key-pinning-03.txt
On Wed, October 17, 2012 10:04 am, Carl Wallace wrote: On 10/17/12 12:36 PM, Ryan Sleevi ryan-ietfhas...@sleevi.com wrote: large snip This leaves the broader question of "How does the site operator know about CA_Alice and CA_Bob to begin with?" One possible solution for this is a report-but-unenforced mode, where user agents could describe their observed chains to the site. As unseemly as this is, it's very likely that many site operators - even Very Large, High Value sites - may not have a full understanding of the PKI that they're a participant in.

A tool that builds all possible paths, which the site operator can run without involving any users, would be good too. The site operator mainly needs to know where its certificate chains against public stuff, and could check that independently. This should come close to relegating the user report tool to oddball instances.

The problem in general with such tools (and I'm sure this will be a talking point during the WPKOPS and CERTTRANS BOFs) is that it's actually a rather hard problem to know exactly what certificates exist. Beyond issues such as "what user agents have what behaviour" and "what intermediates are baked into what platforms", one of the more problematic areas is related to the AIA chasing that is functionally and fundamentally necessary to maintain parity with, and access to, a depressingly vast majority of sites.

When the site operator installs their cert, they may only install (Site_Cert, Intermediate) on their HTTPS server. This is because CA_Bob and the site operator are assuming that CA_Bob can be omitted, since it's the root of trust. For clients that don't trust CA_Bob, Intermediate has an AIA pointing to a URL that contains CA_Bob, which is cross-certified by CA_Alice.
So the possible chains are:

- Site_Cert (via TLS) - Intermediate (via TLS) - CA_Bob (obtained out-of-band)
- Site_Cert (via TLS) - Intermediate (via TLS) - CA_Bob (obtained via AIA) - CA_Alice (obtained out-of-band)

Now what if CA_Bob goes out of business or is acquired? Well, one common trick is to update the URL of Intermediate to point to a new version, signed by CA_Charles. When path building for Intermediate (via TLS) fails, user agents may unwind their depth search and attempt the AIA fetch from Site_Cert, leading to a new chain:

- Site_Cert (via TLS) - Intermediate (via AIA) - CA_Charles (obtained out-of-band)

Or CA_Bob may decide they want to sever ties with CA_Alice and go with CA_Charlene. They simply replace the AIA URL used to serve up the cross-certified CA_Bob.

This is all well and good for point-in-time evaluation - but the added crux is that the site operator may be noticing all these shenanigans, and now explicitly configure their server to send a particular chain they believe is accurate/optimal. Also, clients may be caching intermediates as well. So it's Very Hard to actually know what *all* possible paths are, because the possible paths change over time, and some paths may no longer be knowable by such a tool when the certificate is being evaluated, because the old certificates served at URL X have been replaced with newer ones.

You may think I'm just grasping at edge cases, but unfortunately every situation I just described above has been done by major CAs, including some that might be considered of the "too big to fail" variety. So it's definitely an issue, and that's why some form of client reporting seems necessary (with transparency + disclosure to make it easier to write such tools in the future).
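A toy model makes it easy to see why "all possible paths" is a moving target. The issuer and AIA maps below are invented to mirror the CA_Bob / CA_Alice story above; real path builders are far more involved (caching, loop detection, scoring), so treat this purely as a sketch.

```python
# Toy path builder: a chain can be extended either from certificates the
# server sent in the TLS handshake (issuers) or from whatever currently
# lives at an AIA URL (aia). A path terminates at a trusted anchor.

def build_paths(cert, issuers, aia, trusted, path=None):
    path = (path or []) + [cert]
    if cert in trusted:
        return [path]
    paths = []
    for parent in issuers.get(cert, []) + aia.get(cert, []):
        if parent not in path:  # avoid loops
            paths += build_paths(parent, issuers, aia, trusted, path)
    return paths

issuers = {"Site_Cert": ["Intermediate"]}                    # sent via TLS
aia = {"Intermediate": ["CA_Bob"], "CA_Bob": ["CA_Alice"]}   # fetched at eval time

# A client trusting CA_Bob stops there; a client trusting only CA_Alice
# must chase AIA one step further - same server config, different chains.
assert build_paths("Site_Cert", issuers, aia, {"CA_Bob"}) == \
    [["Site_Cert", "Intermediate", "CA_Bob"]]
assert build_paths("Site_Cert", issuers, aia, {"CA_Alice"}) == \
    [["Site_Cert", "Intermediate", "CA_Bob", "CA_Alice"]]
```

Swapping what the AIA URL serves (CA_Charles instead of CA_Bob, say) changes `aia` and thus the reachable chains, without the site operator touching anything - which is exactly why a point-in-time crawler cannot enumerate every path a client might ever build.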
Re: [websec] Fwd: New Version Notification for draft-ietf-websec-key-pinning-03.txt
On Wed, October 17, 2012 11:07 am, Carl Wallace wrote: snip I don't doubt any of this, but still think a crawler tool would be sufficient in many (if not most) cases. It'd probably be instructive to run for the site operators in the worst case. Note, I did not suggest that a user report feature was not a good or necessary thing, only that a builder tool for the site operator to run without bothering any users would be nice to have in the toolbox too. Thinking about it, they may well be the same tool if the user reporting tool is aggressive enough.

Ah, I misunderstood your point to be suggesting that a crawler would be sufficient, and reporting would be unnecessary. I'm a big fan of the crawler - both for purposes of pinning and for purposes of generally understanding the nature of the web PKI. Public datasets such as the EFF SSL Observatory data [1], along with private datasets such as those that inform tools like Qualys' SSL Labs [2] or Google's Certificate Catalog [3], would go a good way toward establishing what the known-possible certificate hierarchies are. I just think there will be a tail of oddities and legacy that are only picked up by reporting.

[1] https://www.eff.org/observatory
[2] https://www.ssllabs.com/
[3] http://googleonlinesecurity.blogspot.com/2011/04/improving-ssl-certificate-security.html
[websec] Strict-Transport-Security syntax redux
Following Julian Reschke's questions, I also had a few questions related to the draft-02 syntax. I've been basing my understanding on RFC 5234, which I understand to be the most current/relevant to understanding the syntax.

First, maxAge and includeSubDomains both include value extension points, defined as:

maxAge = "max-age" OWS "=" OWS delta-seconds [ OWS v-ext ]
includeSubDomains = "includeSubDomains" [ OWS v-ext ]
v-ext = value ; STS extension value

However, the rule for OWS is specified as:

OWS = *( [ CRLF ] WSP )

As written, it would seem that the OWS between either delta-seconds or "includeSubDomains" and the v-ext can legitimately be omitted. As best I can tell, this would mean that the following values would still be valid:

max-age=123.456
includeSubDomainsabc

In the first case ".456" is the v-ext, while in the second, "abc" is. In both cases, because the OWS is truly optional (valid to have 0 occurrences), only the v-ext is present.

For maxAge in particular, this can lead to very silly parsing interpretations. Consider the following value:

max-age=123

If I'm not mistaken, ABNF doesn't specify any sort of greedy matching, so this could be interpreted as:

delta-seconds = "1" (1DIGIT), v-ext = "23" (0 OWS, 2 tchar)
delta-seconds = "12" (2DIGIT), v-ext = "3" (0 OWS, 1 tchar)

To remedy this, I believe some form of explicit delimiter between the existing values and the v-ext should be defined for the existing headers. If the intent was to have extension values whitespace-separated, then would the following modification to introduce required whitespace solve it?

maxAge = "max-age" OWS "=" OWS delta-seconds [ RWS v-ext ]
includeSubDomains = "includeSubDomains" [ RWS v-ext ]
v-ext = value ; STS extension value
OWS = *( [ CRLF ] WSP )
RWS = 1*( [ CRLF ] WSP )

Alternatively, can/should the v-ext be dropped entirely, and extensions to the STS header be accomplished via defining a new STS-d-ext?
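The ambiguity for "max-age=123" is easy to enumerate mechanically: since DIGIT is also a valid tchar and the OWS may match zero characters, every split of the digit string between delta-seconds and v-ext is a legal parse. The helper below is a toy written for this note.

```python
# Enumerate the parses of the digits in "max-age=<digits>" permitted by the
# draft-02 grammar: delta-seconds takes 1..N leading digits, and any
# remainder is a valid v-ext (digits are tchars, and OWS may be empty).

def all_parses(digits):
    parses = [(digits, None)]  # the "intended" parse: no v-ext at all
    for i in range(1, len(digits)):
        parses.append((digits[:i], digits[i:]))  # (delta-seconds, v-ext)
    return parses

assert all_parses("123") == [("123", None), ("1", "23"), ("12", "3")]
```

Requiring RWS before v-ext, as proposed above, collapses this to the single intended parse, since a v-ext could then never directly abut the digits.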
Second, as currently specified, extension directives take the form of:

STS-d = STS-d-cur / STS-d-ext
STS-d-ext = name ; STS extension directive
name = token
token = 1*tchar
tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA

As currently written, STS-d-ext doesn't seem to be able to contain name = value pairs such as those used by Evans' and Palmer's pinning draft, because "=" is not a valid token character. Likewise, it wouldn't be able to contain #rule-style lists, because comma is not a valid tchar.

Should STS-d-ext be defined as value instead, so that it can contain a wider range of characters? Alternatively, if the intent for STS-d-ext was that it would always include some name, should it be defined as name [ OWS "=" OWS value ]? Either way seems to offer a solution.

It seems that if a parser were written based on the current draft, it would be correct/valid to reject an STS header that included cert pins, as specified in the current pinning draft. This would be because the cert pinning draft makes use of "=" within the extension definitions, which is not a legal character for an STS-d-ext in the base spec.

Best regards, Ryan
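The "=" problem can be checked directly against the tchar set above. The regex below transcribes that character class; the example directive names are illustrative.

```python
import re

# A token per the draft is 1*tchar. With this alphabet, any directive
# containing "=" (or ",") cannot be a single valid STS-d-ext token.
TCHAR = r"[!#$%&'*+.^_`|~0-9A-Za-z-]"
token = re.compile(TCHAR + r"+$")

assert token.match("includeSubDomains")           # plain name: a valid token
assert not token.match("pin-sha1=abc")            # "=" is not a tchar
assert not token.match("max-age=60,max-age=120")  # neither is ","
```

So a strictly conforming draft-02 parser really would be entitled to reject an STS header carrying pin directives, which is the interoperability hazard flagged above.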