Re: LC comments on draft-laurie-pki-sunlight-05 - "acceptable root certificates" ?

2013-01-28 Thread =JeffH

> Apologies for responding to recent comments in random order: I'm
> travelling and have accumulated something of a backlog.

no worries :)

thx again for your thoughts.


BenL replied:
> On 22 January 2013 03:11, =JeffH  wrote:


>>>  - is there a
>>> standard reference for that? I've referenced HTML 4.01, but perhaps
>>> there's a better one?
>>
>> hm, AFAICT, there is not a standard for URI query component formatting and
>> thus parameter encoding, so this spec will have to explicitly specify
>> something. Section 3.4 of RFC3986 gives allowed chars for the query
>> component, but that's about it.
>>
>> Have you mocked up code that parses the log client messages? If so, what
>> query component syntax does it handle?
>
> I have specified the "standard" format via HTML 4.01.

ok, i assume you're referring to section 17.13 "form submission" of HTML 4.01, 
and using the  application/x-www-form-urlencoded  content type, with the 
parameters appended to the URL and encoded according to S17.13.4?
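fwiw, a tiny sketch of what that S17.13.4-style encoding would look like for a 
log client GET, assuming the spec adopts it (the parameter names "hash" and 
"tree_size" here are my guesses for illustration, not taken from the draft):

```python
# Illustrative only: query parameters encoded per HTML 4.01 S17.13.4
# (application/x-www-form-urlencoded) and appended to a hypothetical log URL.
from urllib.parse import urlencode, parse_qs

params = {"hash": "3q2+7w==", "tree_size": 100}
query = urlencode(params)  # percent-encodes keys/values, joins with '&'
url = "https://log.example.com/ct/v1/get-proof-by-hash?" + query

# A log server would parse it back as order-independent key-value pairs:
parsed = parse_qs(query)
assert parsed["hash"] == ["3q2+7w=="]
assert parsed["tree_size"] == ["100"]
```

i.e., '&'-separated key=value pairs with percent-encoding, which is exactly the 
sort of detail the spec needs to pin down explicitly.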




>
>>>> 3. There appear to be three defined methods for TLS servers to provide
>>>> TLS
>>>> clients with CT data, in S3.2.  For this experiment, which approach is
>>>> mandatory to implement for servers and clients?  Or, is it the case that
>>>> participating TLS clients (ie web browsers etc) implement all three
>>>> methods,
>>>> and TLS servers can choose any of them?
>>>
>>> The latter.
>>
>> That should be made very clear. Is the reason for doing so to obtain
>> operational experience wrt the three defined methods such that they perhaps
>> can be narrowed down in the future, or is the expectation that TLS-CT
>> clients will need to support all three methods in perpetuity?
>
> I think I made that clear.
>
> The reasons three methods exist are as follows (I don't intend to get
> into this in the RFC, but for your edification):
>
> 1. TLS extension is the right way to do it, but requires a server s/w
> change - this adds many years to full deployment.
>
> 2. So, alternatives must be provided. One is to put the stuff in the
> certificate, but...
>
> 3. ... some CAs have said they'd rather not gate issuance on the log,
> so alternatively, wedge the stuff in an OCSP response (which must be
> stapled - servers exist that support this option already).

ok, thx for elucidation.



>>>> 6. Signed tree heads (STHs) are denoted in terms of "tree size" (number
>>>> of
>>>> entries), but SCTs are denoted in terms of a timestamp.  Should there be
>>>> a
>>>> log client message supporting the return of the nearest STH (and thus
>>>> tree
>>>> size) to a given timestamp?
>>>
>>> I'm not sure why? Any STH (that includes that SCT) will do.
>>
>> Hm, it was sort of a gut feel that it might be useful, but perhaps not.
>>
>> S5.2. Auditor says..
>>
>>A certificate accompanied by an SCT can be verified against any STH
>>dated after the SCT timestamp + the Maximum Merge Delay by requesting
>>a Merkle Audit Proof using Section 4.5.
>>
>> S4.5 get-proof-by-hash stipulates tree_size as an input, but
>> if a log auditor doesn't already have tree_size, then I suppose it first
>> calls S4.3 get-sth, which will return a timestamp and a tree_size, which,
>> if generated at least the maximum merge delay (MMD) after the SCT was
>> generated, ought to be sufficient, yes?
>
> Yes.
>
>> I don't see in the spec where/how MMD is published.  Does MMD vary per log
>> service?  The latter isn't stipulated in the spec it seems AFAICT ?
>
> We have not really figured out how MMD is specified. I suspect it is
> something that will be agreed between browser vendors and logs.

ok, tho specifying MMD as an operational value agreed between browser 
vendors and logs is different from whatever mechanism a log service uses to 
"publish" its chosen MMD value, and log monitors/auditors will want to get the 
MMD value(s), yes?
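to make the audit flow discussed above concrete, here's a rough sketch of the 
check an auditor would do (field names and the 24h MMD are my assumptions, not 
from the spec):

```python
# Sketch only: an STH dated at or after SCT.timestamp + MMD should cover the
# logged certificate, so its tree_size can seed get-proof-by-hash (S4.5).
def sth_usable_for_audit(sth_timestamp_ms: int,
                         sct_timestamp_ms: int,
                         mmd_ms: int) -> bool:
    """True if the STH is dated after the SCT timestamp plus the MMD."""
    return sth_timestamp_ms >= sct_timestamp_ms + mmd_ms

MMD_MS = 24 * 60 * 60 * 1000  # hypothetical 24-hour maximum merge delay

# An STH from a full MMD after the SCT suffices; an earlier one may not.
assert sth_usable_for_audit(1000 + MMD_MS, 1000, MMD_MS)
assert not sth_usable_for_audit(1000 + MMD_MS - 1, 1000, MMD_MS)
```

..which is why the auditor needs to learn the log's MMD value somehow.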



>>>> 7. S3 paragraph 2 states that "TLS clients MUST reject certificates that
>>>> do
>>>> not have a valid SCT for the end-entity certificate" (i.e., hard-fail).
>>>> Presumably this requirement is only for TLS clients participating in the
>>>> CT
>>>> experiment and that understand this protocol.
>>>
>>> Of course - what other way could it be? In other words, all RFCs can
>>> only say what implementations that conform with them do.
>>>

Re: LC comments on draft-laurie-pki-sunlight-05 - "acceptable root certificates" ?

2013-01-22 Thread =JeffH



>>> 3.1. Log Entries
>>>
>>>Anyone can submit a certificate to any log.  In order to enable
>>>attribution of each logged certificate to its issuer, the log SHALL
>>>publish a list of acceptable root certificates (this list might
>>>usefully be the union of root certificates trusted by major browser
>>>vendors).  Each submitted certificate MUST be accompanied by all
>>>additional certificates required to verify the certificate chain up
>>>to an accepted root certificate.  The root certificate itself MAY be
>>>omitted from this list.

a question I neglected to add here is: how do log services publish their lists 
of "acceptable root certificates" ?



=JeffH




Re: LC comments on draft-laurie-pki-sunlight-05

2013-01-21 Thread =JeffH

apologies for latency, many meetings and a conference in the last couple of 
weeks.

BenL replied:
> On 1 January 2013 21:50, =JeffH  wrote:

[ in the below discussion:

 "the spec", "this spec" refers to draft-laurie-pki-sunlight-05.

"TLS-CT client"  refers to a TLS client capable of processing CT information 
that is included in the TLS handshake in any of the specified manners.


"ok" means in general: "ok, will check this in next rev of the spec..".

]


>>
>> comments on draft-laurie-pki-sunlight-05
>>
>> substantive comments (in somewhat arbitrary order)
>> --
>>

[ I demoted the comments wrt "JSON object" terminology and put them down at the 
end of this msg ]



>> Also, the syntax for GETS isn't fully specified. Are the URL parameters to
>> be encoded as (order independent) key-value pairs, or just as
>> order-dependent values?  Which separator character is to be used between
>> parameters? RFC3986 should be cited.
>
> RFC 3986 says nothing about parameter format, though

correct, it doesn't, and I wasn't trying to imply that it did, sorry.  I was 
just trying to say that RFC3986 should be cited in this spec because this spec 
normatively employs URLs, but perhaps referencing RFC2616 HTTP is better because 
it defines "http_URL".


>  - is there a
> standard reference for that? I've referenced HTML 4.01, but perhaps
> there's a better one?

hm, AFAICT, there is not a standard for URI query component formatting and thus 
parameter encoding, so this spec will have to explicitly specify something. 
Section 3.4 of RFC3986 gives allowed chars for the query component, but that's 
about it.


Have you mocked up code that parses the log client messages? If so, what query 
component syntax does it handle?




>> 2. "4. Client Messages" doesn't define error handling, i.e., responses to
>> inputs the log service doesn't understand and/or is unable to parse, and/or
>> have other errors. If the log service is to simply return a 4xx or 5xx error
>> code, this should at least be mentioned.
>
> For now, I will specify 4xx/5xx. We may have more to say once we've
> gained some experience.

Ok.


>> 3. There appear to be three defined methods for TLS servers to provide TLS
>> clients with CT data, in S3.2.  For this experiment, which approach is
>> mandatory to implement for servers and clients?  Or, is it the case that
>> participating TLS clients (ie web browsers etc) implement all three methods,
>> and TLS servers can choose any of them?
>
> The latter.

That should be made very clear. Is the reason for doing so to obtain operational 
experience wrt the three defined methods such that they perhaps can be narrowed 
down in the future, or is the expectation that TLS-CT clients will need to 
support all three methods in perpetuity?


>> 4. "Leaf Hash" as used in S4.5 appears to be formally undefined. It
>> apparently would be:
>>
>>   SHA-256(0x00 || MerkleTreeLeaf)
>>
>> ..it should also be noted in S3.3.
>
> You are right.

:)


>> 5. The recursive equations in S2.1 describe how to calculate a Merkle Tree
>> Hash (MTH) (aka "root hash"), and thus as a side effect generate a Merkle
>> Tree, for a given set of input data. However, there doesn't seem to be a
>> defined algorithm (or even hints, really) for adding further inputs to an
>> existing tree. Even though this may be reasonably left as an exercise for
>> implementers, it should probably be discussed to some degree in the spec.
>> E.g., note that leaf hashes are "frozen" and various interior tree node
>> hashes become "frozen" as the tree grows. Is it not sub-optimal to employ
>> the obvious default brute-force mechanism of rebuilding a tree entirely from
>> scratch when new inputs are available?  Would not a recursive algorithm for
>> adding new inputs to an existing tree be straightforward to provide?
>
> I dunno about straightforward.

yeah, agreed.


> I'll think about it.

ok. It seems it would be useful to at least provide some hints.



>> 6. Signed tree heads (STHs) are denoted in terms of "tree size" (number of
>> entries), but SCTs are denoted in terms of a timestamp.  Should there be a
>> log client message supporting the return of the nearest STH (and thus tree
>> size) to a given timestamp?
>
> I'm not sure why? Any STH (that includes that SCT) will do.

Hm, it was sort of a gut feel that it might be useful, but perhaps not.

S5.2. Auditor says..

   A certificate accompanied by an SCT can be verified against any STH
   dated after the SCT timestamp + the Maximum Merge Delay by requesting
   a Merkle Audit Proof using Section 4.5.

LC comments on draft-laurie-pki-sunlight-05

2013-01-01 Thread =JeffH

Hi,

Here are some last call comments on draft-laurie-pki-sunlight-05.

Overall, the spec is in reasonably good shape, but I do have some substantive 
comments that, unless I'm totally misunderstanding things (which could be the 
case), ought to be discussed and addressed in some fashion.


The plain overall comments are to some degree "take 'em or leave 'em", 
depending upon folks' sense of urgency to get the spec through the IETF 
pipeline, though the degree likely depends upon the observer.


I hope this is helpful,

=JeffH
--

comments on draft-laurie-pki-sunlight-05

substantive comments (in somewhat arbitrary order)
--

1. The client messages S4 don't explicitly lay out the syntax for request 
messages or responses. E.g., for S4.1 "Add Chain to Log", is the input a 
stand-alone JSON text array, or a JSON text object containing a JSON text array?


The term "JSON object" as used in the first paragraph is ambiguous; perhaps 
what is meant is simply "JSON texts" or "JSON text objects or JSON text arrays". 
RFC4627 clearly defines "JSON text" and should be cited. But RFC4627 is a 
little ambiguous itself regarding "JSON object", so I suggest these definitions:


JSON text object:   A JSON text matching the "object" ABNF production
   in Section 2.2 of [RFC4627].

JSON text array:   A JSON text matching the "array" ABNF production
   in Section 2.3 of [RFC4627].

Also, the syntax for GETS isn't fully specified. Are the URL parameters to be 
encoded as (order independent) key-value pairs, or just as order-dependent 
values?  Which separator character is to be used between parameters? RFC3986 
should be cited.


Examples for both JSON text inputs and outputs, as well as URL parameters would 
be helpful.
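to illustrate the sort of example I have in mind (the member name "chain" and 
the base64 placeholder values below are hypothetical, not taken from the draft):

```python
# Purely illustrative: the ambiguity between a JSON text object wrapping an
# array vs. a bare JSON text array, for a hypothetical add-chain input.
import json

as_object = json.dumps({"chain": ["BASE64-EE-CERT", "BASE64-CA-CERT"]})
as_bare_array = json.dumps(["BASE64-EE-CERT", "BASE64-CA-CERT"])

# Both are valid "JSON texts" per RFC 4627, but a parser expecting one
# form will reject the other -- hence the need for explicit syntax.
assert json.loads(as_object) != json.loads(as_bare_array)
```

..the spec just needs to say which form each message uses.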



2. "4. Client Messages" doesn't define error handling, i.e., responses to inputs 
the log service doesn't understand and/or is unable to parse, and/or have other 
errors. If the log service is to simply return a 4xx or 5xx error code, this 
should at least be mentioned.



3. There appear to be three defined methods for TLS servers to provide TLS 
clients with CT data, in S3.2.  For this experiment, which approach is mandatory 
to implement for servers and clients?  Or, is it the case that participating TLS 
clients (ie web browsers etc) implement all three methods, and TLS servers can 
choose any of them?


Also, S3.2 probably doesn't belong in S3 and perhaps should be a separate 
top-level section on its own, and have three subsections, one for each method.



4. "Leaf Hash" as used in S4.5 appears to be formally undefined. It apparently 
would be:


  SHA-256(0x00 || MerkleTreeLeaf)

..it should also be noted in S3.3.
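fwiw, a tiny sketch of that definition, with a 0x00 domain-separation byte 
prepended to the serialized leaf (the leaf bytes here are just a placeholder):

```python
# Sketch of the leaf-hash definition above: SHA-256(0x00 || MerkleTreeLeaf).
import hashlib

def merkle_leaf_hash(serialized_leaf: bytes) -> bytes:
    """Hash a serialized MerkleTreeLeaf with the 0x00 leaf prefix."""
    return hashlib.sha256(b"\x00" + serialized_leaf).digest()

digest = merkle_leaf_hash(b"example-MerkleTreeLeaf-bytes")  # placeholder input
assert len(digest) == 32
```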


5. The recursive equations in S2.1 describe how to calculate a Merkle Tree Hash 
(MTH) (aka "root hash"), and thus as a side effect generate a Merkle Tree, for a 
given set of input data. However, there doesn't seem to be a defined algorithm 
(or even hints, really) for adding further inputs to an existing tree. Even 
though this may be reasonably left as an exercise for implementers, it should 
probably be discussed to some degree in the spec. E.g., note that leaf hashes 
are "frozen" and various interior tree node hashes become "frozen" as the tree 
grows. Is it not sub-optimal to employ the obvious default brute-force mechanism 
of rebuilding a tree entirely from scratch when new inputs are available?  Would 
not a recursive algorithm for adding new inputs to an existing tree be 
straightforward to provide?
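to make the "frozen subtree" observation concrete, here's a rough sketch of the 
S2.1 recursion as I read it (splitting at k, the largest power of two strictly 
less than n); an incremental implementation could cache the left-subtree hashes 
since they never change once their power-of-two prefix is full:

```python
# Sketch of the S2.1 Merkle Tree Hash recursion (my reading, not normative):
# MTH of one leaf is SHA-256(0x00 || leaf); otherwise split at k (largest
# power of two < n) and hash 0x01 || MTH(left) || MTH(right).
import hashlib

def mth(leaves: list[bytes]) -> bytes:
    if len(leaves) == 1:
        return hashlib.sha256(b"\x00" + leaves[0]).digest()
    k = 1
    while k * 2 < len(leaves):
        k *= 2
    return hashlib.sha256(b"\x01" + mth(leaves[:k]) + mth(leaves[k:])).digest()

# Appending a leaf changes the root, but the left (frozen) subtree over the
# first k leaves is unchanged and could be reused rather than recomputed.
leaves = [b"a", b"b", b"c"]
assert mth(leaves) != mth(leaves + [b"d"])
```

..so at minimum a hint along these lines would help implementers avoid the 
brute-force rebuild.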



6. Signed tree heads (STHs) are denoted in terms of "tree size" (number of 
entries), but SCTs are denoted in terms of a timestamp.  Should there be a log 
client message supporting the return of the nearest STH (and thus tree size) to 
a given timestamp?



7. S3 paragraph 2 states that "TLS clients MUST reject certificates that do not 
have a valid SCT for the end-entity certificate" (i.e., hard-fail).  Presumably 
this requirement is only for TLS clients participating in the CT experiment and 
that understand this protocol. This, or whatever the requirement actually is, 
should be further explained.


For example, does the simple presence of SCT(s) in the TLS handshake serve to 
signal to participating TLS clients that hard-fail is expected if there are any 
issues with CT validation?



8. The spec implies, but doesn't clearly describe, especially in S3.1, that the 
hashes are "labels" for tree entries, and that given a leaf hash, the log 
implementation should be able to look up and present the LogEntry data defined 
in that section.



9. Validating an SCT presumably requires having the Log Service's public key, 
yes?  This isn't clearly discussed,

Re: Gen-ART LC Review of draft-ietf-websec-strict-transport-sec-11

2012-08-10 Thread =JeffH

Hi,

I believe I've made the requisite changes to draft-ietf-websec-strict-transport-sec
per this thread as well as the WG f2f discussion @IETF-84 Vancouver and also my
f2f discussion with Ben after the websec wg meeting.

In the below I'm just going to try to identify the individual issues from Ben's
review without quoting all the thread discussion, but also highlight the changes
I have queued in my -12 working copy of draft-ietf-websec-strict-transport-sec.

If the below looks nominally OK I can submit my -12 working copy, please let me 
know (Alexey/Barry). (note: I'm going to be mostly offline for the next several 
days)


thanks,

=JeffH
--

Ben's "minor items":

M1) update 2818 ?

>> -- Does this draft update any other RFCs (e.g. 2616 or 2818)?

Maybe 2818, but based on the informational status of 2818, websec wg 
discussions, and discussions with Ben and EKR, there's no statement of updating 
2818 in my -12 working copy.



M2) non-conformant UAs ?

>>> -- I did not find any guidance on how to handle UAs that do not
>>> understand this extension. I don't know if this needs to be normative,
>>> but the draft should at least mention the possibility and implications.
>>
>> Agreed. My -12 working copy now contains these new subsections..
>>
>> 
>>
> That's all good text, but I'm not sure it actually captures my concern.
>
>  That is, the server can't merely select the extension and forget
> about things--it still needs to take the same care to avoid leaking
> resources over unprotected connections that it would need to do if this
> extension did not exist in the first place.
>
> I think this is implied by your last sentence above, but it would be better
> to say it explicitly.

I've added text to my -12 working copy, it now states...

###
14.1.  Non-Conformant User Agent Implications

   Non-conformant user agents ignore the Strict-Transport-Security
   header field, thus non-conformant user agents do not address the
   threats described in Section 2.3.1 "Threats Addressed".

   This means that the web application and its users wielding non-
   conformant UAs will be vulnerable to both:

   o  Passive network attacks due to web site development and deployment
  bugs:

 For example, if the web application contains any insecure,
 non-"https", references to the web application server, and if
 not all of its cookies are flagged as "Secure", then its
 cookies will be vulnerable to passive network sniffing, and
 potentially subsequent misuse of user credentials.

   o  Active network attacks:

 For example, if an attacker is able to place a man-in-the-
 middle, secure transport connection attempts will likely yield
 warnings to the user, but without HSTS Policy being enforced,
 the present common practice is to allow the user to "click-
 through" and proceed.  This renders the user and possibly the
 web application open to abuse by such an attacker.

   This is essentially the status-quo for all web applications and their
   users in the absence of HSTS Policy.  Since web application providers
   typically do not control the type or version of UAs their web
   applications interact with, the implication is that HSTS Host
   deployers must generally exercise the same level of care to avoid web
   site development and deployment bugs (see Section 2.3.1.3) as they
   would if they were not asserting HSTS Policy.
###


M3) the superdomain match wins (?) question  (section 8.x generally)

>>> -- How should a UA handle potential conflicts between a the policy
>>> record that includes the includeSubdomain, and any records for subdomains
>>> that might have different parameters?
>>
>> this is in the draft. the short answer is that at policy enforcement time,
>> "superdomain matches win".
>>
>> At "noting an HSTS Host" time, the HSTS host's policy (if expressed) is
>> noted regardless of whether there are superdomain HSTS hosts asserting
>> "includeSubDomains".
>>
>> perhaps this needs to be made more clear?
>
> Maybe I'm missing something, but I'm not getting that answer from the text.

In our f2f discussion, Ben and I agreed that we need to make clear that we stop 
on the first match when doing policy enforcement -- but it turns out to be not 
quite that simple, due to the includeSubDomains semantics. Here's the relevant 
text now in my -12 working copy; the alterations are in step 5...


###
8.3.  URI Loading and Port Mapping

   Whenever the UA prepares to "load", also known as "dereference", any
   "http" URI [RFC3986] (including when following HTTP redirects
   [RFC2616]),

Re: Gen-ART LC Review of draft-ietf-websec-strict-transport-sec-11

2012-08-10 Thread =JeffH

Thanks Ben.

> Jeff and I had a f2f discussion about this point in Vancouver. To paraphrase
> (and I assume he will correct me if if I mischaracterize anything), Jeff
> indicated that this really wasn't a MUST level requirement due to the
> variation and vagaries in application behavior and abilities.

Yes, see the NOTE in section 7.2.

> Rather, it's
> more of a "do the best you can" sort of thing. Specifically, he indicated
> that an implementation that chose to go ahead and serve unprotected content
> due to the listed caveats on redirecting to HTTPS would necessarily be
> out-of-compliance.

I presume you actually mean "not necessarily", which would then be correct, 
unless I'm misunderstanding something.



> If the requirement really that you SHOULD NOT (rather than MUST NOT) serve
> unprotected content, then I think the original language is okay.

agreed.

thanks,

=JeffH




Re: Gen-ART LC Review of draft-ietf-websec-strict-transport-sec-11

2012-07-29 Thread =JeffH
overall question of how complex this 
simple solution really needs to be and whether we really think we'll need any 
extensions. Something for us to discuss in the working group meeting on Tue 
morning I think.


>
> -- section 7.2:
>
> Am I correct to assume that the server must never just serve the content over
> a non-secure connection? If so, it would be helpful to mention that, maybe
> even normatively.

It's a SHOULD (see the Note in that section), so it's already effectively stated 
normatively, though one needs to understand HTTP's workings to realize it in the 
way you stated it above.  Perhaps I could add a simple statement, as you suggest, 
to the intro para of section 7 "Server Processing Model" to address this concern?



>
> -- section 8.4:
>
> Does this imply a duty for compliant UAs to check for revocation one way or
> another?

Yes, though per other relevant specifications, as duly cited.  AFAIK the HSTS 
spec doesn't need to get into the details because the underlying secure 
transport specs, namely TLS, already do this.




>
>
> *** Nits/editorial comments:
>
> -- idnits reports an uncited reference:
>
> == Unused Reference: 'RFC6376' is defined on line 1709, but no explicit
> reference was found in the text


fixed in my -12 working copy.


> -- section 1.2:
>
> The description of indented notes is almost precisely the opposite of how
> they are described in the RFC editor's style guide. It describes them as
> "parenthetical" notes, which is how experienced RFC readers are likely to
> perceive them. While it doesn't say so explicitly, I think putting normative
> text in parenthetical notes should be avoided. If these are intended to be
> taken more strongly than that (and by the description, I take it they should
> be taken more strongly than the surrounding text), then I suggest choosing a
> stronger prefix than "NOTE:"

As it turns out, almost all the Notes are parenthetical.

I'll render the one(s) that are normative as a regular paragraph(s) and leave 
the others as-is. Will that address your concern?



>
> -- section 7:
>
> Does the reference to I-D.ietf-tls-ssl-version3 indicate a requirement for
> SSL3?

no, it's just that SSLv3 remains a fact of life and is referenced for 
completeness' sake.




>
> -- section 8.2, paragraph 5 (first non-numbered paragraph after numbered
> list)
>
> To be pedantic, this could be taken to mean a congruent match only applies if
> the includeSubdomains flag is not present. I assume it's intended to apply
> whether or not the flag is present.

[ I am assuming you actually are referring to section 8.3, as section 8.2 
doesn't mention the includeSubdomains flag and does not contain a numbered list. ]


yes, a congruent match is intended to apply whether or not the flag is present.
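to spell out the intended matching discussed above, here's a rough sketch (my 
own illustration, not the spec's normative algorithm): a congruent match is an 
exact host match regardless of the flag, while a superdomain match applies only 
when that superdomain's entry asserts includeSubDomains.

```python
# Sketch of HSTS host matching: congruent (exact) match wins regardless of
# includeSubDomains; superdomain matches require the flag on that entry.
def hsts_applies(host: str, known_hosts: dict[str, bool]) -> bool:
    """known_hosts maps a noted HSTS host to its includeSubDomains flag."""
    if host in known_hosts:          # congruent match, flag irrelevant
        return True
    labels = host.split(".")
    for i in range(1, len(labels)):  # walk each superdomain
        parent = ".".join(labels[i:])
        if known_hosts.get(parent):  # superdomain match needs the flag
            return True
    return False

assert hsts_applies("www.example.com", {"example.com": True})
assert not hsts_applies("www.example.com", {"example.com": False})
assert hsts_applies("example.com", {"example.com": False})
```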



> -- section 12 and subsections:
>
> I was surprised to see more apparently normative material after the
> non-normative guidance sections. I think it would improve the organization to
> put this closer to the normative rules for UAs.

We can move section 12 up ahead of the non-normative guidance sections.


>
> -- section 14.1, 4th paragraph (first non-bulleted paragraph following bullet
> list)
>
> This issue is only true for proxies that act as a TLS MiTM, right?

yes.


> Would
> proxies that tunnel TLS via the CONNECT method have this issue?

I don't think so in the general case.

I'm not sure what terminology to use to differentiate such proxies if this is a 
detail worth addressing.



thanks again,

=JeffH








wrt RL "Bob" Morgan

2012-07-18 Thread =JeffH

Hi,

I'm very sorry for repetition if you've already heard this, and sorrier still to 
feel compelled to bring it to you all's broad attention in any case...


RL "Bob" Morgan, long-time IETF participant (since at least the Stanford IETF-14 
in Summer 1989), passed away due to complications of cancer on 12-Jul-2012.


He is sorely missed, and IETF meetings/activities won't seem the same without 
his most trenchant perspectives, observations, contributions, and camaraderie.


For more information and tributes, see..

 https://spaces.internet2.edu/display/rlbob/


=JeffH



Re: Last Call:

2012-04-25 Thread =JeffH
or a given
>   host name, thus enabling the client to construct multiple TLSA
>   certificate associations that reflect different DANE assertions.
>   No support is provided to combine two TLSA certificate
>   associations in a single operation.
>
>Roll-over  -- TLSA records are processed in the normal manner within
>   the scope of DNS protocol, including the TTL expiration of the
>   records.  This ensures that clients will not latch onto assertions
>   made by expired TLSA records, and will be able to transition from
>   using one DANE public key or certificate usage type to another.


suggested rewrite eliminating the not-defined-in-RFC6394 terms "DANE assertion" 
and "DANE public key"...


   Combination  -- Multiple TLSA records can be published for a given
  host name, thus enabling the client to construct multiple different
  TLS certificate associations. No support is provided to combine two
  TLS certificate associations in a single operation.

   Roll-over  -- TLSA records are processed in the normal manner within
  the scope of DNS protocol, including the TTL expiration of the
  records.  This ensures that clients will not latch onto assertions
  made by expired TLSA records, and will be able to transition from
  using one TLSA-asserted public key or certificate usage type to
  another.



> 7.2. TLSA Usages
>
>
>This document creates a new registry, "Certificate Usages for TLSA
>Resource Records".

suggested modest revision for terminological consistency:

 7.2. TLSA Certificate Usage Types

   This document creates a new registry, "TLSA Resource Record Certificate
   Usage Types"




HTH,

=JeffH





Re: Last Call: - Referring to the DANE protocol and DANE-denoted certificate associations

2012-04-24 Thread =JeffH
[ these are excerpts from a current thread on dane@ that I'm now denoting as an 
IETF-wide Last Call comment ]


Paul Hoffman replied on Fri, 20 Apr 2012 13:57:28 -0700:
>
> On Apr 20, 2012, at 10:50 AM, =JeffH wrote:
>
>> Various specs are going to need to refer to the DANE protocol
>> specification a well as describe the notion of domain names that map to
>> TLSA records describing certificate associations.
>>
>> In working on such language in draft-ietf-websec-strict-transport-sec,
>> here's the terms I'm using at this time and their (contextual) meaning..
>>
>> DANE protocol
>>   The protocol specified in draft-ietf-dane-protocol (RFC# tbd).
>>
>
> There is an issue here that we haven't dealt with, which is that "DANE
> protocol" doesn't really make sense because we might be adding additional
> protocols for certificate associations for things other than TLS. For your
> doc, you should be saying "TLSA protocol", not "DANE protocol" because HSTS
> is specific to TLS. (More below.)


After further perusal of draft-ietf-dane-protocol-19: if I understand 
correctly, the term "DANE" (and its expansion) names a class of secure 
DNS-based cert/key-to-domain-name associations, and protocols for particular 
instances will nominally be assigned their own names, where a case in point is 
the "TLSA Protocol", yes?


i.e. we could define another separate spec for mapping Foo protocol's 
keys/certs to DNS RRs, and call 'em FOOA, and then in following this naming 
approach, refer to the protocol of using them while establishing Foo 
connections as the "FOOA protocol", yes?




Paul Hoffman further explained on Sat, 21 Apr 2012 13:38:38 -0700:
>
> On Apr 20, 2012, at 3:34 PM, =JeffH wrote:
>>
>> Paul Hoffman replied on Fri, 20 Apr 2012 13:57:28 -0700:
>>
>> > On Apr 20, 2012, at 10:50 AM, =JeffH wrote:
>> >
>> > There is an issue here that we haven't dealt with, which is that "DANE
>> > protocol" doesn't really make sense because we might be adding additional
>> > protocols for certificate associations for things other than TLS.
>>
>> Yep. "DANE" is a working group name. But, I was working from the
>> specification name per the present spec.
>>
>> > ...
>> > Proposal for [-dane-protocol] spec:
>> >
>> > The protocol in this document can generally be referred to as the "TLSA
>> > protocol".
>>
>> So as a practical matter, if we wish to refer to this particular spec as
>> defining the "TLSA protocol", then perhaps the spec title should reflect
>> that such that the RFC Index is searchable for that "TLSA" term.
>
> The WG already decided against that (unfortunately).


I agree it is unfortunate and respectfully suggest that this decision be 
revisited.


Many (most?) people have been referring to the protocol being worked on by the 
working group (which is now draft-ietf-dane-protocol) as "the DANE protocol" or 
simply "DANE" for as long as the WG has existed, /plus/, the present title 
of the spec is..


  The DNS-Based Authentication of Named Entities (DANE) Protocol for
  Transport Layer Security (TLS)


I think it will just continue to sow unnecessary confusion if the term "TLSA" 
doesn't somehow get into the spec title and thus into the various RFC indexes, 
whether or not the suggested statement above, explicitly naming the protocol 
the "TLSA protocol", is added to the spec (I think it should be added).



Ways to accomplish addressing the spec title issue could be..

  TLSA: The DNS-Based Authentication of Named Entities (DANE) Protocol for
  Transport Layer Security (TLS)


..or..

  The DNS-Based Authentication of Named Entities (DANE) Protocol for
  Transport Layer Security (TLS): TLSA


HTH,

=JeffH





Re: EFF calls for signatures from Internet Engineers against censorship (SOPA,ProtectIP/PIPA)

2011-12-17 Thread =JeffH

The letter was posted thur 15-Dec-2011..

An Open Letter From Internet Engineers to the U.S. Congress
December 15, 2011 | By Parker Higgins and Peter Eckersley
<https://www.eff.org/deeplinks/2011/12/internet-inventors-warn-against-sopa-and-pipa> 



(wish I hadn't missed the call-for-signatures going by)


Seems we can all just GetYourCensorOn ..or we can go after SOPA / ProtectIP 
(PIPA):

  <http://getyourcensoron.com/>

  <http://americancensorship.org/>


HTH,

=JeffH
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


IETF-80 Technical Plenary minutes (was: For Monday's technical plenary - Review of draft-tschofenig-post-standardization-00)

2011-05-09 Thread =JeffH


Subject: [www.ietf.org/rt #37575] transcript of IETF-80 tech plenary discussion?
From: "Wanda Lo via RT" 
Date: Mon, 09 May 2011 10:28:23 -0700
To: jeff.hod...@kingsmountain.com

Hi Jeff,

http://www.ietf.org/proceedings/80/minutes/plenaryt.txt

The minutes are based on Renee's transcript.


Wanda



Re: Second Last Call: draft-saintandre-tls-server-id-check to BCP

2010-12-15 Thread =JeffH

fyi (technically, the 2nd last call ends tomorrow)...


Subject: [certid] version 12
From: Peter Saint-Andre 
Date: Mon, 13 Dec 2010 12:49:49 -0700 (11:49 PST)
To: IETF cert-based identity 
Cc: Ben Campbell 

Jeff and I have published version -12. The changes were driven by the
Gen-ART review that Ben Campbell did, some feedback from our sponsoring
Area Director, a few list discussions, and several rounds of editorial
improvement.

http://www.ietf.org/id/draft-saintandre-tls-server-id-check-12.txt

The diff from -11 is here:

http://tools.ietf.org/rfcdiff?url2=draft-saintandre-tls-server-id-check-12.txt

Thanks!

Peter

--
Peter Saint-Andre
https://stpeter.im/


Re: Second Last Call: draft-saintandre-tls-server-id-check (...) to BCP

2010-12-08 Thread =JeffH
ust and with
   which the client communicates over a connection that provides both
   mutual authentication and integrity checking).  These considerations
   apply only to extraction of the source domain from the inputs;
   naturally, if the inputs themselves are invalid or corrupt (e.g., a
   user has clicked a link provided by a malicious entity in a phishing
   attack), then the client might end up communicating with an
   unexpected application service.



> Section 4.3 discusses about how to seek a match against the list of
> reference identifiers.  I found the thread at
> http://www.ietf.org/mail-archive/web/certid/current/msg00318.html informative.
>
> In Section 4.4.3:
>
>"A client employing this specification's rules MAY match the reference
> identifier against a presented identifier whose DNS domain name
> portion contains the wildcard character '*' as part or all of a label
> (following the definition of "label" from [DNS])"
>
> According to the definition of label in RFC 1035, the wildcard
> character cannot be part of a label.  I suggest removing the last
> part of that sentence.

You mean removing the parenthetical "(following the definition of "label" from 
[DNS])", yes?


In reviewing RFC 1035 I see your concern, tho we'd like to reference a 
description of "label". I note that RFC 1034 [S3.1] seems to appropriately 
supply this, so I propose we keep the parenthetical but alter it to be..


  (following the description of labels and domain names in [DNS-CONCEPTS])



> FWIW, RFC 4592 updates the wildcard
> definition in RFC 1034 and uses the term "asterisk label".

Yes, but that definition (and term) appears to be specific to underlying DNS 
internals, not to (pseudo) domain names as wielded (or "presented" (eg in 
certs)) in other protocols.
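For what it's worth, the matching rule under discussion -- a wildcard '*' as part or all of the left-most label of a presented identifier -- can be sketched roughly as below. This is an illustrative sketch only; the function name and exact edge-case handling are my assumptions, not text from the draft:

```python
# Hypothetical sketch of wildcard matching of a presented DNS-ID against a
# reference identifier, where '*' may be part or all of the left-most label.

def wildcard_match(reference, presented):
    """Case-insensitively match a reference identifier (e.g. "www.example.com")
    against a presented identifier that may contain a wildcard label
    (e.g. "*.example.com" or "w*.example.com")."""
    ref_labels = reference.lower().split(".")
    pres_labels = presented.lower().split(".")
    # Label counts must agree: the wildcard matches within a single label,
    # so "*.example.com" does not match "foo.bar.example.com".
    if len(ref_labels) != len(pres_labels):
        return False
    # All labels to the right of the left-most label must match exactly.
    if ref_labels[1:] != pres_labels[1:]:
        return False
    left = pres_labels[0]
    if "*" not in left:
        return ref_labels[0] == left
    # '*' as part of a label: split on the (assumed single) wildcard and
    # require the reference label to carry both the prefix and the suffix.
    prefix, _, suffix = left.partition("*")
    first = ref_labels[0]
    return (first.startswith(prefix) and first.endswith(suffix)
            and len(first) >= len(prefix) + len(suffix))

print(wildcard_match("www.example.com", "*.example.com"))      # True
print(wildcard_match("www.example.com", "w*.example.com"))     # True
print(wildcard_match("foo.bar.example.com", "*.example.com"))  # False
```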



> Was the comment about the security note (
> http://www.ietf.org/mail-archive/web/certid/current/msg00427.html )
> in Section 4.6.4 addressed?

Yes, we believe so.


thanks again for your review,

=JeffH




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: [http-state] Gen-ART LC review of draft-ietf-httpstate-cookie-18

2010-12-02 Thread =JeffH

Hey Richard (& Adam),

Thanks very much for the detailed & thorough review, and to you both for 
promptly resolving the raised issues.


=JeffH


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Review of draft-saintandre-tls-server-id-check

2010-08-31 Thread =JeffH

fwiw, I concur with Peter's analysis and conclusions.

=JeffH
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-saintandre-tls-server-id-check

2010-07-18 Thread =JeffH

Paul Hoffman replied..
>
> At 5:22 AM -0400 7/17/10, John C Klensin wrote:
>> (1) In Section 4.4.1, the reference should be to the IDNA2008 discussion.
>> The explanations are a little better vis-a-vis the DNS specs and it is a
>> bad idea to reference an obsolete spec.
>
> +1. I accept blame on this one, since I was tasked on an earlier version to
> bring the IDNA discussion up to date.

Well, I wrote the "traditional domain name" text in -tls-server-id-check, and 
yes I looked at IDNA2008, but only -idnabis-protocol I think, and missed 
-idnabis-defs where said discussion resides. So mea culpa. Yes, the latter 
discussion is even better than the one in IDNA2003. Thanks for catching this.


Here's a re-write of the first para of -tls-server-id-check Section 4.4.1, I've 
divided it into two paragraphs..


   The term "traditional domain name" is a contraction of this more
   formal and accurate name: "traditional US-ASCII
   letter-digit-hyphen DNS domain name". Note that
   letter-digit-hyphen is often contracted as "LDH". (Traditional)
   domain names were originally defined in [DNS-CONCEPTS] and
   [DNS] in conjunction with [HOSTS], though
   [I-D.ietf-idnabis-defs-13] provides a complete, up-to-date
   domain name label taxonomy.

   Traditional domain names consist of a set of one or more
   non-IDNA LDH labels (e.g., "www", "example", and "com"), with
   the labels usually shown separated by dots (e.g.,
   "www.example.com"). There are additional qualifications, see
   [I-D.ietf-idnabis-defs-13], but they are not germane to this
   specification.


how does that look?
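For concreteness, the LDH-label constraint described in that proposed text can be sketched as follows. This is only an illustrative sketch; the regex and helper names are my assumptions, not anything from the draft:

```python
import re

# A "traditional" (non-IDNA) LDH label: US-ASCII letters, digits, and
# hyphens, with no leading or trailing hyphen and at most 63 characters.
LDH_LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def is_traditional_domain_name(name):
    """Return True if every dot-separated label of `name` is an LDH label."""
    labels = name.split(".")
    return all(len(label) <= 63 and LDH_LABEL.match(label)
               for label in labels)

print(is_traditional_domain_name("www.example.com"))        # True
print(is_traditional_domain_name("-bad.example"))           # False
print(is_traditional_domain_name("xn--bcher-kva.example"))  # True (A-labels are LDH)
```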


thanks,

=JeffH










___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: web security happenings

2010-07-13 Thread =JeffH

On 7/13/10 3:26 PM, Iljitsch van Beijnum wrote:
> On 13 jul 2010, at 18:49, Peter Saint-Andre wrote:
>
>> fun technologies like AJAX but also opens up the possibility for
>> new attacks (cross-site scripting, cross-site request forgery,
>> malvertising, clickjacking, and all the rest).
>
> Isn't this W3C stuff?


Peter Saint-Andre replied in part:
>
> Good question. We've had discussions about that with folks from the W3C
> and there's broad agreement that we'll divide up the work by having the
> IETF focus on topics that are more closely related to HTTP (e.g., new
> headers) and by having the W3C focus on topics that are more closely
> related to HTML and web browsers (e.g., Mozilla's Content Security
> Policy and the W3C's "Web Security Context: User Interface Guidelines"
> document).


See also this recent position paper by myself and Andy Steingruebl..

  The Need for Coherent Web Security Policy Framework(s)
  http://w2spconf.com/2010/papers/p11.pdf

..in Section 5 "How and where to organize the effort?" we discuss this overall 
question.


> But the exact dividing line for that division of labor is a good issue
> for discussion at the HASMAT BoF.

I suspect the dividing line won't be "exact" but rather something we'll 
need to decide on an ongoing, case-by-case basis.


Regardless, this overall topic area is one we (the greater Internet/Web 
community) need to pay attention to.



HTH,

=JeffH
--
Internet Standards and Governance Team
PayPal Information Risk Management


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: fyi: Paper: "State of the Internet & Challenges ahead "

2008-03-20 Thread ' =JeffH '
[EMAIL PROTECTED] said:
> With an opening sentence of:
> ...
> The document does not set a very helpful stage. 


Well, ..  nevermind.


=JeffH


___
IETF mailing list
IETF@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


fyi: Paper: "State of the Internet & Challenges ahead "

2008-03-20 Thread ' =JeffH '
Of relevance to plenary/working group/hallway discussions of late. 

pdf available here (link to original .doc format below)..

http://kingsmountain.com/doc/NEC2007-OHMartin.pdf

(note that due to intra-document references in the original, the .pdf has 
"spurious" numbers interspersed in some of the text; the .pdf was produced by 
OpenOffice.)

I lightly reformatted the below message for readability.

=JeffH

--- Forwarded Message

From: Steve Goldstein [EMAIL PROTECTED]
Sent: Thursday, March 20, 2008 10:59 AM
To: Dewayne Hendricks; David Farber
Subject: Paper: "State of the Internet & Challenges ahead "

Olivier Martin, a long-time colleague, formerly from CERN (the
European high energy accelerator lab in Geneva), has a fine
"State of the Internet" preprint on his web site. Be sure to
view it in Page Layout so that you can see the footnotes. I
did not at first, and missed them entirely, which was very
confusing, as there are numbered references listed at the end
as well.

http://www.ictconsulting.ch/reports/NEC2007-OHMartin.doc

Olivier has retired, but apparently keeps busy as a consultant.
He was a major player in CERN networking from the very start of
things. In the early days, CERN was the hub of European
Internet Protocol networking, and it still is a major player
because of the HUGE amounts of data that are generated by the
accelerators and shared globally.



State of the Internet & Challenges ahead

"How is the Internet likely to evolve in the coming decade"

To be published in the NEC'2007 conference proceedings
 (see footnote (1) below)

Olivier H. Martin (2)

ICTConsulting, Gingins (VD), Switzerland


Abstract

After a fairly extensive review of the state of the Commercial
and Research & Education (aka Academic) Internet, the problems
behind the still-hypothetical IPv4 to IPv6 migration will be
examined in detail. A short review of the ongoing efforts to
re-design the Internet in a clean-slate approach will then be
made. This will include the National Science Foundation (NSF)
funded programs such as FIND (Future Internet Network Design)
and GENI (Global Environment for Network Innovations), and the
European Union (EU) Framework Program 7 (FP7), but also more
specific architectural proposals such as the publish/subscribe
(pub/sub) paradigm and Data Oriented Network Architecture (DONA).

Key words: Internet, GÉANT2, Internet2, NLR, NSF, GENI, FIND,
DONA, OECD, IETF, IAB, IGF, ICANN, RIPE, IPv6, EU, FP7,
clean-slate, new paradigms.

1 Introduction

While there appears to be a wide consensus about the fact that
the Internet has stalled or ossified, some would even say that it
is in a rapid state of degeneracy, there is no agreement on a
plan of action to rescue the Internet. There are two competing
approaches, evolutionary or clean-slate. While a clean-slate
approach has a lot of attractiveness, it does not seem to be
realistic given the time constraints arising from the fact that
the IPv4 address space will be exhausted in a few years' time,
despite the fact that IANA (3) (the Internet Assigned Numbers
Authority) is about to allow an IPv4 "trading model" to
be created (4). Therefore, the migration to IPv6 looks
"almost" unavoidable, though by no means certain (5), as
the widespread usage of Network Address Translators (NAT) and
Application Level Gateways (ALG) is both unlikely to scale
indefinitely and/or to meet the ever-evolving Internet
users' expectations and requirements. However, new ideas
arising from more radical and innovative approaches could
probably be retrofitted into the existing Internet, e.g.
self-certifying names, à la "DONA" (6). The purpose of
this paper is to raise awareness about the ongoing initiatives,
with a special emphasis on technical issues and possible remedies
or solutions; it does not attempt in any way to be exhaustive, as
the subject of the Internet's evolution, including the societal,
ethical and governance aspects, is far too wide and complex to be
addressed in a single article.


_
1 http://nec2007.jinr.ru/

2 [EMAIL PROTECTED]

3 http://www.iana.org

4 Could IP address plan mean another IPv6 delay? - Network World

5 the cost/benefit ratio is still far too high to build a
convincing business case

6 Data Oriented Network Architecture



Regards,

--SteveG

---
Archives: http://www.listbox.com/member/archive/247/=now
RSS Feed: http://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com

--- End of Forwarded Message



___
IETF mailing list
IETF@ietf.org
https://www.ietf.org/mailman/listinfo/ietf