Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard
I ended up writing a lot of text in response to this post, so, I am 
breaking up the response into three mini-responses.


Part I

On 1/18/2012 4:23 PM, Brian Smith wrote:
 Sean Leonard wrote:
 The most glaring problem however is that when validation fails, such
 as in the case of a revoked certificate, the API returns no
 certificate chains

 My understanding is that when you are doing certificate path 
building, and you have to account for multiple possibilities at any 
point in the path, there is no partial chain that is better to return 
than any other one, so libpkix is better off not even trying to return a 
partial chain. The old code could return a partial chain somewhat 
sensibly because it only ever considered one possible cert (the best 
one, ha ha) at each point in the chain.



For our application--and I would venture to generalize that for all 
sophisticated certificate-using applications (i.e., applications that 
can act upon more than just valid/not valid)--more information is a 
lot better than less.


I have been writing notes on Sean's Comprehensive Guide to Certification 
Path Validation. Here's a few paragraphs of Draft 0:


Say you have a cert. You want to know if it's valid. How do you 
determine if it's valid?


A certificate is valid if it satisfies the RFC 5280 Certification Path 
Validation Algorithm. Given:
* a certification path of length n (the leaf cert and all certs up to 
the trust anchor--in RFC 5280, it is said that cert #1 is the one 
closest to the trust anchor, and cert n is the leaf cert you're validating),

* the time,
* policy-stuff, -- hand-wavy because few people in the SSL/TLS world 
worry about this but it's actually given a lot of space in the RFC

* permitted name subtrees,
* excluded name subtrees,
* trust anchor information (issuer name, public key info)

you run the algorithm, and out pops:
* success/failure,
* the working public key (of the cert you're validating),
* policy-stuff, -- again, hand-wavy
and anything else that you could have gleaned on the way.
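To make the shape of the algorithm concrete, here is a rough sketch in Python (purely illustrative; the type names and fields are mine, not NSS's or the RFC's):

```python
from dataclasses import dataclass, field

@dataclass
class ValidationInputs:
    # Sketch of the RFC 5280 inputs; cert_path[0] is the cert closest to
    # the trust anchor, cert_path[-1] is the leaf being validated.
    cert_path: list
    time: float
    initial_policies: set      # the "policy-stuff"
    permitted_subtrees: list
    excluded_subtrees: list
    trust_anchor: tuple        # (issuer name, public key info)

@dataclass
class ValidationOutputs:
    success: bool
    working_public_key: object  # public key of the cert being validated
    policy_tree: object         # "policy-stuff" again
    notes: list = field(default_factory=list)  # anything gleaned on the way
```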


But, this doesn't answer the obvious initial question: how do you 
construct a certification path of length n if you only have the 
initial cert? RFC 5280 doesn't prescribe any particular algorithm, but 
it does have some requirements (i.e., if you say you support X, you MUST 
support it by doing it Y way).


Certification Path Construction is where we get into a little bit more 
black art and try to make some tradeoffs based on speed, privacy, 
comprehensiveness, and so forth.


Imagine that you know all the certificates ever issued in the known 
universe. Given a set of trust anchors (ca name + public key), you 
should be able to draw lines from your cert through some subset of 
certificates to your trust anchors. What you'll find is that you've got 
a big tree (visually, but not necessarily in the computer science sense; 
it's actually a directed acyclic graph), where your cert is at the root 
and the TAs are at the leaves. The nodes are linked by virtue of the 
fact that the issuer DN in the prior cert is equal to the subject DN in 
the next cert, or to the ca name in the trust anchor.
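The linking rule can be sketched like so (certs as plain dicts and trust anchors as pairs here, for illustration only; not NSS data structures):

```python
def candidate_issuers(cert, all_certs, trust_anchors):
    # The linking rule only: the issuer DN of the prior cert equals the
    # subject DN of the next cert, or equals the ca name of a trust anchor.
    # Certs are dicts with 'subject'/'issuer' DNs; anchors are
    # (ca_name, public_key) pairs.
    certs = [c for c in all_certs if c['subject'] == cert['issuer']]
    anchors = [ta for ta in trust_anchors if ta[0] == cert['issuer']]
    return certs, anchors
```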


Practically, you search the local database(s) for all certificates whose 
subject DN matches the issuer DN of the cert in hand. If no certificates 
(or, in your opinion, an insufficient number of certificates) are 
returned, then you 
will want to resort to other methods, such as using the caIssuers AIA 
extension (HTTP or LDAP), looking in other remote stores, or otherwise.


The ideal way (Way #1) to represent the output is by a tree, where each 
node has zero or more children, and the root node is your target cert. 
In lieu of a tree, you can represent it as an array of cert paths 
(chains) (Way #2). Way #2 is, more or less, how Microsoft's 
CertGetCertificateChain validation function returns its results.
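Flattening Way #1 into Way #2 is a simple enumeration; a minimal sketch (illustrative Python, not the MS API):

```python
def enumerate_chains(node):
    # Flatten a path-building tree (Way #1) into an array of cert chains
    # (Way #2). A node is (cert, children); a node with no children ends
    # a candidate chain.
    cert, children = node
    if not children:
        return [[cert]]
    chains = []
    for child in children:
        for tail in enumerate_chains(child):
            chains.append([cert] + tail)
    return chains
```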


Once you have all of these possibilities, you'll want to start pruning, 
which involves non-cryptography (e.g., checking for basic constraints), 
actual cryptography (digital signature verification), and more 
non-cryptography (e.g., time bounds and name constraints). The general 
received wisdom is to start verifying signatures from the trust anchor 
public key(s) down to the leaf, rather than the other way around, 
because otherwise an attacker can DoS your algorithm by putting in an 
absurdly large RSA key or some such. Incidentally, this is also one 
argument why "unknown/untrusted issuer" is much worse than some folks 
want to assume, but I understand that is a sensitive point among some 
technical people. The main point is that you have to provide as much 
of this information as possible to the validation-using application 
(Firefox, Thunderbird, Penango, IPsec kernel, whatever) so that the 
application can figure out these tradeoffs. If you keep it in the tree 
form, you can eliminate whole branches of the tree. "Eliminate" could mean 
a) don't report the path at all, or b) report the path anyway but stop 
reporting 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard

Part II

On 1/18/2012 4:23 PM, Brian Smith wrote:
 Sean Leonard wrote:

 and no log information.

 Firefox has also been bitten by this and this is one of the things 
blocking the switch to libpkix as the default mechanism in Firefox. 
However, sometime soon I may just propose that we change to handle 
certificate overrides like Chrome does, in which case the log would 
become much less important for us. See bug 699874 and the bugs that are 
referred to by that bug.


 The only output (in the revoked case) is
 SEC_ERROR_REVOKED_CERTIFICATE. This is extremely unhelpful because it
 is a material distinction to know that the EE cert was revoked,
 versus an intermediary or root CA.

 Does libpkix return SEC_ERROR_REVOKED_CERTIFICATE in the case where 
an intermediate has been revoked? I would kind of expect that it would 
return whatever error it returns for "could not build a path to a trust 
anchor" instead, for the same reason I think it cannot return a partial 
chain.


When I last tested it, I recall that SEC_ERROR_REVOKED_CERTIFICATE was 
returned for intermediate certs.


When certLog is returned from CERT_VerifyCertificate, all validation 
errors with all certs (in the single path) are added. The 
CERTVerifyLogNode (certt.h) includes the depth, so multiple log entries 
can have the same depth (aka, same cert) but different error codes. It 
is up to the application to make sense of it and to correlate them 
together, but at least you can get all of the errors out.
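The correlation step left to the application can be sketched as follows (illustrative Python; real code would walk the CERTVerifyLogNode linked list rather than a list of tuples):

```python
from collections import defaultdict

def group_log_by_depth(log_entries):
    # Each entry is (depth, error). Several entries can share a depth
    # (same cert, different error codes); grouping them recovers the
    # per-cert error lists the application needs.
    by_depth = defaultdict(list)
    for depth, error in log_entries:
        by_depth[depth].append(error)
    return dict(by_depth)
```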


 Such an error also masks other possible problems, such as whether
 a certificate has expired, lacks trust bits, or other information.

 Hopefully, libpkix at least returns the most serious problem. Have 
you found this to be the case? I realize that "most serious" is a 
judgement call that may vary by application, but at least Firefox 
separates cert errors into two buckets: overridable (e.g. expiration, 
untrusted issuer) and too-bad-to-allow-user-override (e.g. revocation).


As suggested in Part I, "most serious problem" really depends on your 
perspective and application. Let's take "revoked" as an example. 
Revocation has reason codes in CRLs, and in OCSP responses too under the 
revocationReason element of RevokedInfo. keyCompromise(1) is a fairly 
serious situation, but in that case, you may actually want to invalidate 
(i.e., treat as not valid) the cert *prior to* the revocation time, such 
as with the RFC 5280 sec. 5.3.2 Invalidity Date extension.
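That back-dating idea can be sketched in a few lines (illustrative only; times are plain numbers here rather than CRL GeneralizedTime values):

```python
def effective_invalidity_time(revocation_time, invalidity_date=None):
    # If the CRL entry carries an Invalidity Date (RFC 5280 sec. 5.3.2),
    # treat the cert as bad from that (possibly earlier) moment;
    # otherwise fall back to the revocation time itself.
    if invalidity_date is not None:
        return min(revocation_time, invalidity_date)
    return revocation_time
```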


Contrast this with privilegeWithdrawn(9), which we joke internally is 
the "failure to pay" reason code. If someone fails to pay for their 
cert, that is bad, but probably not *as* bad in the grand scheme of 
things as keyCompromise(1). It also may trigger a different UI: "this 
deadbeat failed to pay" versus "some Evil Eve stole this person's 
private key". In contrast, expiration--particularly expiration from a 
long time ago--is probably worse than privilegeWithdrawn(9).


Regarding the buckets: that is all well and good. It's worth driving 
home that it would be nice if all applications that use NSS/libpkix 
started with the same, fat deck of cards, which they can then separate 
into buckets of their choosing.
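Dealing that deck into buckets might look like this (the bucket contents below are invented for illustration; each application would choose its own):

```python
# Hypothetical severity policy, illustrative only: every app starts from
# the same full list of errors and deals them into its own buckets.
OVERRIDABLE = {'EXPIRED', 'UNTRUSTED_ISSUER'}
FATAL = {'REVOKED_KEY_COMPROMISE', 'REVOKED_CA_COMPROMISE'}

def bucket_errors(errors):
    buckets = {'overridable': [], 'fatal': [], 'other': []}
    for e in errors:
        if e in FATAL:
            buckets['fatal'].append(e)
        elif e in OVERRIDABLE:
            buckets['overridable'].append(e)
        else:
            buckets['other'].append(e)
    return buckets
```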



 Per above, we never used non-blocking I/O from libpkix; we use it in
 blocking mode but call it on a worker thread. Non-blocking I/O never
 seemed to work when we tried it, and in general we felt that doing
 anything more than absolutely necessary on the main thread was a
 recipe for non-deterministic behavior.

 This is also what Firefox and Chrome do internally, and this is why 
the non-blocking I/O feature is not seen as being necessary.


ok

Removing non-blocking I/O completely from libpkix may also save a 
non-negligible amount of codegen. Some libpkix entry points (such as 
PKIX_ValidateChain_NB) are not used at all, and therefore should be 
optimized away, but there are non-trivial parts of functions that check 
if (nonblocking) and such that are almost certainly not optimized away 
in the current code.


 The downside to blocking mode is that the API is one-shot: it is not
 possible to check on the progress of validation until it magically
 completes. When you have CRLs that are > 10 MB, this is an issue.
 However, this can be worked around (e.g., calling it twice: once for
 constructing a chain without revocation checking, and another time
 with revocation checking), and one-shot definitely simplifies the
 API for everyone.

 As I mentioned in another thread, it may be the case that we have to 
completely change the way CRL, OCSP, and cert fetching is done in 
libpkix, or in libpkix-based applications anyway, for performance 
reasons. I have definitely been thinking about doing things in Gecko in 
a way that is similar to what you suggest above.


Which thread?

Correction: I said "a chain" but I should have said "a chain, but 
ideally, chains."


On the topic of chains, comparing the behavior of 
CertGetCertificateChain is very useful. In the MS API (which has been 
around 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard

Part III

On 1/18/2012 4:23 PM, Brian Smith wrote:

Sean Leonard wrote:

 We do not currently use HTTP or LDAP certificate stores with respect
 to libpkix/the functionality that is exposed by CERT_PKIXVerifyCert.
 That being said, it is conceivable that others could use this feature,
 and we could use it in the future. We have definitely seen LDAP URLs in
 certificates that we have to validate (for example), and although
 Firefox does not ship with the Mozilla Directory (LDAP) SDK,
 Thunderbird does. Therefore, we encourage the maintainers to leave it
 in. We can contribute some test LDAP services if that is necessary for
 real-world testing.

 Definitely, I am concerned about how to test and maintain the LDAP 
code. And, I am not sure LDAP support is important for a modern web 
browser at least. Email clients may be a different story. One option may 
be to provide an option to CERT_PKIXVerifyCert to disable LDAP fetching 
but keep HTTP fetching enabled, to allow applications to minimize 
exposure to any possible LDAP-related exploits.


I'll see what we can do about setting up some example LDAP servers. From 
my own experience, I have seen several major CAs run by governments in 
production that include LDAP URLs. If the web browsers are being used on 
internal/intranet networks (as is increasingly the case with webapps 
taking over the world) then LDAP URLs remain useful for web browsers. In 
my review of RFC 5280 vs. CERT_PKIXVerifyCert, I devoted a section to 
this topic (see "Access Methods").


nsNSSCallbacks.cpp is where the NSS-Necko bindings live. See 
nsNSSHttpInterface. SEC_RegisterDefaultHttpClient registers these 
bindings with NSS; then, libpkix (in pkix_pl_httpcertstore.c, and 
pkix_pl_ocspresponse.c) obtains these pointers with 
SEC_GetRegisteredHttpClient.


LDAP services are channeled through pkix_pl_ldapcertstore.c, and 
serviced by the default LDAP client, which exists in 
pkix_pl_ldapdefaultclient.c. This in turn relies on pkix_pl_socket.c for 
sundries such as pkix_pl_Socket_Create, which in turn (finally!) rely on 
NSPR sockets, with functions like PR_NewTCPSocket and PR_Send. Unlike 
HTTP, LDAP is actually implemented by libpkix itself. The advantage is 
that LDAP should work on every platform, without OpenLDAP or Wldap32, 
and without the Mozilla Directory (LDAP) SDK--which means that it ought 
to work in Firefox. The disadvantage is that LDAP may not take advantage 
of SOCKS or other proxies that are configured at the Necko layer.



 Congruence or mostly-similar
 behavior with Thunderbird is also important, as it is awkward to
 explain to users why Penango provides materially different
 validation results from Thunderbird.

 I expect Thunderbird to change to use CERT_PKIXVerifyCert 
exclusively around the time that we make that change in Firefox, if not 
exactly at the same time.


ok

As I understand it, there are currently no fewer than six APIs (and four 
different sets of functionality) that can be used to verify certificates:


CERT_PKIXVerifyCert, the long-term preferred one.


CERT_VerifyCertChain, which depending on 
CERT_GetUsePKIXForValidation/CERT_SetUsePKIXForValidation, calls 
cert_VerifyCertChainPkix (which uses libpkix but actually uses a 
slightly different code path compared to CERT_PKIXVerifyCert) or 
cert_VerifyCertChainOld [which is REALLY old]
 NB: by setting the 
not-really-documented-but-appears-in-a-few-scattered-bugzilla-bugs 
environment variable, NSS_ENABLE_PKIX_VERIFY, a user can flip the 
SetUsePKIXForValidation switch.



CERT_VerifyCertificate, which is a gross amalgamation of a lot of 
hairballs improved over time, but seems to be the one that is actually 
used by the vast majority of Mozilla applications; unless you call one 
of the PSM functions and set the boolean pref 
security.use_libpkix_verification, in which case PSM will attempt 
(mostly) to use CERT_PKIXVerifyCert. However, an application that calls 
CERT_VerifyCertificate directly will not be affected.

 - CERT_VerifyCertificateNow (just uses PR_Now())


CERT_VerifyCert, which is a likewise gross amalgamation, except that in 
the middle of the gross amalgamation it calls CERT_VerifyCertChain (so 
it has mostly equivalent but not exactly the same functionality as 
CERT_VerifyCertChain, including the NSS_ENABLE_PKIX_VERIFY detour). I 
thought this one was not supposed to be used, as there is a comment 
("obsolete, do not use for new code") on CERT_VerifyCertNow, but there it 
is, plain as day, in nsNSSCallbacks.cpp, nsNSSCertificate.cpp, and 
nsNSSCertificateDB.cpp.

 - CERT_VerifyCertNow (just uses PR_Now())


Firefox and Thunderbird appear to use CERT_PKIXVerifyCert and 
CERT_VerifyCertificate(Now), and CERT_VerifyCert(Now) in different places.


Consolidating these API calls to one API would seem to be sorely desired 
(and, if alternate APIs are removed or simplified, may result in a 
non-trivial size reduction); *except* that each API call has its own 
strange idiosyncrasies and are 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Ryan Sleevi
Sean,

The Path Building logic/requirements/concerns you described is best
described within RFC 4158, which has been mentioned previously.

As Brian mentioned in the past, this was 'lumped in' with the description
of RFC 5280, but it's really its own thing.

libpkix reflects the union of RFC 4158's practices and RFC 5280's
requirements. As you note in your spreadsheet, libpkix already implements
the majority of 5280 (at least, the parts important to browsers / commonly
used in PKIs, including Internet PKIs). While libpkix tries for some of
4158, it isn't exactly the most robust, nor is 4158 the end-all and be-all
of path building strategies.

I believe that over time, it would be useful (ergo likely) to implement
some of the scoring logic described in 4158 and hand-waved at by
Microsoft's CryptoAPI documentation, rather than its current logic of just
applying its checkers to see if the path MIGHT be valid in a DFS search,
so that libpkix returns not just a good path, but a close-to-optimal path,
and can also provide diagnostics for the paths not taken.

Ryan
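To make the scoring idea above concrete, a toy sketch (the weights are invented for illustration; RFC 4158 suggests heuristics such as preferring shorter and time-valid paths, not specific numbers):

```python
def score_path(path, now):
    # Invented weights, illustrative only: prefer shorter paths whose
    # certs are all within their validity window. Higher is better.
    score = 100 - 10 * len(path)
    for cert in path:
        if not (cert['not_before'] <= now <= cert['not_after']):
            score -= 50
    return score

def best_path(paths, now):
    # Pick the close-to-optimal path instead of the first one found.
    return max(paths, key=lambda p: score_path(p, now))
```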


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Sean Leonard

Ryan,

I agree; while I did not mention RFC 4158, it is a good reference. I 
echo your hope that someday, CERT_PKIXVerifyCert/libpkix will provide 
additional diagnostic information.


Some of my own observations:
- while a scoring method is useful (and certainly, an objective method 
is best), there is no universal scoring algorithm. We can, however, sort 
into two big piles: valid paths, and invalid paths.


- scoring and returning multiple paths imply that the system will 
compute all paths, rather than the minimum number of paths needed to 
identify a valid path (and then, if a valid path is found, quit).


- in the current libpkix design an "application" could supply 
PKIX_CertSelector_MatchCallback (see PKIX_CertSelector->matchCallback 
and pkix_Build_InitiateBuildChain) to execute custom selection logic. I 
put "application" in quotes, because CERT_PKIXVerifyCert does not appear 
to have a mechanism to set the matchCallback.


- failing this, an "application" could attempt to search the local 
stores itself, then supply the candidate certificate path in 
cert_pi_certList. Unfortunately, the quotes apply here too: 
CERT_PKIXVerifyCert does not actually implement cert_pi_certList!
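The quit-early-versus-compute-all tradeoff in the second point can be sketched over the tree form from Part I (illustrative Python, not libpkix's actual DFS):

```python
def find_valid_paths(tree, is_valid, exhaustive=False):
    # DFS over candidate chains in a (cert, children) tree. With
    # exhaustive=False, quit at the first valid path (roughly libpkix's
    # current behavior); with exhaustive=True, collect every valid path
    # so they can all be scored/diagnosed, at extra cost.
    found = []

    def walk(node, prefix):
        cert, children = node
        chain = prefix + [cert]
        if not children:
            if is_valid(chain):
                found.append(chain)
                return not exhaustive  # True means "stop searching"
            return False
        return any(walk(child, chain) for child in children)

    walk(tree, [])
    return found
```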


-Sean

On 1/25/2012 6:10 PM, Ryan Sleevi wrote:


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-18 Thread Brian Smith
Sean Leonard wrote:
 The most glaring problem however is that when validation fails, such
 as in the case of a revoked certificate, the API returns no
 certificate chains 

My understanding is that when you are doing certificate path building, and you 
have to account for multiple possibilities at any point in the path, there is 
no partial chain that is better to return than any other one, so libpkix is 
better off not even trying to return a partial chain. The old code could return 
a partial chain somewhat sensibly because it only ever considered one possible 
cert (the best one, ha ha) at each point in the chain.

 and no log information.

Firefox has also been bitten by this and this is one of the things blocking the 
switch to libpkix as the default mechanism in Firefox. However, sometime soon I 
may just propose that we change to handle certificate overrides like Chrome 
does, in which case the log would become much less important for us. See bug 
699874 and the bugs that are referred to by that bug.

 The only output (in the revoked case) is
 SEC_ERROR_REVOKED_CERTIFICATE. This is extremely unhelpful because it
 is a material distinction to know that the EE cert was revoked,
 versus an intermediary or root CA.

Does libpkix return SEC_ERROR_REVOKED_CERTIFICATE in the case where an 
intermediate has been revoked? I would kind of expect that it would return 
whatever error it returns for "could not build a path to a trust anchor" 
instead, for the same reason I think it cannot return a partial chain.

 Such an error also masks other possible problems, such as whether
 a certificate has expired, lacks trust bits, or other information.

Hopefully, libpkix at least returns the most serious problem. Have you found 
this to be the case? I realize that "most serious" is a judgement call that may 
vary by application, but at least Firefox separates cert errors into two 
buckets: overridable (e.g. expiration, untrusted issuer) and 
too-bad-to-allow-user-override (e.g. revocation).

 Per above, we never used non-blocking I/O from libpkix; we use it in
 blocking mode but call it on a worker thread. Non-blocking I/O never
 seemed to work when we tried it, and in general we felt that doing
 anything more than absolutely necessary on the main thread was a
 recipe for non-deterministic behavior.

This is also what Firefox and Chrome do internally, and this is why the 
non-blocking I/O feature is not seen as being necessary.

 The downside to blocking mode is that the API is one-shot: it is not
 possible to check on the progress of validation until it magically
 completes. When you have CRLs that are > 10 MB, this is an issue.
 However, this can be worked around (e.g., calling it twice: once for
 constructing a chain without revocation checking, and another time
 with revocation checking), and one-shot definitely simplifies the
 API for everyone.

As I mentioned in another thread, it may be the case that we have to completely 
change the way CRL, OCSP, and cert fetching is done in libpkix, or in 
libpkix-based applications anyway, for performance reasons. I have definitely 
been thinking about doing things in Gecko in a way that is similar to what you 
suggest above.

 We do not currently use HTTP or LDAP certificate stores with respect
 to libpkix/the functionality that is exposed by CERT_PKIXVerifyCert.
 That being said, it is conceivable that others could use this feature,
 and we could use it in the future. We have definitely seen LDAP URLs in
 certificates that we have to validate (for example), and although
 Firefox does not ship with the Mozilla Directory (LDAP) SDK,
 Thunderbird does. Therefore, we encourage the maintainers to leave it
 in. We can contribute some test LDAP services if that is necessary for
 real-world testing.

Definitely, I am concerned about how to test and maintain the LDAP code. And, I 
am not sure LDAP support is important for a modern web browser at least. Email 
clients may be a different story. One option may be to provide an option to 
CERT_PKIXVerifyCert to disable LDAP fetching but keep HTTP fetching enabled, to 
allow applications to minimize exposure to any possible LDAP-related exploits.

 Congruence or mostly-similar
 behavior with Thunderbird is also important, as it is awkward to
 explain to users why Penango provides materially different
 validation results from Thunderbird.

I expect Thunderbird to change to use CERT_PKIXVerifyCert exclusively 
around the time that we make that change in Firefox, if not exactly at the same 
time.

  From our testing, libpkix/PKIX_CERTVerifyCert is pretty close to RFC
 5280 as it stands. It would be cheaper and more useful for the
 Internet community if the maintainers put the 5% more effort necessary
 to finish the job, than the 95% to break compliance. If this is
 something that you want to see to believe, I can try to compile some
 kind of a spreadsheet that illustrates how RFC 5280 stacks up with
 the current PKIX_CERTVerifyCert 

Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-13 Thread Gervase Markham
On 13/01/12 00:01, Brian Smith wrote:
 Ryan seems to be a great addition to the team. Welcome, Ryan!

Ryan - could you take a moment to introduce yourself? (Apologies if I
missed an earlier introduction.)

* We will drop the idea of supporting non-NSS certificate 
  library APIs, and we will remove the abstraction layers
  over NSS's certhigh library. That means dropping the idea
  of using libpkix in OpenSSL or in any OS kernel, for
  example. 

For my info: has anyone ever expressed interest in doing that, or did it
just seem like a useful capability to have in case someone needed it?

Thanks for this summary - it's great to hear that the NSS team are of
one mind :-))

Gerv
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


RE: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-13 Thread Stephen Hanna
Let me just jump in and say that I'm also glad to see
libpkix being used and useful. I was the leader of the
team at Sun Labs that created libpkix (and the Java
CertPath libraries before them). Actually, it's an
exaggeration to say we created libpkix. We started
the work on it and then it took off. Lots of other
people have worked on it since then, probably putting
in many more hours than we did in creating it.

I'm mainly a lurker on this list since I don't do
much with PKI any more. I moved on to a new job
more than seven years ago, working on security
integration standards like TNC and NEA.

But if I can help answer an occasional question,
I'd be glad to do that. I'm having lunch today
with Yassir Elley, who did most of the coding
for the first version of libpkix. He works on
the same team as I do now, at Juniper. We'll
mull over this question and see if we can recall
why we included those layers of abstraction APIs.
I suspect it was because we wanted this to be
a PKIX-compliant library that could be used by
any project for any purpose in any environment.
That's also why it ended up being a bit bloated.
Maybe you could say it was a bit of a second
system effect, following CertPath as it did.

I apologize for whatever weaknesses we put into
libpkix but I'm glad to see that it's useful.
Feel free to adapt it as you see fit.

Thanks,

Steve Hanna
