Re: Problems importing private keys that already exist
Thanks for filing the bug, Nelson. I don't suppose anyone has any idea of how I might work around this issue for the time being? My app is based on XULRunner, which will be released with NSS_3_12_RC3, so for now I have to work with that. I can see from the implementation that I could extract the SECKEYPublicKeyInfo structure myself, but not get any further than that. Is there a way for me to spot that a matching key is already there using just that?

Dave

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
Re: Modulus length (was Re: Draft CA information checklist)
At 9:45 PM -0700 5/29/08, Justin Dolske wrote:
>> Paul Hoffman wrote: Unless Mozilla says we are going to yank that particular Verisign certificate, and all the ones with similar key lengths, decades before they expire, there is absolutely no reason for us to, 20 years in advance, start requiring new CAs to use stronger keys. It is just not justified.
> I don't think it's nearly that black-and-white. Changing existing roots is a high-cost, long-lead process; raising the bar on new roots is cheap and fast. I don't understand why the two are incompatible, nor why progress should be gated upon perfection.

See http://en.wikipedia.org/wiki/Security_theater. Adding strong locks to the front doors while the back doors still have weak locks is useless from a security standpoint.

> Are new CAs objecting to the use of stronger certs?

Probably not, but why is that relevant? Mallory will always attack the weakest part of the system.

> Proposal: [...] A three-phase migration might be a bit more orderly: 1) short-term: raise bar on new CAs 2) mid-term: get existing CAs to switch to stronger roots 3) long-term: remove weak roots. #2 helps mitigate the impact of #3 on end-users, lest something force the issue sooner than desired.

I see no difference between your list and mine other than terminology. If a significant browser like Firefox says that in five years all CA roots have to be 2048 bits, that fact will get existing CAs to switch to stronger roots.

BTW, 1024-bit roots are not weak. Even a decade from now, it will be incredibly expensive to break a 1024-bit RSA key, and the payback for doing so on a CA root will be very low, because it is relatively easy to revoke a broken root in popular browsers. I predict that it would cost Mallory much less to simply set up a CA today, go through the audits and so on, and then lie low until he wants to attack.
Re: Modulus length (was Re: Draft CA information checklist)
Paul Hoffman wrote:
> What does "cause for concern" mean when the majority of the certificates in our list are 1024 bits? (I think that is still true)

As noted by others, the checklist is for new roots, not legacy roots. If we're going to have a gradual transition to 2048-bit modulus length for RSA keys, I think it's legitimate to question why a CA is applying to have a 1024-bit root included. I'd be glad to soften the language about "cause for concern", but I still want to flag 1024-bit roots as worthy of further explanation. (E.g., is this a root created some time ago that is only now being proposed for inclusion? Was/is the root intended for use in low-end devices where performance was deemed an issue? Did the CA not think about the issue of modulus length at all? And so on.)

As for having a formal schedule for the transition (i.e., not accepting new 1024-bit roots after a certain date), I think that's a good idea. As for the ECC question: 256 bits is equivalent to 128 bits of symmetric strength, as in AES-128.

Thanks!
Frank
--
Frank Hecker
[EMAIL PROTECTED]
Re: Conflicts in type defines
Wan-Teh Chang wrote:
> Hi, I am not familiar with npapi.h. I just took a quick look at it. As far as I can tell, you didn't do anything wrong. We need to make the NSS headers usable with NO_NSPR_10_SUPPORT defined. I filed a bug for this issue: https://bugzilla.mozilla.org/show_bug.cgi?id=436430 For now, you'll need to work around this bug by separating the files that need to include npapi.h and NSS headers. Does your npapi.h include prtypes.h? If so, you may need to define NO_NSPR_10_SUPPORT when you compile npapi.h. Wan-Teh

Hi, I did compile npapi.h with NO_NSPR_10_SUPPORT defined. That removed the errors on types (int32 etc.) but gave new errors related to PRArenaPool etc., which is to be expected with the current library. I guess I should wait for the fix for the bug that was filed.

Thanks,
Ruchi
Re: Modulus length (was Re: Draft CA information checklist)
At 11:02 AM -0400 5/30/08, Frank Hecker wrote:
> I'd be glad to soften the language about "cause for concern", but I still want to flag 1024-bit roots as worthy of further explanation. (E.g., is this a root created some time ago that is only now being proposed for inclusion? Was/is the root intended for use in low-end devices where performance was deemed an issue? Did the CA not think about the issue of modulus length at all? And so on.)

Ah! That sounds reasonable. "Cause for further checking" covers that without making it seem that we're concerned just about the length. BTW, I would flag *all* ECC certs with "cause for further checking", due to the very small amount of interop testing that has been done with them. Again, not to say "don't do this", just that we want to ask a few questions that might start a dialog.

--Paul Hoffman
Re: Modulus length (was Re: Draft CA information checklist)
Paul Hoffman wrote, On 2008-05-30 07:17:
> Adding strong locks to the front doors while the back doors still have weak locks is useless from a security standpoint.

You seem to be arguing that no one should bother to put locks on their doors while there remain some people who have no locks on theirs. If we all lived in one house, and all our valuables were available to anyone who penetrated any door, that analogy would be apt. But the information that Mallory actually gets from successfully attacking a connection (opening a door) is not the same for all connections. The information going over various connections is compartmentalized, analogous to separate items of value in separate houses, with separate doors, with separate locks of various strengths.

> Mallory will always attack the weakest part of the system.

There will always be people who refuse to take adequate security measures. They will always be fair game for Mallory. The success of locks on doors is measured by how well they protect those who wish to use them and who do deploy them.

Offhand, I can't think of a good physical analogy to the strange world of crypto-based security, in which our locks get weaker over time. Because physical locks do not tend to get weaker with time, people are not accustomed to upgrading their locks over time. They tend to install one lock and forget it. Here in this thread we hear Mozilla community members voicing their desire to make the world aware of the need to strengthen its locks, and to help prod the lock makers in that direction.
Re: Problems importing private keys that already exist
Dave Townsend wrote, On 2008-05-30 03:59:
> Thanks for filing the bug Nelson. I don't suppose anyone has any idea of how I might be able to work around this issue for the time being?

Earlier, you wrote:
> I also tried this with a shared db as Robert suggested and it appears to work correctly there in the test program I have.

Is that an adequate solution?

> My app is based on XULRunner which will be released with NSS_3_12_RC3 so for the time being I have to work with that.

I don't know much about your application, but if it is stand-alone, then it seems to me that the shareable DB solution is a good one. Those shareable DBs are the wave of the future for NSS. Just think of yourself as an early adopter. Maybe yours will be the first application to prominently feature that new generation of NSS DBs. As I understand it, by using the shareable DB, you don't need any NSS modification. A solution that requires no modification to NSS seems far better than one that does.

> I can see from the implementation that I could extract the SECKEYPublicKeyInfo structure myself but not get any further than that.

You can extract an SPKI from an EncryptedPrivateKeyInfo?

> Is there a way for me to spot that a matching key is already there using just that at all?

If you can get the public key info, then I believe there is a pretty simple way to ask if you have the private key for it. I can look that up. But if shareable DBs solve your problem, I think that's your best answer.

/Nelson
Re: Modulus length (was Re: Draft CA information checklist)
At 9:49 PM +0300 5/30/08, Eddy Nigg (StartCom Ltd.) wrote:
>> Paul Hoffman: Again, I strongly, strongly doubt that Mallory will try to break a 1024-bit key for this attack, at least for 20 years or more.
> I'm not sure from where you got this information

RFC 3766, which is considered the best current practice for the IETF. I am the co-author of the document, and before being published, it was widely reviewed by cryptographers whose names you would recognize.

> , because apparently a group of people succeeded in cracking a 650-and-something-bit key already, about two years ago, with about 40 64-bit AMD dual machines in four months' time.

Googling for that is failing me.

> I write this all from memory because I can't find that article again.

OK, but an actual reference would be helpful.

> I'm sure a big cluster of ever-stronger CPUs (dual, quad, oct cores) will be able to get at 1024-bit keys in an ever shorter time, until the point where it becomes economically interesting.

Please say why you are sure. Yes, the existence of someone who is richer than Bill Gates and who wanted to spend all of his money to break a single key in about a decade would be economically interesting, but not in the way I think you meant.

RFC 3766 is still used for making many important security decisions. The numbers and math in it are essentially the same as those used by NIST in the guidance that Nelson posted yesterday. To date, no one has asked us to update it, or even to make any significant corrections. If you know something we don't, it would be really useful to the whole Internet community to hear more.
Re: Modulus length (was Re: Draft CA information checklist)
Paul Hoffman:
>> I write this all from memory because I can't find that article again.
> OK, but an actual reference would be helpful.

Yes, and it's obviously pretty bad of me not to be able to back it up. I tried to locate it and even went through mails I sent in 2006 where I could have possibly mentioned it, but no dice. If I remember correctly, I saw it initially at heise.de or theregister.com. And I haven't bookmarked it either :-(

>> I'm sure a big cluster of ever-stronger CPUs (dual, quad, oct cores) will be able to get at 1024-bit keys in an ever shorter time, until the point where it becomes economically interesting.
> Please say why you are sure. Yes, the existence of someone who is richer than Bill Gates and who wanted to spend all of his money to break a single key in about a decade would be economically interesting, but not in the way I think you meant. RFC 3766 is still used for making many important security decisions.

Do you believe it to be still accurate? I understand that it was written some time before 2004, with references to an Itanium 500, a Celeron 400 and a dual Pentium II-350, which look like child's play next to today's 64-bit quad processors with speeds of 3 GHz per core and 12 MB direct cache. I guess those aren't even the strongest chips out there, but they are certainly in the same price league when comparing. What we are looking at is deriving the private key from the public key, which would be enough to compromise the CA key and, with it, the whole pile of roots in NSS (as you love to say).

> The numbers and math in it are essentially the same as those used by NIST in the guidance that Nelson posted yesterday. To date, no one has asked us to update it, or even to make any significant corrections.

As the author, how do you estimate the situation? Do you feel it's still accurate, or have developments and capabilities improved beyond expectations (and despite Moore's law)?

> If you know something we don't, it would be really useful to the whole Internet community to hear more.

I will look for it somewhat more... it can't have disappeared like that...

Regards

Signer: Eddy Nigg, StartCom Ltd. http://www.startcom.org
Jabber: [EMAIL PROTECTED] xmpp:[EMAIL PROTECTED]
Blog: Join the Revolution! http://blog.startcom.org
Phone: +1.213.341.0390
Re: Modulus length (was Re: Draft CA information checklist)
Eddy Nigg (StartCom Ltd.):
>> If you know something we don't, it would be really useful to the whole Internet community to hear more.
> I will look for it somewhat more... it can't have disappeared like that...

The only thing I found so far (and which isn't the one I was referring to) is http://www.ercim.org/publication/Ercim_News/enw42/girard.html which must have been known to you at the time of writing the RFC. It's nevertheless interesting, considering that they used some 10,000 PCs, and today's botnets usually comprise many, many more compromised computers (some sources say up to a million). Also in that article there is a reference to cracking a public-key system like RSA of at least 600 bits.
Re: Conflicts in type defines
On Fri, May 30, 2008 at 9:33 AM, Ruchi Lohani [EMAIL PROTECTED] wrote:
> Hi, I did compile npapi.h with NO_NSPR_10_SUPPORT defined. That removed the errors on types (int32 etc) but gave new errors related to PRArenaPool etc which is quite obvious with the current library. I guess I should wait for the fix to happen for the bug filed.

You can work around this bug in several ways before the bug is fixed.

1. You can separate your code that needs npapi.h and your code that needs NSS headers into different source files, so that no source file includes both npapi.h and NSS headers. This is the best solution.

2. You can compile your code with -DNO_NSPR_10_SUPPORT to work around the typedef conflict. But you need to supply the typedefs and defines that NSS headers need. You can do that as follows:

    #include "prtypes.h"

    /* typedef and defines for NSPR 1.0 compatibility */
    #define PRArenaPool PLArenaPool
    typedef PRInt64 int64;
    /* add more as needed, see below */

    /* Include NSS headers */
    #include "nss.h"

If the two typedef/defines are not sufficient, you can add the following as needed:

    typedef PRUint16 uint16;
    typedef PRUint32 uint32;
    #define BITS_PER_BYTE PR_BITS_PER_BYTE
    typedef PRUint8 uint8;
    typedef PRUintn uint;
    #define PR_PUBLIC_API PR_IMPLEMENT
    typedef PRInt16 int16;
    typedef PRInt32 int32;

3. You can patch npapi.h or obsolete/protypes.h so that only one of them defines the types int16, uint16, int32, and uint32.

Hopefully one of these methods is acceptable to you.

Wan-Teh