Re: Are NSS bug fix releases still FIPS 140-2 certified?

2017-04-12 Thread Julien Pierre

Ernie,


On 4/10/2017 2:58 PM, Ernie Kovak wrote:

That means NSS does not provide FIPS compliance on any platform other than the 
one they tested on. So, not on Windows. Not anywhere other than Red Hat 
Enterprise Linux on a few platforms.

Many other vendors have done NSS FIPS validations on many other 
platforms, including Windows. Sun did so in the past, for example. You 
can browse the CMVP certificate list yourself to confirm this. I don't 
know the state of current validations on Windows, though.


Julien
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS open multiple NSS-Databses at once?

2017-01-10 Thread Julien Pierre

I think the main restriction you are likely to run into is with trust.

You can likely explain how this works far better than I can, but 
essentially, you can't treat your multiple cert/key databases as 
entirely separate for purposes of trust. I.e., if you try to trust a CA 
in one DB/slot and distrust it in another DB/slot, you won't actually 
be able to do that. I think pkcs11.txt dictates where trust actually 
ends up getting stored, but there will only be one value of trust flags 
per cert per process at a time.


So, the OP should make sure the limitations with trust are understood 
before trying to use this call. IMO, it's a big can of worms. Like John, 
I would also recommend avoiding the use of multiple DBs per process. We 
didn't go all the way and implement multiple trust domains, which would 
be needed to accomplish true separation of trust.


Ideally, NSS should support multiple, completely separate 
initializations per process, without any shared state between them. But 
it wasn't designed that way. I think this could be fixed for the upper 
layers like libnss/libssl/libsmime.


It is more difficult to fix for the lower layers, especially PKCS#11, 
unless the different trust domains have no PKCS#11 shared libraries in 
common. I don't think the PKCS#11 semantics allow for multiple 
independent states within one shared lib, so this would likely need an 
extension of the spec.
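
For reference, here is a minimal sketch of opening an additional DB with 
SECMOD_OpenUserDB, as Bob suggests below. The path, token label, and 
function name are hypothetical, and the exact moduleSpec keywords are 
from memory:

#include <stdio.h>
#include "pk11pub.h"
#include "prerror.h"

/* Open a second NSS database as an additional PKCS#11 slot.
 * The trust-flag limitation described above still applies. */
static PK11SlotInfo *
open_secondary_db(void)
{
    PK11SlotInfo *slot = SECMOD_OpenUserDB(
        "configDir='sql:/path/to/other/db' tokenDescription='secondary-db'");
    if (slot == NULL) {
        fprintf(stderr, "SECMOD_OpenUserDB failed: %d\n", PR_GetError());
    }
    /* When done: SECMOD_CloseUserDB(slot); PK11_FreeSlot(slot); */
    return slot;
}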


Julien


On 1/10/2017 3:24 PM, Robert Relyea wrote:

On 01/10/2017 01:40 PM, John Dennis wrote:

On 01/10/2017 04:23 PM, Robert Relyea wrote:

2) To open additional databases you want to use SECMOD_OpenUserDB:


Bob, is SECMOD_OpenUserDB new?


No, it's been around for quite some time.


bob



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: LIBPKIX How To Use? (Windows)

2016-11-09 Thread Julien Pierre
Which LIB file are you using? If it is a small LIB file, it is probably 
just the import library for the DLL.


PKIX_PL functions are internal functions that are not exported from 
nss3.dll. Why do you want to use those functions directly?


There is a public PKIX API, CERT_PKIXVerifyCert, which you should use.
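
For illustration, a minimal sketch of calling that public API (the 
function and variable names here are mine, not from NSS; no optional 
parameters are passed):

#include "cert.h"
#include "certt.h"

/* Verify an already-decoded cert for SSL server usage via libpkix. */
static SECStatus
verify_server_cert(CERTCertificate *cert)
{
    CERTValInParam in[1];
    CERTValOutParam out[1];

    in[0].type = cert_pi_end;   /* no optional inputs */
    out[0].type = cert_po_end;  /* no requested outputs */

    return CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                               in, out, NULL /* pin callback arg */);
}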

Julien


On 11/9/2016 05:56, Opa114 wrote:

Hi there,

how can I use the LIBPKIX library on Windows? It only ever compiles a 
*.lib file instead of a *.dll file like nss3.dll. Every time I try to use 
the PKIX_PL_Cert_VerifySignature function, for example, I get an error 
that the reference to the function is undefined, which tells me that 
there is a problem with linking to the library - right?

Anybody out there who can help?


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Problem on building NSS with Windows

2016-08-20 Thread Julien Pierre
This is an issue with the shell's inability to run "pwd.exe".
It may be either a built-in shell, or a separately configured shell.
What is the value of the SHELL environment variable, if any?

See
https://www.gnu.org/software/make/manual/make.html#Choosing-the-Shell
for more details.

"shell pwd" is what's used in the NSS build within coreconf to find the 
absolute path for source files - ie. this is the change that was made 10 years 
ago.

quickder.c is the first source file to be compiled in NSS, which I originally 
authored.

Julien

- Original Message -
From: john.sha.ji...@gmail.com
To: dev-tech-crypto@lists.mozilla.org
Sent: Saturday, August 20, 2016 6:58:02 AM GMT -08:00 US/Canada Pacific
Subject: Re: Problem on building NSS with Windows

2016-08-20 20:25 GMT+08:00 Manuel Dejonghe :

> On Sat, Aug 20, 2016 at 4:00 AM, John Jiang 
> wrote:
> > I checked the full logs. Many "execvp: pwd: Permission denied" in the
> logs,
> > like the below,
> > ...
> > make[1]: Leaving directory
> > `/d/security/nss/nss-latest/nspr/WIN954.0_x86_64_64_DBG.OBJ'
> > cd coreconf; make export
> > make[1]: execvp: pwd: Permission denied
> > make[1]: Entering directory `/d/security/nss/nss-latest/nss/coreconf'
> > cd nsinstall; make export
> > make[2]: execvp: pwd: Permission denied
> > make[2]: Entering directory
> > `/d/security/nss/nss-latest/nss/coreconf/nsinstall'
> > make[2]: Nothing to be done for `export'.
> > make[2]: Leaving directory
> > `/d/security/nss/nss-latest/nss/coreconf/nsinstall'
> > cd nsinstall; make libs
> > ...
> >
> > Does it impact the building?
>
> Most certainly it does.
> "pwd" is a command with which the build system could find out where
> (in which directory) it is sitting. As long as that fails, I would not
> be amazed that files are being searched/built in the wrong directory.
>
I'm really confused by this issue.
If I run "pwd" manually in that command prompt, I don't get any error or
warning.
It works fine.


> ~manuel
> --
> dev-tech-crypto mailing list
> dev-tech-crypto@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Problem on building NSS with Windows

2016-08-19 Thread Julien Pierre

That looks correct; must be a different issue then.

Julien


On 8/19/2016 18:44, John Jiang wrote:

Run "make -version" in the MSYS/BASH command prompt, which is launched by
MozillaBuild, and got the following info,
GNU Make 3.81.90
Copyright (C) 2006  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

This program built for i686-pc-msys32

2016-08-20 9:24 GMT+08:00 Julien Pierre :


Which version of GNU make do you have in your PATH ?

The NSS build system tries to get the absolute location of all source
files. It does so to facilitate debugging, so that debug binaries can
automatically locate the files in a debugger. It uses gmake macros/function
that don't work in very old gmake. This is a change I made 10+ years ago.

If you have gmake < 3.81, I think that path computation will fail. Just
check which gmake or make you have with gmake.exe -v .

You might also have the Microsoft make.exe / nmake.exe in your path .

Make sure the version of make in your path is GNU make and not another
make.

Julien



On 8/19/2016 00:24, John Jiang wrote:


Hi,
I tried to build NSS on Windows 7 x86_64 machine, and followed the
instructions at:
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Building

VS 2015 community and Latest MozillaBuild have been installed, and USE_64
was set to 1.
When "make nss_build_all" finished, I got the below errors:
c1: fatal error C1083: Cannot open source file:
'C:/mozilla-build/msys/quickder.c': No such file or directory
make[2]: *** [WIN954.0_x86_64_64_DBG.OBJ/quickder.obj] Error 2
make[2]: Leaving directory `/d/security/nss/nss-latest/nss/lib/util'
make[1]: *** [libs] Error 2
make[1]: Leaving directory `/d/security/nss/nss-latest/nss/lib'
make: *** [libs] Error 2

How to resolve this problem?
Thanks!


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Problem on building NSS with Windows

2016-08-19 Thread Julien Pierre

Which version of GNU make do you have in your PATH ?

The NSS build system tries to get the absolute location of all source 
files. It does so to facilitate debugging, so that debug binaries can 
automatically locate the files in a debugger. It uses gmake 
macros/functions that don't work in very old gmake. This is a change I 
made 10+ years ago.


If you have gmake < 3.81, I think that path computation will fail. Just 
check which gmake or make you have with gmake.exe -v.


You might also have the Microsoft make.exe / nmake.exe in your path.

Make sure the version of make in your path is GNU make and not another make.

Julien


On 8/19/2016 00:24, John Jiang wrote:

Hi,
I tried to build NSS on Windows 7 x86_64 machine, and followed the
instructions at:
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Building

VS 2015 community and Latest MozillaBuild have been installed, and USE_64
was set to 1.
When "make nss_build_all" finished, I got the below errors:
c1: fatal error C1083: Cannot open source file:
'C:/mozilla-build/msys/quickder.c': No such file or directory
make[2]: *** [WIN954.0_x86_64_64_DBG.OBJ/quickder.obj] Error 2
make[2]: Leaving directory `/d/security/nss/nss-latest/nss/lib/util'
make[1]: *** [libs] Error 2
make[1]: Leaving directory `/d/security/nss/nss-latest/nss/lib'
make: *** [libs] Error 2

How to resolve this problem?
Thanks!


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: [ANNOUNCE] NSS 3.24 Release

2016-05-23 Thread Julien Pierre

Kai,

On 5/22/2016 13:45, Kai Engert wrote:

Notable Changes:
* The following functions have been deprecated (applications should use the
   new SSL_ConfigServerCert function instead):
   * SSL_SetStapledOCSPResponses
   * SSL_SetSignedCertTimestamps
   * SSL_ConfigSecureServer
   * SSL_ConfigSecureServerWithCertChain
* Function NSS_FindCertKEAType is now deprecated, as it reports a misleading
   value for certificates that might be used for signing rather than key
   exchange.
* SSLAuthType has been updated to define a larger number of authentication
   key types.
* The member attribute authAlgorithm of type SSLCipherSuiteInfo has been
   deprecated. Instead, applications should use the newly added attribute
   authType.
* ssl_auth_rsa has been renamed to ssl_auth_rsa_decrypt.

Will the deprecated functions stop working right away? Or is there a 
scheduled time after which they will no longer be supported?
The SSL_ConfigSecureServer function is very commonly used, pretty much 
in all Oracle applications.
In the past, NSS has maintained binary compatibility, except in cases 
where security could not otherwise be fixed, such as SSL2.
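
For context, a small sketch of what the migration looks like, assuming 
the application already has its socket, cert and key set up (the wrapper 
function name is mine):

#include "ssl.h"
#include "cert.h"
#include "keyhi.h"

static SECStatus
configure_server(PRFileDesc *fd, CERTCertificate *cert, SECKEYPrivateKey *key)
{
    /* Old (now deprecated) form: */
    /* return SSL_ConfigSecureServer(fd, cert, key, ssl_kea_rsa); */

    /* New form from NSS 3.24; the last two arguments carry optional
     * extra data (stapled OCSP, SCTs) and its length - none here. */
    return SSL_ConfigServerCert(fd, cert, key, NULL, 0);
}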


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Cipher suits, signature algorithms, curves in Firefox

2016-05-06 Thread Julien Pierre

Zoogtfyz,


On 5/6/2016 07:34, Zoogtfyz wrote:
Websites that prefer AES-256, such as internal websites, can always 
instruct their users/customers to toggle a switch in Firefox to enable 
AES-256. I am proposing having AES-256 ciphersuits toggled off by 
default. 

IMO, that is impractical. I would recommend against doing this.

It was discussed on the Chrome mailing list. Those suites are not yet 
enabled by default in Chrome stable, and it is not yet decided if/when 
they will be enabled.
Nevertheless, other AES-256 cipher suites are already enabled in 
Chrome. I don't think anyone is proposing to remove those from Chrome. 
IMO, we should not remove any AES-256 cipher suites from Firefox/NSS.
I would agree with the proposal to reorder them, however, and prioritize 
AES-GCM over AES-CBC. Since application developers may have different 
opinions about the priority order of cipher suites, I think it would be 
helpful to implement the following 2 NSS ERs which I filed recently:


https://bugzilla.mozilla.org/show_bug.cgi?id=1267894
https://bugzilla.mozilla.org/show_bug.cgi?id=1267896

Only the first one is related to Firefox, but both are related.

There are other considerations to take into account other than "strength".
Indeed, and those considerations might be application-specific, or 
hardware-specific, which is why I think the above 2 ERs make sense to 
implement.


When it comes to signature algorithms and curves, IMO, there should be 
some runtime support for configuring them and prioritizing them.
Right now, AFAIK, we don't have any kind of runtime configuration for 
either; both are hardcoded at compile time. IMO, it is time for this to 
change.
We should have, at the very least, runtime APIs to enable/disable 
curves and enable/disable signature algorithms. Several other libraries 
already offer this.


Preferably, we should also have a configurable ordered list for those, 
as I'm proposing we add for cipher suites.
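
For comparison, here is a sketch of what an application can already do 
today: turn individual suites on or off process-wide. There is no 
ordering API, which is what the two ERs above ask for. Function name and 
suite selection are mine:

#include "ssl.h"
#include "sslproto.h"

/* Disable a few AES-256 CBC suites as process-wide defaults. */
static void
disable_aes256_cbc_suites(void)
{
    static const PRUint16 suites[] = {
        TLS_RSA_WITH_AES_256_CBC_SHA,
        TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
        TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
    };
    unsigned int i;
    for (i = 0; i < sizeof(suites) / sizeof(suites[0]); i++) {
        SSL_CipherPrefSetDefault(suites[i], PR_FALSE);
    }
}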


Julien



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-07 Thread Julien Pierre

David,

On 4/7/2016 15:49, David Woodhouse wrote:

On Thu, 2016-04-07 at 05:01 -0700, Julien Pierre wrote:

The problem really stems from the design of NSS, specifically the
CERTCertificate*, which maps to a unique DER encoded cert, but not to
a single PKCS#11 object in a single token. Since the same cert can
exist in multiple tokens, but there can only be one CERTCertificate*
pointer for all of them, the only way to resolve the issue would be
for the lookup function to return something other than just
CERTCertificate* alone. PK11_ListCerts does that.

Hm, I thought PK11_ListCerts tried to eliminate duplicates too, which
is why it has that crappy O(n²) behaviour? Does it *really* return more
than one of the 'same' certificate? Don't you *still* get a randomly-
chosen one with unpredictable contents of cert->slot?

It depends on the argument to PK11_ListCerts.

typedef enum {
    PK11CertListUnique = 0,     /* get one instance of all certs */
    PK11CertListUser = 1,       /* get all instances of user certs */
    PK11CertListRootUnique = 2, /* get one instance of CA certs without a
                                 * private key.
                                 * deprecated. Use PK11CertListCAUnique */
    PK11CertListCA = 3,         /* get all instances of CA certs */
    PK11CertListCAUnique = 4,   /* get one instance of CA certs */
    PK11CertListUserUnique = 5, /* get one instance of user certs */
    PK11CertListAll = 6         /* get all instances of all certs */
} PK11CertListType;

The cert->slot is still random. But you will get multiple cert list 
entries with the same CERTCertificate*, but different appData, if you 
use one of the values that don't have "Unique" in the name.
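
For illustration, a sketch of walking such a list (function name is 
mine; the appData is assumed, per the discussion above, to carry the 
per-instance token:nickname string):

#include <stdio.h>
#include "cert.h"
#include "pk11pub.h"

static void
dump_user_cert_instances(void *pwarg)
{
    CERTCertList *list = PK11_ListCerts(PK11CertListUser, pwarg);
    CERTCertListNode *node;

    if (list == NULL)
        return;
    for (node = CERT_LIST_HEAD(list); !CERT_LIST_END(node, list);
         node = CERT_LIST_NEXT(node)) {
        /* The same CERTCertificate* may appear several times, once per
         * token holding a copy; node->appData differs per instance,
         * while node->cert->slot is a single, arbitrary slot. */
        printf("cert %s, instance %s\n",
               node->cert->subjectName,
               node->appData ? (char *)node->appData : "(none)");
    }
    CERT_DestroyCertList(list);
}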



OK, but you have the cert in your hand; it doesn't matter where it came
from as long as it's the right one. Hell, in OpenConnect I support
modes where the cert is in a file and only the key is in PKCS#11 or
TPM. Who *cares* where the cert came from?

It's only the *key* which really needs to be found in the right place,
since you have to *use* it from there, right?
In theory, yes, the slot for the key is what matters. The problem is if 
you use cert->slot to look for the key (or even a NULL slot, to search 
all slots), you might end up with the "wrong" key object. Unlike for 
CERTCertificate, identical keys in multiple tokens don't share a single 
SECKEYPrivateKey object.
Of course, since the private key may not be extractable, it's not really 
possible to compare private keys between tokens. But you could match 
them by public key.
Fortunately, that is not an issue here, but the behavior is still 
confusing since it's different from the behavior for DER certs.


I believe there can also be multiple SECKEYPublicKeyStr objects, even 
for identical public keys, but I'm not 100% sure.


I understand why 'key may not match the "token" in the original lookup'
might matter. Not clear why the others would matter — are those really
requirements that we should try to fulfil?
Well, the fact that the cert may not match "token" isn't necessarily an 
issue. But it's definitely an issue for the key.



I'd combine those final three into a single string. It can be a
filename (perhaps starting with 'file:'), it can be a PKCS#11 URI
starting with 'pkcs11:', or it can be another form, which can happily
include subject/issuer/serial in some combination.

But yes, that makes a certain amount of sense.

I'm glad :)



I plead ignorance with TPMs. Is there a technical reason why you
couldn't manipulate TPM objects as PKCS#11 objects?

I forget the precise details but it's to do with the different PINs and
the management thereof, IIRC. The model isn't directly compatible.

Thanks.

Yeah, it's a separate engine. Working on fixing that :)

And if you find any app shipped in Fedora which *doesn't* support it,
please do file a bug.
Since I don't use Fedora, that is unlikely. The Linux we support is 
either RHEL, or OEL, which is a Red Hat fork; I'm not quite sure from 
what tree. Dev is mostly done on OEL.

Yeah, but solving it for PKCS#11 is still a very big step forward for
usability. On Linux platforms it still gives us *consistent* access to
a key either in a hardware token, or in a software token like GNOME-
keyring's or even the NSS user database in ~/.pki/nssdb (although
there's more work to be done before that's easy).

Yeah, AFAIK Firefox/Thunderbird still don't use the DB in the user's 
home directory, and they still don't use the sqlite DB.
Most server apps also want a separate DB in their own location, not the 
home directory.
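
For illustration, a sketch of how a server app typically points NSS at 
its own DB directory using the sqlite ("sql:") format; the path and 
function name are hypothetical:

#include "nss.h"

static SECStatus
init_server_db(void)
{
    /* Open the server's own DB directory read-only, in sqlite format. */
    return NSS_Initialize("sql:/opt/myserver/config/nssdb",
                          "", "", SECMOD_DB, NSS_INIT_READONLY);
}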


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-07 Thread Julien Pierre
David,

Responses inline.

- Original Message -
> It certainly makes sense to add a new function which can locate objects
> *purely* by their PKCS#11 URI. And if I can spend a little time trying
> to properly understand the reasons you currently eschew
> PK11_FindCertsFromNickname(), perhaps we can make sure that
> PK11_FindCertsFromUri() doesn't have the same problems for you.

The problem really stems from the design of NSS, specifically the 
CERTCertificate*, which maps to a unique DER encoded cert, but not to a single 
PKCS#11 object in a single token. Since the same cert can exist in multiple 
tokens, but there can only be one CERTCertificate* pointer for all of them, the 
only way to resolve the issue would be for the lookup function to return 
something other than just CERTCertificate* alone. PK11_ListCerts does that.


> Some applications use PK11_FindCertFromNickname, and others don't.
> The API call is really treacherous in what it does, and the results are 
> really not well-defined in ambiguous cases - for example, if a cert and 
> key exists in multiple tokens.

>Hm, purely for finding the *cert*, why doesn't the token: prefix
>resolve that?

The token: prefix is only used as a starting point for the lookup.
But if the same DER cert exists in multiple tokens, then the value of the 
CERTCertificate->slot pointer is unpredictable.
It may or may not match what was used during the lookup. The same issue 
applies to the nickname field.
nickname field.

>Hm. I see you making an explicit attempt to find a private key in the
>*same* token that the cert came from, and I understand the need for
>that. What I'm missing though, is the reason you can't use
>PK11_FindCertsFromNickname() to find the *cert*. I'm missing the
>difference in behaviour that your servnss_get_cert_from_nickname() has.

The difference is that servnss_get_cert_from_nickname uses 
PK11_ListCerts, and checks the nickname against each cert list entry's 
appData field.
PK11_FindCertsFromNickname doesn't work the same way. Also, the optional 
arguments to servnss_get_cert_from_nickname let it determine whether to 
search for user certs or non-user certs, and whether to pass in a 
pre-existing cert list or not, as PK11_ListCerts is an expensive call.

Even if the cert lookup function ultimately returns a CERTCertificate in 
this case, you still need to find the corresponding key. It's possible 
for the cert and key to live in different tokens. That wouldn't be the 
intended usage, so we forbid it.

Basically, what it comes down to is that if you use PK11_FindKeyByCert 
in the following sequence:
cert = PK11_FindCertFromNickname("token:subject");
key = PK11_FindKeyByCert(cert);

Your cert and key may not match the "token" in the original lookup string.
cert->slot and key->slot may not match.

And both of these slots could differ from the PK11SlotInfo that matches the 
"token" of the original lookup.
The goal of the code I wrote was to ensure that the cert and key in the 
intended device were used, even when multiple copies existed.

Even if you use the alternate sequence:
cert = PK11_FindCertFromNickname("token:subject");
slot = cert->slot;
key = PK11_FindKeyByCertInSlot(cert, slot);

It's no better, since cert->slot may not match "token".

The code I wrote properly allows, for example, the use of multiple HSMs 
or SSL accelerators from different vendors within one process, for 
different virtual servers, in combination with the softoken. It may be 
convoluted, but this was actually tested. I believe the code is also 
used as part of the admin UI that manages, among other things, trust 
flags for certs.

Ultimately, when you are searching for a user cert, you usually want to 
locate the private key at the same time. It makes sense to combine the 
lookup for both.
Some generic (non-PKCS#11-specific) lookup function to uniquely identify 
a cert and key could look like:
bool FindCertAndKey(Cert** outCert, Key** outKey, const char* reqdSubject, 
const char* optionalIssuer, const char* optionalSerial);
And if you want to make it PKCS#11 specific, add some sort of identifier 
for the module and/or token.
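
As a rough sketch of the PKCS#11-specific variant, using only existing 
NSS calls, one can pin both lookups to the intended token instead of 
trusting cert->slot (function name and error handling are mine, not from 
the server code referenced earlier):

#include "cert.h"
#include "pk11pub.h"
#include "keyhi.h"

static SECStatus
get_cert_and_key_from_token(const char *tokenName, const char *nickname,
                            CERTCertificate **outCert,
                            SECKEYPrivateKey **outKey, void *pwarg)
{
    PK11SlotInfo *slot = PK11_FindSlotByName(tokenName);
    if (slot == NULL)
        return SECFailure;

    *outCert = PK11_FindCertFromNickname(nickname, pwarg);
    if (*outCert == NULL) {
        PK11_FreeSlot(slot);
        return SECFailure;
    }
    /* Look for the matching key in the *intended* slot, not in
     * (*outCert)->slot, which may point at another token. */
    *outKey = PK11_FindPrivateKeyFromCert(slot, *outCert, pwarg);
    PK11_FreeSlot(slot);
    if (*outKey == NULL) {
        CERT_DestroyCertificate(*outCert);
        *outCert = NULL;
        return SECFailure;
    }
    return SECSuccess;
}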

>(Well, there's the TPM, and GnuTLS has a 'tpm:…' identifier string
>which references objects there. And TPM really *doesn't* fit well into
>the PKCS#11 model. So yeah, perhaps I might one day find myself trying
>to standardise a 'TPM URI' and ensure that it's supported consistently.
>But that day isn't imminent :)

I plead ignorance with TPMs. Is there a technical reason why you couldn't 
manipulate TPM objects as PKCS#11 objects?

>The scope of what I'm doing here, across GnuTLS, OpenSSL and NSS, is
>"where objects are identified in a PKCS#11 store, they can be
>identified by a RFC7512 identifier string instead of just some random
>home-grown application-specific or crypto-library-specific form."

I'm pretty sure the PKCS#11 support in OpenSSL is optional, and many apps don't 
use it. NSS is the one that stands out in terms of requiring it - som

Re: RFC7512 PKCS#11 URI support

2016-04-06 Thread Julien Pierre

David,

On 4/6/2016 05:57, David Woodhouse wrote:

I also want to mention that there are some fairly major deficiencies in
NSS when it comes to finding certificates. The nickname only represents
a subject. It does not uniquely identify a certificate. Even
token:nickname  - which is really token:subject - still does not
uniquely identify a certificate.

... all this is mostly solved. You can use any or all of CKA_CLASS,
CKA_ID and CKA_LABEL to identify the objects you want to use.

Except that CKA_ID does not uniquely identify an object of any single type.
Even if it did, you would still need to uniquely identify the token 
first, and combine it with one or more of those other attributes.

Seems like the PKCS#11 URI support is supposed to provide that unique ID.
So, you need a string like:
token_identifier : pkcs_uri

And have the CKA_CLASS be implicitly set to certificate, and then the 
application can just look for the corresponding private key in the same 
token.


But you would still have to define the format of the string this way in 
your application. It could be two separate strings, one for the token and 
one for the ID, or a combination string, or something else.
The existing usage of the nickname string in those apps is not defined 
that way, though. Trying to change it may result in unexpected behavior.
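
Purely as a hypothetical illustration of such a config value, an RFC 7512 
URI that names both the token and the object could look like this (token 
and object labels are made up):

/* Hypothetical combined config string identifying a cert by URI. */
static const char *certSpec =
    "pkcs11:token=my-hsm-token;object=server-cert;type=cert";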







The idea of the changes we're making (in NSS and elsewhere) is that
regardless of which crypto library is being used, users should be able
to specify certs/keys/etc. by their PKCS#11 URI in a manner which is
consistent across *all* applications.

So I'd *expect* your users to put a PKCS#11 URI into your server config
file. That is the *standard* form for identifying such objects. And I'd
expect them to file a bug if it doesn't work.
It would only be a bug if the application claimed to comply with that 
RFC, which obviously can't be the case for applications that predate it.


It's that bug which we're trying to fix, by making it easy (and in most
cases a no-op) for NSS-using applications to do the Right Thing. Just
like it's trivial with GnuTLS and relatively easy with OpenSSL (now
that I've mostly fixed OpenSSL's ENGINE_PKCS11).
Well, it isn't going to be a no-op. I think it would be best to have a 
new, well-defined API to uniquely identify a cert, rather than try to 
fit the new scheme into existing APIs that run into ambiguity and 
sometimes make strange and unexpected choices that may not be what the 
application expects.

Some applications use PK11_FindCertFromNickname, and others don't.
The API call is really treacherous in what it does, and the results are 
really not well-defined in ambiguous cases - for example, if a cert and 
key exists in multiple tokens.
For this reason, in the iPlanet web server, I wrote an alternate version 
of this call that better suits the needs of the application. This is 
really old code, maybe 15 years or so.


You can take a look at it at:
https://github.com/jvirkki/heliod/blob/master/src/server/base/sslconf.cpp#L175
https://github.com/jvirkki/heliod/blob/master/src/server/base/servnss.cpp#L860

Gory details, I know, but the point was to deal with the possibility of 
keys and certs existing in multiple tokens, and with the 
CERTCertificate's unique slot pointer pointing to a different slot than 
the one requested in the token:nickname string, if the private key 
exists in several places.
So, regardless of whether you change PK11_FindCertFromNickname, that 
alone is not going to make this scheme work in iWS (or OTD 11g).
The proper way would be a new, explicitly defined API, with applications 
making proper use of it and defining the string fields in their config 
files as appropriate to their own needs. I'm speaking as both an 
application developer and a former NSS developer here.
Of note, also, is that not all applications are strictly PKCS#11 
exclusive. Certs and keys can sometimes live in other stores that are 
not managed by a PKCS#11 module. Not in NSS applications, mind you, but 
such applications are out there.
PKCS#11, and the layers on top of it, are inefficient in terms of 
runtime. Most accelerator vendors, and even some HSM vendors, have 
completely given up support for PKCS#11 for this reason. IMO, trying to 
design a "universal" configuration string in a world where not all 
applications are PKCS#11 is doomed to fail. OTD 12c, for example, does 
not use NSS and does not support PKCS#11, and has moved away from the 
token:nickname string. It uses PKCS#12 wallets as keystores, and a 
triplet of "subject:issuer:serial" as the identifier for certs and keys. 
Only the subject is required; issuer and serial are optional. Unlike 
PK11_FindCertFromNickname, if there is any ambiguity, the server does 
not choose an arbitrary cert and key based on guesses by the library, 
but instead fails to come up and asks the administrator to correct the 
ambiguity by uniquely identifying the cert and key it actually wants to 
use (and it lists all of them if the log is verbose, also). PKCS#11 
support may 

Re: RFC7512 PKCS#11 URI support

2016-04-05 Thread Julien Pierre
The API itself may not have been documented, but products using the API 
have documented this token:nickname usage. That is the case for some 
Oracle server products. Now, I can't say that we really envisioned 
anyone entering a URI in the nickname field of our server config files.
It would certainly be unexpected if the server found a cert and came up 
in this case. Would we consider it a bug? I think it's unlikely that 
any customer would ever run into this and complain about it. It's not an 
impossibility. But this would be borderline user error.


I think there is a difference between extending the syntax of something 
like SSL_OptionSet and extending PK11_FindCertFromNickname.
In the first case, the cipher suite values passed to the API are 
normally not exposed to the end user of the application.
But in the latter case, the nickname string often is. Thus, it might make 
more sense to create a separate API that takes a URI as a separate 
argument.

Maybe PK11_FindCertFromNicknameOrURI, or something like that.

I also want to mention that there are some fairly major deficiencies in 
NSS when it comes to finding certificates. The nickname only represents 
a subject. It does not uniquely identify a certificate. Even 
token:nickname - which is really token:subject - still does not 
uniquely identify a certificate.
There can be multiple certs with different validities and key types in a 
single token. Only searching on a nickname argument, or even 
token:nickname, is of limited value, as it does not handle many of those 
possibilities. Applications that care about those cases - and server 
administrators tend to want to make sure they are using the right 
certificate - usually have to write their own cert selection functions. 
This is what we have had to do in several products. IMO, the whole 
nickname - or token:nickname - approach is not really precise enough. In 
cases of ambiguity, it should either fail, or return a list of certs and 
private keys. Then the application could filter that list further. Why 
cert and key? Because there is a single CERTCertificate even if a 
cert/key lives in several tokens. But there are distinct, slot-specific 
private key structures.
IMO, a function that finds a cert with the following combination of 
criteria would be helpful:

1) token (PK11Slot*)
2) subject (or nickname)
3) issuer subject  (or nickname of issuer)
4) serial number
These criteria are necessary to uniquely identify a cert and private key 
in a particular token in the NSS world.

This is quite a bit more than just "nickname" or "token:nickname".

Of course, the moment you do a cert renewal, you need to specify the new 
cert properly. If you want this to happen automatically, then you can't 
use uniquely identifying criteria. Personally, I believe it is 
preferable to have unique identifiers in the runtime. Admin GUIs can be 
used to present the user with a list of certs that are fit for a 
particular purpose in the application. Beyond that, one can come up with 
logic to possibly do auto-selection of a renewed cert, but the criteria 
would likely be application specific. When even the definition of 
"CERT_IsNewer" is ambiguous, it's very hard to come up with a 
one-size-fits-all solution.


Anyway, I have digressed quite a bit from the PKCS#11 URI subject at 
this point, so I will stop.


Julien

On 4/5/2016 09:49, John Dennis wrote:
One of the problems I have with the argument Ryan presents concerning 
API contracts and breakage is that the "API contract" Ryan talks about 
is, to the best of my knowledge, undocumented; it's an API "convention" 
observed by a select group of developers "in the know". I don't see 
anything about a token plus colon prefix in the documentation:


https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/SSL_functions/pkfnc.html#1035673 


If the API does not have documented behavior constraints, then you 
can't be causing API breakage.


P.S.: CERT_FindCertByNickname is also undocumented. Nor is there any 
documentation on the syntax of nicknames in Cert DB.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: [NSS] X509 Certificate Chain Verification Example

2016-02-10 Thread Julien Pierre
As an aside, I would strongly advise you to use the first method - put 
the root CA in your cert DB, ahead of time, prior to starting your 
applications.
Dynamically and blindly trusting a root CA, especially one received over 
a network, is asking for trouble and a big security no-no.
You should never do so unless you have some other trusted proof that you 
should do so - say, a signed message from a CA you already trust.
If you start your app without any trusted CA in your DB, you will not 
have any real security.


Julien

On 2/10/2016 16:30, Julien Pierre wrote:

Nicholas,

Your root certificate needs to be trusted. Self-signed is fine, but 
you still need to trust it.


It would either need to be present in your cert DB, with the proper 
trust flag, or you would need to dynamically set the trust on that 
root certificate using the API .

You can use CERT_ChangeCertTrust or CERT_ChangeCertTrustByUsage to do so.

Your root CA needs to be trusted prior to attempting any chain 
build/verification, otherwise your verification will always fail.

If you have no trusted CAs, then all verifications will always fail.

The same will be true whether you are using the legacy chain 
verification code in NSS, or libpkix.


Julien

On 2/10/2016 05:26, Nicholas Mainardi wrote:

I go on with my investigation, and I find that error -8172 should be
related to the fact that the root certificate is self-signed, even if it's
in the trust store contained in the Root Certs module. Indeed, I search
through the reference SEC_ERROR_UNTRUSTED_ISSUER, and I find this error
seems to be set only in this function
<http://mxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#404> in two cases:

1. The issuer cert is explicitly distrusted (code
<http://mxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#710>). However,
CERTDB_TERMINAL_RECORD is never set in the certificates I parse;
2. The issuer cert is self-signed (code
<http://mxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#750>), as it can
be seen by the comment. The root certificate of the Apple chain is
self-signed, so I'm afraid that this is the check which fails. It seems
pretty weird since usually root certificates are self-signed.

Another test I perform seems to confirm that error -8172 is relative to the
root certificate. Indeed, if I pass as a chain the server certificate and
an intermediate CA certificate, without loading the Root Certs module, I get
error -8179, issuer not found. However, with the same input but with the
module loaded, the error turns into -8172. Hence, either the aforementioned
checks are done after the chain has been built, or the error is raised
when the root cert found in the module is being added to the built chain.

Thank You,

Nicholas

2016-02-09 18:24 GMT+01:00 Nicholas Mainardi :



About error -8101 with the Facebook CA certificate, I found it should be
related with this bug
<https://bugzilla.mozilla.org/show_bug.cgi?id=1049176>, so it's a
certificate issue. However, with Apple's certificate chain, I got error
-8102 when I try to validate only the CA certificate, while error -8172
trying to validate the whole chain.
It's likely I got the issue related to error -8102 by looking at the
source code. After a while, I got to this
<http://mxr.mozilla.org/security/source/security/nss/lib/certdb/certdb.c#1219>
function.
Here, the key Encipherment usage is set as mandatory for every
certificate using RSA as pk algorithm. However, while this setting could
make sense in end entity certificates (even if it's not correct because
there is no mandatory constraint about it in the RFC), it's quite silly
with CA certificates. Of course RSA can be used to encrypt a key also by a
CA, but it's not that common, hence it's a really strong requirement. I
have still to figure out where error -8172 comes from (I suppose that the
issuer is untrusted because the CA certificate has the -1802 error), and
why I got an invalid_args error by setting some usages as parameters of
CERT_PKIXVerifyCert.

If someone can point out to me why this happens, and confirm the possible
issues I have found, it would save me a lot of time.

Thank You,

Nicholas

2016-02-09 13:57 GMT+01:00 Nicholas Mainardi :

Anyone up for a possible solution?

2016-02-06 14:51 GMT+01:00 Nicholas Mainardi :

If I remove cert_pi_certList from the array, the invalid_args error turns
into an untrusted_issuer error (-8172). So, it seems that even if I don't
add the intermediate CA certificate in certList, the lookup in the cert DB
is fine, but it doesn't manage to validate the CA certificate. Indeed, if I
give only the CA certificate as input, I get an inadequate_cert_type error
(-8101). Same result by removing also cert_pi_useAIACertFetch. I tried to
change the certificate usages parameter, but the error varies from
invalid_args to inadequate_key_usage (-8102).

I know that the certificate chain is correct, I have already u

Re: [NSS] X509 Certificate Chain Verification Example

2016-02-10 Thread Julien Pierre

Nicholas,

Your root certificate needs to be trusted. Self-signed is fine, but you 
still need to trust it.


It would either need to be present in your cert DB, with the proper 
trust flag, or you would need to dynamically set the trust on that root 
certificate using the API .

You can use CERT_ChangeCertTrust or CERT_ChangeCertTrustByUsage to do so.

Your root CA needs to be trusted prior to attempting any chain 
build/verification, otherwise your verification will always fail.

If you have no trusted CAs, then all verifications will always fail.

The same will be true whether you are using the legacy chain 
verification code in NSS, or libpkix.
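
For illustration, a minimal sketch of setting that trust dynamically on 
an already-imported root (function name is mine; the "C,C,C" string is 
the certutil-style shorthand for a trusted CA for SSL, email and object 
signing):

#include "cert.h"
#include "certdb.h"

static SECStatus
trust_root_ca(CERTCertificate *root)
{
    CERTCertTrust trust;

    if (CERT_DecodeTrustString(&trust, "C,C,C") != SECSuccess)
        return SECFailure;
    return CERT_ChangeCertTrust(CERT_GetDefaultCertDB(), root, &trust);
}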


Julien

On 2/10/2016 05:26, Nicholas Mainardi wrote:

I go on with my investigation, and I find that error -8172 should be
related to the fact that the root certificate is self-signed, even if it's
in the trust store contained in Root Certs module. Indeed, I search through
the reference SEC_ERROR_UNTRUSTED_ISSUER, and I find this error seems to be
set only in this function
<http://mxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#404> in two cases:

1. The issuer cert is explicitly distrusted (code
<http://mxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#710>). However,
CERTDB_TERMINAL_RECORD is never set in the certificates I parse;
2. The issuer cert is self-signed (code
<http://mxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#750>), as it can
be seen by the comment. The root certificate of the Apple chain is
self-signed, so I'm afraid that this is the check which fails. It seems
pretty weird since usually root certificates are self-signed.

Another test I perform seems to confirm that error -8172 is relative to the
root certificate. Indeed, if I pass as a chain the server certificate and
an intermediate CA certificate, without loading Root Certs module, I get
error -8179, issuer not found. However, with the same input but with the
module loaded, the error turns into -8172. Hence, either the aforementioned
checks are done after the chain has been built, or the error is raised
when the root cert found in the module is being added to the built chain.

Thank You,

Nicholas

2016-02-09 18:24 GMT+01:00 Nicholas Mainardi :


About error -8101 with Facebook CA certificate, I found it should be
related with this bug
<https://bugzilla.mozilla.org/show_bug.cgi?id=1049176>, so it's a
certificate issue. However, with Apple's certificate chain, I got error
-8102 when I try to validate only the CA certificate, while error -8172
trying to validate the whole chain.
It's likely I got the issue related to error -8102 by looking at the
source code. After a while, I got to this
<http://mxr.mozilla.org/security/source/security/nss/lib/certdb/certdb.c#1219> 
function.
Here, the key Encipherment usage is set as mandatory for every
certificate using RSA as pk algorithm. However, while this setting could
make sense in end entity certificates (even if it's not correct because
there is no mandatory constraint about it in RFC), it's quite silly with CA
certificates. Of course RSA can be used to encrypt a key also by CA, but
it's not that common, hence it's a really strong requirement. I have still
to figure out where does error -8172 come from (I suppose that the issuer
is untrusted because the CA certificate has -1802 error), and why I got
invalid_args error by set as parameter of CERT_PKIXVerifyCert some usages.
If someone can point me out why this happens, and confirm the possible
issues I have found, it would save me a lot of time.

Thank You,

Nicholas

2016-02-09 13:57 GMT+01:00 Nicholas Mainardi :


Anyone up for a possible solution?

2016-02-06 14:51 GMT+01:00 Nicholas Mainardi 
:


If I remove cert_pi_certList from the array, invalid_args error turns
into untrusted_issuer error (-8172). So, it seems that even if I don't add
the intermediate CA certificate in certList, the lookup in cert DB is fine,
but it doesn't manage to validate the CA certificate. Indeed, if I give
only the CA certificate as input, I got inadequate_cert_type error (-8101).
Same result by removing also cert_pi_useAIACertFetch. I try to change the
certificate usages parameter, but the error varies from invalid_args to
inadequate_key_usage (-8102).

I know that the certificate chain is correct, I have already used it as
a testing input for other libraries, and I know I have a trust anchor for
the CA certificate in my system root certificates. I think that the issue
is the error inadequate_cert_type on the CA certificate, but I have no idea
about what can cause this error. Moreover, I got invalid_args error even
passing trustAnchors instead of cert_pi_certList. So, I suppose there are
some issues with the processing made by Cert_PKIXVerifyCert function.

Thank You,

Nicholas

2016-02-06 2:42 GMT+01:00 Julien Pierre :


Nicholas,

It looks like

cert_pi_certList

is indeed never processed. So that seems to be unimp

Re: [NSS] X509 Certificate Chain Verification Example

2016-02-05 Thread Julien Pierre

Nicholas,

It looks like

cert_pi_certList

is indeed never processed. So that seems to be unimplemented. I'm not 
quite sure why that is. It's been a long time since I worked on 
NSS/libpkix.

What happens if you remove that parameter from your list ?

Once the certs are decoded, presumably in your parse_cert function, they 
will be available in the NSS softoken as temp certs, and will be 
searchable and findable by CERT_PKIXVerifyCert.
The chain building should rebuild the chain (or possibly another chain). 
If you are using AIA fetch with cert_pi_useAIACertFetch, then presumably 
your chain may be incomplete.
Thus, you don't really want to use cert_pi_certList anyway, as that 
would imply no more chain building.


I think if you remove the cert_pi_certList, and if you have a trust 
anchor in your softoken cert DB, then the rebuilding+validation should work.
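
For illustration, the input/output arrays could then look like the 
following sketch (without cert_pi_certList, AIA fetching enabled, each 
array closed by its end marker; the field names are as I recall them 
from certt.h and the function name is mine):

#include "cert.h"
#include "certt.h"

static SECStatus
verify_with_aia(CERTCertificate *cert)
{
    CERTValInParam in[2];
    CERTValOutParam out[1];

    in[0].type = cert_pi_useAIACertFetch;
    in[0].value.scalar.b = PR_TRUE;
    in[1].type = cert_pi_end;
    out[0].type = cert_po_end;

    return CERT_PKIXVerifyCert(cert, certificateUsageSSLServer, in, out, NULL);
}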


Julien

On 2/5/2016 06:03, Nicholas Mainardi wrote:

Hello,

Thank you for your reply. I looked for the function you mentioned and I
looked at the usage examples. I edit <http://pastebin.com/4BQsinXM> my
previous code to use the function, but I'm getting error invalid_args
(-8187). After some trials, I figure out it's caused by the
cert_pi_certList type in input parameter. Looking at how these parameters
are processed, I got to this function
<http://mxr.mozilla.org/security/source/security/nss/lib/certhigh/certvfypkix.c#1509>,
which contains a switch on the param type. However, it doesn't exist a case
for every types listed here
<http://mxr.mozilla.org/security/source/security/nss/lib/certdb/certt.h#898>,
and the default case raise invalid_args. Isn't this a bug of this function?

However, I tried also with cert_pi_trustAnchors type (which has a case in
the function), but I got the same error. And also if I change the
certificate usage parameter, I got this error. So, is there something wrong
in the code I have written?

Thanks,

Nicholas

2016-02-04 1:14 GMT+01:00 Julien Pierre :


CERT_VerifyCertNow is a legacy API that does not support the full set of
RFC 3280/5280 features.
To support things like policy checks, you can use libpkix .
Look for CERT_PKIXVerifyCert . There are examples of usage in the NSS test
programs vfychain and tstclnt .
The library supports many more options than may be tested, though.

Julien

On 2/3/2016 08:37, Nicholas Mainardi wrote:


Hello,

I'm comparing different libraries to verify X509 certificate chains. I had
some issues to find how to use NSS to perform this task. At the end, I
managed to get a working code with one certificate chain. You can find the
code in this question
<
http://stackoverflow.com/questions/34982796/how-to-parse-and-validate-certificates-with-nss
I asked on stack overflow. I would like to know if the code I wrote is the
correct way to verify a certificate chain using NSS, and if there are
other
parameters to customize the verify algorithm which can be set (i.e. a flag
to enable policy check etc.). If the code is correct, I suggest it could
be
added to NSS examples on the documentation.

Thank You,

Nicholas


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: [NSS] X509 Certificate Chain Verification Example

2016-02-03 Thread Julien Pierre
CERT_VerifyCertNow is a legacy API that does not support the full set of 
RFC 3280/5280 features.

To support things like policy checks, you can use libpkix .
Look for CERT_PKIXVerifyCert . There are examples of usage in the NSS 
test programs vfychain and tstclnt .

The library supports many more options than may be tested, though.

Julien

On 2/3/2016 08:37, Nicholas Mainardi wrote:

Hello,

I'm comparing different libraries to verify X509 certificate chains. I had
some issues to find how to use NSS to perform this task. At the end, I
managed to get a working code with one certificate chain. You can find the
code in this question

I asked on stack overflow. I would like to know if the code I wrote is the
correct way to verify a certificate chain using NSS, and if there are other
parameters to customize the verify algorithm which can be set (i.e. a flag
to enable policy check etc.). If the code is correct, I suggest it could be
added to NSS examples on the documentation.

Thank You,

Nicholas


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Prevent "proxyfying" PKCS#11

2015-09-29 Thread Julien Pierre
Personally, I don't think authenticating the scanner is an issue. I can 
see the document physically being scanned in the scanner.

And I can see the resulting image in the java applet on my screen.
If the document that appears is not what I scanned, I would simply not 
submit it. I'm not worried about authenticating my scanner.


My bank feels differently, however.  They won't allow users to just 
upload any random JPG file - it has to be input directly from a scanner, 
or through a mobile phone's camera. It probably doesn't make that much 
sense - any human being who can upload a fake check JPG could probably 
also print it, and then scan it or take a picture of it with the mobile 
phone.


This security measure really only prevents an automated program from 
generating and submitting the fake check JPG, until they can fake a 
phone camera or a Twain scanner in software, which is a bit more involved.


Julien

On 9/29/2015 01:23, helpcrypto helpcrypto wrote:

Julien: you and me have "at the end" the same problem.

Java Web applets are passing away and we are looking for alternatives.


If you are just talking about "scanning", there are 3 options AFAIK to do that:
  - From the web, invoke a 127.0.0.1:port application (service) which listens
on port X and does all the job with the TWAIN scanner
  - From the web, invoke a myscan:// application protocol (look for registering
protocols on the system at Google) and, although you'll get a warning dialog,
the application could TWAIN-scan and send the result to a server
  - Forget about TWAIN. The trust chain of scanning is broken, but you can
scan over the network and then upload the JPG to your web form :P


If you are somehow worried about security (ie: "certified scanning") then
we are both "on the same problem".

Developing your own TWAIN driver for network scanners will probably be
much more expensive than using xane or buying a new USB one.
You still have the scanner-web issue, but that can be solved easily (using
option #1 + a random number/token).


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Prevent "proxyfying" PKCS#11

2015-09-29 Thread Julien Pierre

Yes, I think you are right, and we have digressed.

On 9/28/2015 17:30, Robert Relyea wrote:

On 09/25/2015 09:13 AM, Erwann Abalea wrote:
Le vendredi 25 septembre 2015 14:39:04 UTC+2, helpcrypto helpcrypto a 
écrit :
On Fri, Sep 25, 2015 at 11:52 AM, Erwann Abalea  
wrote:

[...]

Although it won't solve my problem, this will make possible to kill
signature applets forever, which indeed it's my real objective.
This objective will soon be reached. Java isn't supported anymore in 
Chrome 45 and is supposedly to be deprecated in Firefox soon, and 
ActiveX (whence Java also) won't be supported in Edge.
I thought helpcrypto was suggesting using a javacard applet, not java 
that runs on the client. Of course to install a javacard applet you 
need the card keys, which I believe he said he didn't have.


bob






--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Prevent "proxyfying" PKCS#11

2015-09-29 Thread Julien Pierre

Erwann,

On 9/28/2015 12:21, Erwann Abalea wrote:
I was mistaken about Firefox, which still supports NPAPI, with all Java 
applets in "click-to-play" mode. 

OK, great !

I certainly need Java in the browser, for other reasons (running a
scanner applet to use with my bank).

Then you can't use an Android or iOS device or the new MS browser for this 
activity. Not cool.
It's quite fine with me - I don't expect my scanner to work with my cell 
phone.


On my phone, I have the option to use a mobile check deposit app 
instead. But I find this much less reliable than using a scanner.
Lighting is often bad, there are shadows, focus issues, and the angle 
makes them look strange, etc.

It's much more cumbersome to do this with a smartphone than with a scanner.

While the scanner is networked, there are no Twain drivers for the 
scanner for Android, only for PC and Mac.
There is a separate scanning app that works on the phone, but that 
doesn't interact with the browser.
Java and the browser plug-in are needed to tie the scanner to the 
browser and server.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Prevent "proxyfying" PKCS#11

2015-09-29 Thread Julien Pierre



On 9/28/2015 01:50, helpcrypto helpcrypto wrote:

On Sat, Sep 26, 2015 at 1:17 AM, Julien Pierre 
wrote:


Erwann,

What are the replacement plug-in API mechanisms following the deprecation
of NPAPI ?


Oracle porting to PPAPI? (Perhaps you could give us some privileged
information about this)
I was under the impression that only Google Chrome supports PPAPI, not 
Firefox.
Anyway, I do not work on Java and have no information about porting the 
browser plug-in to a new API.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Prevent "proxyfying" PKCS#11

2015-09-25 Thread Julien Pierre

Erwann,

What are the replacement plug-in API mechanisms following the 
deprecation of NPAPI?

Can't they be used to write another Java plug-in?

I certainly need Java in the browser, for other reasons (running a 
scanner applet to use with my bank).

Julien

On 9/25/2015 09:13, Erwann Abalea wrote:

Le vendredi 25 septembre 2015 14:39:04 UTC+2, helpcrypto helpcrypto a écrit :

On Fri, Sep 25, 2015 at 11:52 AM, Erwann Abalea  wrote:

[...]

Although it won't solve my problem, this will make possible to kill
signature applets forever, which indeed it's my real objective.

This objective will soon be reached. Java isn't supported anymore in Chrome 45 
and is supposedly to be deprecated in Firefox soon, and ActiveX (whence Java 
also) won't be supported in Edge.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Adding a test only option to the NSS server to disable sending the renego extension

2015-09-22 Thread Julien Pierre
That's odd. Were you using the correct NSS_SSL_ENABLE_RENEGOTIATION 
variable name, and not SSL_ENABLE_RENEGOTIATION?
SSL_ENABLE_RENEGOTIATION is the internal name of the socket option, but 
not the name of the environment variable.
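
For reference, a sketch of the per-socket option that the internal name 
refers to; the environment variable changes the process-wide default for 
this option. The wrapper function name is mine, and fd is assumed to be 
an SSL-wrapped socket:

#include "ssl.h"

static SECStatus
require_renego_extension(PRFileDesc *fd)
{
    /* Only renegotiate with peers that support the RFC 5746 extension. */
    return SSL_OptionSet(fd, SSL_ENABLE_RENEGOTIATION,
                         SSL_RENEGOTIATE_REQUIRES_XTN);
}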


Julien

On 9/21/2015 23:14, Cykesiopka wrote:

Hi Julien,

Thanks for the response. I tried all of the relevant options for 
SSL_ENABLE_RENEGOTIATION, but none of them seemed to work.


Reading the descriptions, it looks like these options have more to do 
with how NSS reacts to peers that send or don't send the renego 
extension.


Unfortunately, I need to test that Firefox prints out an appropriate 
web console message when connecting to a non-RFC5746 compliant server.

Currently, the NSS server seems to always send the extension.

Cykesiopka

On Mon 2015-09-21 05:43 PM, Julien Pierre wrote:

You can read about the following environment variable:
NSS_SSL_ENABLE_RENEGOTIATION
<http://mxr.mozilla.org/security/search?string=NSS_SSL_ENABLE_RENEGOTIATION>
at
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Reference/NSS_environment_variables


This may be all you need to set in your tests to change the extension 
behavior.

Julien

On 9/20/2015 23:50, Cykesiopka wrote:

Hi,

As part of my work on creating tests for 
https://bugzilla.mozilla.org/show_bug.cgi?id=883674, I need some way 
to control whether or not the NSS server sends the renegotiation 
extension.


My current idea is to add a debug only SSL_ option for this (I have 
no interest in letting such an option be used in production).

Does this sound like a reasonable solution?

Or, maybe this already exists and I'm not looking in the right place?

Thanks,
Cykesiopka







--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Adding a test only option to the NSS server to disable sending the renego extension

2015-09-21 Thread Julien Pierre

You can read about the NSS_SSL_ENABLE_RENEGOTIATION environment variable 
at
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Reference/NSS_environment_variables

This may be all you need to set in your tests to change the extension 
behavior.

Julien

On 9/20/2015 23:50, Cykesiopka wrote:

Hi,

As part of my work on creating tests for 
https://bugzilla.mozilla.org/show_bug.cgi?id=883674, I need some way 
to control whether or not the NSS server sends the renegotiation 
extension.


My current idea is to add a debug only SSL_ option for this (I have no 
interest in letting such an option be used in production).

Does this sound like a reasonable solution?

Or, maybe this already exists and I'm not looking in the right place?

Thanks,
Cykesiopka



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Build error for NSS 3.17.4 (Windows 7)--needs to be addressed in NSPR

2015-02-02 Thread Julien Pierre

Kai,
On 2/2/2015 04:17, Kai Engert wrote:

Please use OS_TARGET=WIN95

That's the newer and supported configuration.

If you found any place that suggests to use WINNT, we should update that
location.

Kai

Please note that Oracle still uses WINNT for the Windows build, and 
needs this build to continue to be supported as well.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Accessing Firefox keystore

2014-12-08 Thread Julien Pierre

Jean,

On 12/8/2014 02:38, Jean Bave wrote:

Thank you for your answer.
We tried the SunPKCS11 class but the thing is we are trying to access
Firefox's keystore to reach the certificates of a physical token stored in
it.
Apparently the Sun provider cannot deal with physical tokens through
Firefox's keystore. Does that seem plausible to you?


The Sun provider accesses PKCS#11 libraries directly. You can write code 
with it to access the NSS softoken database.

But SunPKCS11 itself won't directly read the NSS secmod.db/pkcs11.txt .
You probably would have to write additional code to do that, enumerate 
all those PKCS#11 modules, and load them with SunPKCS11 .


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Reducing NSS's allocation rate

2014-11-10 Thread Julien Pierre
Personally, I would like to encourage your efforts. If you are able to 
move many of these allocations from heap-based with locks, to something 
stack-based instead, this will improve NSS server performance 
tremendously. I would be surprised if it was a significant boost to 
client apps like Firefox, though.


On the other hand, I'm not sure that there is that much low-hanging 
fruit, based on the stacks you list in the bug.
Many are dictated by the design of NSS, the PKCS#11 API, and the current 
softoken implementation.
Working within these constraints is not so simple. Keep in mind that 
many things you might want to change cannot be, in order to preserve NSS 
API binary compatibility.


Julien

On 11/10/2014 18:51, Nicholas Nethercote wrote:

Hi,

I've been doing some heap allocation profiling and found that during
basic usage NSS accounts for 1/3 of all of Firefox's cumulative (*not*
live) heap allocations. We're talking gigabytes of allocations in
short browsing sessions. That is *insane*.

I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1095272 about
this. I've written several patches that fix problems, one of which has
r+ and is awaiting checkin; check the dependent bugs.

This is making Ryan Sleevi is nervous and he wanted me to post
something here about my plans. So here they are: I want to reduce
unnecessary allocations. I want to do so in a very non-intrusive
fashion: I'm aware that NSS is security-sensitive code, and TBH it's
not that enjoyable to read or modify. I will plug away at this as long
as there is low-hanging fruit to be found, which may not be that much
longer.

Nick


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-23 Thread Julien Pierre

Hubert,

On 10/23/2014 07:53, Hubert Kario wrote:

Are there phone/tablets which can't install any 3rd party browsers at all ?

AFAIK, iOS devices require you to use the system TLS stack.

I see, I didn't know.
But it still would seem that any second connection (fallback) would be 
dictated by the browser implementation itself, and not the stack.
  

Anyway, the very fallback we are talking about here is a known
vulnerability.
It sounds like we want a browser that is current on vulnerability fixes,
except for this one.

I'm not saying it isn't. But it is behaviour that is expected by users.
I think most users are woefully unaware of any TLS connection retry / 
fallback being done by the browser.
I think you meant to say that users expect the browser to continue to 
work with all their legacy TLS-intolerant devices somehow.
That doesn't mean that a legacy mode of operation in the browser 
wouldn't be an acceptable solution to them.



Do you have any pointer to the versions and data for this 99% / 89% ?
http://www.ietf.org/proceedings/90/slides/slides-90-tls-0.pdf
  
Thank you. The 11% of TLS 1.3 intolerant servers is scary indeed. Do we 
have any idea which SSL stacks / server vendors are affected ?


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-22 Thread Julien Pierre

Hubert,

On 10/22/2014 05:27, Hubert Kario wrote:
  
Problem is that if something doesn't work in one browser and does in another

users blame the browser. Even if the browser that doesn't work does the right
thing.

What if all browsers started doing the right thing ?


Recommending the use of obsolete browsers is also a bad idea - they have well
known vulnerabilities. It also may simply be not possible in walled gardens
(phones/tablets).

Are there phone/tablets which can't install any 3rd party browsers at all ?

Anyway, the very fallback we are talking about here is a known 
vulnerability.
It sounds like we want a browser that is current on vulnerability fixes, 
except for this one.
That would seem to make the case for some sort of "legacy mode" in 
current browsers.



This way, browsers won't subject the requests to 99.999% of servers that
are not TLS-intolerant to needless MITM attacks, not to mention extra
network bandwidth and round trips.

It's closer to below 99% or 89%, depending on which TLS version you look at.

Do you have any pointer to the versions and data for this 99% / 89% ?

It's rare, but it's not unheard of, and that's internet facing dedicated web
servers. I'm afraid what the statistics would be for devices where the TLS
part is secondary (routers/automation systems/smart devices/etc.) which we
can't really probe.

For legacy devices, a "legacy mode" in the browser seems most appropriate.

Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Updates to the Server Side TLS guide

2014-10-21 Thread Julien Pierre

Julien,
On 10/21/2014 18:02, Julien Vehent wrote:

NSS is very rarely used in servers.
Perhaps so statistically, but the products are still around. I notice 
that Oracle/iPlanet/RedHat products are absent from the document.
Oracle still ships at the very least iPlanet Web Server, iPlanet Proxy 
Server, Oracle Traffic Director, which use NSS currently (I should know 
since I work on these).
Some of these have been around for about 18 years in various 
incarnations, so we must be doing at least something right.
There are several more as well - Messaging and Directory, which are 
still maintained elsewhere within Oracle.
Red Hat used to ship some servers with mod_nss , though I don't know how 
widely it is used. Same with RedHat CMS.

I am sure others from RH can chime in.

Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-21 Thread Julien Pierre

Florian,

On 10/21/2014 05:24, Florian Weimer wrote:

* Julien Pierre:


The whole TLS_FALLBACK_SCSV would be unnecessary if not for this
browser misbehavior - and I hope the IETF will reject it.

Technically, we still need the codepoint assignments from the IETF
draft because of their widespread use, and that requires Standards
Action, which means publication of a standards-track RFC.  Although
the RFC could simply say, "do not use these codepoints".
Sorry I misspoke - I'm not that familiar with the intricacies of the 
IETF RFC process.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-21 Thread Julien Pierre

Kai,

On 10/21/2014 05:31, Kai Engert wrote:

So, let's get this clarified with test results.

I've tested Firefox 34 beta 1.

Because bug 1076983 hasn't landed on the beta branch yet, the current
Firefox 34 beta 1 still has SSL3 enabled.

With this current default configuration (SSL3 enabled), Firefox will
fall back to SSL3.

Then I used about:config and changed security.tls.version.min to 1
(which means TLSv1, thereby disabling SSL3).

With SSL3 disabled, Firefox 34 no longer falls back to SSL3.

When attempting to connect to a SSL3-only server, I see Firefox 34
attempting three connections, with TLS 1.2 {3,3}, TLS 1.1 {3,2} and TLS
1.0 {3,1}, but not SSL3.


That's a lot of fallbacks.
Do we know of TLS 1.0 servers that reject connections with TLS 1.2 or 
1.1  in ClientHello instead of falling back to 1.0 ?
Or TLS 1.1 servers that reject connections with 1.2 in ClientHello 
instead of falling back to 1.1 ?


Just how many broken servers are there out there ?

Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Updates to the Server Side TLS guide

2014-10-21 Thread Julien Pierre

Chris,

On 10/21/2014 11:43, Chris Newman wrote:

At this point, the OpenSSL-style cipher suite adjustment string has become a
de-facto standard. So I believe NSS should be modified to follow that de-facto
standard rather than expecting those writing security advice to do extra work:

  

It's not a sexy change to NSS, but it would be very useful. Enterprise
administrators of Firefox would probably appreciate this as well as server
admins for servers using NSS.

- Chris

I wasn't even specifically referring to cipher strings, but the whole 
document seems to be about servers running OpenSSL, though I did see a 
few references to GnuTLS as well.
There are also servers running NSS, Microsoft SSL stacks, proprietary 
SSL stacks, etc.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-21 Thread Julien Pierre

Florian,

On 10/21/2014 06:38, Florian Weimer wrote:


I still think the fallback behavior you have shown is a browser bug,
and should be fixed there, by its removal.  There seems to be rather
vehement disagreement, but I don't get why.

+1 , any fallback is a bug. SSL has built-in protocol version negotiation.

People who desparately need to connect to old devices can keep old
browser versions around, or you could offer a per-site configuration
knob (chances are you need that for SSL 3.0 support anyway).  These
old devices frequently demand old browser or Java versions, so yet
another reason to keep an old browser around does not seem
particularly cumbersome to me.

We think alike


The benefit from that would be that regular users are protected even
if servers do not implement TLS_FALLBACK_SCSV.
Indeed. Servers that care about the SSL3 issue aren't going to upgrade 
software to pick up TLS_FALLBACK_SCSV . They will just disable SSL3 in 
the configuration, which is much simpler and much quicker to do . Some 
servers may do both, but it won't be the majority.


TLS_FALLBACK_SCSV serves no purpose except to perpetuate the illusion 
that the browser fallback makes some sort of sense.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-21 Thread Julien Pierre

Hubert,

On 10/21/2014 05:06, Hubert Kario wrote:


Yes, it's external to the TLS, and yes, it's bad that browsers do use
the manual fallback. Yes, the servers should be regularly updated and
as such bugs that cause it fixed. Yes, the configurations should be
updated to align them with current recommendations.

But it doesn't happen in real world.

So either we can push for policies which will never be implemented and
be workable in real world, or we can try to make the systems secure in
real world for people that care (both users and server admins that
do apply updates regularly).

Yes, I'd like to live in a world where it's not necessary, but we don't.
IMO, reasonable decisions can be made to drop support for TLS intolerant 
servers.


Those who have legacy devices that can't be updated could still use 
legacy browsers to connect to them, or there could be an explicit legacy 
mode of operation in current browsers that preserves it.


This way, browsers won't subject the requests to 99.999% of servers that 
are not TLS-intolerant to needless MITM attacks, not to mention extra 
network bandwidth and round trips.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-20 Thread Julien Pierre

Kai,

On 10/20/2014 16:47, Kai Engert wrote:

On Mon, 2014-10-20 at 16:45 -0700, Julien Pierre wrote:

What is the purpose of Firefox continuing to do any fallback at all ?
IMO, making a second connection with any lower version of SSL/TLS
defeats the intent of the SSL/TLS protocol, which has built-in defenses
against protocol version downgrade.
Isn't it time this fallback gets eliminated at last ?

I'm stating what I found, I'm not making that decision.

Sorry, I didn't mean to blame you for that decision - but IMO this 
should be pointed out to whoever made that call.


The whole TLS_FALLBACK_SCSV would be unnecessary if not for this browser 
misbehavior - and I hope the IETF will reject it.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-20 Thread Julien Pierre

Kai,

What is the purpose of Firefox continuing to do any fallback at all ?
IMO, making a second connection with any lower version of SSL/TLS 
defeats the intent of the SSL/TLS protocol, which has built-in defenses 
against protocol version downgrade.

Isn't it time this fallback gets eliminated at last ?

Julien

On 10/20/2014 16:40, Kai Engert wrote:

On Thu, 2014-10-16 at 20:51 +0200, Kai Engert wrote:

Do you claim that Firefox 34 will continue to fall back to SSL 3 when
necessary?

Yes. If I understand correctly, it seems that Firefox indeed still falls
back to SSL3, even with SSL3 disabled.

I found
   https://bugzilla.mozilla.org/show_bug.cgi?id=1083058
which intends to implement a preference to configure the oldest allowed
protocol version to fall back to, with a proposed minimum of 1 (TLS1).

Kai




--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Updates to the Server Side TLS guide

2014-10-20 Thread Julien Pierre

Hubert,

On 10/20/2014 05:10, Hubert Kario wrote:

So I went over the https://wiki.mozilla.org/Security/Server_Side_TLS
article with a bit more attention to detail and I think we should
extend it in few places.

Especially if it is supposed to be also the general recommendation
for servers, not just for ones that are part of Mozilla network.
This document seems to be fairly OpenSSL-centric. Some servers actually 
use Mozilla's NSS library, as well as other libraries.


Julien


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal: Disable SSLv3 in Firefox ESR 31

2014-10-16 Thread Julien Pierre

Florian,

On 10/16/2014 12:50, Florian Weimer wrote:

Neither.  I'm talking about the out-of-protocol insecure version
negotiation for TLS implemented in Firefox.  That's a broader scope
than bug 689814, which is strictly about fallback to SSL 3.0.

+1
This fallback needs to get removed, yesterday.
SSL/TLS have had a secure mechanism for preventing protocol version 
downgrade attacks from day 1.
Firefox circumvents this. It's about time for Firefox - and others - to 
conform to the standard.


TLS_FALLBACK_SCSV is a one-time band-aid that won't do any good in the 
long run.
Any server administrator that cares about security will simply disable 
SSL3 in their server, rather than go through the process of upgrading 
their software to support this draft.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Announcing Mozilla::PKIX, a New Certificate Verification Library

2014-08-15 Thread Julien Pierre

Brian,

I just ran into the Netscape Cert Type critical extension issue with an 
internal cert.

Is there an override setting to allow this cert to work in Firefox still ?

IMO, the Firefox behavior is particularly bad, because Firefox won't 
even let you look at the cert details to see what the problematic 
extension is. I had to look at the cert details in Chrome (which still 
uses libpkix) .


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Road to RC4-free web (the case for YouTube without RC4)

2014-07-01 Thread Julien Pierre

Brian,

On 7/1/2014 14:05, Brian Smith wrote:
I think, in parallel with that, we can figure out why so many sites 
are still using TLS_ECDHE_*_WITH_RC4_* instead of 
TLS_ECDHE_*_WITH_AES* and start the technical evangelism efforts to 
help them. Cheers, Brian 
The reason for sites choosing RC4 over AES_CBC might be due to the 
various vulnerabilities against CBC mode, at least for sites that 
support TLS 1.0 .
I think a more useful form of evangelism would be to get sites to stop 
accepting SSL 3.0 and TLS 1.0 protocols.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Other ECC Curves

2014-06-10 Thread Julien Pierre
Oracle ships products with NSS built with a set of 25 curves. These are 
mostly server products, but they can also act as clients.


The full curve list is in :

http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/freebl/ecl/ecl-curve.h&rev=1.4&root=/cvsroot

However, Mozilla and others typically don't support the full set and 
build with the following file :


http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/freebl/ecl/ecl-curve.h&rev=1.7&root=/cvsroot

Julien

On 6/9/2014 16:27, Rick Andrews wrote:

AFAIK, Symantec and other CAs have added ECC roots to Mozilla's root store 
using NIST curves. Are any other ECC curves supported by Mozilla, in case one 
wanted to use a different curve? Is the list of supported algorithms and key 
sizes published somewhere?


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Regain trust into SSL/TLS

2014-03-11 Thread Julien Pierre


On 3/11/2014 03:10, Alan Braggins wrote:

On 09/03/14 22:59, Raphael Wegmann wrote:

What about creating a distributed hash-table, where we could count
collectively, which public-key has been used by a particular server
how often?
When I visit amazon.com and my browser tells me, that I am the only
one who got that public-key I'm having, I know immediately, that
I am not really communicating with Amazon.


If an MITM attack is pointing you at a fake Amazon, how are you going
to ensure the same attacker isn't going to show you a fake hash-table?

One possible answer is certificate pinning, but if you've used
Amazon.com before, certificate pinning can warn you it's using a
different key (and different CA) from last time without the table.
Of course, certificate pinning is orthogonal to the way certificates are 
verified in PKIX.
PKIX allows plenty of neat things, like using different certificates 
with different keys on different hosts, renewing the certificate with a 
different, key, etc.
And load-balancers can make it impossible to tell which backend you are 
actually talking to.
I'm not saying Amazon or other companies are doing those things, but 
they are all perfectly legal to do in PKIX, and they would trip so-called 
"certificate pinning".
One could argue PKIX is too powerful and too complex, but it's the 
standard we have.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


NSS algorithm performance

2014-03-04 Thread Julien Pierre
Did anyone ever write a script that measures the performance of all the 
low-level algorithms in freebl, and collects the data in a way that's 
easy to compare ? This would probably be using bltest.
This is for the purpose of evaluating different compilers/optimization 
options.

If so, sharing would be much appreciated.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Chrome: From NSS to OpenSSL

2014-01-31 Thread Julien Pierre

Ryan,

On 1/31/2014 10:28, Ryan Sleevi wrote:
I tried not to write too much on the negatives of NSS or OpenSSL, 
because both are worthy of long rants, but I'm surprised to hear 
anyone who has worked at length with PKCS#11 - like Oracle has (and 
Sun before) - would be particularly praising it. It's good for interop 
with smart cards. That's about it. Cheers, Ryan 
I wasn't praising it. I agree with most of your points about the 
shortcomings of PKCS#11.
However, as far as your table is concerned, the entry "PKCS#11 support" 
under Cons, without any explanation, is confusing.

All the other entries have at least a sentence to explain each pro/con.

Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Chrome: From NSS to OpenSSL

2014-01-31 Thread Julien Pierre


On 1/27/2014 10:28, Kathleen Wilson wrote:
Draft Design Doc posted by Ryan Sleevi regarding Chrome migrating from 
NSS to OpenSSL:


https://docs.google.com/document/d/1ML11ZyyMpnAr6clIAwWrXD53pQgNR-DppMYwt9XvE6s/edit?pli=1 

"Switching to OpenSSL, however, has the opportunity to bring 
significant performance and stability advantages to iOS, Mac, Windows, 
and ChromeOS immediately out of the gate. Switching Linux to use 
OpenSSL will take longer, due to the desire to continue to support 
PKCS#11-based smart card authentication, which will require more work. 
The biggest risk/cost to such a switch is no longer being able to help 
Firefox benefit from these efforts, nor benefiting from Firefox’s 
efforts in these areas."


Kathleen

Strange that "PKCS#11 support" is listed as a "con" for NSS .


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Some TLS servers are intolerant to SSL/TLS session caching

2014-01-13 Thread Julien Pierre

Kai,

On 1/12/2014 03:26, Kai Engert wrote:

Have you ever seen a TLS server that was incompatible with TLS session
IDs?

No.

Do you agree this is bug on the server side?

Yes.

RFC 5246 section 7.3 says this :

   The client sends a ClientHello using the Session ID of the session to
   be resumed.  The server then checks its session cache for a match.
   If a match is found, and the server is willing to re-establish the
   connection under the specified session state, it will send a
   ServerHello with the same Session ID value.  At this point, both
   client and server MUST send ChangeCipherSpec messages and proceed
   directly to Finished messages.  Once the re-establishment is
   complete, the client and server MAY begin to exchange application
   layer data.  (See flow chart below.)  If a Session ID match is not
   found, the server generates a new session ID, and the TLS client and
   server perform a full handshake.




That last sentence is quite clear. The server is not compliant.


Should we attempt to
identify which TLS toolkits and versions show this broken behaviour?

That would be a good shaming exercise.


At least NSS/PSM currently don't expect such behaviour. We don't
automatically retry without a TLS session ID. Should we?

No. The fix belongs on the server side, not the client side.

Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS OCSP stapling tests

2014-01-08 Thread Julien Pierre

Kai,

On 1/3/2014 02:40, Kai Engert wrote:

On Do, 2014-01-02 at 19:34 -0800, Julien Pierre wrote:

The new OCSP stapling tests in NSS 3.15.3 are all failing on our Solaris
machines. See error log below.
We have a slightly smaller number of failures on Linux.

Are these tests going out to a public OCSP responder on the Internet ?

For most of the errors you cited:
No, see https://bugzilla.mozilla.org/show_bug.cgi?id=811331

There are few errors that are indeed attempting to connect to the public
web, but those will be removed in 3.15.4:
https://bugzilla.mozilla.org/show_bug.cgi?id=936778

Thanks, I applied the patch from that bug.

The following tests are still failing on the internal network on 
Linux, though.


tstclnt: TCP Connection failed: PR_IO_TIMEOUT_ERROR: I/O operation timed out
chains.sh: #2452: Test that OCSP server is reachable - FAILED

tstclnt: TCP Connection failed: PR_IO_TIMEOUT_ERROR: I/O operation timed out
chains.sh: #4286: Test that OCSP server is reachable - FAILED

tstclnt: TCP Connection failed: PR_IO_TIMEOUT_ERROR: I/O operation timed out
chains.sh: #6750: Test that OCSP server is reachable - FAILED

It could be because we have Internet DNS capability, but not direct 
Internet TCP connectivity .
Either way, it seems to me that even with the patch, the NSS test suite 
still can't run properly on a private network.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: How to create a temporay NSS database in RAM

2014-01-07 Thread Julien Pierre

You can use NSS_NoDB_Init .
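
A minimal sketch of that approach, with the actual PKCS#7 verification 
left out:

    #include "nss.h"

    /* Initialize NSS with no on-disk database; any certs imported after
     * this are temporary, in-memory objects only. */
    int run_verification(void)
    {
        if (NSS_NoDB_Init(NULL) != SECSuccess)
            return 1;

        /* ... decode the user-supplied cert, verify the PKCS7 signature ... */

        NSS_Shutdown();
        return 0;
    }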

On 1/6/2014 19:19, chingp...@gmail.com wrote:

I am working on a program that verifies the PKCS7 signature attached in a file 
with the cert provided by the user. Since the purpose of the program is to test 
whether the cert can verify the signature or not, I only need a temporary NSS 
database to import the cert and drop the database when it's done.

Currently, I create a new dir in /tmp every time the program starts and remove 
the dir when it exits, but I would like to avoid unnecessary file system 
access. Is there any way to create a temporary NSS database in RAM?


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS OCSP stapling tests

2014-01-03 Thread Julien Pierre

Kai,

On 1/3/2014 02:40, Kai Engert wrote:

On Do, 2014-01-02 at 19:34 -0800, Julien Pierre wrote:

The new OCSP stapling tests in NSS 3.15.3 are all failing on our Solaris
machines. See error log below.
We have a slightly smaller number of failures on Linux.

Are these tests going out to a public OCSP responder on the Internet ?

For most of the errors you cited:
No, see https://bugzilla.mozilla.org/show_bug.cgi?id=811331

OK, thanks.

There are few errors that are indeed attempting to connect to the public
web, but those will be removed in 3.15.4:
https://bugzilla.mozilla.org/show_bug.cgi?id=936778


Or are they trying to go to a locally built one ?

Yes, most of them are. Since 3.15 httpserv has the ability to run as an
OCSP server.



I see. This last bug might explain the failures we still see on Linux, 
but not the failures on Solaris. I guess there will be a new bug filed 
for these soon.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


NSS OCSP stapling tests

2014-01-02 Thread Julien Pierre
The new OCSP stapling tests in NSS 3.15.3 are all failing on our Solaris 
machines. See error log below.

We have a slightly smaller number of failures on Linux.

Are these tests going out to a public OCSP responder on the Internet ? 
Or are they trying to go to a locally built one ?

(sorry, I am not the one who built / ran these, just the messenger here).

If it's trying to go out to a public server, as I suspect, that would 
explain the failures. We would have to use an HTTP proxy from our 
network here. There is no direct Internet connectivity. Is there a way 
to make these tests go through an HTTP proxy ? Or can these tests be 
selectively turned off ?


ssl.sh: #1907: OCSP stapling, signed response, good status produced a 
returncode of 1, expected is 0 - FAILED
ssl.sh: #1908: OCSP stapling, signed response, revoked status produced a 
returncode of 1, expected is 3 - FAILED
ssl.sh: #1909: OCSP stapling, signed response, unknown status produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #1910: OCSP stapling, unsigned failure response produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #1911: OCSP stapling, good status, bad signature produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #1912: OCSP stapling, invalid cert status data produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #1913: Valid cert, Server doesn't staple produced a returncode 
of 1, expected is 2 - FAILED
ssl.sh: #1914: Stress OCSP stapling, server uses random status produced 
a returncode of 1, expected is 0. - FAILED
ssl.sh: #2337: OCSP stapling, signed response, good status produced a 
returncode of 1, expected is 0 - FAILED
ssl.sh: #2338: OCSP stapling, signed response, revoked status produced a 
returncode of 1, expected is 3 - FAILED
ssl.sh: #2339: OCSP stapling, signed response, unknown status produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #2340: OCSP stapling, unsigned failure response produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #2341: OCSP stapling, good status, bad signature produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #2342: OCSP stapling, invalid cert status data produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #2343: Valid cert, Server doesn't staple produced a returncode 
of 1, expected is 2 - FAILED
ssl.sh: #2344: Stress OCSP stapling, server uses random status produced 
a returncode of 1, expected is 0. - FAILED

ocsp.sh: #3293: startssl valid, supports OCSP stapling  - FAILED
ocsp.sh: #3294: startssl revoked, supports OCSP stapling  - FAILED
ocsp.sh: #3298: digicert valid, supports OCSP stapling  - FAILED
ocsp.sh: #3299: digicert revoked, supports OCSP stapling  - FAILED
ocsp.sh: #3300: live valid, supports OCSP stapling  - FAILED
ocsp.sh: #3301: startssl valid, doesn't support OCSP stapling  - FAILED
chains.sh: #4013: Test that OCSP server is reachable - FAILED
ssl.sh: #5886: OCSP stapling, signed response, good status produced a 
returncode of 1, expected is 0 - FAILED
ssl.sh: #5887: OCSP stapling, signed response, revoked status produced a 
returncode of 1, expected is 3 - FAILED
ssl.sh: #5888: OCSP stapling, signed response, unknown status produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #5889: OCSP stapling, unsigned failure response produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #5890: OCSP stapling, good status, bad signature produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #5891: OCSP stapling, invalid cert status data produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #5892: Valid cert, Server doesn't staple produced a returncode 
of 1, expected is 2 - FAILED
ssl.sh: #5893: Stress OCSP stapling, server uses random status produced 
a returncode of 1, expected is 0. - FAILED

ocsp.sh: #6127: startssl valid, supports OCSP stapling  - FAILED
ocsp.sh: #6128: startssl revoked, supports OCSP stapling  - FAILED
ocsp.sh: #6132: digicert valid, supports OCSP stapling  - FAILED
ocsp.sh: #6133: digicert revoked, supports OCSP stapling  - FAILED
ocsp.sh: #6134: live valid, supports OCSP stapling  - FAILED
ocsp.sh: #6135: startssl valid, doesn't support OCSP stapling  - FAILED
chains.sh: #6832: Test that OCSP server is reachable - FAILED
ssl.sh: #8014: OCSP stapling, signed response, good status produced a 
returncode of 1, expected is 0 - FAILED
ssl.sh: #8015: OCSP stapling, signed response, revoked status produced a 
returncode of 1, expected is 3 - FAILED
ssl.sh: #8016: OCSP stapling, signed response, unknown status produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #8017: OCSP stapling, unsigned failure response produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #8018: OCSP stapling, good status, bad signature produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #8019: OCSP stapling, invalid cert status data produced a 
returncode of 1, expected is 2 - FAILED
ssl.sh: #8020: Valid cert, Server doesn't staple produced a returncode 
of 1, expected is 2 - FAILED
ssl.sh:

Re: cert validation failure when root cert is in chain

2013-12-31 Thread Julien Pierre

John,

On 12/21/2013 12:16, John Dennis wrote:

I'm trying to debug a validation failure when using
CERT_VerifyCertificate(). The cert being validated is a SSL Server Cert,
it is signed by a root cert. I have confirmed the server cert validates
using CERT_VerifyCertificate() in a stand alone program an the root cert
imported and trusted into an NSS database. I've also confirms it
validates with openssl verify.





The problem seems to come when the cert is used in an SSL handshake (in
this particular instance when the openldap libary is making a TLS
connection to an openldap server (the openldap library is using NSS,
e.g. tls_m.c).

Stepping through CERT_VerifyCertificate as called by the openldap
library I have found where verification failure occurs. First also let
me say that I've also run the connection through the NSS ssltap tool and
I can see that the server is sending the client 2 certs, the server cert
and the root ca cert that signed it. Hence during the connection attempt
there is cert chain of length 2.

The verify failure occurs cert_VerifyCertChainOld() in this code:


There are 2 PKIX implementations in NSS . The "old" implementation is 
not so smart and does not properly handle cases for which there are 
multiple matches in the cert chain.


See http://lxr.mozilla.org/nss/source/lib/certhigh/certvfy.c#692

Please make sure your application is using the newest implementation.
You can do so by calling
CERT_SetUsePKIXForValidation(PR_TRUE);
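
For illustration only, a rough sketch of such a call (the function name 
and the SSL-server usage are assumptions, not the openldap code):

    #include "prtime.h"
    #include "cert.h"

    SECStatus verify_server_cert(CERTCertificate *cert)
    {
        /* Switch CERT_VerifyCertificate() to the libpkix path, which can
         * explore more than one chain when duplicate/renewed roots exist. */
        CERT_SetUsePKIXForValidation(PR_TRUE);

        return CERT_VerifyCertificate(CERT_GetDefaultCertDB(), cert,
                                      PR_TRUE,                   /* check signatures */
                                      certificateUsageSSLServer, /* usage under test */
                                      PR_Now(), NULL /* pinarg */,
                                      NULL /* log */,
                                      NULL /* returned usages */);
    }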




/* make sure that the issuer is not self signed.  If it is, then
 * stop here to prevent looping.
 */
if (issuerCert->isRoot) {
PORT_SetError(SEC_ERROR_UNTRUSTED_ISSUER);
LOG_ERROR(log, issuerCert, count+1, 0);
goto loser;
}

This suggests to me that NSS will not accept a cert chain with the root
cert in it. Is that correct?

No, it should accept such a chain.
But problems can occur if you have similar roots sent locally and over 
the wire. For example, if your root was renewed, and you have an old one 
installed and trusted locally, only that one is trusted. If the remote 
server is sending over a newer root, then that one won't be trusted.
The old PKIX implementation only explores one certification path and 
this is probably what's causing your failure.
If the root cert installed and trusted locally and sent over the wire 
are really the same cert, then either the old or new implementation 
should work.


My understanding (and verified via some additional research) is that
while it's not optimal/common to include the root cert in the chain it
is in fact permissible. The basic idea I believe is the root cert in the
chain is ignored and previous cert in the chain is validated by finding
the root issuer in the trust store. Yes/No/Comments?

It is permissible.


The stand alone validation succeeds apparently because there is no chain
to traverse with a root cert in it.

Is NSS behaving incorrectly by rejecting a chain with a root cert?

Depends if that identical DER cert is locally trusted or not.


Is the server behaving incorrectly by sending a chain with a root cert?
No. But my guess is that the root certs sent over the wire and the one 
you have locally trusted are not identical.
They may be similar in terms of public key and key ID, but other 
elements like the NotBefore/NotAfter date probably differ.
They must actually be the same cert if you want to use the old PKIX 
implementation.
The newer PKIX implementation is more flexible. But that will only help 
if your cert truly can chain back to the locally-installed root cert.


What causes a root cert to be included in a chain?
Something in your server configuration. You should be able to remove the 
root from the server config, and then it should stop sending it. This is 
specific to your server and the security library it uses.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SHA-256 support

2013-11-18 Thread Julien Pierre

SHA-256 was added in NSS 3.8 , according to :

http://www-archive.mozilla.org/projects/security/pki/nss/

On 11/18/2013 07:00, Gervase Markham wrote:

Hi everyone,

Following Microsoft's announcement re: SHA-1, some CAs are asking
browser and OS vendors about the ubiquity of SHA-256 support. It would
be a help to them if we could say:

- Which version of NSS first supported SHA-256
- Which versions of Mozilla/Firefox/SeaMonkey/Thunderbird that translates to

They can use the NSS version number info to work out the answer for
other NSS-using applications.

Is anyone from the NSS team able to easily provide that info? I could go
repo and Bugzilla-mining, but I'd be worried about making a mistake.

Gerv


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: how to make firefox replace a certificate on import ?

2013-11-11 Thread Julien Pierre

Jean-Philippe,

On 11/7/2013 23:48, Jean-Philippe Franchini wrote:

Hello,

Our java application generates certificates with the Bouncy Castle library.
When a certificate C1 imported in Firefox is about to expire, the application 
can renew it and creates a certificate C2 based on C1 information. The field 
values are the same except the serial number and the security keys.
But when importing C2, C1 is not replaced.

Why do you expect C1 to be replaced ?
NSS will handle both certificates. They will show up under the same 
nickname, however. All certs with the same subject have the same nickname.
If you want C1 to be replaced, you would have to delete it after C2 is 
imported. This is not necessarily what users would want, though. You 
could just leave it alone.
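
If the application does want C1 gone once C2 is imported, here is a rough 
sketch of one way to do it with NSS. The helper name is made up, and the 
nickname handling and error checking are simplified:

    #include "cert.h"
    #include "pk11pub.h"
    #include "secitem.h"

    /* Hypothetical helper: remove every cert stored under `nickname` whose
     * serial number differs from the newly imported cert (i.e. the stale C1). */
    SECStatus delete_stale_certs(const char *nickname, CERTCertificate *newCert)
    {
        CERTCertList *list = PK11_FindCertsFromNickname((char *)nickname, NULL);
        CERTCertListNode *node;

        if (!list)
            return SECFailure;

        for (node = CERT_LIST_HEAD(list); !CERT_LIST_END(node, list);
             node = CERT_LIST_NEXT(node)) {
            if (!SECITEM_ItemsAreEqual(&node->cert->serialNumber,
                                       &newCert->serialNumber)) {
                SEC_DeletePermCertificate(node->cert);
            }
        }
        CERT_DestroyCertList(list);
        return SECSuccess;
    }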


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-09-12 Thread Julien Pierre

Julien,

On 9/12/2013 19:35, Julien Vehent wrote:
aes-256-cbc with AES-NI does 543763.11kB/s. That's 4.35Gbps of AES 
bandwidth on a single core.
On a decent 8 core load balancer, dedicate 4 to TLS, and you get 
17.40Gbps of AES bandwidth.
I don't think AES is close to being the limiting factor here. 
Processing HTTP is probably 20 times more expensive than that.


That's not correct. Basic HTTP processing is much less CPU intensive 
than the overhead of SSL/TLS, regardless of which cipher suite is used, 
usually by at least an order of magnitude. The crypto is very much the 
limiting factor. Choosing a more CPU intensive algorithm will result in 
more server hardware being required in general in data centers.


Of course, the server can always disable AES-256 cipher suites 
altogether if it doesn't want to spend cycles on it. It would then 
choose the AES-128 cipher suites if the client also had them enabled, 
which I believe is the case in this proposal. Only an ordering change is 
proposed.
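
For example (a sketch only, not a recommendation), an NSS-based server 
could turn selected AES-256 suites off process-wide like this:

    #include "ssl.h"
    #include "sslproto.h"

    /* Disable a couple of AES-256 CBC suites for all sockets created
     * afterwards; the handshake then falls back to whatever AES-128
     * suites both sides still share. */
    void disable_some_aes256_suites(void)
    {
        SSL_CipherPrefSetDefault(TLS_RSA_WITH_AES_256_CBC_SHA, PR_FALSE);
        SSL_CipherPrefSetDefault(TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, PR_FALSE);
    }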


Some servers also ignore the order of cipher suites in the ClientHello 
in some cases, and choose whatever they prefer from the client's 
cipher suite list regardless of order, even though this doesn't follow 
the TLS specs.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-09-12 Thread Julien Pierre

Julien,

On 9/12/2013 07:06, Julien Vehent wrote:
If performance was the only reason to prefer AES-128, I would disagree 
with the proposal. But your other arguments regarding AES-256 not 
provided additional security, are convincing.


The performance is still an issue for servers. More servers are needed 
if more CPU-intensive crypto algorithms are used.


Julien


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Introductions - want to contribute to NSS developer friendliness

2013-06-17 Thread Julien Pierre

Chris,

On 6/17/2013 10:58, Chris Newman wrote:
I'll mention one other usability issue. I am getting pressure from my 
employer to stop using NSS due to the MPL 2 license. I got less 
pressure when I could use NSS under the LGPL 2.1 branch of the 
tri-license. Switching to OpenSSL has been suggested.


I believe we have just resolved this issue. More details by email.

Julien


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of "Revocation Lists" feature (Options -> Advanced -> Revocation Lists)

2013-05-08 Thread Julien Pierre

Brian,

If this is just about changing the UI in Firefox, I have no objection.

If this is about removing the feature from NSS altogether on the other 
hand, I would like to state that we have several several products at 
Oracle that use NSS and rely on the ability to have CRLs stored in the 
database, and processed during certificate validation. These 
applications act as both SSL servers and clients, and we expect the CRLs 
to be supported in both.
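
For context, a minimal sketch of the API these products rely on to get a 
CRL into the database (the DER-encoded CRL and its URL are assumed to 
come from the application):

    #include "cert.h"
    #include "certdb.h"

    /* Import a DER-encoded CRL into the database so that subsequent
     * certificate validation can consult it. */
    SECStatus import_crl(SECItem *derCrl, char *url)
    {
        CERTSignedCrl *crl = CERT_ImportCRL(CERT_GetDefaultCertDB(),
                                            derCrl, url, SEC_CRL_TYPE, NULL);
        if (!crl)
            return SECFailure;
        SEC_DestroyCrl(crl);
        return SECSuccess;
    }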


While some Oracle products are moving away from NSS, old versions 
continue to be supported and we are picking up new NSS releases to get 
certain security fixes. We couldn't do that anymore if the CRL feature 
went away. In the past, before Oracle, Sun went to great pains to work 
on the common public NSS tree for these products. We certainly don't 
want to fork NSS again at this stage.


Julien

On 4/30/2013 14:28, Brian Smith wrote:

Hi all,

I propose we remove the "Revocation Lists" feature (Options -> Advanced -> 
Revocation Lists). Are there any objections? If so, please explain your objection.

A certificate revocation list (CRL) is a list of revoked certificates, 
published by the certificate authority that issued the certificates. These 
lists vary from 1KB to potentially hundreds of megabytes in size.

Very large CRLs are not super common but they exist: Reportedly, GoDaddy (A CA 
in our root CA program) has a 41MB CRL. And, Verisign has at least one CRL that 
is close to 1MB on its own, and that's not the only CRL that they have. the US 
Department of Defense is another example of an organization known to have 
extremely large CRLs.

The "Revocation Lists" feature allows a user to configure Firefox to poll the CAs server on a regular interval. As far 
as I know, Firefox is the only browser to have such a feature. Other browser either ignore CRLs completely or download CRLs on an 
"as needed" basis based on a URL embedded in the certificate. For example, in its default configuration, Google Chrome 
ignores CRLs, AFAICT (they use some indirect mechanism for handling revocation, which will be discussed in another thread). 
AFAICT, the "Revocation Lists" feature was added to Firefox a long time ago when there were IPR concerns about the 
"as needed" behavior. However, my understanding is that those concerns are no longer justified. In another thread, we 
will be discussing about whether or not we should implement the "as needed" mechanism. However, I think that we can 
make this decision independently of that decision.

Obviously, the vast majority of users have no hope of figuring out what this 
feature is, what it does, or how to use it.

Because of the potential bandwidth usage issues, and UX issues, it doesn't seem 
like a good idea to add this feature to Mobile. But, also, if a certificate 
feature isn't important enough for mobile*, then why is it important for 
desktop? We should be striving for platform parity here.

Finally, this feature complicates significant improvements to the core 
certificate validation logic that we are making.

For all these reasons, I think it is time for this feature to go.

Cheers,
Brian

[*] Note: I make a distinction between things that haven't been done *yet* for 
mobile vs. things that we really have no intention to do.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Secure credit-card payments? Re: Proposing: Interactive Domain Verification Approval

2013-01-02 Thread Julien Pierre

Anders,

On 1/1/2013 12:47, Anders Rundgren wrote:
Although the recent CA failures cast a shadow over the web they have 
AFAIK not led to any major losses for anybody. The credit-card system 
OTOH is a major source of losses and hassles. IMO the only parties 
that can fix it are the browser vendors. In the EU and Asia hundreds 
of millions of EMV-cards are in circulation but since there is no 
useful system on the Internet these cards are still equipped with 
mag-strip and CCV "passwords" printed in clear on the back of the 
cards which makes them subject to attacks in spite of the chip.


Are you sure that internet use is the only reason for the mag-stripe and 
CCV passwords being on the card ?

Are 100% of the physical card readers EMV capable in EU and Asia ?

It's not clear to me how any single browser vendor could design a 
solution for this, given the huge variety of browsing devices nowadays.

Even the hardware on those devices is quite different, let alone software.

Developing card readers that physically can connect to all those 
devices, as well as software stacks for each OS and browser, is going to 
be a very expensive task.


A much less expensive and simpler approach might be some kind of 
universal standalone device that provides power to the card, and allows 
doing some challenge/response type authentication with the card, 
resulting in a dynamic number that the user could enter into any SSL web 
form.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposing: Interactive Domain Verification Approval

2012-12-31 Thread Julien Pierre

Ryan,

On 12/31/2012 11:43, Ryan Sleevi wrote:


So far, the two proposals are:
1) Nag the user whenever they want to make a new secure connection. This
nag screen is not shown over HTTP, so clearly, HTTP is preferable here.
2) Respect national borders on the Internet.

If anything, the more user interaction, even once, of a technically
complex nature, is enough to disincentivize any site operator from using
SSL. "Oh, my Firefox users are going to see a prompt? I don't want to send
them to SSL then, because they'll complain / it will be a lost sale."
Indeed. If we want to educate any users, it should be about not 
submitting things over plaintext connections


Once upon a time, Mozilla and Firefox had a warning for submitting data 
over plain insecure HTTP.

There isn't even UI to this warning on now.
I had to manually go to about:config to turn it back on yesterday, sigh.
This warning still comes with a checkbox to deactivate it also, which it 
shouldn't, I don't want it to ever turn it off again.


I don't believe there is any hope we can educate users about security 
through pop-up dialogs.


If we want better security, I would suggest some kind of global 
high-security mode of operation where all those warnings would be 
enabled and could not be turned off.
And maybe even make the users solve some CAPTCHA to get past the warning 
for insecure submit, to ensure that users are really reading it. No 
ability to just blindly or accidentally hit ENTER and submit insecurely.







Even once is enough. Otherwise, why would sites even bother getting a CA
certificate, since they can already condition users to 'pin' to their
self-signed cert by virtue of clicking through.

Right, IMO the CA trust selection should not happen at connection time.

It should happen before. Maybe when one chooses to turn on this "high 
security mode" , they would be presented with the CA list.


And maybe they could be grouped by various categories, such as EV/non-EV 
certificates, country of operation, etc. And the user would be able to 
choose which CAs to distrust from the built-in list.
There are far too many built-in CAs as it is IMO for it to be practical 
for anyone to deselect them on a one-off basis.




If the goal is to reduce power or risk, then something like Certificate
Transparency should be the game. Having transparent reports of issuance
and the ability to monitor for misissuance should be the end goal -
whether you're operating a CA that serves 50 domains or 500 million
domains. It's highly elitist to suggest those 500 million are worth more
than those 50 - especially if real users' lives are at risk for those 50
domains.

The problem is that a root CA doesn't necessarily know how many certs 
are chaining back to it. It's up to the intermediates to issue the final 
certs.
If an intermediate got compromised, it could well issue 50 million bogus 
certs without the root knowing until it's exposed.
Certificate transparency may require some new infrastructure to 
dynamically discover the cert hierarchy.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PSM module ownership, switching my focus to NSS

2012-12-13 Thread Julien Pierre

Hi Kai,

Good to see you stick around in the Mozilla crypto world .
Are there big projects coming up in NSS land ? Or did somebody leave the 
project ?


Thanks,
Julien

On 12/13/2012 08:10, Kai Engert wrote:

Brendan Eich suggested posting to this list, too
(already posted yesterday to Mozilla's dev-planning list).


Hello Mozilla, I'd like to announce a change.

PSM is the name of Mozilla's glue code for PKI related [1] security
features, such as certificate management, web based certificate
enrollment, tracking the security state of web pages (padlock/EV),
application preferences for certificate validation,
SSL error reporting, handling of certificate exceptions,
user interface for SSL client authentication, etc.

After having contributed to this module for over 11 years,
it's time for me to step down from the PSM module ownership role.

The new module owner of PSM will be Brian Smith.

I've switched my main focus to the NSS security libraries [2],
and to PKI features across Linux applications in general.

PSM operates on top of NSS, thereby I'll continue to indirectly
contribute to Mozilla's projects.

I'd like to thank the people who have contributed to the PSM module
over time, and I'd like to thank my employer Red Hat, Inc., which has
allowed me to make PSM a priority during the previous 7 years and
continues to support my work on NSS.

Regards
Kai

[1] http://en.wikipedia.org/wiki/Public_key_infrastructure
[2] https://developer.mozilla.org/en-US/docs/NSS




--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.14 release

2012-10-25 Thread Julien Pierre

Wan-Teh,

Thanks for your response, comments inline.

On 10/25/2012 11:17, Wan-Teh Chang wrote:


Any client apps that care about the exact cipher suites enabled need
to enable and disable each cipher suite explicitly. This Chromium code
in this file can be used as code example:

http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/nss_ssl_util.cc?revision=151846&view=markup
I know what code changes are necessary. I'm only a developer on a couple 
of NSS applications at this point, not an NSS maintainer.
If this was only about those couple of apps, it wouldn't be an issue. 
But there are other apps in Oracle that could be affected.
I can safely say that tracking and modifying every single app that this 
binary compatibility change may affect is not going to happen at Oracle 
at this point.
Many other apps may not have the same kind of tests we have for ciphers 
and won't even catch the issue.
As NSS gets distributed as patches to many existing applications, binary 
compatibility is a requirement.
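
For reference, a minimal sketch of the explicit per-suite enable/disable 
that the Chromium code above illustrates (the list of wanted suites is the 
application's own):

    #include "ssl.h"

    /* Pin the process-wide defaults to an application-chosen list so that
     * changes to the NSS defaults (as in 3.14) have no effect. */
    void pin_cipher_suites(const PRUint16 *wanted, unsigned int numWanted)
    {
        const PRUint16 *all = SSL_GetImplementedCiphers();
        PRUint16 numAll = SSL_GetNumImplementedCiphers();
        unsigned int i, j;

        for (i = 0; i < numAll; i++) {
            PRBool enable = PR_FALSE;
            for (j = 0; j < numWanted; j++) {
                if (all[i] == wanted[j]) {
                    enable = PR_TRUE;
                    break;
                }
            }
            SSL_CipherPrefSetDefault(all[i], enable);
        }
    }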
In year 2012, AES cipher suites, rather than (single) DES cipher 
suites, should be enabled by default. We decided to break this 
compatibility to improve security. 
I agree that they should be, but the decision of the defaults was always 
up to the application until now.


This is also why we disabled SSL 2.0 by default in NSS 3.13 
(https://bugzilla.mozilla.org/show_bug.cgi?id=593080). 
SSL 2.0 has been broken for some time, and nobody can argue with 
changing that default, certainly not me.
But adding new ciphers to the default list is a different kind of 
change. Unless the DES ciphers were broken, I don't see the rationale 
for this change.



Q: will unmodified applications that use the deprecated interfaces still
continue to work identically ? This appears to be the case from reading the
above bug, but I want to make sure that is correct.
Yes, I confirm that.

Thanks !



4) SSL PKCS#11 bypass is now conditionally built.
https://bugzilla.mozilla.org/show_bug.cgi?id=745281

...
I would like to know if the bypass feature got tested when the patch was
created, and whether it will still be getting tested at all going forward
other than at Oracle.

Yes. The default NSS build still compiles the SSL PKCS#11 bypass code.

Great !
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.14 release

2012-10-25 Thread Julien Pierre

Chris,

On 10/25/2012 16:18, Chris Newman wrote:


Will vulnerability fixes be provided on the NSS 3.13.x patch 
train? And if so, is there a date when vulnerability fixes will no 
longer be provided for that version?


- Chris


As I'm no longer a developer on NSS, I will let others answer that question.

Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.14 release

2012-10-24 Thread Julien Pierre
Oracle still ships NSS with many products even though we are no longer 
actively involved with its development. We do pick up new releases from 
time to time. We picked up 3.13.x last year and I'm looking into picking 
up 3.14 .


The following changes may be problematic :

1) * New default cipher suites
( https://bugzilla.mozilla.org/show_bug.cgi?id=792681 )

The default cipher suites in NSS 3.14 have been changed to better
reflect the current security landscape. The defaults now better match
the set that most major Web browsers enable by default.


This doesn't just affect browsers. There are other client apps that were 
written with the existing defaults in mind.


I could understand if this change was only about removing cipher suites 
that have had vulnerabilities from the default list. But this is 
not the case, and some ciphers were also added.
It would appear to be a binary compatibility problem. Some applications 
may not behave as intended without both a source change and 
recompilation, ie. some ciphers will be enabled when they are not 
expected to be.
This change will break one of the test suites we have with our web 
server and traffic director applications, in particular.


If this change was done in order to save a few lines of code in the 
browser at the cost of breaking existing applications, it doesn't seem 
like a good tradeoff.
In the past, binary compatibility was always maintained for minor NSS 
releases. Was it the deliberate intent of NSS 3.14 to break binary 
compatibility ?


2)
- The NSS license has changed to MPL 2.0. Previous releases were
released under a MPL 1.1/GPL 2.0/LGPL  2.1 tri-license. For more
information about MPL 2.0, please see
http://www.mozilla.org/MPL/2.0/FAQ.html. For an additional explanation
on GPL/LGPL compatibility, see security/nss/COPYING in the source code.


This may be a serious problem also, but IANAL, so that is not for me to 
decide.


3)* Support for TLS 1.1 (RFC 4346) has been added
( https://bugzilla.mozilla.org/show_bug.cgi?id=565047 )

To better support TLS 1.1 and future versions of TLS, a new version
range API was introduced to allow applications to specify the desired
minimum and maximum versions. These functions are intended to replace
the now-deprecated use of the SSL_ENABLE_SSL3 and SSL_ENABLE_TLS socket
options.

Q: will unmodified applications that use the deprecated interfaces still 
continue to work identically ? This appears to be the case from reading the 
above bug, but I want to make sure that is correct.

4) SSL PKCS#11 bypass is now conditionally built.
https://bugzilla.mozilla.org/show_bug.cgi?id=745281

I understand that nobody but Oracle is using bypass at this time. I 
appreciate the efforts not to delete the code altogether.
I would like to know if the bypass feature got tested when the patch was 
created, and whether it will still be getting tested at all going 
forward other than at Oracle.



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Update on Intel's Identity Protection Technology

2012-08-21 Thread Julien Pierre

Anders,

On 8/21/2012 00:45, Anders Rundgren wrote:

On 2012-08-21 05:42, Julien Pierre wrote:

Anders,

On 8/14/2012 20:40, Anders Rundgren wrote:

http://communities.intel.com/community/vproexpert/blog/2012/05/18/intel-ipt-with-embedded-pki-and-protected-transaction-display

Apparently your next PC already has it.

Some PCs based on Intel chips may have it. A few of us out there do not
use Intel chips.

I guess Intel is still "testing the waters" which I think is a good alternative
to politically, commercially and technically awkward standardization efforts
that seem to take forever and in the end often are circumvented by other
developments in the market.  Been there, done that :-)

True enough.

But I still can't get very enthusiastic about this. We live in a world 
with so many different devices, not just PCs. These mobile devices do 
not run Intel chips either.
It is rather a replacement for passwords. Embedded credentials is the 
thing that will at last/finally make client-side PKI a main-stream 
authentication solution. 
That's fine if you only plan on ever logging in from the one device that 
has the credentials embedded. It seems a bit restrictive.
Unless you always have that device with you. In which case it's probably 
a smartphone, not a PC.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Update on Intel's Identity Protection Technology

2012-08-20 Thread Julien Pierre

Anders,

On 8/14/2012 20:40, Anders Rundgren wrote:

http://communities.intel.com/community/vproexpert/blog/2012/05/18/intel-ipt-with-embedded-pki-and-protected-transaction-display

Apparently your next PC already has it.
Some PCs based on Intel chips may have it. A few of us out there do not 
use Intel chips.
Unless an enterprise is planning to replace all of their PCs, the value 
proposition doesn't seem that great vs using standalone smartcard/HSM.
I wonder how they do key backup and recovery also if the CPU is 
destroyed/lost.


Details seem pretty sketchy.



What's missing is a provisioning facility for unleashing the power of this 
scheme so that it isn't limited to one OS, one CA (?), and Enterprises.

Anders



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Shared system database

2012-07-25 Thread Julien Pierre

Anders.

On 7/24/2012 23:33, Anders Rundgren wrote:

Yes. It's an issue I'm actively trying to solve. NSS seems to have made
some *attempt* at solving it... which has some issues, and which doesn't
even seem to have been picked up by Mozilla's own products.
For the record, some Oracle server products such as Oracle Traffic 
Director use the NSS shared database.
The main reason for doing so is so that the admin server can reliably 
edit the NSS cert/trust/keystore while the server is running.
It still uses application stores. Each server instance can have its own 
store. The store is not per-user.
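
For readers who haven't used it: the shared, sqlite-backed format is 
selected at initialization time with an "sql:" prefix on the configuration 
directory passed to NSS. A minimal sketch, with a made-up path:

#include "nss.h"

/* The "sql:" prefix selects the shared cert9.db/key4.db backend
 * instead of the old dbm-based cert8.db/key3.db files. */
static SECStatus open_instance_db(void)
{
    return NSS_Initialize("sql:/path/to/instance/config",
                          "", "", "secmod.db", 0);
}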


It is questionable to me that the trust should actually be shared for 
all applications running under the same user. Certainly in the case of 
server apps you may only want specific CAs trusted for client auth for 
example. If the trust is per-user, you would be forced to create new 
users specifically for running those servers. This is sometimes the way 
it's done, but not always. The same concerns apply to private keys.


I see the most value in unconditionally sharing strictly the public 
data, i.e. the CAs and certs, between all processes and users.

But for trust and keys it really depends more on specific usage cases.

It would sure be nice if Firefox and Thunderbird could share the DBs, 
though. I don't believe they were the primary drivers of the NSS shared 
database, just one of them.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: For discussion: MECAI: Mutually Endorsing CA Infrastructure

2012-02-17 Thread Julien Pierre

Kai,

On 2/7/2012 12:58, Kai Engert wrote:


That's a reason why I propose vouchers to be IP specific.

In my understanding, each IP will have only a single certificate, 
regardless from where in the world you connect to it.



That's definitely an incorrect assumption to make.
There can be a very large number of different certs on a single port/IP 
combination.
The server name indication extension is one reason - there may be 
different certs for different values of SNI.
Different cipher suites in the ClientHelo message as previously 
mentioned can lead to certs with different KEAs.
Load balancers are yet another reason - you may end up connecting to 
separate servers which could have their own separate certificates  - 
though not necessarily, the keys and certs could just be cloned across a 
server farm .
The new Oracle Traffic Director product, which is NSS-based, supports 
all the above different configurations.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-13 Thread Julien Pierre

Steve,

On 1/13/2012 10:46, Stephen Hanna wrote:
Yeah, that's what Yassir said also. He thought it was pretty funny 
that you're going to get rid of the HTTP certstore and non-blocking 
I/O. Apparently, we only put those in at the request of the NSS team! 
I guess requirements have a way of changing over the years...

Well, let's just say that two major NSS contributors who wanted this 
feature, including myself, no longer work on NSS, and thus you are not 
likely to see pushback on removing it now.
I think it's sad to see this feature go away, even if there were some 
bugs in it.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.12.* maintanence after the NSS 3.13 release?

2011-10-18 Thread Julien Pierre

Brian,

On 10/18/2011 14:42, Brian Smith wrote:

There is one known regression.
Do you mean one separate from the SSL 2.0 change, and BEAST ? If so, 
which one ?

Also, the BEAST workaround is an incompatible change for some applications.
From what I have read of the BEAST workaround discussion, it breaks 
certain older existing SSL servers, notably some of Oracle's servers 
(not NSS based servers). But this only affects client code.
The reverse BEAST code change is on the server side too. Do we know 
that it breaks any old browsers ?


I'm more concerned about the server side. My understanding is that the BEAST 
workaround doesn't really help a server app. It is the client that 
really needs to be patched for the specific exploit. The server cannot 
really prevent the exploit with an SSL/TLS stack fix. The server-side 
code change would help only if someone created a theoretical reverse 
BEAST type of exploit.


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.12.* maintanence after the NSS 3.13 release?

2011-10-18 Thread Julien Pierre

Brian,

On 10/17/2011 15:55, Brian Smith wrote:

NSS release announcements are made on the Mozilla dev-tech-crypto mailing list:

http://groups.google.com/group/mozilla.dev.tech.crypto/browse_thread/thread/28c9fd2d65f7bd55#

Thanks, I wasn't on the list then.

 It looks like there is one binary incompatible change, SSL 2.0 
disabled by default. I'm not sure yet if this will be a problem.


Other than this change, do we expect this release to be a binary 
compatible drop-in replacement for 3.12.x ?


Julien

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.12.* maintanence after the NSS 3.13 release?

2011-10-18 Thread Julien Pierre

Brian,

Thanks for adding me to this list. I had not heard that NSS 3.13 had 
shipped. What does this release include ? I don't see any release notes 
beyond 3.12.6 at 
http://www.mozilla.org/projects/security/pki/nss/release_notes.html .


Julien

On 10/17/2011 14:28, Brian Smith wrote:

Are we going to stop maintaining NSS 3.12 after the 3.13 release? People have 
asked if we were going to backport bug 665814 to 3.12, specifically. My 
understanding is that Bob proposed that the 3.13 release will mark the end of 
3.12 maintenance. This is why we (Mozilla) upgraded to 3.13 instead of a 3.12.* 
release.

Thanks,
Brian

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: failed to add a new API in cryptohi (in my local client)

2006-10-27 Thread Julien Pierre

Wei Shao wrote:


In my local setup, I have added a new method in cryptohi.h and
implemented it in secsign.c.
The compilation is okay, but when I try to use it in certutil/certutil.c I
get an undefined symbol linking error for my added API. Same error
after I make clean first.

I noticed the public .h file under dist/ is updated with my API.  The
libnss3.so is also including the new secsign.o.  What is missing?


1) You need to update nss.def to list your new symbol there (a sketch of 
such an entry follows below)
2) Don't forget to contribute your NSS changes, as required by the MPL
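
For 1), the entry is just the exported symbol name added under the block for 
the release you are targeting; the sketch below uses a placeholder symbol and 
section label, so copy the exact markers from an existing section of nss.def:

;+NSS_3.12 {          # pick the section for your target release
;+    global:
MY_NewCryptohiFunction;
;+};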
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Help on building NSPR, NSS on Windows

2006-10-25 Thread Julien Pierre

Frank wrote:

Yeah, I'm using make from cygwin, but the problem still exists.


Please check that you have these exact cygwin tools :
http://developer.mozilla.org/en/docs/Windows_Build_Prerequisites#GNU_Tools_for_Microsoft_Windows_.28Cygwin.29
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Help on building NSPR, NSS on Windows

2006-10-25 Thread Julien Pierre

Wei,

[EMAIL PROTECTED] wrote:


Are you using cygwin's make program ? Please do a "which make" to
verify. If not, you need to do so.


I have the same issue. make does not exist under cygwin. I used gmake
from moztools.


Yes, cygwin has its own version of make. Just not in your own cygwin 
installation. You need to download cygwin's make. One of the most 
frustrating things with cygwin is its installer. You need to pick and 
choose the tools you want to download and install by hand. make is not 
one of the default tools that gets downloaded/installed with cygwin.


There is probably a page somewhere on mozilla.org that lists the 
specific parts of cygwin you need ...

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Help on building NSPR, NSS on Windows

2006-10-25 Thread Julien Pierre

Frank,

Frank Lee wrote:
sh ../../build/cygwin-wrapper 
cl -Fonow.obj -c  -W3 -nologo -GF -Gy -MD -O2  -UDEBUG -U_DEBUG -UWINNT  
 -DNDEBUG=1 -DXP_PC=1 -DWIN32=1 -DWIN95=1 -D_PR_GLOBAL_THREADS_ONLY=1 -D_X86_=1 
  -DFORCE_PR_LOG 
/cygdrive/c/Frank_Lee/Eclipse/Academy_workspace/NSS_SignTools/mozilla/nsprpub/config/now.c

make[2]: *** [now.obj] Error 53
make[2]: Leaving directory 
`/cygdrive/c/Frank_Lee/Eclipse/Academy_workspace/NSS_SignTools/mozilla/nsprpub/WIN954.0_OPT.OBJ/config'

make[1]: *** [export] Error 2
make[1]: Leaving directory 
`/cygdrive/c/Frank_Lee/Eclipse/Academy_workspace/NSS_SignTools/mozilla/nsprpub/WIN954.0_OPT.OBJ'

make: *** [build_nspr] Error 2


is this a problem with the cl.exe file?  I'm using "Microsoft Visual C++ 
2005 Express Edition", and haven't downloaded  Windows® Server 2003 SP1 
Platform SDK or the Windows® Server 2003 R2 Platform SDK yet.  Are the 
platform SDKs necessary for building NSS?  It seems more and more stuff are 
needed to be installed... 





Are you using cygwin's make program ? Please do a "which make" to 
verify. If not, you need to do so.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SSL connection fails on the server with SSL_ERROR_HANDSHAKE_FAILURE_ALERT

2006-10-24 Thread Julien Pierre

Honzab wrote:

All right, everything is working now. We found a mistake in our code -
settings for ECC suites were inside of #ifdef NSS_ENABLE_ECC which was
not defined. We define this symbol now and disable all ECC suites for
all prototypes of socket we use (client and server too).

I used Wireshark to watch the traffic and found a very strange behavior
(the reason of the connection failure):
- ClientHello packet contains (among others) suite 0xC014
(TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA).
- ServerHello packet contains this suite as negotiated to be used for
the ssl session
- Client answers with fatal alert: Handshake Failure (40)

I did not investigate the reason deeply, but it might be potentialy a
bug in NSS 3.11 (?). Code in mozilla\security\nss\lib\ssl\ssl3con.c
line 4488 doesn't consider the suite as suitable for the session and
breaks the negotiation with fatal alert. This is strange, because the
client socket sent this suite in the list of suites as available for the
session.



Does this error also occur with NSS 3.11.3 ? Many ECC related bugs were 
fixed after the original NSS 3.11 . If you aren't using the latest, 
please upgrade.
If the problem still occurs, you might need to file a bug. You'll need 
to provide all the information to reproduce the problem, ideally your 
server cert and private key in a PKCS#12 file if you can, and 
instructions on how to reproduce with the NSS tools tstclnt and selfserv.
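
For reference, a bare-bones reproduction setup usually looks something like 
this (database paths, nickname and port are placeholders; add the 
cipher-suite options you are testing):

$ selfserv -d /path/to/serverdb -n server-cert -p 4443
$ tstclnt -d /path/to/clientdb -h localhost -p 4443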

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SSL connection fails on the server with SSL_ERROR_HANDSHAKE_FAILURE_ALERT

2006-10-23 Thread Julien Pierre

Honzab,

Honzab wrote:

Julien Pierre napsal:


NSS only supports RSA ECDHE cipher suites on the client side at this
time, so this is expected. If you are using NSS on the server side, you
need to enable alternate cipher suites - and of course you need to
enable them on the client side as well.



Thanks for advise, unfortunatelly this invokes another problem. I
enabled for client and sever another 4 suites:

TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA


For the first 2 of these cipher suites, the server certificate must 
contain an EC (ECDSA-capable) public key; for the last 2, it must contain 
an RSA key, which is used to sign the ephemeral ECDH parameters.



And yet another question: why do you restrict usage to just the ECC
cryptography? Means this to stop using classic DH and RSA?


I'm sorry, I made a mistake earlier. All the EC cipher suites are 
supported on both sides.


The DHE (DSS and RSA) cipher suites are the ones supported on the client 
side only. The list of client-side-only cipher suites is :


SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
SSL_DHE_DSS_WITH_DES_CBC_SHA
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA
SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
SSL_DHE_RSA_WITH_DES_CBC_SHA
SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA

All other cipher suites are supported for both client and server sides.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SSL connection fails on the server with SSL_ERROR_HANDSHAKE_FAILURE_ALERT

2006-10-23 Thread Julien Pierre

Honzab wrote:


What is strange, that the cipher suite sent from the server is c014 -
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA. This suite is disabled on the
server.

The client socket gets to state with error SSL_ERROR_NO_CYPHER_OVERLAP
and send to the server our SSL_ERROR_HANDSHAKE_FAILURE_ALERT alert. The
connection is then broken.


NSS only supports RSA ECDHE cipher suites on the client side at this 
time, so this is expected. If you are using NSS on the server side, you 
need to enable alternate cipher suites - and of course you need to 
enable them on the client side as well.
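
A minimal sketch of that per-socket configuration, on either end of the 
connection (the suite names are just examples; error handling omitted):

#include "prio.h"
#include "ssl.h"
#include "sslproto.h"

/* Enable a couple of ECDHE suites on this socket, in addition to
 * whatever the process-wide defaults allow. */
static void enable_ecdhe_suites(PRFileDesc *fd)
{
    SSL_CipherPrefSet(fd, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, PR_TRUE);
    SSL_CipherPrefSet(fd, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, PR_TRUE);
}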

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS_SetDomesticPolicy() return 12266

2006-09-29 Thread Julien Pierre

Alex,

Alex wrote:

Hello,

I wrote a program like this:

PRInt32 mod_ssl_startup(char *dbdir, PRInt32 clearCert)
{
 char  *dbpath=NULL;
 char  *certfile=NULL;
 PRErrorCode ercode;

 SECStatus rv;

 PK11SlotInfo *slot=NULL;
..
 rv = NSS_InitReadWrite(dbpath);

 rv = NSS_SetDomesticPolicy();
 if(rv!=SECSuccess)
 {
  ercode = PR_GetError();
  printf("set policy failure..%d\n", ercode);
  goto cleanup;
 }
..
Why does NSS_SetDomesticPolicy always return 12266?
"An unknown SSL cipher suite has been requested."

The application has attempted to configure SSL to use an unknown cipher 
suite.


But I didn't do anything. Please tell me why?




I can't reproduce your error on the NSS tip. What version of NSS are you 
using ? And how was it built ?

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: How to disable the SSL cache?

2006-09-25 Thread Julien Pierre

ben wrote:

Hi there,

I am running a Client-side SSL connection using Firefox browser 1.5. I
found a problem with it is the SSL caching. Open an FF browser and
started a Client-side SSL connection to a web server. It's fine.

Now I want to open a different SSL connection with a different user
account and keep the current connection up. I went to the file menu and
open a "New Window", type in https://www.server address.com.

After hitting the Enter key, the browser does not pop up the certificate
selection list. It simply uses the existing cached SSL credential for
this connection.

Is there anyway to turn off the SSL caching for Firefox and Netscape
browsers?



I believe there is no UI to do this in PSM.
Try logging out from the crypto module.
In the Mozilla suite :

Edit/Preferences/Privacy & Security/Certificates/Manage Security 
Devices/Software Security Device/Log Out .


This should invalidate all keys and thus all SSL sessions as well.

Sorry, I don't use Firefox and I don't know the equivalent UI there, if any.
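
As an aside, an application that embeds NSS directly (rather than going 
through the browser UI) could get roughly the same effect programmatically, 
something like:

#include "pk11pub.h"
#include "ssl.h"

/* Log out of every token and drop the client-side SSL session cache,
 * forcing full handshakes (and fresh client-cert selection) on the
 * next connection. */
static void drop_ssl_state(void)
{
    PK11_LogoutAll();
    SSL_ClearSessionCache();
}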
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Building (running) NSS cmd tools?

2006-09-19 Thread Julien Pierre

Hi,

Nothing obvious comes to mind about this crash. Run rsaperf.exe within a 
debugger and see where it crashes .


[EMAIL PROTECTED] wrote:

I've followed the build instructions on checkout and building NSS
(after giving up on getting it to build the cmd utils inside my main
mozilla tree). It also compiles fine, but I cannot seem to actually run
the cmd utils.

I've included the MS redist files, and try to run rsaperf from the lib
directory:
[EMAIL PROTECTED] /cygdrive/c/nss/mozilla/dist/WINNT5.1_DBG.OBJ/lib
$ ../bin/rsaperf.exe

Which only results in:
"The application failed to initialize properly (0xc022). Click on
OK to terminate the application."

My env. is (cygwin) Windows XP, and I'm usually a Linux guy, so it
might be quite simple :)


___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: compile NSS 3.11.2 question?

2006-08-31 Thread Julien Pierre

lihb wrote:

When i compile NSS 3.11.2,

i set environment variables:
NS_USE_GCC=1
configure well other.
use command:
cd mozilla/security/nss (or, on Windows, cd mozilla\security\nss)
gmake nss_build_all
The Error Log:
 cd nsinstall; f:/vc8-moztools/bin/gmake.exe export


When using cygwin, you must use the version of make (probably make.exe) 
that comes with it, which is cygwin-path aware. Using gmake.exe from 
moztools won't work.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS Cache question

2006-08-22 Thread Julien Pierre

Rob Crittenden wrote:


b) do all the work on the first module load.  Don't really shut down the
   module after the first load (that is, pretend that you shut down).
   Then do nothing on the second load, and continue to use the stuff 
loaded

   the first time.


I don't have the choice with b). Apache forcibly unloads the module 
(dlclose()). If I haven't shut things down right I'll get a 
SEC_ERROR_BUSY error during subsequent reloads.


Actually, you can do b) by doing an extra dlopen() of your module's own 
shared library when it gets initialized. This will cause the refcount on 
your module to go up. So, the dlclose() in Apache will have no effect. 
This is essentially leaking the module, but it solves the problem very well.
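
A sketch of that trick, assuming a Linux-style dlfcn API and using dladdr 
only to find our own path (link with -ldl):

#define _GNU_SOURCE
#include <dlfcn.h>

/* Called once from the module's init entry point. Re-opening our own
 * shared object bumps its reference count, so Apache's later dlclose()
 * no longer unmaps the library (or the NSS state it holds). */
static void pin_self_in_memory(void)
{
    Dl_info info;

    if (dladdr((void *)pin_self_in_memory, &info) && info.dli_fname) {
        (void)dlopen(info.dli_fname, RTLD_NOW);
    }
}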

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: CERT_VerifyCertificate question

2006-07-28 Thread Julien Pierre

David,

David Stutzman wrote:
I'm looking at the functions CERT_VerifyCertificate and 
CERT_VerifyCertificateNow and see it has 2 parameters of type 
SECCertificateUsage, one required and one returned.  What is the purpose 
of the returned one?


SECCertificateUsage is a bit-field. If you requested several usages to 
be checked, the returned one will contain the usages for which the cert 
actually verified .
If you only request one, then I believe the output argument is optional 
(ie. you can pass NULL).


I checked the certutil code and the same variable 
is being passed into the verify function and the return is never 
checked.  (ValidateCert on line 750 of certutil.c, "usage" declared on 
756, passed into the verify method on 816 and never looked at again in 
the method.)


certutil only checks one usage at a time, so it doesn't need to check 
the output argument. The SECStatus return from CERT_VerifyCertificate is 
sufficient .


I'm generating and verifying digital signatures in my application.  Do I 
need to slurp out the key usages from the certificate and make sure 
digital signature and non-repudiation are present before I do the verify 
or is passing in the requiredusages of "certificateUsageEmailSigner" to 
CERT_VerifyCertificate good enough?  Does NSS care that the signing 
going on has nothing to do with email?  I figured object signing wasn't 
really appropriate.


I'm using NSS 3.11.2.


NSS won't know what you are trying to do with the cert.

If you pass certificateUsageEmailSigner to CERT_VerifyCertificate, NSS 
will check that the cert is appropriate for e-mail signing - including 
key usage/extended key usage extension .


What purpose are you using the digital signatures for in your 
application ? That may help determine the right usage to check .
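
For illustration, a minimal call checking a single usage might look like 
this sketch (certificate obtained elsewhere, error handling abbreviated):

#include "cert.h"
#include "prtime.h"

/* Check one usage; the returned-usages argument can be NULL when only
 * a single usage is requested. */
static SECStatus check_email_signer(CERTCertificate *cert)
{
    return CERT_VerifyCertificate(CERT_GetDefaultCertDB(), cert,
                                  PR_TRUE,                     /* check signatures */
                                  certificateUsageEmailSigner, /* required usage */
                                  PR_Now(),
                                  NULL,                        /* pin-prompt context */
                                  NULL,                        /* no verify log */
                                  NULL);                       /* returned usages */
}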

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: changing the nickname of a certificate in the db

2006-07-20 Thread Julien Pierre

David,

David Stutzman wrote:
I am importing into a certdb the contents of a p12 file using pk12util. 
   I am ending up with certificate nicknames that = the DN of the 
certificates.  I would like to change the nickname of some of these 
certificates.  I see there is no way to do this with certutil and there 
is no way to specify this with pk12util.  I see a bug 
(https://bugzilla.mozilla.org/show_bug.cgi?id=72296) filed a long time 
ago that talks somewhat about the issue.


Multiple certs with the same subject name aren't an issue.

The p12s were generated by a java application using an RSA toolkit.  I 
looked through the 900 and some page reference guide for that toolkit 
and the only reference to friendly name was in a section defining 
certificate attributes and it made a reference to PKCS9.  Based on this 
I'm unsure I'll be able to create P12s that will import more smoothly.


Is there any way at all (even programmatically) to change the nickname 
in the db?


There is no API to do this directly, but it's possible. However, it'll 
take some work. Try the following :
1) Read and back up the DER cert (or certs, if you have multiple with the 
same subject name) from the DB. There is a "SECItem derCert" field in 
the CERTCertificate structure to get at it. If the cert has trust, use 
CERT_GetCertTrust to save that as well.
2) Delete the cert(s) from the DB, using SEC_DeletePermCertificate.
3) Import the cert(s) with a new nickname, using CERT_ImportCerts. If you 
saved trust, use CERT_ChangeCertTrust to restore it.
A sketch of these steps follows below.
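
Roughly, for a single cert and ignoring error handling, the sequence might 
look like this sketch:

#include "cert.h"
#include "secitem.h"

/* Re-import an existing cert under a new nickname. */
static void rename_cert(CERTCertificate *cert, char *newNickname)
{
    CERTCertDBHandle *handle = CERT_GetDefaultCertDB();
    CERTCertTrust trust;
    PRBool haveTrust = (CERT_GetCertTrust(cert, &trust) == SECSuccess);
    SECItem *der = SECITEM_DupItem(&cert->derCert);   /* 1) back up the DER  */
    CERTCertificate **newCerts = NULL;

    SEC_DeletePermCertificate(cert);                  /* 2) delete old entry */

    CERT_ImportCerts(handle, certUsageUserCertImport, /* 3) re-import        */
                     1, &der, &newCerts, PR_TRUE, PR_FALSE, newNickname);
    if (haveTrust && newCerts && newCerts[0]) {
        CERT_ChangeCertTrust(handle, newCerts[0], &trust);
    }
    SECITEM_FreeItem(der, PR_TRUE);
}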
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Platform Attestation. was:To SSL-client-auth or not to SSL-client-auth, that is the question(?)

2006-07-17 Thread Julien Pierre

Anders,

Anders Rundgren wrote:

Another reason why SSL client authentication may go bust is that it does not support the inclusion 
of platform attestations, something that may be required when TPMs become standard.  That is, you 
may [in the future] not be able to access corporate web-mail (or other sensitive web apps), from a 
machine that does not appear to run a "safe" operating system.  Some organizations may 
not allow employees to access web-mail from an unknown machine even if it is "safe".  
Alternative authentication mechanisms, typically riding on top of an SSL channel, can with ease 
provide platform attestations together with the authentication response.


Does that "platform attestation" really belong at the transport level ?

It seems like such a repugnant idea anyway. Services like webmail are 
supposed to be usable from any browser, on any computer, on any OS. 
That's why they are successful in the first place. This kind of lock 
certainly belongs in a proprietary client-server system, but for use on 
the open Internet, I'm skeptical.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proper method for iterating through certificates on a token?

2006-06-20 Thread Julien Pierre

Jim,

Jim Spring wrote:


The question is, what is the best way, given the scenario,
to keep a valid reference to the certificate?  I can easily
call PK11_GetCertFromPrivateKey, but that seems silly.


Call CERT_DupCertificate on your cert before you destroy the cert list.
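
For context, a typical iteration over a token's certs might look like this 
sketch (slot obtained elsewhere, error handling omitted):

#include "cert.h"
#include "pk11pub.h"

/* Pick a cert off a token's cert list and keep our own reference, so
 * the list itself can be destroyed afterwards. */
static CERTCertificate *grab_first_cert(PK11SlotInfo *slot)
{
    CERTCertList *list = PK11_ListCertsInSlot(slot);
    CERTCertificate *keep = NULL;
    CERTCertListNode *node;

    if (!list)
        return NULL;
    for (node = CERT_LIST_HEAD(list); !CERT_LIST_END(node, list);
         node = CERT_LIST_NEXT(node)) {
        keep = CERT_DupCertificate(node->cert);  /* take our own reference */
        break;
    }
    CERT_DestroyCertList(list);  /* safe: "keep" holds its own reference */
    return keep;                 /* release later with CERT_DestroyCertificate */
}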
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: A couple of CRL import questions

2006-06-06 Thread Julien Pierre

Paul,

Paul Neyman wrote:

Hi!

You guys have been very helpful with my NSS questions :) Mind if I ask a 
couple more?


1. Is it possible to import a CRL during runtime?

I.e:
- a process has NSS initialized and is using NSS db.
- a user then runs crlutil and imports a CRL (this has worked for me, 
btw). crlutil -L lists CRL as imported


Would the original process be able to see this new CRL and its effects 
on certificates without reinitializing?


You can import the CRL during runtime.
However, the NSS cert DB is not safe for writing by multiple processes, 
or even for writing from one process while reading from another.

So, when you run crlutil, no other process should have the DB open.
The preferred way is for your application, which opened the DB 
read/write, to import the CRL itself, using the APIs previously 
mentioned in this newsgroup.



2. SEC_ERROR_BAD_DER error

I've taken the code from crlutil utility and massaged it to fit into our 
application. All it does, is it opens the CRL file in DER format and 
imports it using PK11_ImportCRL.


I've generated a CRL using crlutil and reimported it back into db using 
CRL. That worked fine. However, the same call with the same decode and 
import options results in a SEC_ERROR_BAD_DER error in a recursive call 
to DecodeItem when I run it from within our application.


A little comment says that:
/* a required component is missing. abort */

Is there anything extra that needs to be set that I missed?

Thanks a lot.


You are going to need to be more specific about the call and the options 
you are passing. If you are really passing in a buffer containing the same CRL, 
there is no reason why it would decode in one process and not the other.


Maybe in one case you are not decoding the entries, and in the other 
case you are. Check the decode options. It's possible that there is an 
encoding error in the CRL with the entries that would show up with one 
set of options and not another. But you said you were using the same 
decode options, so that's probably not it.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: crlutil: stuck in infinite loop when creating a new crl

2006-06-01 Thread Julien Pierre

Paul Neyman wrote:

Hello!

I've been trying to create a new CRL using crlutil, and it gets stuck in 
an infinite loop. I've traced it down to SECU_FindCrlIssuer function. 
Here's the excerpt from the code:


 while ( ! CERT_LIST_END(node, certList) ) {
cert = node->cert;
 if (CERT_CheckCertUsage(cert, KU_CRL_SIGN) != SECSuccess ||
 !cert->trust) {
 continue;
 }
 /* select the first (newest) user cert */
 if (CERT_IsUserCert(cert)) {
 rv = SECSuccess;
 goto success;
 }
 }


So, if the certificate is not trusted, and the call to CheckCertUsage 
does not return a success, the loop will restart from the head, because 
there's no advancement over the list.


Am I missing something here?
Thanks.


This bug was fixed in NSS 3.11.1 . Pull source from the NSS_3_11_1_RTM 
CVS tag .

See https://bugzilla.mozilla.org/show_bug.cgi?id=325307 .
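
For reference, the shape of the fix is simply to advance the node before any 
continue, along these lines (a sketch, not necessarily the exact patch that 
landed):

 while ( ! CERT_LIST_END(node, certList) ) {
     cert = node->cert;
     node = CERT_LIST_NEXT(node); /* advance first, so "continue" can't spin */
     if (CERT_CheckCertUsage(cert, KU_CRL_SIGN) != SECSuccess ||
         !cert->trust) {
         continue;
     }
     /* select the first (newest) user cert */
     if (CERT_IsUserCert(cert)) {
         rv = SECSuccess;
         goto success;
     }
 }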
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Importing CRL using NSS API

2006-05-30 Thread Julien Pierre

[EMAIL PROTECTED] wrote:

Hi!

I'm trying to import a CRL (in DER format) using NSS API. Since 3.4 API 
does not have an import function available, I took the source code from 
the crlutil and massaged it to fit into our application.


NSS 3.4 did have import functions available for CRLs : CERT_ImportCRL 
and SEC_NewCrl . The main difference is that the first call performs 
some checks on the CRL and the second one does not.


NSS 3.6 added PK11_ImportCRL . This is a more flexible function that 
allows you to specify different options when importing, and can achieve 
all that the 2 older APIs did through these options. So, it supersedes 
them both.


NSS 3.10 added CERT_CacheCRL, which allows you to import the CRL 
temporarily into the NSS CRL cache directly, without adding it to a 
permanent storage token.
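
A rough sketch of the two newer approaches, as I recall the interfaces (the 
DER CRL is assumed to be already read into a SECItem; options and error 
handling elided):

#include "cert.h"
#include "certdb.h"
#include "pk11pub.h"

/* Permanent import into a token (here, the internal slot). */
static void import_crl_permanently(SECItem *derCrl, char *url)
{
    PK11SlotInfo *slot = PK11_GetInternalKeySlot();

    PK11_ImportCRL(slot, derCrl, url, SEC_CRL_TYPE, NULL,
                   CRL_IMPORT_DEFAULT_OPTIONS, NULL,
                   CRL_DECODE_DEFAULT_OPTIONS);
    PK11_FreeSlot(slot);
}

/* Temporary import straight into the CRL cache (NSS 3.10 and later). */
static void cache_crl_temporarily(SECItem *derCrl)
{
    CERT_CacheCRL(CERT_GetDefaultCertDB(), derCrl);
}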

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS Apache module - mod_nss

2006-05-23 Thread Julien Pierre

Peter Djalaliev wrote:

I am modifying mod_nss to implement TLS upgrades (RFC2817) to use in a
special-purpose web client-server system. 


We don't need to discuss the security issues of using RFC2817 on the 
Internet again. The last thread, on 3/31 - 4/07, went through them already.


Would you mind explaining what special-purpose system you have that 
requires TLS upgrade ?


Do you have a private network with 4 billion IP addresses in use 
already, such that you are constrained in IP addresses, and need to be 
able to use multiple SSL server certificates on the same IP address/port 
? If so, I would like to hear about it.


Otherwise, I can't imagine how RFC2817 helps anything. Just create as 
many IP addresses on your private network as you need SSL certificates. 
The server setup will be much simpler, and this will save you the 
trouble of needing a special-purpose client to access your server.



In fact, I think the
modifications to mod_nss are done, but I am not yet done with
implementing TLS upgrades in Firefox, so I haven't tested the mod_nss
modifications.


I think you are wasting your time with Firefox modifications. It's clear 
that RFC2817 will not see the light of day in official Mozilla/PSM 
clients because of the security issues we previously discussed.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: hashing without calling NSS_Init()?

2006-05-19 Thread Julien Pierre

Brian,

Brian Ryner wrote:
I'll do some profiling to make sure it's the DB initialization that's 
causing the performance hit.


I guess maybe I should have mentioned that I'm currently using these 
methods through the nsICryptoHash XPCOM wrapper.  So we'd either need to 
change that object to know that it can do a NoDB_Init if full 
initialization hasn't happened yet, or I could switch over to using the 
NSS functions directly.  Are there any problems with a Firefox extension 
linking directly to NSS?


There should be no problem linking to NSS in a Firefox extension, but 
you definitely want to let the browser/PSM do the NSS initialization, as 
opposed to your own code. If you call NSS_NoDB_Init before PSM 
initializes NSS, then no NSS databases will be available to Firefox, eg. 
all SSL connections will fail due to the lack of trusted CA certs.
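
For completeness, hash-only use of NSS outside the browser (where nothing 
else initializes NSS for you) could look roughly like this sketch:

#include "nss.h"
#include "pk11pub.h"
#include "secoid.h"
#include "sechash.h"

/* Standalone hashing: only initialize NSS ourselves if nobody else
 * (e.g. PSM) has done it already. */
static SECStatus sha1_of(unsigned char *data, PRInt32 len,
                         unsigned char digest[SHA1_LENGTH])
{
    if (!NSS_IsInitialized()) {
        if (NSS_NoDB_Init(NULL) != SECSuccess)
            return SECFailure;
    }
    return PK11_HashBuf(SEC_OID_SHA1, digest, data, len);
}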


I can't help you with which PSM functions you need to call to ensure 
that PSM is initialized unfortunately, but Kai Engert should know the 
answer.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS Apache module - mod_nss

2006-05-18 Thread Julien Pierre

Wan-Teh Chang wrote:

Rob Crittenden wrote:

A fair bit of work has been done to mod_nss, an SSL module for Apache 
that uses NSS instead of OpenSSL, since it was released last September.


Changes since then include use the NSS OCSP client, addition of a FIPS 
mode (similar to modutil -fips true -dbdir /path/to/database), options 
to seed the NSS Random Number Generator, support for Apache 2.2 as 
well as a number of important bug fixes.



We recently fixed a bug in our selfserv test program
that it can't find its private key when NSS is in FIPS
mode.  The function that had the bug is PK11_FindKeyByAnyCert.
(See https://bugzilla.mozilla.org/show_bug.cgi?id=337789.)

Is mod_nss not using PK11_FindKeyByAnyCert?


It's possible that mod_nss didn't run into the above bug if it logged in 
to the token before looking for the server private key.
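
For context, a server app would typically authenticate to the token up front, 
something along these lines (the password callback and its argument are 
application-specific and not shown):

#include "pk11pub.h"

/* Log in to the softoken before any private-key lookups, so calls like
 * PK11_FindKeyByAnyCert already find an authenticated session. */
static SECStatus login_to_internal_token(void *pwArg)
{
    PK11SlotInfo *slot = PK11_GetInternalKeySlot();
    SECStatus rv = PK11_Authenticate(slot, PR_TRUE, pwArg);

    PK11_FreeSlot(slot);
    return rv;
}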

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Client Authentication Problem (and solution!)

2006-05-04 Thread Julien Pierre

Michael,

Michael Pratt wrote:


So yeah, it was definitely an oversight on our part, but it would still be
nice if this was documented.  I couldn't find in any of the docs (
http://docs.sun.com/app/docs/coll/S1_DirectoryServer_52) where it stated DS
would behave that way if the serial number wasn't unique.  



Of course, this
may be such common practice (or even the standard) that documenting it 
seems

silly, I don't know.


That's correct. A certificate must be uniquely defined by its issuer and 
serial number. That's one of the foundations of PKI and required by X509 
. You can't expect NSS, DS, Mozilla or any other product to operate when 
this assumption is broken.


You should revoke that serial number if you reused it multiple times, 
and start issuing new ones. Or better yet, create a new issuer and start 
fresh. It helps if you use a CA product to manage your PKI rather than 
scripts.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Availability of certutil on Windows?

2006-05-01 Thread Julien Pierre

Michael Baffoni wrote:
I can't find the .chk files (although I did find the other .dll files) 
in the .zip archives.  Are they required for correct operation, and is 
there an alternate location to find them?  If it matters, I'm using the 
NSS 3.9 from the FTP server.


Right now, I've created a new DB, imported one CA certificate, then 
running the command:

$ ./certutil.exe -V -n "RootCA" -b 06042900 -u V
nss-3.9\bin\certutil.exe: NSS_Initialize failed: security library: bad 
database.



Thank you for your help,



The chk files are only required for FIPS140 mode, which you can enable 
with modutil.
Your problem is that you didn't specify a directory for the NSS database 
with -d <dbdir> to certutil.
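
In other words, something like this (same arguments as above, with a 
placeholder database directory added):

$ ./certutil.exe -V -n "RootCA" -b 06042900 -u V -d /path/to/dbdir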

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Adding Ciphers

2006-04-04 Thread Julien Pierre

Jay Potter wrote:

   Any suggestions on what I would need to do to get this implemented?


A lot of convincing that it is worth doing, to begin with. IMO, 
pre-shared keys have no place in a general-purpose Internet browser such 
as Mozilla. The authors of RFC4279 agree - see section 1.1 .


"  The ciphersuites defined in this document are intended for a rather
   limited set of applications, usually involving only a very small
   number of clients and servers.  Even in such environments, other
   alternatives may be more appropriate."

I don't think this should be implemented in Mozilla or NSS.
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SSL/TLS upgrades - RFC2817

2006-03-31 Thread Julien Pierre

Peter,

Peter Djalaliev wrote:
Apache/mod_ssl version 2.2.0 implements RFC2817 in a way that, I 
believe, prevents MITM attacks.  The RFC itself admits the possibility 
for a MITM attack, but only when the server is willing to provide some 
resource both through HTTP or through HTTPS to start with.  In mod_ssl, 
the SSL upgrade is used together with SSLRequireSSL, so the resource 
cannot be obtained unless an SSL is established first.


The problem is that the security depends entirely on the server deciding 
not to serve the content over plain unencrypted HTTP . The client has no 
control. If the server is misconfigured, or there is an MITM, the client 
will go through with an unencrypted HTTP connection.


In the old discussion thread from netscape.public.mozilla.crypto Julien 
Pierre said that he doesn't believe that the http scheme is sufficient 
to provide the secutiry provided by https.  I am not sure I quite 
understand that.  Are the 'http' and 'https' schemes inherently 
different?  Aren't they the same on the application level, but different 
only in the underlying transport protocol - TCP for http and TLS for 
https?  Can somebody explain me this?


Yes, they are only different in the transport. However, security cannot 
be considered to be outside the scope of the application . In many cases 
it is inherent to it. For example, think banking. If your application 
has a security requirement, then you don't want to leave it to chance 
whether you are going to get a secure connection or not.
You need something that tells the browser that security is required in 
your application, so that it can enforce it, and not just the server.
The https protocol scheme, as imprecise as it is, tells the client only 
to establish secure connections. The http protocol scheme specifies 
plaintext connections.
There is no protocol scheme you can include in an HTML form to tell the 
client "go to this server and do a TLS upgrade. don't fall back to 
plaintext HTTP". That's what makes it unsuitable.


Also, think of banks that create HTML login forms which you obtain with 
a plaintext HTTP connection. This is ugly to begin with. But at least, 
those forms contain HTTPS links, and if you have your browser set not to 
submit anything unencrypted, then you will get a pop-up before you 
submit your login and password telling you it's insecure.
With TLS upgrade, the client will POST the data unencrypted, even if the 
server requires TLS upgrade, just because the form said so. It will go 
in the clear anyway.


This is why IMO the TLS upgrade as currently specified should not be 
implemented in general-purpose web browsers and servers.
Once there is a differentiator on the client side that the browser can 
clearly interpret, then it will be suitable. I suggested using "httpt" 
(t for TLS upgrade). But there wasn't strong support for this in the 
HTTP working group, so I didn't write a draft.
It doesn't seem that there has been much demand for it in the last 6 
years since the discussion occurred.


RFC2817 also talks about another advantage of this upgrade mechanism - 
providing the hostname during the initial TLS handshake.  This feature 
is also included as a TLS extension in RFC3546, but is RFC3546 widely 
implemented?  NSS doesn't implement it as far as I know...are there any 
plans about this?  Are there any issues/concerns about the TLS extension 
mechanism?


There is ongoing work on the server name indication extension in NSS, at 
least for the client side for now. Nelson can tell you more.


In the Apache/mod_ssl discussion forums, somebody said that optional SSL 
upgrades would allow for more compact, modular security services to be 
implemented - I interpret this as securty-on-demand.  Do you think this 
could be an advantage - would we lose on security if we make it 
optional?


Yes, we very much lose on security if it's optional. Nobody will know if 
their application is secure or not.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Firefox 1.5 and importing CRLs?

2006-03-23 Thread Julien Pierre

Nelson B Bolyard wrote:

Bruce Keats wrote:

I am having problems importing CRLs and managing CRLs within firefox. 
In the linux version, the import button opens a window that allows me to
enter a file name for the CRL.  The CRL is in PEM format is called 
"root.crl".  When I select OK, there are no error messages, how the CRL
is not imported.  



Yeah, mozilla security error dialogs leave a lot to be desired
https://bugzilla.mozilla.org/show_bug.cgi?id=107491

In this case, the CRL has to have been signed by a trusted CA.
If the CA certs isn't already in your profile and marked trusted,
the CRL import will fail.  That's my guess about your experience.


Unfortunately no. Mozilla doesn't do any check on issuer when importing 
CRLs. It doesn't verify the CRL. It only checks the ASN.1 encoding.
This is required because we don't keep intermediate certs around in our 
DB. If mozilla did the check, it would be impossible to import CRLs for 
intermediate CAs. We have NSS APIs that do the check when importing, but 
they aren't used in this case.


On the Windows version, this functionality works OK. 
However, if I remove the CRL then try and import a more up to date CRL,

I get an error.



What version of NSS are you using?

I vaguely (and perhaps erroneously) recall that there is (er, once was) a
problem that occurs when your only CRL expires or is removed.  The problem
is that if NSS thinks you have (or had) a CRL for a CA, then NSS cannot
thereafter verify any signatures without the CRL for that CA, INCLUDING
the signatures on new CRLs.  I think that was fixed in NSS 3.10 or 3.11,
but my memory of this is pretty hazy.

Perhaps MisterCRL will reply to this soon.


;)
I don't remember either about this specific problem, sorry.
___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: pkcs11 provider password issues

2006-02-09 Thread Julien Pierre

Hi Robert,

robert dugal wrote:
SSL_AuthCertificate() is called to verify a certificate chain during an 
SSL/TLS handshake. It ends up calling pk11_RetrieveCrls() which then 
calls PK11_GetAllTokens() which loads every P11 token, including those 
that need a login.  I am not certain how I can get around this.


This search isn't unnecessary. The cert verification algorithm is 
looking for CRLs and needs to search for objects in the token. If it 
didn't authenticate at this step, it would authenticate to find 
certificates.


One way around this is to make your token "friendly", which means it 
will allow C_FindObjects to work without being logged in, and will only 
require you to be logged in if you are using private keys in the token.

___
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


  1   2   >