Additions with [JM] prefix


-----Original Message-----
From: Manger, James H [mailto:james.h.man...@team.telstra.com]
Sent: Wednesday, August 15, 2012 5:47 PM
To: Mike Jones; jose@ietf.org
Subject: private keys, leading zeros



Mike,



Thanks for making the effort to produce a draft:

http://tools.ietf.org/html/draft-jones-jose-json-private-key



You’re welcome. ;-)



Four comments:



1.

Your choice to extend the public key format to also hold the private key info 
might have some interesting consequences. It suggests you can have a single 
file (or single JSON object) that can be used both by tools that need a public 
key and by tools that need a private key. However, I doubt that is very good 
practice. It just adds a risk that a private key is inadvertently exposed where 
it shouldn't be.



Perhaps the following sort of format would be better:

  {
    "public":{"alg":"..."},
    "private":{"d":"..."}
  }



For what it’s worth, having a format that can represent either public or 
public/private seems to be fairly common practice for crypto APIs.  For 
instance, see 
http://msdn.microsoft.com/en-us/library/system.security.cryptography.cngkeyblobformat.aspx.
  I believe that similar functions are also available for Java, etc.



[JM] I think this shows the opposite. The CngKeyBlobFormat class represents a 
format, not an actual key. It has separate formats for private keys 
(EccPrivateBlob, GenericPrivateBlob, Pkcs8PrivateBlob) and public keys 
(EccPublicBlob, GenericPublicBlob).





2.

The rule that integers MUST not be shortened if they have leading 0's is silly. 
It implies you cannot encode an integer without knowing a larger context (eg 
you cannot encode d without specifying n, in the context of an RSA key). This 
rule applies to the private exponent, but the opposite rule applies to the 
public exponent. Yuck.



Either always forbidding leading zeros, or always allowing them (so receivers 
MUST support them), would be much better rules.



This language was actually just copied from JWK.  The goal is to facilitate 
direct conversion from the base64url-encoded representations of the byte arrays 
to the byte arrays for the key values, with no “shifting” or “prepending” 
required.  Many crypto APIs expect byte arrays of specific lengths that include 
the leading zeros.  It seemed better to mandate that the leading zeros be 
included so that the resulting byte arrays can be used as-is, rather than 
potentially having to prepend them if they are missing before using the byte 
array.



That being said, if the working group decides to change this to say that 
leading zeros may be omitted, we could do that, although it would mean that 
implementations have more to do when importing keys.
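
A rough sketch of the extra import step that would create, assuming leading 
zeros could be omitted (the toy JWK fragment, made-up "d" value, and 2048-bit 
key size below are purely illustrative, not taken from any draft):

  import base64

  def b64url_decode(s):
      # base64url decode, restoring the padding that JWK values omit
      return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

  jwk = {"alg": "RSA", "d": "AAECAwQF"}   # toy fragment; "d" is a made-up value
  modulus_len = 256                        # octet length of a 2048-bit modulus

  d_octets = b64url_decode(jwk["d"])
  # With the "keep leading zeros" rule the decoded value is already exactly
  # modulus_len octets and can be handed to a crypto API as-is.
  # If leading zeros could instead be omitted, an importer would have to left-pad:
  d_octets = d_octets.rjust(modulus_len, b"\x00")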



I just noticed the "MUST not omit leading 0's" rule applies to the RSA modulus 
in JWA (section 5.3.1 "mod" Modulus Parameter). The rule is even sillier in 
this case. It effectively says "the modulus MUST be encoded as the size 
someone/something thought the modulus should be, instead of as the size it 
actually is". This seems to have no purpose other than to mislead (eg trick a 
component that only looks at the encoded length into thinking the key is 
longer/stronger than it actually is).



3.

An RSA private key format needs to support calculations that use the Chinese 
Remainder Theorem (CRT), by allowing p, q, d mod p-1, d mod q-1, and qinv to be 
specified -- not just d. CRT calculations offer too much of a performance boost 
to ignore (2 or 4 times faster, can't remember which off the top of my head).



This makes sense.  Is there existing practice for such representations that you 
can point us to that we could borrow from?



[JM] RFC 3447 “PKCS #1: RSA Cryptography Specifications, version 2.1”, section 
A.1.2 RSA private key syntax, for the RSAPrivateKey ASN.1 type.
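
A sketch of how those values might be carried alongside the existing members 
(the CRT member names below are placeholders for illustration only, not taken 
from any draft), mirroring the RSAPrivateKey fields n, e, d, p, q, d mod (p-1), 
d mod (q-1), and (inverse of q) mod p:

  {
    "alg":"RSA",
    "mod":"...", "exp":"...", "d":"...",
    "p":"...", "q":"...",
    "dp":"...", "dq":"...", "qi":"..."
  }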



An RSA modulus can actually have more than 2 prime factors. An RSA private key 
format should theoretically support that as well. That is, support an array of 
{prime, exponent, coefficient}, instead of hardwiring fields for 2 primes.

See RFC3447 "PKCS #1: RSA Cryptography Specifications, version 2.1".



Is this used in practice?  The code that I’ve looked at doesn’t support this.  
If it’s just a theoretical case, rather than a practical one, I don’t see any 
need for us to support it.



4.

I am not sure if it has been asked before (by me or someone else), but is there 
really any benefit from wrapping the JSON array of key objects in another JSON 
object with a single member "keys" [JWK section 5 "JSON Web Key Set (JWK Set) 
Format"]? This is adding an extension point with no currently-known use, but 
without any backward compatibility since any extension MUST be understood.



This allows deployments to define and use other members to provide additional 
information about the key set.  Jim Schaad had suggested defining the registry 
for these members for exactly this purpose, which made sense to me.
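
As a sketch of the sort of set-level member that could mean (the "expires" 
member below is purely hypothetical, just for illustration):

  {
    "keys":[ {"alg":"RSA", ...}, {"alg":"EC", ...} ],
    "expires":"2012-12-31"
  }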



[JM] Except that “providing additional information about the key set” will be 
barely deployable, as it will break all systems existing at that time (such as 
crypto libraries) due to the MUST-understand rule.



For a set of keys, how about using a JSON object whose field names are key-ids 
and whose values are JWKs?

  { "2012-06":{"alg":"RSA",…}, "2012-05":{"alg":"RSA",…} }

That makes it easy/intuitive to find the key needed for a JOSE message from the 
key-id in that message.

It means the JSON fragment syntax defined in the JSON Pointer draft spec can be 
sensibly used with a URI that points to a set of keys:

  {"jku":"http://example.com/~jim/keys.jwk#/2012-06"}





--

James Manger

