On 12/24/2013 03:20 AM, Axel Nennker wrote:
> this document describes the current Persona formats and the
> differences from JOSE. My understanding from the dev-identity emails
> is that Persona and FxA want to align with JOSE and get rid of the
> differences:
> https://github.com/djc/id-specs/blob/prod/browserid/json-formats.md

Yes, I think it's fair to say that the Persona team's plan is to align
with the JOSE specs. However, the identity community is larger than the
Persona team and we've raised the possibility that we'll be challenging
the notion that identity solutions on the Web should utilize the JOSE
specs for the reasons outlined in the review.

> As a developer I don't buy the Anti-base64-encoding / simplicity 
> argument because I only have to output the decoded message to my 
> payment-server's logs. I never sniff the messages from the wire
> which would be much harder because they come through an SSL channel 
> anyway.

You are looking at it from the standpoint of having access to the
server side. Many web developers spend a large chunk of their time
interfacing with services from the client side, where they don't have
access to the server side (interfacing w/ Twitter, Facebook, Google
services, etc.). Many of these developers use the built-in browser tools
to do development, and they are hampered when all they can see on the
wire is base64-encoded data instead of readable JSON.
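
To make this concrete, here is a rough sketch (in Python, with a
made-up JWS-style token rather than a real Persona assertion) of the
extra step a developer has to go through just to see what is inside a
message copied out of the browser's network panel:

import base64
import json

def b64url_decode(segment: str) -> bytes:
    # Restore the '=' padding that the compact serialization strips.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# A made-up JWS compact serialization: three opaque base64url segments.
jws = ("eyJhbGciOiJSUzI1NiJ9"
       ".eyJhbW91bnQiOiI1LjAwIiwiY3VycmVuY3kiOiJVU0QifQ"
       ".c2lnbmF0dXJlLWJ5dGVzLWdvLWhlcmU")

header_b64, payload_b64, _signature_b64 = jws.split(".")
print(json.loads(b64url_decode(header_b64)))   # {'alg': 'RS256'}
print(json.loads(b64url_decode(payload_b64)))  # {'amount': '5.00', 'currency': 'USD'}

A clear-text signed JSON message, by contrast, is readable as-is in the
network panel; no decoding step is needed.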

On 12/24/2013 03:20 AM, Axel Nennker wrote:
> I think that canonicalization / normalization of the to-be-signed 
> payment message is a HUGE mistake. That was a major pain with
> xmldsig in the past. base64 encoding stuff and working on that is
> MUCH better for interoperability.
Peter Saint-Andre wrote:
> A big +1 to that. Going down the canonicalization road can only harm
>  interoperability.

One of the major reasons it was such a pain in XMLDSIG is that the same
message could be expressed in many different syntactic ways. Another
reason was namespace injection. We do not attempt to canonicalize the
JSON serialization itself, because serialization varies across JSON
processors. Rather, we canonicalize at the data model layer using a very
simple syntax. Thus, we don't fall into the same trap that XMLDSIG did.
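
To be concrete about what "canonicalize at the data model layer" means,
here is a toy sketch in Python. It is not the SM spec's actual
normalization algorithm; it only illustrates the general idea that two
syntactically different serializations of the same data model end up
producing the same signing input:

import hashlib
import json

def canonical_hash(json_text: str) -> str:
    # Parse into the data model (dicts/lists), then re-serialize
    # deterministically and hash that, ignoring the original bytes.
    data = json.loads(json_text)
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

doc_a = '{"currency": "USD", "amount": "5.00"}'
doc_b = '{\n  "amount": "5.00",\n  "currency": "USD"\n}'

assert canonical_hash(doc_a) == canonical_hash(doc_b)

The bytes on the wire are never what gets canonicalized; the data model is.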

The major issues with XMLDSIG do not apply to the canonicalization
mechanism used in the SM spec. We have three interoperable
implementations at this point and a fairly extensive test suite to
ensure compliance with the digital signature algorithm. To date, we
haven't hit the same issues that XMLDSIG did.

Perhaps you could elaborate on why the approach that we took, which is
very different from XMLDSIG, is going to harm interoperability. The
current arguments put forward for "canonicalization is bad" assume that
we took the same path that XMLDSIG took, which is an uninformed starting
point.

-- manu

-- 
Manu Sporny (skype: msporny, twitter: manusporny, G+: +Manu Sporny)
Founder/CEO - Digital Bazaar, Inc.
blog: The World's First Web Payments Workshop
http://www.w3.org/2013/10/payments/
