I think it's more robust to verify than to generate. In option 2, you have to "guess" what the signer actually signed. To be fair, you have pretty good signals, like the HTTP request line, the Host: header, etc., but in the end, you don't _know_ that the signer really saw the same thing when they generated the signature. I can't help feeling that that's a bit of a hack. In option 1, you always know what the signer saw when they generated the signature, and it's up to you (the verifier) to decide whether that matches your idea of what your endpoint looks like.
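A minimal sketch of option 1 as described above: the signer transmits the exact string it signed, and the verifier first checks the signature over that string, then decides whether it matches its own view of the request. The HMAC with a pre-shared key, the field layout, and the names below are illustrative assumptions, not anything from a specific draft.

```python
import hmac, hashlib

SECRET = b"shared-secret"  # assumption: a pre-shared key; a real scheme may use public keys

def signer_message(request_line, host):
    # Option 1: the signer transmits the exact bytes it signed.
    signed_string = f"{request_line}\nhost: {host}"
    sig = hmac.new(SECRET, signed_string.encode(), hashlib.sha256).hexdigest()
    return {"signed_string": signed_string, "sig": sig}

def verifier_accepts(message, my_request_line, my_host):
    # First check the signature over what the signer says it saw...
    expected = hmac.new(SECRET, message["signed_string"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["sig"]):
        return False
    # ...then decide whether that matches the verifier's own view of the endpoint.
    my_view = f"{my_request_line}\nhost: {my_host}"
    return message["signed_string"] == my_view

msg = signer_message("GET /token HTTP/1.1", "as.example.com")
assert verifier_accepts(msg, "GET /token HTTP/1.1", "as.example.com")
assert not verifier_accepts(msg, "GET /token HTTP/1.1", "evil.example.com")
```

The point the sketch makes concrete: the verifier never has to reconstruct anything; it only compares a received string against its own expectations.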

Generating does not imply guessing: the signer can specify what they signed without providing the signed data directly. Quoting from another thread:
"
1. Signer computes signature sig_val over data object:
  { user_agent: "Mozilla", method: "GET" }
2. Signer sends { signed_fields: ['user_agent', 'method'], sig: sig_val }
3. Recipient reconstructs data object using signed_fields
4. Recipient verifies sig_val == sign(reconstructed_object)
"

If the spec is written properly, the recipient should be able to look at the names of the fields ('user_agent', 'method') and use them to reconstruct the original object.
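The four quoted steps can be sketched end to end. This is a toy illustration, assuming an HMAC with a pre-shared key and a sorted-JSON canonicalization (the quoted steps specify neither); the point is that the recipient rebuilds the data object from the field names alone, using values from the request it actually received.

```python
import hmac, hashlib, json

SECRET = b"shared-secret"  # assumption: a pre-shared key stands in for the real key material

def canonical(obj):
    # Deterministic serialization: sorted keys, no whitespace.
    # Both sides must agree on this, or verification fails spuriously.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def sign(request_fields, signed_fields):
    # Step 1: signer computes sig_val over the selected fields only.
    data = {k: request_fields[k] for k in signed_fields}
    sig_val = hmac.new(SECRET, canonical(data), hashlib.sha256).hexdigest()
    # Step 2: send only the field names and the signature, not the data.
    return {"signed_fields": sorted(signed_fields), "sig": sig_val}

def verify(message, request_fields):
    # Step 3: recipient reconstructs the data object from signed_fields,
    # taking the values from the request as received.
    data = {k: request_fields[k] for k in message["signed_fields"]}
    # Step 4: recompute and compare in constant time.
    expected = hmac.new(SECRET, canonical(data), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])

request = {"user_agent": "Mozilla", "method": "GET", "host": "example.com"}
msg = sign(request, ["user_agent", "method"])
assert verify(msg, request)                 # covered fields unchanged: verifies
tampered = dict(request, method="POST")
assert not verify(msg, tampered)            # a covered field changed en route: fails
```

Note that verification here only succeeds if the covered fields arrive unchanged, which is exactly why allowing signed fields to change en route is problematic.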

The idea of allowing signed fields to change en route to the server strikes me as a little odd. Sure, you could ignore the method, path, and host values in HTTP and just act on the enveloped data, but at that point, why not just do away with the overhead of HTTP and run the whole thing over TCP?

--Richard
_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
