On Fri, Sep 24, 2010 at 10:08 AM, Richard L. Barnes <rbar...@bbn.com> wrote:

> I think it's more robust to verify than to generate. In option 2, you have
>> to "guess" what the signer actually signed. To be fair, you have pretty good
>> signals, like the HTTP request line, the Host: header, etc., but in the end,
>> you don't _know_ that the signer really saw the same thing when they
>> generated the signature. I can't help feeling that that's a bit of a
>> hack. In option 1, you always know what the signer saw when they generated
>> the signature, and it's up to you (the verifier) to decide whether that
>> matches your idea of what your endpoint looks like.
>>
>
> Generating does not imply guessing: The signer can specify what he signed
> without providing the data directly.  Quoting from another thread:
> "
> 1. Signer computes signature sig_val over data object:
>  { user_agent: "Mozilla", method: "GET" }
> 2. Signer sends { signed_fields: ['user_agent', 'method'], sig: sig_val }
> 3. Recipient reconstructs data object using signed_fields
> 4. Recipient verifies sig_val == sign(reconstructed_object)
> "
>
> If the spec is written properly, the recipient should be able to look at
> the names of the fields ('user_agent', 'method') and use them to reconstruct
> the original object.
>
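
To make that concrete, a rough sketch of the recipient's side (lookupField(),
serialize() and hmac() are made-up helper names here; sigVal is the signature
bytes that arrived with the message):

    // Rebuild the signed object from the field names listed in the message,
    // then recompute and compare the signature.
    Map<String, String> reconstructed = new LinkedHashMap<String, String>();
    for (String field : signedFields) {   // e.g. ["user_agent", "method"]
        reconstructed.put(field, lookupField(request, field));
    }
    byte[] expected = hmac(key, serialize(reconstructed));
    boolean valid = java.security.MessageDigest.isEqual(expected, sigVal);

The spec's job would be to pin down exactly what each field name maps to and
how serialize() turns the reconstructed object into bytes.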

User-agent and method are well-defined both for senders and receivers of
HTTP requests. What's less well-defined is the URL, which is what Eran is
objecting to. So in practice, it looks more like this:

1. Signer generates URL using some library, e.g.:
    Map<String, String> paramsMap = new LinkedHashMap<String, String>();
    paramsMap.put("param1", "value1");
    paramsMap.put("param2", "value2");

    String uri = new UriBuilder()
      .setScheme(Scheme.HTTP)
      .setHost("WWW.foo.com")
      .setPath("/somePath")
      .setQueryParams(paramsMap)
      .build().toString();
    // uri now looks something like
    // "http://WWW.foo.com/somePath?param1=value1&param2=value2"

2. They then use a different library to send the HTTP request, e.g.:
    GetRequest request = new GetRequest();
    request.setHeader("signed-token", sign("GET", uri));
    request.execute(uri);

The problem is that we don't know what the execute method on GetRequest does
with the URI. It probably will use a library (possibly different from the
one used in step 1) to decompose the URI back into its parts, so it can
figure out whether to use SSL, which host and port to connect to, etc. Is it
going to normalize the hostname to lowercase in the process? Is it going to
escape the query parameters? Is it going to add ":80" to the Host: header
because that's the port it's going to connect to? Is it going to put the
query parameters into a different order? Any of these would cause the
recipient of the message to put back together a _different_ URI from the one
the sender saw.
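
For example, on the receiving end you might naively rebuild the URI from the
standard servlet accessors (a sketch; assumes a plain HttpServletRequest with
nothing like a proxy in between):

    // Rebuild the URI from what the servlet container hands us.
    String reconstructed = request.getScheme() + "://"
        + request.getServerName() + ":" + request.getServerPort()
        + request.getRequestURI()
        + (request.getQueryString() == null ? "" : "?" + request.getQueryString());
    // e.g. "http://www.foo.com:80/somePath?param2=value2&param1=value1"
    // (lowercased host, explicit :80, possibly reordered or re-escaped query)
    // which is no longer byte-for-byte the string the signer signed.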

OAuth1 therefore defined a bunch of rules on how to "normalize" the URI to
make sure that both the sender and the receiver see the same URI even if the
HTTP library does something funny. Many people thought that those rules were
too complicated. There is currently an argument over whether or not the
complexity of the rules can be hidden in libraries, and I'm personally a bit
on the fence about this. What I _do_ object to, more on a philosophical
level, is that we can never know for sure what the HTTP library is doing to
the request, and therefore we can never be sure whether the normalization
rules we have come up with cover all the crazy libraries out there. There is
a symmetric problem on the receiver side, where the servlet APIs may or may
not have messed with the parameters before you get to reconstruct the URI.
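
Just to give a flavor of what such rules look like in code, here is a
much-simplified sketch (not the actual OAuth1 algorithm; the real rules also
pin down percent-encoding, sorting parameters by name and value, and so on):

    // Simplified normalization: lowercase scheme and host, drop default
    // ports, emit query parameters in sorted order (uses java.net.URI and
    // java.util only).
    static String normalize(URI uri, SortedMap<String, String> params) {
        String scheme = uri.getScheme().toLowerCase();
        String host = uri.getHost().toLowerCase();
        int port = uri.getPort();
        boolean defaultPort = port == -1
            || ("http".equals(scheme) && port == 80)
            || ("https".equals(scheme) && port == 443);
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (query.length() > 0) query.append('&');
            query.append(e.getKey()).append('=').append(e.getValue());
        }
        return scheme + "://" + host + (defaultPort ? "" : ":" + port)
            + uri.getPath()
            + (query.length() == 0 ? "" : "?" + query);
    }

Both sides have to end up with the same output from a function like this for
the signature to verify, which is exactly why every quirk of every HTTP
library has to be anticipated.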

The JSON token proposal does something simpler: you get to see the URI as
the sender saw it (in this case with the uppercase WWW, without the :80,
etc.), and you get to decide whether that matches your endpoint. So instead
of wondering in what order the signer saw the query parameters when he
signed them, and whether they were escaped or not, you simply check that all
the query parameters that he signed (as evidenced in the JSON token) are
indeed present in the HTTP request, and vice versa. It's a comparable
amount of work, but it seems cleaner and less hacky to me.
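
In code, the verifier's side of that comparison could look roughly like this
(a sketch; signedParams stands for the name/value pairs carried in the JSON
token):

    // Check that every parameter the signer claims to have signed is
    // present in the request with the same value. The symmetric check
    // (no unsigned extras) and the host/path comparison work the same way.
    Map<String, String[]> actual = request.getParameterMap();
    for (Map.Entry<String, String> e : signedParams.entrySet()) {
        String[] values = actual.get(e.getKey());
        if (values == null || !Arrays.asList(values).contains(e.getValue())) {
            throw new SecurityException(
                "signed parameter missing or altered: " + e.getKey());
        }
    }

No canonical string ever has to be rebuilt; both sides just compare what they
actually have.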

Dirk.


> The idea of allowing signed fields to change en route to the server strikes
> me as a little odd.  Sure, you could ignore the method, path, and host
> values in HTTP and just act on the enveloped data, but at that point, why
> not just do away with the overhead of HTTP and run the whole thing over TCP?
>
> --Richard
>