Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones  wrote:
> On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren  wrote:
>> On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones  wrote:
>>> Isn't this mitigated by the Origin header?
>>
>> No.
>
> Could you expand on this response, please?
>
> My understanding is that requests generated from XHR will have Origin
> applied. This can be used to reject requests from 3rd party websites
> within browsers. Therefore, intranets have the potential to restrict
> access from internal user browsing habits.

They have the potential, but existing networks don't do that.  We need
to protect legacy systems that don't understand the Origin header.
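The Origin-based rejection Cameron describes could be sketched as follows - a hypothetical server-side check in Python, where the allow-list and function name are illustrative assumptions, not anything from the thread:

```python
# Hypothetical sketch of an intranet service rejecting cross-origin
# browser requests via the Origin header. ALLOWED_ORIGINS and the
# function name are illustrative assumptions.
ALLOWED_ORIGINS = {"https://intranet.example"}

def allow_request(origin_header):
    """Decide whether to serve a request, given its Origin header (or None)."""
    if origin_header is None:
        # Same-origin navigation or a non-browser client sends no Origin.
        # A legacy server that never checks Origin effectively takes this
        # branch for every request - hence it gains no protection.
        return True
    return origin_header in ALLOWED_ORIGINS
```

Only servers updated to perform such a check benefit, which is the point about legacy systems above.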

>>> Also, what about the point that this is unethically pushing the costs
>>> of securing private resources onto public access providers?
>>
>> It is far more unethical to expose a user's private data.
>
> Yes, but if no user private data is being exposed then there is cost
> being paid for no benefit.

I think it's difficult to discuss ethics without agreeing on an
ethical theory.  Let's stick to technical, rather than ethical,
discussions.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Cameron Jones
On Fri, Jul 20, 2012 at 8:29 AM, Adam Barth  wrote:
> On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones  wrote:
>> On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren  wrote:
>>> On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones  wrote:
 Isn't this mitigated by the Origin header?
>>>
>>> No.
>>
>> Could you expand on this response, please?
>>
>> My understanding is that requests generated from XHR will have Origin
>> applied. This can be used to reject requests from 3rd party websites
>> within browsers. Therefore, intranets have the potential to restrict
>> access from internal user browsing habits.
>
> They have the potential, but existing networks don't do that.  We need
> to protect legacy systems that don't understand the Origin header.
>

Yes, I understand that. When new features are introduced, someone's
security policy is impacted; in this case (and, by policy, always the
case) it is those who provide public services whose security policy is
broken.

It just depends on whose perspective you look at it from.

The costs of private security *are* being paid by the public, although
it seems the public has to pay a high price for everything nowadays.

 Also, what about the point that this is unethically pushing the costs
 of securing private resources onto public access providers?
>>>
>>> It is far more unethical to expose a user's private data.
>>
>> Yes, but if no user private data is being exposed then there is cost
>> being paid for no benefit.
>
> I think it's difficult to discuss ethics without agreeing on an
> ethical theory.  Let's stick to technical, rather than ethical,
> discussions.
>

Yes, but as custodians of a public space there is an ethical duty and
responsibility to represent the interests of all users of that space.
This is why the concerns deserve attention even if they may have been
visited before.

Given that the impact affects the entire corpus of global public data,
it is valuable to do an impact and risk assessment to gauge whether the
costs borne by either party are significantly outweighed.

With some further consideration, I can't see any other way to protect
IP authentication against targeted attacks on these systems without the
mandatory upgrade of those systems to IP + Origin authentication.

So, this is a non-starter. Thanks for all the fish.

> Adam

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones  wrote:
> On Fri, Jul 20, 2012 at 8:29 AM, Adam Barth  wrote:
>> On Thu, Jul 19, 2012 at 7:50 AM, Cameron Jones  wrote:
>>> On Thu, Jul 19, 2012 at 3:19 PM, Anne van Kesteren  wrote:
 On Thu, Jul 19, 2012 at 4:10 PM, Cameron Jones  wrote:
> Isn't this mitigated by the Origin header?

 No.
>>>
>>> Could you expand on this response, please?
>>>
>>> My understanding is that requests generated from XHR will have Origin
>>> applied. This can be used to reject requests from 3rd party websites
>>> within browsers. Therefore, intranets have the potential to restrict
>>> access from internal user browsing habits.
>>
>> They have the potential, but existing networks don't do that.  We need
>> to protect legacy systems that don't understand the Origin header.
>>
>
> Yes, I understand that. When new features are introduced, someone's
> security policy is impacted; in this case (and, by policy, always the
> case) it is those who provide public services whose security policy is
> broken.
>
> It just depends on whose perspective you look at it from.
>
> The costs of private security *are* being paid by the public, although
> it seems the public has to pay a high price for everything nowadays.

I'm not sure I understand the point you're making, but it doesn't
really matter.  We're not going to introduce vulnerabilities into
legacy systems.

> Also, what about the point that this is unethically pushing the costs
> of securing private resources onto public access providers?

 It is far more unethical to expose a user's private data.
>>>
>>> Yes, but if no user private data is being exposed then there is cost
>>> being paid for no benefit.
>>
>> I think it's difficult to discuss ethics without agreeing on an
>> ethical theory.  Let's stick to technical, rather than ethical,
>> discussions.
>
> Yes, but as custodians of a public space there is an ethical duty and
> responsibility to represent the interests of all users of that space.
> This is why the concerns deserve attention even if they may have been
> visited before.

I'm sorry, but I'm unable to respond to any ethical arguments.  I can
only respond to technical arguments.

> Given that the impact affects the entire corpus of global public data,
> it is valuable to do an impact and risk assessment to gauge whether the
> costs borne by either party are significantly outweighed.
>
> With some further consideration, I can't see any other way to protect
> IP authentication against targeted attacks on these systems without the
> mandatory upgrade of those systems to IP + Origin authentication.
>
> So, this is a non-starter. Thanks for all the fish.

That's why we have the current design.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Cameron Jones
On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth  wrote:
> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones  wrote:
>> So, this is a non-starter. Thanks for all the fish.
>
> That's why we have the current design.

Yes, I note the use of the word "current" and not "final".

Ethics are a starting point for designing technology responsibly. If
the goals cannot be met for valid technological reasons then that is an
unfortunate outcome, and one that should be avoided at all costs.

The costs of supporting legacy systems have real financial implications
notwithstanding an ethical ideology. If those costs become too great,
legacy systems lose their impenetrable pedestal.

The architectural impact of supporting non-maintained legacy systems is
that web proxy intermediaries are something we will all have to live
with.

Thanks,
Cameron Jones



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Adam Barth
On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones  wrote:
> On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth  wrote:
>> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones  wrote:
>>> So, this is a non-starter. Thanks for all the fish.
>>
>> That's why we have the current design.
>
> Yes, I note the use of the word "current" and not "final".
>
> Ethics are a starting point for designing technology responsibly. If
> the goals cannot be met for valid technological reasons then that is an
> unfortunate outcome, and one that should be avoided at all costs.
>
> The costs of supporting legacy systems have real financial implications
> notwithstanding an ethical ideology. If those costs become too great,
> legacy systems lose their impenetrable pedestal.
>
> The architectural impact of supporting non-maintained legacy systems is
> that web proxy intermediaries are something we will all have to live
> with.

Welcome to the web.  We support legacy systems.  If you don't want to
support legacy systems, you might not enjoy working on improving the
web platform.

Adam



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Henry Story

On 20 Jul 2012, at 18:59, Adam Barth wrote:

> On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones  wrote:
>> On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth  wrote:
>>> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones  wrote:
 So, this is a non-starter. Thanks for all the fish.
>>> 
>>> That's why we have the current design.
>> 
>> Yes, I note the use of the word "current" and not "final".
>>
>> Ethics are a starting point for designing technology responsibly. If
>> the goals cannot be met for valid technological reasons then that is an
>> unfortunate outcome, and one that should be avoided at all costs.
>>
>> The costs of supporting legacy systems have real financial implications
>> notwithstanding an ethical ideology. If those costs become too great,
>> legacy systems lose their impenetrable pedestal.
>>
>> The architectural impact of supporting non-maintained legacy systems is
>> that web proxy intermediaries are something we will all have to live
>> with.
> 
> Welcome to the web.  We support legacy systems.  If you don't want to
> support legacy systems, you might not enjoy working on improving the
> web platform.

Of course, but you seem to want to support hidden legacy systems, that
is, systems none of us know about or can see. It is still a worthwhile
inquiry to find out how many systems there are for which this is a
problem, if any. That is:

  a) systems that use non-standard internal IP addresses
  b) systems that use IP-address provenance for access control
  c) potentially other issues that we have not covered

Systems with a) are going to be very rare, it seems to me, and the
question would be whether they can't really move over to standard
internal IP addresses. Perhaps IPv6 makes that easy.

It is not clear that anyone should bother with designs such as b) -
that's bad practice anyway, I would guess.

  Anything else?

Henry

> 
> Adam

Social Web Architect
http://bblfish.net/




Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Tab Atkins Jr.
On Fri, Jul 20, 2012 at 11:58 AM, Henry Story  wrote:
> Of course, but you seem to want to support hidden legacy systems, that
> is, systems none of us know about or can see. It is still a worthwhile
> inquiry to find out how many systems there are for which this is a
> problem, if any. That is:
>
>   a) systems that use non-standard internal IP addresses
>   b) systems that use IP-address provenance for access control
>   c) potentially other issues that we have not covered
>
> Systems with a) are going to be very rare, it seems to me, and the
> question would be whether they can't really move over to standard
> internal IP addresses. Perhaps IPv6 makes that easy.
>
> It is not clear that anyone should bother with designs such as b) -
> that's bad practice anyway, I would guess.

We know that systems which base their security at least in part on
network topology (are you on a computer inside the DMZ?) are common
(because it's easy).

~TJ



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Henry Story

On 20 Jul 2012, at 21:02, Tab Atkins Jr. wrote:

> On Fri, Jul 20, 2012 at 11:58 AM, Henry Story  wrote:
>> Of course, but you seem to want to support hidden legacy systems, that
>> is, systems none of us know about or can see. It is still a worthwhile
>> inquiry to find out how many systems there are for which this is a
>> problem, if any. That is:
>>
>>  a) systems that use non-standard internal IP addresses
>>  b) systems that use IP-address provenance for access control
>>  c) potentially other issues that we have not covered
>>
>> Systems with a) are going to be very rare, it seems to me, and the
>> question would be whether they can't really move over to standard
>> internal IP addresses. Perhaps IPv6 makes that easy.
>>
>> It is not clear that anyone should bother with designs such as b) -
>> that's bad practice anyway, I would guess.
> 
> We know that systems which base their security at least in part on
> network topology (are you on a computer inside the DMZ?) are common
> (because it's easy).

How many of those would use IP addresses that are not standard private
IP addresses? (Because if they use standard private addresses, then they
would not be affected.) Of those that do not, would IPv6 offer them a
scheme where they could easily use standard private IP addresses?

> 
> ~TJ

Social Web Architect
http://bblfish.net/




Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Ian Hickson
On Fri, 20 Jul 2012, Henry Story wrote:
> 
> How many of those would use IP addresses that are not standard private
> IP addresses? (Because if they use standard private addresses, then they
> would not be affected.) Of those that do not, would IPv6 offer them a
> scheme where they could easily use standard private IP addresses?

I think you're missing the point, which is that Web browser implementors
are not willing to risk breaking any such deployments, however convoluted 
that makes the resulting technology. If you want a technology to be 
implemented, you have to consider implementors' constraints as hard
constraints on your designs. In this case, the constraint is that they 
will not implement anything that increases the potential attack surface 
area, whether or not the potentially vulnerable deployed services are 
designed sanely. Once you realise that this is a hard constraint,
questions such as yours above are obviously moot.

HTH,
-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Jonas Sicking
On Fri, Jul 20, 2012 at 11:58 AM, Henry Story  wrote:
>
> On 20 Jul 2012, at 18:59, Adam Barth wrote:
>
>> On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones  wrote:
>>> On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth  wrote:
 On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones  wrote:
> So, this is a non-starter. Thanks for all the fish.

 That's why we have the current design.
>>>
>>> Yes, I note the use of the word "current" and not "final".
>>>
>>> Ethics are a starting point for designing technology responsibly. If
>>> the goals cannot be met for valid technological reasons then that is an
>>> unfortunate outcome, and one that should be avoided at all costs.
>>>
>>> The costs of supporting legacy systems have real financial implications
>>> notwithstanding an ethical ideology. If those costs become too great,
>>> legacy systems lose their impenetrable pedestal.
>>>
>>> The architectural impact of supporting non-maintained legacy systems is
>>> that web proxy intermediaries are something we will all have to live
>>> with.
>>
>> Welcome to the web.  We support legacy systems.  If you don't want to
>> support legacy systems, you might not enjoy working on improving the
>> web platform.
>
> Of course, but you seem to want to support hidden legacy systems, that
> is, systems none of us know about or can see. It is still a worthwhile
> inquiry to find out how many systems there are for which this is a
> problem, if any. That is:
>
>   a) systems that use non-standard internal IP addresses
>   b) systems that use IP-address provenance for access control
>   c) potentially other issues that we have not covered

One important group to consider is home routers. Routers are often
secured only by checking that requests are coming through an internal
connection, i.e. either through wifi or through the ethernet port. If
web pages can place arbitrary requests to such routers, it would mean
that they can redirect traffic arbitrarily and transparently.
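The trust model described above reduces to a source-address check; a minimal Python sketch (the function name is illustrative) shows why it fails against browser-relayed requests:

```python
import ipaddress

def connection_is_internal(peer_ip):
    """Router-style check: trust any connection arriving from a private
    (LAN-side) source address, e.g. 192.168.x.x or 10.x.x.x."""
    return ipaddress.ip_address(peer_ip).is_private

# The flaw: a request triggered by a malicious page running in the
# victim's browser still arrives from the victim's own machine on the
# LAN, so this check passes even though the request's author is a
# remote attacker.
```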

/ Jonas



Re: Why the restriction on unauthenticated GET in CORS?

2012-07-20 Thread Henry Story

On 21 Jul 2012, at 05:39, Jonas Sicking wrote:

> On Fri, Jul 20, 2012 at 11:58 AM, Henry Story  wrote:
>> 
>> On 20 Jul 2012, at 18:59, Adam Barth wrote:
>> 
>>> On Fri, Jul 20, 2012 at 9:55 AM, Cameron Jones  wrote:
 On Fri, Jul 20, 2012 at 4:50 PM, Adam Barth  wrote:
> On Fri, Jul 20, 2012 at 4:37 AM, Cameron Jones  wrote:
>> So, this is a non-starter. Thanks for all the fish.
> 
> That's why we have the current design.
 
 Yes, I note the use of the word "current" and not "final".

 Ethics are a starting point for designing technology responsibly. If
 the goals cannot be met for valid technological reasons then that is an
 unfortunate outcome, and one that should be avoided at all costs.

 The costs of supporting legacy systems have real financial implications
 notwithstanding an ethical ideology. If those costs become too great,
 legacy systems lose their impenetrable pedestal.

 The architectural impact of supporting non-maintained legacy systems is
 that web proxy intermediaries are something we will all have to live
 with.
>>> 
>>> Welcome to the web.  We support legacy systems.  If you don't want to
>>> support legacy systems, you might not enjoy working on improving the
>>> web platform.
>> 
>> Of course, but you seem to want to support hidden legacy systems, that
>> is, systems none of us know about or can see. It is still a worthwhile
>> inquiry to find out how many systems there are for which this is a
>> problem, if any. That is:
>>
>>  a) systems that use non-standard internal IP addresses
>>  b) systems that use IP-address provenance for access control
>>  c) potentially other issues that we have not covered
> 
> One important group to consider is home routers. Routers are often
> secured only by checking that requests are coming through an internal
> connection. I.e. either through wifi or through the ethernet port. If
> web pages can place arbitrary requests to such routers it would mean
> that they can redirect traffic arbitrarily and transparently.

The proposal is that requests to machines on private IP ranges - i.e.
machines on 192.168.x.x and 10.x.x.x addresses in IPv4, or in IPv6 coming
from the unique local address space [1] - would still require the full
CORS handshake as described currently. The proposal only affects GET
requests requiring no authentication, made to machines with public IP
addresses: the responses to these requests would be allowed through to a
CORS javascript request without requiring the server to add the
Access-Control-Allow-Origin header to its response. Furthermore, it was
added that the browser should still send the Origin header.

The argument is that machines on such public IP addresses that would
respond to such GET requests would be accessible via the public internet,
and so would in any case be accessible via a CORS proxy.
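The distinction the proposal draws can be sketched as a classification function - an assumption about how a browser might implement it, not anything from the spec; note that Python's `is_private` also covers ranges such as 172.16/12, slightly wider than those named above:

```python
import ipaddress

def requires_full_cors(dest_ip, method, authenticated):
    """Return True if the full CORS handshake would still be required
    under the proposal sketched above. Illustrative, not normative."""
    addr = ipaddress.ip_address(dest_ip)
    if addr.is_private:  # 192.168.x.x, 10.x.x.x, IPv6 ULA (fc00::/7), ...
        return True
    if method != "GET" or authenticated:
        return True
    # Public address, unauthenticated GET: the response would pass
    # through without Access-Control-Allow-Origin, though the browser
    # would still send the Origin header.
    return False
```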

This proposal would clearly not affect home routers as currently
deployed. The dangerous access to those is always to the machine when
accessed via the 192.168.x.x IP address range (or the 10.x.x.x one). If a
router were insecure when reached via its public name space and IP
address, then it would simply be an insecure router.

I agree that there is some element of risk being taken in making this
decision; the above does not quite follow analytically from first
principles. It is possible that internal networks use public IP
addresses for their own machines - they would need to do this because
the 10.x.x.x address space, or the IPv6 equivalent, was too small. Doing
this, they would make access to public sites in those IP ranges
impossible (since traffic would be redirected to the internal machines).
My guess is that networks with this type of setup don't allow just
anybody to open a connection in them. At least that seems very likely to
be so for IPv4. I am not sure what the situation with IPv6 is, or what
it should be (I am thinking by analogy there). Machines on IPv6
addresses would be deployed by experienced people, who would probably be
able to change their software to respond differently to GET requests on
internal networks with an Origin header whose value was not an internal
machine.

Henry

[1] http://www.simpledns.com/private-ipv6.aspx


> 
> / Jonas

Social Web Architect
http://bblfish.net/