To me these controls are not mutually exclusive, but rather a series of 
controls that provide mitigations against slightly different threats. 

1. Require the app host to have SSL?
2. Require the app to be static HTML/JS/CSS (and prevent loading of dynamic 
code)?
3. Require the app to be hosted on a Trusted App Host (i.e. under the stores 
control, or a trusted third party)?
4. Require code to be signed?

These all mitigate different threats: 

- SSL mitigates network compromise
- Static apps are easier to review (reduce chance of vulnerable or malicious 
code)
- Deploying from a trusted location (in theory) reduces the risk of changed 
code due to app host compromise
- Code signing (with effective key management) prevents static code from being 
modified on the app host, network or device itself

Perhaps I am oversimplifying here, but to me it's more a case of what security 
features we are going to support in B2G. I think that 1 & 2 are mandatory:

1. I can't think of any reason not to deploy privileged applications over SSL, 
and the stricter the better (HSTS, limited certs, additional checks, etc.)
2. If the application is not static, how can the store make an assertion about 
the quality of the code? (short of trusting the developer through third-party 
contracts, but then the user is no longer in control of their device, since 
their trusted store is diluting this trust)

In terms of 3 & 4, I am not sure, but my gut feeling is that choosing a Trusted 
Host instead of code signing passes the responsibility for security from B2G to 
the server which hosts the code, and takes control away from the device itself. 
A trusted host also offers no protection against modification of app files on 
the device itself, although I am not sure how big this threat is, since there 
is an assumption that no B2G apps will have access to modify files at the OS 
level.

So my vote, for what it's worth, is that we actually want both trusted hosts 
and code signing, as part of a defense-in-depth approach to securing critical 
B2G applications.
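The code-signing half of that can be sketched as on-device verification of a store-signed manifest listing per-file hashes (Jim's method 2 below). This is a minimal sketch under assumptions: the `hashes` map in the manifest is an invented format, and verifying the store's signature over the manifest itself (with the store's public key) is assumed to have happened already and is omitted.

```python
import hashlib
import json

def verify_app_files(manifest_json, files):
    """Check every resource listed in a (store-signed) manifest against
    its recorded SHA-256 hash.  `files` maps resource path -> bytes as
    read from the device.  Returns False if any listed resource is
    missing or has been modified."""
    manifest = json.loads(manifest_json)
    for path, expected in manifest["hashes"].items():
        data = files.get(path)
        if data is None:
            return False  # a listed resource is missing
        if hashlib.sha256(data).hexdigest() != expected:
            return False  # modified on the app host, network, or device
    return True

# Example: a one-file app, then the same app tampered with.
files = {"app.js": b"console.log('hello');"}
manifest = json.dumps(
    {"hashes": {"app.js": hashlib.sha256(files["app.js"]).hexdigest()}})
verify_app_files(manifest, files)                   # True
verify_app_files(manifest, {"app.js": b"evil();"})  # False
```

Because the check runs on the device against a signature rooted in the store's key, it catches modification at any point after signing, which is exactly the property a trusted host alone does not give you.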




On Mar 22, 2012, at 7:32 AM, Jim Straus wrote:

> As I've been reading, there are two divergent proposals for privileged app 
> deployment.  The primary concern is that the code that has been granted 
> privileges is not changed so that malicious code can't get privileges.  I 
> think we want protection for any privileges that are granted to an 
> application without the user's consent, not just sensitive privileges.  The 
> two methods are: 1) SSL delivery of the app from a store with a known 
> certificate (checking for a set of specific certificates, not just that the 
> certificate is a validly issued certificate) and use of CSP to define that 
> the code must come from the trusted server.  Note that according to the 
> proposal from ianb, the device would need to validate that the app uses a 
> CSP for its code that points to the store.  2) Have the developer specify 
> their code resources in a manifest/receipt and have the manifest include a 
> signature from the store (or at least a hash if the whole manifest is 
> signed) for each of those code resources.  
> 
> So, let's look at the pros and cons of each.  This isn't meant to be 
> exhaustive, but hopefully people can contribute other pros and cons and ways 
> to ameliorate them.  Hopefully we can use this to come up with the best 
> solution.
> 
> Common:
>   Both require the store to be actively involved in the release of new 
> privileged code.
>   Neither allows for dynamically delivered code.
>   Non-privileged apps don't need to make use of either method.
> 
> Method 1:
>   Pros:
>     Makes use of known web technologies.
>       - with some limitations (require CSP and a known source.)
>     Code mostly exists already to handle this case.
>   Cons:
>     Store sites are high visibility targets and if compromised, can affect 
> lots of apps and users.
>       - store could sign the code and then the device validates the code, but 
> then we're at method 2, I think.
>     Single point of failure.  If the store server is unavailable, lots of 
> apps are unavailable.
>     Store has to deliver part of the content of an app.  This would 
> increase dramatically if an app didn't cache locally.
>       - I think we believe most or all apps will locally cache.
>     Before the loading of privileged code, the CSP and its source need to 
> be validated.
> 
> Method 2:
>   Pros:
>     Validation occurs on device; a store does not need to be involved once 
> the manifest is delivered.
>     Distributed nature of the delivery of apps removes single points of 
> failure or subversion.
>     Signing keys are (hopefully!) kept off public servers, so a hacker can't 
> (easily) sign code.
>     Loss of a server only affects apps delivered by that server.
>     Even with self-signed certificates, can handle the case of modified code 
> making use of user granted privileges.
>     Even with self-signed certificates, can handle the case of a hacker 
> modifying an app to not perform as developed.
>   Cons:
>     New standard.  It would need to be run through standards committee(s).
>       - though signing is occurring for receipts anyways, so there is SOME 
> precedent.
>     Before accepting privileged code, the signature(s) on the code need to be 
> validated.
> 
> So, they both solve the basic problem.  Signed code does have some additional 
> benefits.  Fill in missing pros and cons and where we should fall on the 
> spectrum.
> 
> On Mar 20, 2012, at 4:12 AM, Ian Bicking wrote:
> 
>> On Tue, Mar 20, 2012 at 2:08 AM, lkcl <luke.leigh...@gmail.com> wrote:
>>   ok. so. a summary of the problems with using SSL - and CSP,
>>  and "pinning" - is described here:
>> 
>>     https://wiki.mozilla.org/Apps/Security#The_Problem_With_Using_SSL
>> 
>>  the summary: it's too complex to deploy, and its deployment results in
>>  the site becoming a single-point-of-failure [think: 1,000,000 downloads
>>  of angri burds a day].
>> 
>> I don't think I entirely understand what that section is referring to – in 
>> the first section maybe it is referring to client certificates?  I don't 
>> think https alone is unscalable.  And yes, there are other kinds of attacks 
>> on SSL; some of which we can actually handle (like with pinning – I'm not 
>> sure why pinning causes problems?)
>> 
>> The reason to use CSP and third party reviewed code hosting is to avoid a 
>> different set of problems IMHO.  One is that a web application hosted on an 
>> arbitrary server does not have code that can be reviewed in any way – any 
>> request may be dynamic, there is no enumeration of all the source files, and 
>> files can be updated at any time.  Now... I'm not sure I believe we can 
>> meaningfully review applications... but imagine a Mozilla-hosted server, 
>> with clear security standards, which might require developers to sign their 
>> uploads, even if users don't get signed downloads, and which has no 
>> dynamism and so is immune to quite a few attacks as a result.  Also, if we 
>> force applications to have appropriate CSP rules if they want access to 
>> high-privilege APIs, those applications will be immune to some XSS attacks.  
>> If we require those CSP rules, then a server compromise that removes the CSP 
>> rules will also disable the privileged APIs.  And a server compromise 
>> wouldn't be enough to upload arbitrary Javascript without also being able to 
>> access the management API for the Mozilla code-hosting server (at least if 
>> the people who upload code to the code-hosting server don't do it with the 
>> same web servers that they serve the site from).  Also we can do things like 
>> send out email notifications when code is updated, so even if some 
>> developer's personal machine is compromised (assuming the developer has the 
>> keys available to upload code) then the attack can't be entirely silent.  So 
>> even if we don't really review any code that is uploaded to our hosted 
>> servers, we'll still have added a lot of additional security with the 
>> process.
>> 
>> Note that I only think this applies to highly privileged APIs, the kind we 
>> don't have yet.  I think the permission conversation got confusing here when 
>> we lost sight of the starting point: web applications, deployed how they are 
>> deployed now, using the security we have now.  Applications which don't need 
>> access to new sensitive APIs (and most probably don't) shouldn't have 
>> requirements beyond what already exists.
> 

_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security