As I've been reading, there are two divergent proposals for privileged app 
deployment.  The primary concern is making sure that code which has been 
granted privileges isn't changed, so that malicious code can't acquire those 
privileges.  I think we want protection for any privileges that are granted to 
an application without the user's consent, not just sensitive privileges.  The 
two methods are:

1) SSL delivery of the app from a store with a known certificate (checking for 
a set of specific certificates, not just that the certificate is a validly 
issued certificate), plus use of CSP to require that the code come from the 
trusted server.  Note that according to the proposal from ianb, the device 
would need to validate that the app uses a CSP for its code that points to the 
store.

2) Have the developer specify their code resources in a manifest/receipt and 
have the manifest include a signature from the store (or at least a hash, if 
the whole manifest is signed) for each of those code resources.
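
For concreteness, here is a rough sketch of what a Method 2 manifest might 
look like.  The field names are purely illustrative (nothing here is from an 
actual spec), but they show the idea of per-resource hashes covered by a 
single store signature over the manifest:

    # Hypothetical signed manifest (illustrative field names only).
    manifest = {
        "name": "Example App",
        "version": "1.2.0",
        "resources": {
            # path -> hash of the file contents the developer shipped
            "/index.html":  "sha256-9f86d081884c7d65...",
            "/js/app.js":   "sha256-2c26b46b68ffc68f...",
            "/css/app.css": "sha256-fcde2b2edba56bf4...",
        },
        # Signature by the store's key over the canonical manifest bytes,
        # so the device can verify every resource without contacting the store.
        "store_signature": "base64-encoded-signature...",
    }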

So, let's look at the pros and cons of each.  This isn't meant to be exhaustive, 
but hopefully people can contribute other pros and cons and ways to ameliorate 
them.  Hopefully we can use this to come up with the best solution.

Common:
  Both require the store to be actively involved in the release of new 
privileged code.
  Neither allows for dynamically delivered code.
  Non-privileged apps don't need to make use of either method.

Method 1:
  Pros:
    Makes use of known web technologies.
      - with some limitations (requires CSP and delivery from a known source).
    Code mostly exists already to handle this case.
  Cons:
    Store sites are high-visibility targets and, if compromised, can affect 
lots of apps and users.
      - the store could sign the code and then the device validates the code, 
but then we're at Method 2, I think.
    Single point of failure.  If the store server is unavailable, lots of apps 
are unavailable.
    Store has to deliver part of the content of an app.  This would increase 
dramatically if an app didn't cache locally.
      - I think we believe most or all apps will cache locally.
    Before loading privileged code, the CSP and its source need to be 
validated (see the illustrative check after this list).
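
As an illustration of that check, here is a minimal sketch of what the device 
might verify before enabling privileged APIs.  The policy syntax is standard 
CSP, but "marketplace.example.org" is a made-up stand-in for whatever store 
host(s) we actually decide to trust:

    # Hypothetical device-side check that the app's CSP restricts script
    # sources to the trusted store, e.g. a policy like:
    #   script-src https://marketplace.example.org; object-src 'none'
    TRUSTED_STORE = "https://marketplace.example.org"

    def csp_points_to_store(csp_header: str) -> bool:
        for directive in csp_header.split(";"):
            parts = directive.strip().split(None, 1)
            if len(parts) == 2 and parts[0] == "script-src":
                # The only allowed script source must be the trusted store.
                return parts[1].strip() == TRUSTED_STORE
        # No script-src directive at all means no restriction: reject.
        return False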

Method 2:
  Pros:
    Validation occurs on the device; a store does not need to be involved once 
the manifest is delivered.
    Distributed nature of the delivery of apps removes single points of failure 
or subversion.
    Signing keys are (hopefully!) kept off public servers, so a hacker can't 
(easily) sign code.
    Loss of a server only affects apps delivered by that server.
    Even with self-signed certificates, can handle the case of modified code 
making use of user-granted privileges.
    Even with self-signed certificates, can handle the case of a hacker 
modifying an app to not perform as developed.
  Cons:
    New standard; it would need to be run through standards committee(s).
      - though signing is already occurring for receipts, so there is SOME 
precedent.
    Before accepting privileged code, the signature(s) on the code need to be 
validated (a sketch of this check follows this list).
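
To make that on-device check concrete, here is a minimal sketch, assuming the 
manifest format sketched earlier and taking the actual signature primitive as 
a caller-supplied function, since we haven't settled on one:

    import hashlib
    import json

    def verify_app(manifest, store_public_key, read_resource, verify_signature):
        """Return True only if the store signature and every resource hash
        check out.  read_resource(path) -> bytes reads the locally cached
        copy of a resource; verify_signature(key, sig, data) -> bool wraps
        whatever signature scheme we standardize on."""
        # 1. Verify the store's signature over the canonical manifest bytes.
        unsigned = {k: v for k, v in manifest.items() if k != "store_signature"}
        canonical = json.dumps(unsigned, sort_keys=True).encode("utf-8")
        if not verify_signature(store_public_key,
                                manifest["store_signature"], canonical):
            return False

        # 2. Verify each code resource against its hash in the manifest
        #    before the app is allowed to run with privileges.
        for path, expected in manifest["resources"].items():
            digest = hashlib.sha256(read_resource(path)).hexdigest()
            if "sha256-" + digest != expected:
                return False

        return True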

So, they both solve the basic problem.  Signed code does have some additional 
benefits.  Please fill in missing pros and cons, and weigh in on where we 
should fall on the spectrum.

On Mar 20, 2012, at 4:12 AM, Ian Bicking wrote:

> On Tue, Mar 20, 2012 at 2:08 AM, lkcl <[email protected]> wrote:
>   ok. so. a summary of the problems with using SSL - and CSP,
>  and "pinning" - is described here:
> 
>     https://wiki.mozilla.org/Apps/Security#The_Problem_With_Using_SSL
> 
>  the summary: it's too complex to deploy, and its deployment results in
>  the site becoming a single-point-of-failure [think: 1,000,000 downloads
>  of angri burds a day].
> 
> I don't think I entirely understand what that section is referring to – in 
> the first section maybe it is referring to client certificates?  I don't 
> think https alone is unscalable.  And yes, there are other kinds of attacks 
> on SSL; some of which we can actually handle (like with pinning – I'm not 
> sure why pinning causes problems?)
> 
> The reason to use CSP and third party reviewed code hosting is to avoid a 
> different set of problems IMHO.  One is that a web application hosted on an 
> arbitrary server does not have code that can be reviewed in any way – any 
> request may be dynamic, there is no enumeration of all the source files, and 
> files can be updated at any time.  Now... I'm not sure I believe we can 
> meaningfully review applications... but imagine a Mozilla-hosted server, with 
> clear security standards, which might require developers to sign their 
> uploads, even if users don't get signed downloads, and which has no 
> dynamicism and so is immune to quite a few attacks as a result.  Also, if we 
> force applications to have appropriate CSP rules if they want access to 
> high-privilege APIs, those applications will be immune to some XSS attacks.  
> If we require those CSP rules, then a server compromise that removes the CSP 
> rules will also disable the privileged APIs.  And a server compromise 
> wouldn't be enough to upload arbitrary Javascript without also being able to 
> access the management API for the Mozilla code-hosting server (at least if 
> the people who upload code to the code-hosting server don't do it with the 
> same web servers that they serve the site from).  Also we can do things like 
> send out email notifications when code is updated, so even if some 
> developer's personal machine is compromised (assuming the developer has the 
> keys available to upload code) then the attack can't be entirely silent.  So 
> even if we don't really review any code that is uploaded to our hosted 
> servers, we'll still have added a lot of additional security with the process.
> 
> Note that I only think this applies to highly privileged APIs, the kind we 
> don't have yet.  I think the permission conversation got confusing here when 
> we lost sight of the starting point: web applications, deployed how they are 
> deployed now, using the security we have now.  Applications which don't need 
> access to new sensitive APIs (and most probably don't) shouldn't have 
> requirements beyond what already exists.

_______________________________________________
dev-security mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security
