Comments inline below:

On Mar 21, 2012, at 6:05 PM, Ian Bicking wrote:

> On Wed, Mar 21, 2012 at 3:32 PM, Jim Straus <jstr...@mozilla.com> wrote:
> As I've been reading, there are two divergent proposals for privileged app 
> deployment.  The primary concern is that code that has been granted 
> privileges is not changed, so that malicious code can't get privileges.  I 
> think we want protection for any privileges that are granted to an 
> application without the user's consent, not just sensitive privileges.  The 
> two methods are 1) SSL delivery of the app from a store with a known 
> certificate (checking for a set of specific certificates, not just that the 
> certificate is a validly issued certificate) and use of CSP to define that 
> the code must come from the trusted server.  Note that according to the 
> proposal from ianb, the device would need to validate that the app uses a 
> CSP for its code that points to the store.  2) Have the developer specify 
> their code resources in a manifest/receipt and have the manifest include a 
> signature from the store (or at least a hash, if the whole manifest is 
> signed) for each of those code resources.
> 
> So, let's look at the pros and cons of each.  This isn't meant to be 
> exhaustive, but hopefully people can contribute other pros and cons and ways 
> to ameliorate them.  Hopefully we can use this to come up with the best 
> solution.
> 
> Common:
>   Both require the store to be actively involved in the release of new 
> privileged code.
>   Neither allows for dynamically delivered code.
>   Non-privileged apps don't need to make use of either method.
> 
> Method 1:
>   Pros:
>     Makes use of known web technologies.
>       - with some limitations (require CSP and delivery from a known source)
>     Code mostly exists already to handle this case.
> 
> I think the most important advantages of this are the things we can do as 
> part of the upload process, none of which have to be defined in the client 
> itself.  We would be introducing a server that itself served as a kind of 
> vetting process for the code.  Some examples of things we could do as part of 
> that process:

I don't think there is any proposal that works with a vanilla client.  There is 
going to be something extra we have to implement if we're going to secure the 
code in any way: validating that the CSP points at the store and uses https, 
plus whatever else needs to be validated.
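To make that client-side check concrete, here is a minimal sketch, assuming the standard Content-Security-Policy header syntax.  The store origin and function name are illustrative, not any agreed-upon value:

```python
# Hypothetical sketch: validate that an app's CSP restricts script
# sources to the trusted store, over HTTPS only.  The store origin
# below is an assumption for illustration.

TRUSTED_STORE = "https://marketplace.mozilla.org"

def csp_locks_code_to_store(csp_header: str) -> bool:
    """Return True if script-src is present and allows only the store."""
    directives = {}
    for part in csp_header.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0]] = tokens[1:]
    sources = directives.get("script-src")
    if not sources:
        return False  # no script-src: scripts could come from anywhere
    return all(src == TRUSTED_STORE for src in sources)

print(csp_locks_code_to_store(
    "default-src 'self'; script-src https://marketplace.mozilla.org"))  # True
print(csp_locks_code_to_store(
    "script-src https://marketplace.mozilla.org http://evil.example"))  # False
```

A real implementation would also have to pin the store's TLS certificate, as described above; this sketch covers only the policy-parsing half.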

> 
> - Use developer keys so uploads are signed; or continue to add new or better 
> authentication over time to keep the uploading process secure

Works with either method.  In the case of signed code in a manifest, the 
developer would still need to provide the code to the store (so the store can 
do whatever vetting of the code it wants), and the store would provide the 
signatures/hashes for the code and the manifest back to the developer.
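As a rough illustration of that store-side signing flow: hash each submitted resource, embed the hashes in the manifest, and sign the whole manifest.  In this sketch an HMAC stands in for a real public-key signature, and all names are hypothetical:

```python
# Hypothetical sketch of the signed-manifest method.  HMAC stands in
# for a real public-key signature; the key and function names are
# illustrative, not any actual store API.
import hashlib
import hmac
import json

STORE_SIGNING_KEY = b"store-private-key"  # placeholder for a real key

def sign_manifest(resources):
    """Hash each code resource and sign the resulting manifest."""
    manifest = {
        "hashes": {name: hashlib.sha256(data).hexdigest()
                   for name, data in resources.items()}
    }
    payload = json.dumps(manifest["hashes"], sort_keys=True).encode()
    manifest["signature"] = hmac.new(STORE_SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

m = sign_manifest({"app.js": b"console.log('hi');"})
print(sorted(m))  # ['hashes', 'signature']
```

The manifest and signature then go back to the developer, who can serve them from wherever they like.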

> - Keep a public log of updates

See previous.  It can be provided with both.

> - Remove or revert code that was found to be malicious (i.e., Mozilla could 
> remove that code, not wait for the developer to act)

That is a good point.  Mozilla could retain the code in either case, but in 
the case of signed manifests there is no way for Mozilla to distribute the 
reverted code itself.  On the other hand, even if older code is distributed, 
there is no assurance that the other resources would correspond to that code.

In the case of removing code that is found to be malicious, there are already 
plans for a blacklist.  And that would still be necessary, since an app that is 
locally cached may not go back to the developer or store in a timely manner.

> - Do some automated review of the code
> - Potentially do manual review (manual review of code has at least been 
> mentioned by some people, often based on Mozilla's review of addon code – I'm 
> not sure if this is really practical, but maybe?)
> - We could obfuscate and compress code on our servers, so that we have access 
> to review code before this process (while still maintaining developer privacy)
> - We can force developers to explain, in a somewhat structured way, what 
> their updates do or why permissions have changed
> - How aggressive any of this review is can also depend on what permissions 
> are being asked for, or what agreements developers are making with users

For all of these, the developer submits the code to the store in either case, 
either to be distributed or to be signed.

> 
> This all adds another big con:
> 
> - Developers give up a lot of control to these trusted code servers

I think another way of saying this is that the developer now has to trust the 
store servers to be part of their distribution network.  Another thought that 
occurred to me: with the SSL/CSP method, their customers are now depending on 
reasonably quick access to the store.  So, if someone in China purchases an 
app from the Mozilla store, they are constantly going back to the Mozilla 
Store server.  With signed code, once the developer has the signatures, they 
can host the application, including the code, in China.
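That is the device-side benefit of signed manifests: verification is a local hash comparison, so the code can come from any mirror with no round trip to the store at run time.  A minimal sketch, with illustrative names:

```python
# Hypothetical sketch: the device checks a fetched resource against
# the hash recorded in the (already signature-verified) manifest,
# regardless of which server the bytes actually came from.
import hashlib

def resource_matches_manifest(manifest_hashes, name, data):
    """Return True if the fetched bytes match the manifest's hash."""
    expected = manifest_hashes.get(name)
    return (expected is not None and
            hashlib.sha256(data).hexdigest() == expected)

hashes = {"app.js": hashlib.sha256(b"console.log('hi');").hexdigest()}
print(resource_matches_manifest(hashes, "app.js", b"console.log('hi');"))  # True
print(resource_matches_manifest(hashes, "app.js", b"tampered"))            # False
```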

> 
> How reasonable this all is also depends on what permissions require this 
> level of scrutiny.  If we're talking about just things like dialers or SMS 
> messaging systems we can expect to make the bar considerably higher than if 
> we're talking about an app like Instagram.
> 
> 

For something like Instagram, it probably wants automatic granting of access 
to the camera and the ability to connect to Facebook and Twitter servers 
(servers not in the same domain as the application).  If the code is not 
controlled and someone hacks the Instagram server, they can then push new code 
out to all the users, taking photos of a person even when they're not 
explicitly trying to take a picture and posting those to Facebook and Twitter.  
The hacked code could also re-prompt for the user's login credentials and do 
something else malicious with them, or just post lots of potentially 
embarrassing pictures to the user's account.

There may be some APIs that can't be abused, but if that's so, why are we 
requiring permissions for them in the first place?  I agree some permissions 
may require more vetting than others (phone and SMS certainly come to mind).  
But I don't think we can have two tiers of security for delivering code that 
requires any permissions.
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security