On Wed, Mar 21, 2012 at 3:32 PM, Jim Straus <jstr...@mozilla.com> wrote:

> As I've been reading, there are two divergent proposals for privileged
> app deployment.  The primary concern is that code that has been granted
> privileges is not changed, so that malicious code can't gain those
> privileges.  I think we want protection for any privileges that are
> granted to an application without the user's consent, not just sensitive
> privileges.  The two methods are: 1) SSL delivery of the app from a
> store with a known certificate (checking for a set of specific
> certificates, not just that the certificate is a validly issued
> certificate), and use of CSP to require that the code come from the
> trusted server.  Note that according to the proposal from ianb, the
> device would need to validate that the app uses a CSP for its code that
> points to the store.  2) Have the developer specify their code resources
> in a manifest/receipt and have the manifest include a signature from the
> store (or at least a hash, if the whole manifest is signed) for each of
> those code resources.
>
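A minimal sketch of method 2's hash manifest, assuming hypothetical field
names and SHA-256 as the digest (the actual receipt format isn't specified
in the proposal):

```python
# Sketch of method 2: the store hashes each code resource into a
# manifest, signs the manifest, and devices verify each resource
# against its recorded hash.  Field names here are hypothetical.
import hashlib
import json

def build_signed_manifest(resources: dict[str, bytes]) -> str:
    """resources maps resource paths to their raw bytes."""
    manifest = {
        "resources": {
            path: hashlib.sha256(data).hexdigest()
            for path, data in resources.items()
        }
    }
    # The store would sign this canonical JSON blob with its private
    # key; the signature step itself is omitted in this sketch.
    return json.dumps(manifest, sort_keys=True)

def verify_resource(manifest_json: str, path: str, data: bytes) -> bool:
    """Device-side check: does this resource match its manifest hash?"""
    manifest = json.loads(manifest_json)
    expected = manifest["resources"].get(path)
    return expected == hashlib.sha256(data).hexdigest()
```

Any resource changed after the manifest was signed would fail the
per-resource hash check on the device.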
> So, let's look at the pros and cons of each.  This isn't meant to be
> exhaustive, but hopefully people can contribute other pros and cons and
> ways to ameliorate them.  Hopefully we can use this to come up with the
> best solution.
>
> Common:
>   Both require the store to be actively involved in the release of new
> privileged code.
>   Neither allows for dynamically delivered code.
>   Non-privileged apps don't need to make use of either method.
>
> Method 1:
>   Pros:
>     Makes use of known web technologies.
>       - with some limitations (requires CSP and delivery from a known
> source)
>     Code mostly exists already to handle this case.
>
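For concreteness, the policy under method 1 might look like the header
below, which tells the device that script may only be loaded from the
store's origin (the hostname is hypothetical):

```
Content-Security-Policy: script-src https://store.example.com
```

The device would additionally need to verify that the app actually
declares such a policy and that the listed origin matches a trusted
store, per ianb's proposal above.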

I think the most important advantages of this approach are the things we
can do as part of the upload process, none of which have to be defined in
the client itself.  We would be introducing a server that itself serves as
a kind of vetting process for the code.  Some examples of things we could
do as part of that process:

- Use developer keys so uploads are signed; or continue to add new or
better authentication over time to keep the uploading process secure
- Keep a public log of updates
- Remove or revert code that was found to be malicious (i.e., Mozilla could
remove that code, not wait for the developer to act)
- Do some automated review of the code
- Potentially do manual review (manual review of code has at least been
mentioned by some people, often based on Mozilla's review of addon code –
I'm not sure if this is really practical, but maybe?)
- We could obfuscate and compress code on our servers, so that we have
access to review the code before that step happens (while still
maintaining developer privacy)
- We can force developers to explain, in a somewhat structured way, what
their updates do or why permissions have changed
- How aggressive any of this review is can also depend on what permissions
are being asked for, or what agreements developers are making with users
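The first bullet above (developer-signed uploads) could be sketched as
follows.  Note this uses an HMAC with a shared key as a stdlib-only
stand-in for the public-key signatures (e.g. RSA or Ed25519) a real
upload service would use; names here are illustrative, not an actual API:

```python
# Sketch of signed uploads: the developer signs the package bytes,
# and the store rejects any upload whose signature does not verify.
import hashlib
import hmac

def sign_upload(dev_key: bytes, package: bytes) -> str:
    """Developer side: produce a signature over the package bytes."""
    return hmac.new(dev_key, package, hashlib.sha256).hexdigest()

def accept_upload(dev_key: bytes, package: bytes, signature: str) -> bool:
    """Store side: verify the signature before accepting the upload."""
    expected = hmac.new(dev_key, package, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(expected, signature)
```

A swapped or tampered package fails verification, so the store never
publishes code the registered developer key did not sign.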

This all adds another big con:

- Developers give up a lot of control to these trusted code servers

How reasonable this all is also depends on which permissions require
this level of scrutiny.  If we're talking about just things like dialers or
SMS messaging systems we can expect to make the bar considerably higher
than if we're talking about an app like Instagram.
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
