Hi Jonas,

Thank you for sending this out!  I really like the model overall.  

With sensitive APIs, even if a 3rd party vouches for the capabilities of the 
app, I believe we would still want to communicate that to the user somehow at 
installation time?  I'm concerned we'd end up with a pretty long and arcane 
list.  Maybe we could map those to a general "system access" meta-capability.

Actually, does this proposal assume all apps will go through the same 
installation experience (i.e. do we have the concept of an app without an 
explicit installation)?

Adding dev-security for more brains.
  Lucas. 

-- 
Nothing can be more abhorrent to democracy than to imprison a person or keep 
him in prison because he is unpopular. This is really the test of civilization. 
- Sir Winston Churchill

On Mar 8, 2012, at 2:25 AM, Jonas Sicking wrote:

> Hi All,
> 
> I'm way over due to write a proposal for the Open Web Apps and
> Boot-to-Gecko security models.
> 
> Background:
> 
> In general our aim should always be to design any API such that we can
> expose it to as broad of set of web pages/apps as possible. A good
> example of this is the Vibration API [1] which was designed such that
> it covers the vast majority of use cases, while still being safe enough
> that we can expose it to all web pages without risking annoying the user
> too much, or putting him/her at a security or privacy risk.
> 
> But we can't always do that, and so we'll need a way to safely grant
> certain pages/apps higher privilege. This gets very complicated in
> scenarios where the security impact is hard to describe to the user.
> There are plenty of examples of bad security models that we don't want
> to follow. One example is the security model that "traditional" OSs,
> like Windows and OS X, use, which is "anything installed has full
> access, so don't install something you don't trust". I.e. it's fully
> the responsibility of the user to not install something that they
> don't 100% trust. Not only that, but the decision that the user has to
> make is pretty extreme, either grant full privileges to the
> application, or don't run the application at all. The result, as we
> all know, is plenty of malware/grayware out there, with users having a
> terrible experience as a result.
> 
> A slightly better security model is that of Android, which when you
> install an app shows you a list of what capabilities the app will
> have. This is somewhat better, but we're still relying heavily on the
> user to understand what all capabilities mean. And the user still has
> to make a pretty extreme decision: Either grant all the capabilities
> that the app is requesting, or don't run the app at all.
> 
> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility to check that the app doesn't do anything harmful to
> the user. The nice thing about this is that we're no longer relying on
> the user to make informed decisions about what is and what isn't safe.
> However Apple further has the restriction that *only* they can say
> what is safe and what is not. Additionally they deny apps for reasons
> other than security/privacy problems. The result is that even when
> there are safe apps being developed, that the user wants to run, the
> user can't do so if Apple says "no". Another problem that iOS has, and
> which has made headlines recently, is that Apple enforces some of its
> privacy policies not through technical means, but rather through social
> means. This has lately led to discoveries of apps which extract the
> user's contact list and send it to a server without the user's
> consent. These are things that Apple tries to catch during its review,
> but it's obviously hard to do so perfectly.
> 
> 
> Proposal:
> 
> The basic idea of my proposal is as follows. For privacy-related
> questions, we generally want to defer to the user. For example, for
> almost all apps that want access to the user's address book, we
> should check with the user that this is ok. Most of the time we should
> be able to show a "remember this decision" box, which can often
> default to checked, so the user is only faced with this question once
> per app.
> 
> For especially sensitive APIs, in particular security related ones,
> asking the user is harder. For example, asking the user "do you want to
> allow USB access for this app" is unlikely to be a good idea since most
> people don't know what that means. Similarly, for the ability to send
> SMS messages, only relying on the user to make the right decision
> seems like a big risk.
> 
> For such sensitive APIs I think we need to have a trusted party verify
> and ensure that the app won't do anything harmful. This verification
> doesn't need to happen by inspecting the code, it can be enforced
> through non-technical means. For example, if the Fitbit company comes
> to Mozilla and says that they want to write an app which needs USB
> access so that they can talk to their Fitbit hardware, and that they
> won't use that access to wipe the data on people's Fitbit hardware, we
> can either choose to trust them on this, or we can hold them to it
> through contractual means.
> 
> However we also don't want all app developers who need access to
> sensitive APIs to have to come to Mozilla (and any other browser
> vendor which implements OWA). We should be able to delegate the ability
> to hand out this trust to parties that we trust. So if someone else
> that we trust wants to open a web store, we could give them the
> ability to sell apps which are granted access to these especially
> sensitive APIs.
> 
> This basically creates a chain of trust from the user to the apps. The
> user trusts the web browser (or other OWA-runtime) developers. The
> browser developers trust the store owners. The store owners trust the
> app developers.
> 
> Of course, in the vast majority of cases apps shouldn't need access to
> these especially sensitive APIs. But we need a solution for the apps
> that do.
> 
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966
> 
> 
> How to implement this:
> 
> We create a set of capabilities, such as "allowed to see the user's
> idle state", "allowed to modify files in the user's photo directory",
> "allowed low-level access to the USB port", "allowed unlimited storage
> space on the device", "allowed access to raw TCP sockets".
> 
> Each API which requires some sort of elevated privileges will require
> one of these capabilities. There can be multiple APIs which
> semantically have the same security implications and thus might map to
> the same capabilities. However it should never be the case
> that an API requires two separate capabilities. This will keep the
> model simpler.
> 
> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", "allow". For the two "prompt..." ones we'll pop up UI and
> show the user yes/no buttons and a "remember this decision" box. The
> box is checked for the "prompt default to remember" level.
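
A minimal sketch of those four levels, with entirely hypothetical names
(nothing here is specified by the proposal beyond the level names
themselves), ordered so that "lower is more restrictive" comparisons work:

```javascript
// Hypothetical ordering of the four access levels, least to most
// permissive. The names and numeric values are illustrative only.
const AccessLevel = {
  DENY: 0,
  PROMPT: 1,           // "prompt default to not remember"
  PROMPT_REMEMBER: 2,  // "prompt default to remember"
  ALLOW: 3,
};

// Only the two prompt levels ever show UI; this decides whether the
// "remember this decision" box starts out checked.
function rememberBoxDefault(level) {
  return level === AccessLevel.PROMPT_REMEMBER;
}
```
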
> 
> We then enhance the OWA format such that an app can list which
> capabilities it wants. We could possibly also allow listing which
> level of access it wants for each capability, but I don't think that
> is needed.
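
For concreteness, a manifest extended this way might look something like
the following. The `capabilities` field name and the capability identifier
strings are made up for illustration; the actual OWA manifest format would
have to define them:

```javascript
// Hypothetical OWA manifest fragment, shown as a JS object. The
// "capabilities" field and the identifiers in it are illustrative,
// not part of any existing spec.
const manifest = {
  name: "Fitbit Sync",
  description: "Syncs data with Fitbit hardware over USB",
  launch_path: "/index.html",
  capabilities: [
    "usb-raw",          // low-level access to the USB port
    "storage-unlimited" // unlimited storage space on the device
  ]
};
```
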
> 
> When a call is made to the OWA .install function, to install an app,
> the store also passes along a list of capabilities that the store
> entrusts the app with, and which level of trust for these
> capabilities. The browser internally knows which stores it trusts to
> hand out which capabilities and which level of trust it trusts the
> store to hand out. The capabilities granted to the app are basically the
> intersection of these two lists, i.e. the lowest level in either of
> these lists for each capability.
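
A sketch of that "lowest level wins" intersection, including the user-set
default policy mentioned further down. All names are hypothetical, and the
relative ordering of the two prompt levels is an assumption the proposal
doesn't spell out:

```javascript
// Levels ordered least to most permissive. The relative order of the
// two prompt levels is an assumption, not part of the proposal.
const LEVELS = ["deny", "prompt", "prompt-remember", "allow"];
const rank = (level) => LEVELS.indexOf(level);

// Per-capability grant = the lowest level across every list with an
// opinion: what the store hands out, what the browser trusts that
// store to hand out, and any user-set default policy.
function effectiveGrants(storeGrants, browserTrust, userPolicy = {}) {
  const result = {};
  for (const [cap, storeLevel] of Object.entries(storeGrants)) {
    const levels = [storeLevel, browserTrust[cap] ?? "deny"];
    if (cap in userPolicy) levels.push(userPolicy[cap]);
    result[cap] = levels.reduce((a, b) => (rank(a) <= rank(b) ? a : b));
  }
  return result;
}
```

So a store that tries to hand out "allow" for SMS while the browser only
trusts it up to a prompt level ends up at the prompt level, and a user
default of "deny" wins over everything.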
> 
> In the installation UI we could enable the user to see which
> capabilities will be granted, and which level is granted. However it
> should always be safe for the user to click yes, so we have a lot of
> freedom in how we display this.
> 
> Further, we should allow the user to modify these settings during the
> installation process as well as after an app is installed. We should
> even allow users to set a default policy like "always 'deny' TCP
> socket access", though this is mostly useful for advanced users. If
> the user does that we intersect with this list too before granting
> permissions to an app.
> 
> For any privacy-sensitive capabilities, we simply don't grant stores
> the ability to hand out trust higher than one of the "prompt ..."
> levels. That way we ensure that users are always asked before their
> data is shared.
> 
> In addition to this, I think we should have a default set of
> capabilities which are granted to installed apps. For example the
> ability to use unlimited amount of device storage, the ability to
> replace context menus and the ability to run background workers (once
> we have those). This fits nicely with this model since we can simply
> grant some capabilities to all installed apps (we'd need to decide if they
> should still be required to list these capabilities in the manifest or
> not).
> 
> Another thing which came up during a recent security review is that
> we'll likely want to have some technical restrictions on which sites
> can be granted some of these capabilities. For example something as
> sensitive as SMS access might require that the site uses STS (strict
> transport security) and/or EV-certs. This also applies to the
> stores which we trust to hand out these capabilities.
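
A sketch of what such a technical gate could look like. The `siteInfo`
fields are entirely made up for illustration; in practice the browser
would consult its own certificate and strict-transport-security state:

```javascript
// Hypothetical pre-check before a sensitive capability can even be
// considered for a site: the origin must be HTTPS, and, for the most
// sensitive capabilities, also carry STS and/or an EV cert. The
// capability names and siteInfo shape are assumptions.
function meetsTransportRequirements(siteInfo, capability) {
  if (siteInfo.scheme !== "https") return false;
  if (capability === "sms") {
    return siteInfo.sts === true && siteInfo.evCert === true;
  }
  return true; // other capabilities: HTTPS alone suffices in this sketch
}
```
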
> 
> There are also very interesting things we can do by playing around with
> cookies, but I'll leave that for a separate thread as that's a more
> narrow discussion.
> 
> Let me know what you think.
> 
> / Jonas
> _______________________________________________
> dev-b2g mailing list
> dev-...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-b2g

_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
