On Thu, Mar 8, 2012 at 2:25 AM, Jonas Sicking <[email protected]> wrote:
> Hi All,
>
> I'm way overdue to write a proposal for the Open Web Apps and
> Boot-to-Gecko security models.
>
> Background:
>
> In general our aim should always be to design any API such that we
> can expose it to as broad a set of web pages/apps as possible. A good
> example of this is the Vibration API [1], which was designed such
> that it covers the vast majority of use cases while still being safe
> enough to expose to all web pages, without risking annoying the user
> too much or putting him/her at a security or privacy risk.
>
> But we can't always do that, and so we'll need a way to safely grant
> certain pages/apps higher privileges. This gets very complicated in
> scenarios where describing the security impact to the user is
> difficult.
>
> There are plenty of examples of bad security models that we don't
> want to follow. One example is the security model that "traditional"
> OSs, like Windows and OS X, use, which is "anything installed has
> full access, so don't install something you don't trust". I.e. it's
> fully the responsibility of the user to not install something that
> they don't 100% trust. Not only that, but the decision that the user
> has to make is pretty extreme: either grant full privileges to the
> application, or don't run the application at all. The result, as we
> all know, is plenty of malware/grayware out there, with users having
> a terrible experience as a result.
>
> A slightly better security model is that of Android, which when you
> install an app shows you a list of what capabilities the app will
> have. This is somewhat better, but we're still relying heavily on the
> user to understand what all the capabilities mean. And the user still
> has to make a pretty extreme decision: either grant all the
> capabilities that the app is requesting, or don't run the app at all.
>
> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility of checking that the app doesn't do anything harmful
> to the user. The nice thing about this is that we're no longer
> relying on the user to make informed decisions about what is and
> isn't safe. However, Apple further has the restriction that *only*
> they can say what is safe and what is not. Additionally, they deny
> apps for reasons other than security/privacy problems. The result is
> that even when there are safe apps being developed that the user
> wants to run, the user can't do so if Apple says "no". Another
> problem that iOS has, and which has made headlines recently, is that
> Apple enforces some of its privacy policies not through technical
> means, but rather through social means. This has lately led to
> discoveries of apps which extract the user's contact list and send it
> to a server without the user's consent. These are things that Apple
> tries to catch during their review, but it's obviously hard to do so
> perfectly.
>
> Proposal:
>
> The basic idea of my proposal is as follows. For privacy-related
> questions, we generally want to defer to the user. For example, for
> almost all apps that want access to the user's address book, we
> should check with the user that this is ok. Most of the time we
> should be able to show a "remember this decision" box, which can
> often default to checked, so the user is only faced with this
> question once per app.
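As a concrete illustration of that prompt flow, here is a minimal
TypeScript sketch of the "ask once per app, remember the answer"
behavior; showPermissionPrompt and requestAccess are hypothetical
names for illustration, not an actual Gecko API.

interface PromptResult {
  allowed: boolean;
  remember: boolean; // state of the "remember this decision" box
}

// Assume the runtime's UI layer provides something like this;
// defaultRemember controls whether the box starts out checked.
declare function showPermissionPrompt(
  app: string,
  capability: string,
  defaultRemember: boolean
): Promise<PromptResult>;

// Remembered answers, keyed per app and capability.
const rememberedAnswers = new Map<string, boolean>();

async function requestAccess(app: string, capability: string): Promise<boolean> {
  const key = `${app}|${capability}`;
  const saved = rememberedAnswers.get(key);
  if (saved !== undefined) return saved; // the user already answered once
  const result = await showPermissionPrompt(app, capability, true);
  if (result.remember) rememberedAnswers.set(key, result.allowed);
  return result.allowed;
}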
> For especially sensitive APIs, in particular security-related ones,
> asking the user is harder. For example, asking the user "do you want
> to allow USB access for this app" is unlikely to be a good idea,
> since most people don't know what that means. Similarly, for the
> ability to send SMS messages, relying only on the user to make the
> right decision seems like a big risk.
>
> For such sensitive APIs I think we need to have a trusted party
> verify and ensure that the app won't do anything harmful. This
> verification doesn't need to happen by inspecting the code; it can be
> enforced through non-technical means. For example, if the Fitbit
> company comes to Mozilla and says that they want to write an app
> which needs USB access so that it can talk to their Fitbit hardware,
> and that they won't use that access to wipe the data on people's
> Fitbit hardware, we can either choose to trust them on this, or we
> can hold them to it through contractual means.
>
> However, we also don't want all app developers who need access to
> sensitive APIs to have to come to Mozilla (or any other browser
> vendor which implements OWA). We should be able to delegate the
> ability to hand out this trust to parties that we trust. So if
> someone else that we trust wants to open a web store, we could give
> them the ability to sell apps which are granted access to these
> especially sensitive APIs.
>
> This basically creates a chain of trust from the user to the apps.
> The user trusts the web browser (or other OWA-runtime) developers.
> The browser developers trust the store owners. The store owners
> trust the app developers.
>
> Of course, in the vast majority of cases apps shouldn't need access
> to these especially sensitive APIs. But we need a solution for the
> apps that do.
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966
>
> How to implement this:
>
> We create a set of capabilities, such as "allowed to see the user's
> idle state", "allowed to modify files in the user's photo
> directory", "allowed low-level access to the USB port", "allowed
> unlimited storage space on the device", "allowed access to raw TCP
> sockets".
>
> Each API which requires some sort of elevated privileges will require
> one of these capabilities. There can be multiple APIs which
> semantically have the same security implications and thus might map
> to the same capability. However, it should never be the case that an
> API requires two separate capabilities. This will keep the model
> simpler.
>
> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", and "allow". For the two "prompt..." ones we'll pop up UI
> and show the user yes/no buttons and a "remember this decision" box.
> The box is checked for the "prompt default to remember" level.
>
> We then enhance the OWA format such that an app can list which
> capabilities it wants. We could possibly also allow listing which
> level of access it wants for each capability, but I don't think that
> is needed.
>
> When a call is made to the OWA .install function, to install an app,
> the store also passes along a list of capabilities that the store
> entrusts the app with, and at which level of trust. The browser
> internally knows which stores it trusts to hand out which
> capabilities, and at which levels. The capabilities granted to the
> app are basically the intersection of these two lists, i.e. the
> lowest level in either list for each capability.
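A rough sketch of how the manifest listing and this install-time
intersection could look, in TypeScript. Everything here is a
hypothetical illustration of the proposal (the enum, the
"capabilities" manifest field, grantedCapabilities), not actual OWA
or Gecko API.

// The four proposed access levels, ordered least to most permissive.
enum AccessLevel {
  Deny = 0,
  PromptDefaultNotRemember = 1,
  PromptDefaultRemember = 2,
  Allow = 3,
}

// What an app manifest's capability list might look like; the
// "capabilities" field is this proposal's extension, not part of the
// OWA manifest today.
const exampleManifest = {
  name: "Fitbit Sync",
  capabilities: ["usb-raw-access", "unlimited-storage"],
};

type Grants = Map<string, AccessLevel>;

function minLevel(a: AccessLevel, b: AccessLevel): AccessLevel {
  return a < b ? a : b;
}

// The level an app ends up with for each capability is the lowest of
// what the store passed along at .install time and what the browser
// trusts that store to hand out. Capabilities the browser doesn't
// trust the store with at all fall back to Deny.
function grantedCapabilities(storeGrants: Grants, browserTrustInStore: Grants): Grants {
  const granted: Grants = new Map();
  for (const [capability, storeLevel] of storeGrants) {
    const storeCeiling = browserTrustInStore.get(capability) ?? AccessLevel.Deny;
    granted.set(capability, minLevel(storeLevel, storeCeiling));
  }
  return granted;
}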
> In the installation UI we could enable the user to see which
> capabilities will be granted, and at which level. However, it should
> always be safe for the user to click yes, so we have a lot of
> freedom in how we display this.
>
> Further, we should allow the user to modify these settings during
> the installation process as well as after an app is installed. We
> should even allow users to set a default policy like "always 'deny'
> TCP socket access", though this is mostly useful for advanced users.
> If the user does that, we intersect with this list too before
> granting permissions to an app.
>
> For any privacy-sensitive capabilities, we simply don't grant stores
> the ability to hand out trust higher than one of the "prompt ..."
> levels. That way we ensure that users are always asked before their
> data is shared.
>
> In addition to this, I think we should have a default set of
> capabilities which are granted to installed apps. For example the
> ability to use an unlimited amount of device storage, the ability to
> replace context menus, and the ability to run background workers
> (once we have those). This fits nicely with this model since we can
> simply grant some capabilities to all installed apps (we'd need to
> decide whether they should still be required to list these
> capabilities in the manifest or not).
>
> Another thing which came up during a recent security review is that
> we'll likely want to have some technical restrictions on which sites
> can be granted some of these capabilities. For example, something as
> sensitive as SMS access might require that the site uses STS (strict
> transport security) and/or EV certs. This also applies to the stores
> which we trust to hand out these capabilities.
>
> There are also very interesting things we can do by playing around
> with cookies, but I'll leave that for a separate thread as that's a
> more narrow discussion.
>
> Let me know what you think.
>
> / Jonas
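Continuing the earlier sketch with the same hypothetical names: the
user's own policy and the privacy-sensitivity rule quoted above
amount to two more intersections on top of what the store handed out.

// A user default policy such as "always 'deny' TCP socket access"
// maps that capability to AccessLevel.Deny; anything the user hasn't
// listed is left untouched.
function applyUserPolicy(granted: Grants, userPolicy: Grants): Grants {
  const result: Grants = new Map();
  for (const [capability, level] of granted) {
    const userCeiling = userPolicy.get(capability) ?? AccessLevel.Allow;
    result.set(capability, minLevel(level, userCeiling));
  }
  return result;
}

// Privacy-sensitive capabilities are never granted above a prompt
// level, so the user is always asked at least once before their data
// is shared.
function clampPrivacySensitive(level: AccessLevel, privacySensitive: boolean): AccessLevel {
  return privacySensitive
    ? minLevel(level, AccessLevel.PromptDefaultRemember)
    : level;
}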
There were a couple of pieces that I forgot to mention above and which
I think are quite important.

User control:

I think it's very important in all this that we put the user in
ultimate control. I don't think we want to rely on the user to make
security decisions for all APIs, but I think it's important that we
enable users to do so if they so desire. And I think that users should
be able to make security decisions in "both directions", i.e. both
enable more access as well as less access than the above system
provides.

So during installation I think users should be able to tune down
access on a capability-by-capability basis. I.e. the user should be
able to say, "I want to run this SMS app, but I want to completely
disable the ability to send SMS messages."

Additionally, we should have some way for a user to install an app
from a completely untrusted source and grant it any privilege that
he/she wants to. This needs to be quite a complicated UI so that users
don't do it accidentally, but I think it's important to allow so as
not to create situations like on iOS, where users have to hack the
device to be able to install certain apps at all.

Block listing:

We need to figure out a good system for blocking apps if it's detected
that they are harming users. This can happen if, for example, an app
is hacked, or if it's detected after access has been granted that the
app is intentionally doing malicious things such as sending SMS
messages to high-cost phone numbers. In these situations it needs to
be possible for the app developer, for the store, as well as for
Mozilla, to push a message to the device that immediately disables the
app.

We also need to make this system user friendly. Right now in Firefox
we often end up choosing not to blocklist an add-on because the user
experience of doing so isn't good enough in some way or another. So
for example an app might have a bug which causes it to use tremendous
amounts of system resources. Enough that it brings the phone to a
crawl and thus needs to be disabled until this bug is fixed. However,
some companies might be critically depending on this app for their
employees. If it's completely impossible for users to override the
block, then the various parties might be reluctant to disable the app,
since it'd harm some of the users too much.

I don't know exactly what requirements we'd have for this blocklisting
system, but we should look at the experience we have with the Firefox
add-on blocklisting mechanism, in particular in the cases where we've
chosen *not* to use it, and learn from that.

/ Jonas
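A rough sketch of what a pushed block entry and the user-override rule
might look like, again with hypothetical names; the soft/hard
distinction is borrowed from the Firefox add-on blocklist, where a
soft block can be re-enabled by the user while a hard block cannot.

// A block pushed to the device by the developer, the store, or Mozilla.
interface BlockEntry {
  appId: string;
  reason: string;                             // shown to the user
  severity: "soft" | "hard";
  issuedBy: "developer" | "store" | "mozilla";
}

function isAppDisabled(entry: BlockEntry | undefined, userOverrode: boolean): boolean {
  if (entry === undefined) return false;      // app isn't blocked at all
  if (entry.severity === "soft") return !userOverrode; // user may opt back in
  return true;                                // hard blocks always win
}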
