Jonas, Paul, etc. -
  For any app, but in particular for third-party hosted apps, we could require 
that the manifest contain a signed cryptographic hash of the core of the 
application (JavaScript, HTML, CSS?), along with the signature of the trusted 
store.  The hash would be validated as signed by a trusted source (similar to, 
or the same as, SSL certs) and the application's core would be checked against 
it.  This would require that the browser/device pre-load the given content, but 
hopefully apps will be using the local-cache mechanism, so this should not be 
burdensome.  Using this, once a trusted store has validated an application, the 
application can't be changed, even if it is hosted by a third party.  We would 
have to enforce that a signed application can't download untrusted JavaScript 
(eval becomes a sensitive API?).  This would allow a third party to host the 
apps approved by a given store.  It would also prevent a hacked store site from 
distributing hacked apps (well, things like images could still be hacked, but 
nothing functional) as long as the hacker doesn't have access to the signing 
system (which should clearly not be on a public machine).  This doesn't prevent 
a hacker from gaining access to information communicated back to a server, but 
it at least makes sure that the information isn't redirected somewhere else.
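
  To make the check concrete, here is a rough sketch (the manifest field 
names and the helper below are invented, and in practice the store's key would 
be validated through a cert chain rather than passed in directly):

  const crypto = require("crypto");

  // Hash the app's core files (JS/HTML/CSS) in a canonical order.
  function coreHash(files) {  // files: [{ path, contents }, ...]
    const h = crypto.createHash("sha256");
    for (const f of [...files].sort((a, b) => a.path.localeCompare(b.path))) {
      h.update(f.path).update("\0").update(f.contents);
    }
    return h.digest("hex");
  }

  // Re-run at install/update time: the content must match the signed hash,
  // and the signature must verify against the trusted store's key.
  function verifyCore(manifest, files, storePublicKeyPem) {
    const hash = coreHash(files);
    if (hash !== manifest.coreHash) return false;
    const verifier = crypto.createVerify("sha256");
    verifier.update(hash);
    return verifier.verify(storePublicKeyPem, manifest.storeSignature, "base64");
  }
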
  The signing mechanism can also be used to blacklist an app.  If Mozilla 
maintains a site with a list of blacklisted signatures and devices query that 
site, the apps could be disabled.  In whatever UI we have to view the list of 
apps and control their permissions, a blacklisted app would show up as 
blacklisted with all permissions denied.  A user who needs the app would then 
have to explicitly re-enable it and re-add permissions (making it a pain to go 
through the process of looking at the permissions and enabling them), along 
with suitable warnings when they do so.  The blacklist site should probably 
contain both the signatures to deny and an explanation of why (consumes excess 
resources, connects to high-cost SMS servers, leaks contacts, etc.), so that 
the user can make an informed choice, such as allowing an app that consumes 
excess resources but not one that leaks personal information or incurs 
excessive costs.
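
  For example, the blacklist feed and the device-side check could be as simple 
as the following (the URL, feed format, and field names are all hypothetical):

  const BLACKLIST_URL = "https://app-blocklist.example.org/blocked.json";

  async function refreshBlacklist(installedApps) {
    const feed = await (await fetch(BLACKLIST_URL)).json();
    // feed.entries: [{ signature: "...", reason: "leaks contacts" }, ...]
    const blocked = new Map(feed.entries.map(e => [e.signature, e.reason]));
    for (const app of installedApps) {
      const reason = blocked.get(app.storeSignature);
      if (reason) {
        app.blacklisted = true;
        app.blacklistReason = reason;  // surfaced in the permissions UI
        app.permissions = {};          // everything denied until re-enabled
      }
    }
  }
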
-Jim Straus

On Mar 9, 2012, at 11:01 PM, Jonas Sicking wrote:

> On Thu, Mar 8, 2012 at 1:48 PM, ptheriault <ptheria...@mozilla.com> wrote:
>> Jonas,
>> 
>> Thanks for taking the time to document your thoughts. I also caught up with
>> Chris Jones from B2G yesterday to go over security, and we discussed app
>> permissions as well. I have written up a couple pages of notes, but I'd like
>> to note a key difference. From our discussion yesterday (and Chris, correct
>> me if I misunderstood) there will be two levels of trusted stores required:
>> one that sells and hosts "privileged installed apps", and one that provides
>> Web Apps under the current third-party model. As I understood it from Chris,
>> you would have:
>> 
>> - Privileged store: this is the store which your B2G device comes
>> preconfigured with - mozilla's marketplace, and/or the telco's, etc. Apps
>> from this store will be able to be granted permissions like dialer and SMS -
>> services which are critical to the phone's operation, and for which
>> regulatory constraints exist (emergency calls etc.). Detailed technical
>> review of both code and permissions will be required, as will contractual
>> terms for any apps that are third party. These Web Apps would be hosted on
>> the marketplace (not a third-party web server) so that code integrity can be
>> maintained post review. All updates would be brokered by the store. The
>> install process would be downloading a discrete package of files, to be
>> served in an offline manner on a unique synthetic domain. I don't know that
>> we would want users to be able to add privileged stores - or maybe they
>> could, but it might void their warranty or something?
> 
> I'm not sure that we need a dedicated store for the built-in apps.
> 
> As described in my initial email, mozilla would have relationships
> with a number of stores (including its own) which would allow those
> stores to grant apps higher privileges. So for each of those stores we
> would list the capabilities that that store is allowed to grant, and
> how high a level of access for each capability.
> 
> I could certainly see that a telco shipping a B2G device would want to
> add to that list its own store. And configure it such that the telco's
> store has the ability to install apps with most capabilities and with
> a very high level of access for them. And then only grant such a high
> level of access to apps that they write themselves or otherwise feel that they
> can trust. For example to do things like dialers.
> 
> If the telco so wants (or is required to by law) it can host these
> apps on its own servers and do whatever review is needed before they
> indicate trust to them through the store.
> 
> However there is no reason we also couldn't allow "preinstalled" apps
> on B2G. I.e. apps that are already on the device and in the various
> internal databases describing what level of trust each app should
> have. So a telco could grant preinstalled apps any levels of privilege
> they want independent of any stores.
> 
> Beyond that I don't think we need a concept of a "Privileged store".
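> 
> For illustration, the kind of per-store grant table the browser might keep
> internally could look roughly like this (the store URLs, capability names
> and access levels are invented for the example):
> 
>   const storeGrants = {
>     "https://marketplace.example.org": {
>       "sms":    "prompt-default-remember",
>       "usb":    "prompt-default-not-remember",
>       "camera": "allow"
>     },
>     "https://store.telco.example": {
>       // The telco's own store can hand out dialer-level access, which it
>       // would presumably only do for apps it writes or otherwise trusts.
>       "dialer": "allow",
>       "sms":    "allow"
>     }
>   };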
> 
>> - Trusted store: this is the store where you get your "apps" (i.e. analogous
>> to the Appstore or Android Marketplace). This is as you have described
>> below. Some permissions would not be allowed to be granted here (consider
>> the regulatory requirements for the dialer App for example), but this store
>> is trusted to review/verify Third Party Web Apps. The challenge as I see it
>> for these stores, is that as far as I understand it, Web Apps are hosted by
>> third parties. The store has a local copy of the manifest, so it can check
>> that it doesn't change, but the Web App site can change whatever it likes, so
>> any review is meaningless down the track. So reviewing permissions in the
>> manifest, and enforcing contractual terms are the main controls here? I
>> think it then becomes a question of which permissions we can trust to such a
>> model - maybe permissions granted by this store must only be those which users
>> would currently grant to a website that they trust.
> 
> I don't have a whole lot of faith in code reviews catching people
> misusing permissions given to them. Code review is notoriously hard
> and if developers try to get something past you they generally can.
> And it has a tendency to get in the way of things like security
> updates that need to happen on extremely tight schedules.
> 
> A better solution is to establish non-technical relationships with the
> application authors and use them in combination with technical means to
> enforce policies. So for example mozilla could have a policy of only
> granting an application the ability to send SMS messages if the company
> behind it has signed a contract saying that it won't send messages other
> than in response to explicit user actions to do so.
> 
>>> For such sensitive APIs I think we need to have a trusted party verify
>>> and ensure that the app won't do anything harmful. This verification
>>> doesn't need to happen by inspecting the code, it can be enforced
>>> through non-technical means. For example if the fitbit company comes
>>> to mozilla and says that they want to write an App which needs USB
>>> access so that they can talk with their fitbit hardware, and that they
>>> won't use that access to wipe the data on people's fitbit hardware, we
>>> can either choose to trust them on this, or we can hold them to it
>>> through contractual means.
>> 
>> Isn't this just like Apple's policy except now without the technical review
>> component?  At a minimum we should be reviewing the manifests to ensure the
>> permissions remain as requested, but even so, this doesn't seem like a
>> strong control to me. Do we need to be careful about which permissions we
>> grant under such a scheme?
> 
> Reviewing manifests won't really be needed. Access is granted based on
> what the *store* tells us that it wants to grant that app, not based
> on what is requested in the manifest. Or rather, it'd be the lowest of
> the two. So even if the manifest changes, that won't automatically
> grant the app any additional privileges.
> 
> We will need a way for an app to request additional privileges during
> an upgrade though. But at that point we need to verify with the store
> through which the app was installed that it is willing to grant the app
> those additional privileges.
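> 
> A minimal sketch of that rule (the level names and the function are just
> for illustration):
> 
>   const LEVELS = ["deny", "prompt-default-not-remember",
>                   "prompt-default-remember", "allow"];
> 
>   function effectiveAccess(storeGrant, manifestRequest) {
>     // The app ends up with the lower of what the store is willing to
>     // grant and what the manifest asks for.
>     return LEVELS[Math.min(LEVELS.indexOf(storeGrant),
>                            LEVELS.indexOf(manifestRequest))];
>   }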
> 
>> (My assumption is that if fitbit's Web App
>> website gets owned, then all their users' devices are owned too - is that
>> correct? Maybe I am missing something, or maybe we can do something
>> technically to mitigate this risk...)
> 
> This is correct. I don't think it's possible to create a permission
> model where we can tell an app apart from hacked code running inside
> the app. If we could we would simply shut down the app any time we
> detected that it was hacked.
> 
>> As Lucas mentions, I also think we
>> should be informing the user somehow, but it sounds like this will be part
>> of the installation UI where the user can disable permissions an app has
>> requested.
> 
> I'm all for showing the user what privileges will be granted. However
> the model should be very clear that this is strictly informative UI.
> It should always be safe for users to simply press "yes". The store is
> responsible for making sure that unsafe privileges aren't granted.
> 
>>> However we also don't want all app developers which need access to
>>> sensitive APIs to have to come to mozilla (and any other browser
>>> vendor which implements OWA). We should be able to delegate the ability
>>> to hand out this trust to parties that we trust. So if someone else
>>> that we trust wants to open a web store, we could give them the
>>> ability to sell apps which are granted access to these especially
>>> sensitive APIs.
>>> 
>>> This basically creates a chain of trust from the user to the apps. The
>>> user trusts the web browser (or other OWA-runtime) developers. The
>>> browser developers trust the store owners. The store owners trust the
>>> app developers.
>> 
>> Will this create a financial disincentive for marketplaces to review
>> Apps properly? (more review = more cost = higher app prices?) Just a
>> thought.
> 
> Yes. Though again, I don't think stores should get into the business
> of reviewing apps' code. It's mozilla's responsibility to hand out
> trust to stores which will handle that trust responsibly.
> 
>> Also how will a user know which stores to trust?
> 
> It's not the users that make that decision. As described in my email,
> it's Mozilla's job to create the list of stores that we make Firefox
> trust. Similarly it'll be Google's job to select who they want to put
> in the list of trusted webapp stores and Apple's job to select who
> they put in Safari's etc.
> 
>> How do domains which install themselves as Web Apps fit into this model?  Is
>> there perhaps a default lower set of permissions that websites can install
>> themselves with - basically the same types as websites, except that with
>> apps permissions might be able to get "prompt to remember" instead of just
>> "prompt"?)
> 
> Such stores generally won't be trusted. So those stores will work
> just fine, however they won't be able to install apps which need SMS
> privileges.
> 
> It would be great if we could come up with a way where sites could
> sell their own apps on their own website, but have them provide a
> pointer to a trusted store which could vouch that they are a
> trustworthy app. Or, to give an example:
> 
> Say that SMSMessagingInc has developed their AwesomeSMS+ app. They
> go to mozilla and get mozilla to verify that they are to be trusted
> with the "ability to send SMS messages" capability after a simple user
> prompt. Mozilla puts the AwesomeSMS+ app in the Mozilla webstore. When
> Firefox sees a .install(...) call from the Mozilla webstore for the
> AwesomeSMS+ app which says that the app is to be granted "ability to
> send SMS messages" capability after a simple user prompt, it knows
> that the Mozilla store has the permission to grant that capability and
> so the app will work as expected.
> 
> Additionally, SMSMessagingInc wants to sell the app on their website.
> They do so and somehow provide a pointer to the Mozilla webstore. When
> Firefox gets the .install call from the SMSMessagingInc website, it
> goes to the Mozilla webstore and checks which privileges Mozilla says
> that the app should have. The Mozilla webstore says that the app
> should have "ability to send SMS messages" capability after a simple
> user prompt and so the app gets installed with that capability and
> works as it should.
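> 
> Sketched very roughly (the vouchedBy parameter and the vouch endpoint are
> hypothetical, not something the OWA spec defines today):
> 
>   // On SMSMessagingInc's site, the install call points at the store that
>   // vouches for the app.
>   navigator.mozApps.install("https://awesomesms.example/manifest.webapp",
>                             { vouchedBy: "https://marketplace.example.org" });
> 
>   // Firefox then asks the trusted store what it is willing to grant, e.g.
>   //   GET https://marketplace.example.org/vouch?app=awesomesms.example
>   //   -> { "sms": "prompt-default-remember" }
>   // and installs the app with that capability.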
> 
>>> Each API which requires some sort of elevated privileges will require
>>> one of these capabilities. There can be multiple APIs which
>>> semantically have the same security implications and thus might map to
>>> the same capabilities. However it should never need to be the case
>>> that an API requires two separate capabilities. This will keep the
>>> model simpler.
>> 
>> Agree but APIs then need to be split accordingly into trust groups, like
>> Camera API and Camera Control API.
> 
> Note that an "API" can be defined as a single function. So we can, and
> likely will, have functions on the same object which have different
> capability levels.
> 
> So for example we will likely have different capability requirements
> for DeviceStorage.add and DeviceStorage.delete
> 
>>> For each of these capabilities we'll basically have 4 levels of
>>> access: "deny", "prompt default to remember", "prompt default to not
>>> remember", "allow". For the two "prompt..." ones we'll pop up UI and
>>> show the user yes/no buttons and a "remember this decision" box. The
>>> box is checked for the "prompt default to remember" level.
>> 
>>> We then enhance the OWA format such that an app can list which
>>> capabilities it wants. We could possibly also allow listing which
>>> level of access it wants for each capability, but I don't think that
>>> is needed.
>> 
>> Allow is quite different to prompt? If this isn't in the manifest, where is
>> it set - would a store set these for an App?
> 
> As stated in the original email, the list of capabilities, and their
> access level, is handed to the .install() function when a store
> installs an app. So yes, this comes from the store.
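> 
> Roughly, the store-side call might then look like this (the parameter shape
> is illustrative only, not the final API):
> 
>   // Run from the store's own pages when the user installs the app.
>   navigator.mozApps.install("https://awesomesms.example/manifest.webapp", {
>     capabilities: {
>       "sms":      "prompt-default-remember",
>       "contacts": "prompt-default-not-remember"
>     }
>   });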
> 
>>> Another thing which came up during a recent security review is that
>>> we'll likely want to have some technical restrictions on which sites
>>> can be granted some of these capabilities. For example something as
>>> sensitive as SMS access might require that the site uses STS (strict
>>> transport security) and/or EV-certs. This also applies to the
>>> stores which we trust to hand out these capabilities.
>> 
>> Chris brought up the issue of regulatory controls for functions like the
>> dialer. (e.g. phones always need to be able to make emergency calls).
>> Hence the description of the privileged store concept above, where the store
>> hosts the code of the Web App. It would likely also be a completely offline
>> web app, so we might also be able to add technical controls like CSP (e.g.
>> dialer should be restricted from making connections to the internet?)
> 
> I hope I answered the parts about privileged stores above. I.e. I
> don't think we need them.
> 
> I also don't see why we wouldn't let dialer apps connect to the
> internet? Especially in the scenario where the dialer app is
> preinstalled and thus fully trusted.
> 
> / Jonas
