On Jun 27, 2012, at 12:09 AM, Adrienne Porter Felt wrote:

> As it stands, users can only find out what applications do with their data
> if they go read a long privacy policy.  Consequently, users likely assume
> that their data is only being used for the functionality that they see.
> For example, consider a game that asks for contacts in the context of
> finding friends who also use the same game.  Without reading the long
> privacy policy, the user has little way of knowing that this app will now
> *also* add those contacts' e-mail addresses to their mailing list.

No argument there!

> Now imagine that developers had to specify the rationale for their actions
> as part of the request. To continue my earlier example, the user would
> immediately know that the data would be used for both friend-finding and
> spam.  Most developers are incentivized to be honest: if they are caught
> lying, they'll face civil suits, removal from "official" markets, bad
> press, and a decline in popularity.
> 
> The odds of encountering outright malware are fairly small, but users
> routinely install applications that want to stretch the bounds of data
> usage.  I'm personally in favor of designing for the much more common case.

I'm more concerned with overtly malicious apps.  The sanctions you mention 
above don't seem to have significantly dissuaded overtly bad actors from 
distributing malicious apps on Android devices.  If they get a few thousand 
victims before they get pulled, they are still happy.  Having apparently 
trustworthy UI that says something like "this app would like to have your 
location/picture for the purposes of verifying your Bank of Whatever account 
information" seems like a serious issue to me.  We have always treated security 
bugs in our chrome UI that let a 3rd party confuse or deceive the user as 
significantly worse than, and very different from, simply the ability to 
display deceptive content.

Keep in mind that web-installed (i.e. untrusted) apps don't have to be 
distributed by any app store, so blocking them is tricky.  The ability to 
blacklist, plus the review process, are the two reasons I'm pretty happy to 
display "intended usage" for trusted apps.
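To make that concrete, here's a rough sketch of what I mean by displaying 
"intended usage" (the API and names below are made up for illustration, not 
anything we've specified):

```typescript
// Hypothetical sketch (names are made up, not a real platform API):
// a permission prompt that quotes the developer-supplied rationale and
// attributes it to the developer, not to the browser/OS.

interface PermissionRequest {
  permission: string;   // e.g. "contacts"
  rationale: string;    // developer's stated purpose, shown verbatim
}

// Build the prompt text, putting the rationale in quotes so it reads as
// the developer's claim rather than the platform's endorsement.
function buildPrompt(appName: string, req: PermissionRequest): string {
  return `${appName} wants to access your ${req.permission}. ` +
         `The developer says: "${req.rationale}"`;
}

console.log(buildPrompt("FriendGame", {
  permission: "contacts",
  rationale: "to find friends who also play this game",
}));
```

The point is purely presentational: the rationale is rendered as a quoted 
claim, so a review process (for trusted apps) still carries the burden of 
checking it.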
  Lucas.

> Here are some examples from the WWDC iOS 6 demo: http://twitpic.com/9yo9n4.
> 
> 
> 
> On Tue, Jun 26, 2012 at 6:45 PM, Lucas Adamski <[email protected]> wrote:
> 
>> On May 24, 2012, at 7:56 PM, Adrienne Porter Felt wrote:
>>>> Malware is going to use other forms of social engineering anyway.
>>> Non-malware won't lie because of the fear of ramifications.  Why not
>>> include it for untrusted as well?  You could design the UI with big
>>> quotes around it or something to make it clear that it is something
>>> the developer says, not something the browser/OS says.
>> 
>> Sure, but I'm more comfortable if users get phished the old-fashioned way;
>> less so if we enable new and improved ways of doing so. :)
>> 
>> I'm not sure if your example would be accurately interpreted by most
>> users.  If the prompt said something like "This developer claims they want
>> to access your <insert API here> for the supposed purposes of <insert
>> rationale here>, but we have no idea what they'll actually do with it",
>> would it still be worth having?
>>  Lucas.
>> 
>> 
> _______________________________________________
> dev-webapps mailing list
> [email protected]
> https://lists.mozilla.org/listinfo/dev-webapps
