On 27/03/12 07:45 AM, Lucas Adamski wrote:
[snip]
> For example, this type of app can dial a phone number directly without any user involvement or knowledge.

OK.  So this "badness" leads to another point.  If (by whatever logic) we are led to the point where a carrier / manufacturer has "granted" some permissions that are considered to be highly interesting ("dangerous"?), then we need to look at the fuller meaning of that.
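
To make "dangerous" concrete: under the WebTelephony proposal, a granted app places a call in one line of JavaScript - no dialog, no user gesture.  A minimal sketch, assuming the still-in-flux mozTelephony API and a manifest permission named "telephony" (the exact names and manifest layout are my assumption, not settled spec):

    // In the app's manifest (hypothetical layout; the format is still being specified):
    //   "permissions": { "telephony": {} }

    // In the app's code, once the permission has been granted:
    var telephony = navigator.mozTelephony;     // privileged WebTelephony entry point
    var call = telephony.dial("+1-555-0199");   // fictional number; dials silently

If the app is really malicious, that one line dials a premium-rate number at 3am, and nobody is the wiser until the bill arrives.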

Dangerous means they can go wrong.  If they never go wrong there isn't an issue and we don't care.

In contrast, when they do go wrong, that is the moment when we care.  We are forced to take care; we can no longer pretend.  So let's look at that, as if it is important.

Say the AngryBudgies app did go wrong and turned out to be HungryAlligators in disguise.  It does damage (doesn't matter what).

What now?  This is where the rubber of a security system meets the road of reality.  What happens when it all falls apart?

Does Alice re-install?  Buy a new computer?  A new house?  Does she damn Carol the Carrier on some eBay-like reputation outlet?  Does she sue for damages?  Does the developer's insurance fund pay out?  Does Carol's private vigilante police force hunt down the Alligators and reinstall with prejudice?  Does Bob the WebAPI builder form a standards committee to deal with this, and in the process shut out any user complaints?

Without an answer to this, we're talking tech only.  Worthless.  We need to understand the full business cycle we're trying to protect, because only in that context do we understand the attacks.

Maybe the answer is nothing?  In which case we do "best efforts, all love, no responsibility", which is the case with most Internet security models.  On the other hand, do we go the extra mile?  Which one?


Its "our" responsibility to build a model with the right incentives and 
mitigations in place to maximize the number of
great apps developers can build while minimizing the risk to our users.  We are 
responsible for the overall health and
security of this ecosystem.

In which case, "we" are responsible for answering the question of what happens when the app goes troppo, or rogue, or steals all the money.  Because that is directly a question of the right incentives and mitigations :)


> It's the developer's responsibility to build great apps that don't put the user at risk, and their fault when they don't.


And when they don't, what are "we" going to do about it?

> The current web app model makes this extremely hard to do well, and very easy to mess up.  We can't expect web developers to be security experts; it's our responsibility to build a model that nudges them (sometimes forcefully) towards taking the necessary precautions.

> It's the app store's responsibility to provide apps that don't put users at undue security and privacy risk, and to remove apps when they have done so.  This is true both for malicious apps, and for apps with serious security issues.

Let's see some skeleton under that fine facade of marketing words - what happens when it goes wrong?

> It's the user's responsibility to make informed decisions when choosing which apps to trust.  It's everyone else's responsibility to ensure they are presented with accurate information and relevant decisions, so they can do so effectively.


Ah, sorry, red flag!  This is old 1990s security writing that is now reversed, upturned, deprecated, dead.

Firstly, reliance on users to make informed choices is a known bad.  Epic fail.  Indeed, it is so well known that users make no informed choices, cannot be relied upon to make them, and can't reasonably do so anyway, that the claim is one of the biggest deceptions of the business.  We know they can't, don't and won't - to say otherwise risks deception.

Users instead rely totally on their brands and their friends to protect them, end of story.  This is why Apple is so successful - it worked out that it had to take complete and over-arching responsibility.  And it's why Microsoft lost the 00s: it didn't.  Remember Bill Gates's memo?  He said we have to take responsibility for security - and his organisation failed to do that.  End of the Microsoft decade.

This is why the Firefox security GUI is not in the chrome - because the vast majority of users can't cope with any security info, let alone the complicated, wrong, confused and deceptive information that is thrown at them if they dig deep and ask what's going on.  BTW, that's practically Mozilla policy: don't show the user any info.

(BTW, in case you are wondering about this, there is about half a decade of security UI research that makes that case.  Over in the CA world, they have thrown in the towel on this debate - vendors are the relying parties.  Vendors are responsible for looking after the reliance issues to do with CAs.  That's because they do, already, and can - while users don't, never did, and cannot.  Very bitter debate, but over now.)


> Nobody is responsible for delivering a panacea, however.


It's tricky!  And not only that: by the time you get to where you're going with those 1990s design assumptions like user reliance, here's what you'll be up against:

http://www.ibtimes.com/articles/316996/20120320/apple-iwallet-iphone-5-feature-mobile-payments.htm

Just one quote:

"However, unlike [their competition], Apple promises its service to be highly secure and reliable."

And they will achieve that.



iang



PS: in the above quote, they are taking aim at ... what is called Internet transactions ... which happens to be credit cards ... over SSL. See where this is going?
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
