On Mar 1, 2013, at 2:54 , Robin Wilton <[email protected]> wrote:

> Two areas that I'm aware of:
> 
> 2 - The concept of "harm" is also a key one for privacy risk models, but has 
> distinct shortcomings. For instance, it can't deal well with data breaches 
> where you suspect data has been lost, but you can't tell whether anything bad 
> has happened as a result. It has also been a rather crude metric up to now, 
> with the US, for instance, tending to rule that "harm" must be financial in 
> order to qualify for redress. However, the "harm" model is gradually becoming 
> more nuanced, for instance by classification into 'physical harm, financial 
> harm and reputational harm'. As far as I'm aware, though, that kind of model 
> has yet to be turned into a clear methodology…


Yes, I think the 'harm' framing is problematic in (at least) two respects.

The first is the 'degree' problem. No-one minds appearing in the occasional 
photo taken by others in public places.  So you are OK with having your photo 
taken, then?  I can follow you around and take photos or continuous video of 
you, or face-recognize you via surveillance cameras?  No-one minds if the person 
behind them in line at a cash desk sees what they buy, or if the storekeeper 
remembers their favorite brand.  So it's OK to build a dossier of your complete 
purchase history, then?

The second is the 'prospective harm' question. We all have the instinct that 
our private data is something that could be used to our detriment in the 
future. There may be no immediate harm in an ad system keeping track of our 
browsing visits, purchases, contributions, and so on.  But if that database 
'falls into the wrong hands', there may be harm indeed.

David Singer
Multimedia and Software Standards, Apple Inc.

_______________________________________________
ietf-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/ietf-privacy