Hi Robin,

On 03/01/2013 10:54 AM, Robin Wilton wrote:
> Two areas that I'm aware of:
> 
> 1 - There's a growing and reasonably mature body of work (mostly in the 
> public sector) on Privacy Impact Analysis. Most of that is based on 
> classifying data as "Personal", "Sensitive Personal" or "Neither of the 
> above" and then assessing the risk and impact of its inappropriate disclosure.
> 
> 2 - The concept of "harm" is also a key one for privacy risk models, but has 
> distinct shortcomings. For instance, it can't deal well with data breaches 
> where you suspect data has been lost, but you can't tell whether anything bad 
> has happened as a result. It has also been a rather crude metric up to now, 
> with the US, for instance, tending to rule that "harm" must be financial in 
> order to qualify for redress. However, the "harm" model is gradually becoming 
> more nuanced, for instance by classification into 'physical harm, financial 
> harm and reputational harm'. As far as I'm aware, though, that kind of model 
> has yet to be turned into a clear methodology...

Got any references? It'd be good to take a peek.

I guess where I'm unsure is that the risk-analysis approach assumes
we know about the vulnerabilities that affect the protocol (either
specific or possible); we then do a coarse-grained assessment of the
threat (in terms of impact and probability of occurrence), iteratively
add countermeasures to the design, then rinse and repeat until all the
really bad stuff we know about is countered or we've run out of cycles.
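Just to pin down what I mean, that loop could be caricatured like this
(a toy sketch in Python; all the names, the impact/likelihood scores and
the "10x reduction" per countermeasure are made up for illustration, and
the scoring is of course the hard, hand-wavy part):

```python
# Caricature of the classic security risk-analysis loop: score each
# known threat as impact x likelihood, counter the worst ones, and
# repeat until residual risk is tolerable or we run out of cycles.
# Purely illustrative; not a real API or methodology.

def risk_analysis(threats, countermeasures, budget, tolerance):
    """threats: list of (name, impact, likelihood) tuples on 0..1 scales."""
    applied = []
    for _ in range(budget):                       # "run out of cycles"
        # coarse-grained assessment: risk = impact x probability
        scored = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)
        name, impact, likelihood = scored[0]
        if impact * likelihood <= tolerance:      # the really bad stuff is countered
            break
        mitigation = countermeasures.get(name)
        if mitigation is None:
            break                                 # known threat, no known fix
        applied.append(mitigation)
        # rinse and repeat with the mitigated threat list
        threats = [t for t in threats if t[0] != name]
        threats.append((name, impact, likelihood * 0.1))  # assumed 10x reduction
    return applied, threats

applied, residual = risk_analysis(
    threats=[("eavesdrop", 0.9, 0.8), ("replay", 0.5, 0.2)],
    countermeasures={"eavesdrop": "use TLS", "replay": "add nonces"},
    budget=5, tolerance=0.1)
print(applied)   # only the worst threat gets countered before tolerance is hit
```

The point being: everything in that loop operates on a fixed, enumerated
list of threats known up front, which is exactly what privacy problems
don't give us.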

That doesn't capture, e.g., that an identifier I keep using for an
extended duration can become more and more personally identifying over
time, nor that the payloads associated with that identifier might get
more (or less) sensitive over time, from a user's perspective. And
probably a bunch of other privacy-related things get missed as well,
for example the kinds of temporal correlation that were problematic
in the Netflix case [1], or the potential privacy concerns with
the data that a protocol accumulates over time on hosts that take
part in that protocol. (Now that I mention that, maybe there's an
approach there: do a risk analysis not on the protocol itself but
on the set of data that hosts playing or watching the new
protocol can possibly accumulate over time, also considering
the other protocols those hosts or watchers are likely to see or
run?)
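To make the "more identifying over time" point concrete, here's a toy
model (again purely illustrative, with a made-up population and
attributes, and definitely not a proposed methodology): each observation
linked to a long-lived identifier narrows the set of users consistent
with everything seen so far, so the anonymity set can only shrink as the
protocol keeps running.

```python
# Toy illustration: a persistent identifier gets linked to more and
# more attribute observations over time. The anonymity set (users
# consistent with all observations so far) shrinks monotonically,
# so the identifier becomes more personally identifying with use.
# The population and attributes below are invented for the example.

population = {
    "alice": {"tz": "UTC+1", "lang": "en", "os": "linux"},
    "bob":   {"tz": "UTC+1", "lang": "en", "os": "macos"},
    "carol": {"tz": "UTC+1", "lang": "fr", "os": "linux"},
    "dave":  {"tz": "UTC-5", "lang": "en", "os": "linux"},
}

def anonymity_set(observations, population):
    """Users whose attributes match every observation linked to the identifier."""
    return {user for user, attrs in population.items()
            if all(attrs.get(k) == v for k, v in observations.items())}

# The same identifier seen three times, each sighting leaking one attribute.
seen, sizes = {}, []
for key, value in [("tz", "UTC+1"), ("lang", "en"), ("os", "linux")]:
    seen[key] = value
    sizes.append(len(anonymity_set(seen, population)))

print(sizes)  # the set shrinks with each linked observation
```

No single observation is "sensitive" under a per-message risk analysis,
which is exactly why the per-protocol-run framing misses it.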

Basically, I'm puzzled as to how to do a good job here, but I
suspect I'm not alone;-)

BTW, this doesn't have much impact on the IAB draft; I don't believe
we have anything much better to say at this point. It might be worth
mentioning the concern about methodology in that draft, though, and I
don't think that's currently done (I didn't re-read it all, though.)

S.

[1] http://arxiv.org/abs/cs/0610105


> 
> HTH,
> 
> Robin
> 
> 
> 
> Robin Wilton
> 
> Technical Outreach Director - Identity and Privacy
> 
> On 1 Mar 2013, at 09:37, Stephen Farrell <[email protected]> wrote:
> 
>>
>>
>> On 03/01/2013 05:48 AM, SM wrote:
>>>
>>>
>>> At 18:25 28-02-2013, David Singer wrote:
>>>> in 'privacy considerations' I think we need to explore the privacy
>>>> consequences of using protocols 'appropriately'.  And there are, and
>>>> it's no longer OK not to worry about them as we design protocols.
>>>
>>> Yes.
>>
>> +1
>>
>> Personally, I have a not-worked-out theory that the kind of
>> risk analysis we use in security doesn't apply much to
>> considering privacy and that some other methodology would be
>> better, or is needed.
>>
>> Anyone know of generic worked-out methodologies for analysing
>> privacy issues?
>>
>> S.
>> _______________________________________________
>> ietf-privacy mailing list
>> [email protected]
>> https://www.ietf.org/mailman/listinfo/ietf-privacy
> 
> 