Perhaps we should step away from whether something is easy or difficult to 
implement, or whether one algorithm may be more (or less) efficient than another.

I think the point of this material is to ENCOURAGE random assignment rather 
than sequential assignment in order to improve privacy, so let's keep it at 
that. Let implementers worry about how efficient an algorithm is.

- Bernie

-----Original Message-----
From: Robert Sparks [mailto:rjspa...@nostrum.com] 
Sent: Monday, February 15, 2016 4:15 PM
To: Tomek Mrugalski <tomasz.mrugal...@gmail.com>; General Area Review Team 
<gen-art@ietf.org>; i...@ietf.org; dh...@ietf.org; 
draft-ietf-dhc-dhcpv6-privacy....@ietf.org
Subject: Re: Gen-ART LC review: draft-ietf-dhc-dhcpv6-privacy-03

Hi Tomek -

Thanks for these edits. My points are all addressed, though I wish to push a 
little more on changing the focus of the following in the document:

On 2/15/16 2:45 PM, Tomek Mrugalski wrote:

<snip/>
>
>> In section 4.3, the paragraph on Random allocation comments on the 
>> poor performance of a specific simplistic implementation of random 
>> selection. More efficient algorithms exist. But the discussion is 
>> mostly irrelevant to the document. Please simplify this paragraph to 
>> focus on the benefits of random allocation.
> I somewhat disagree. First, more efficient algorithms are known, but 
> they are not always used. This document is an analysis, so we tried to 
> describe what actually happens in real implementations. In an ideal 
> world where privacy is the top priority, every implementation would 
> use truly random allocation backed by hardware entropy generators. 
> Sadly, the reality of the DHCP server market is that performance 
> matters a lot, and lowered performance is the price of better privacy. 
> This fact directly affects DHCP server implementors, who are the 
> target audience for this document. That is why, in my opinion, the 
> discussion of the performance penalty is there and should stay.
Then I suggest this variation:

"In deployed systems, address allocation strategies such as choosing the next 
sequentially available address are chosen over strategies such as pseudo-random 
allocation across the available address pool because the latter are more 
difficult to implement efficiently."

To be very clear (and I do _not_ want to put this discussion in the document), 
an implementation could keep a structure (probably a linked list) of blocks of 
unallocated addresses, select an address pseudo-randomly from that structure 
very efficiently, and then modify the structure so that the next selection is 
just as efficient (modifying it similarly when addresses are returned to the 
pool). This approach would not suffer the performance degradation of the 
simple strategy you currently discuss.
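A rough sketch of what I mean, for this thread only (the class and method 
names below are invented, and a plain Python list stands in for the linked 
list of free blocks):

```python
import random

class RandomAddressPool:
    """Illustrative sketch: free addresses kept as contiguous
    (start, length) blocks; selection is uniform over all free
    addresses, and cost is proportional to the number of blocks,
    not the pool size."""

    def __init__(self, start, size):
        self.blocks = [(start, size)]  # contiguous free ranges
        self.free = size               # total free addresses

    def allocate(self):
        """Pick a free address uniformly at random."""
        if self.free == 0:
            raise RuntimeError("pool exhausted")
        n = random.randrange(self.free)  # index among all free addresses
        for i, (start, length) in enumerate(self.blocks):
            if n < length:
                addr = start + n
                # Split the block around the chosen address so the
                # structure stays a list of contiguous free ranges.
                replacement = []
                if n > 0:
                    replacement.append((start, n))
                if n + 1 < length:
                    replacement.append((addr + 1, length - n - 1))
                self.blocks[i:i + 1] = replacement
                self.free -= 1
                return addr
            n -= length
        raise AssertionError("unreachable")

    def release(self, addr):
        """Return an address to the pool (adjacent blocks are not
        merged here, to keep the sketch short)."""
        self.blocks.append((addr, 1))
        self.free += 1
```

A production allocator would of course also merge adjacent blocks on release 
and exclude reserved or leased ranges, but even this sketch avoids the 
retry-until-free behavior of the simple strategy.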
While they're entertaining, I think discussing _either_ way of approaching such 
a random selection algorithm distracts the reader from the point of 
understanding the privacy implications.
>
>

<snip/>


RjS
_______________________________________________
Gen-art mailing list
Gen-art@ietf.org
https://www.ietf.org/mailman/listinfo/gen-art
