Suresh Krishnan wrote:
Brian Dickson wrote:
james woodyatt wrote:
e.g. RFC 3041, SEND, et cetera.
For example, RFC 3041 is *very* easy to fix.

Where it uses "upper 64 bits" and "lower 64 bits", change those to the upper length(ii) bits and the lower (128 - length(ii)) bits. If the II happens to be 48 bits, the upper 48 bits of the MD5 output become the new temporary II, and the lower 80 bits become the stored value. The "fixed" 48-bit II plus the stored 80 bits then become the new 128 bits used as input to MD5 on the next cycle.
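For concreteness, a minimal Python sketch of one cycle of that generalized scheme (the function name and int-based packing are mine; RFC 3041 proper also clears the universal/local bit of the resulting II, which is omitted here):

import hashlib

def next_temporary_ii(fixed_ii, stored, ii_bits=48):
    # One cycle of the RFC 3041 temporary-address scheme, generalized
    # from the fixed 64/64 split to a variable II length (ii_bits).
    # fixed_ii: the "fixed" II, ii_bits wide (e.g. 48 bits as above)
    # stored:   the value kept from the previous cycle, 128 - ii_bits wide
    seed = (fixed_ii << (128 - ii_bits)) | stored        # 128-bit MD5 input
    d = int.from_bytes(hashlib.md5(seed.to_bytes(16, "big")).digest(), "big")
    temp_ii = d >> (128 - ii_bits)                       # upper length(ii) bits
    new_stored = d & ((1 << (128 - ii_bits)) - 1)        # lower 128-length(ii) bits
    return temp_ii, new_stored

Each call yields the next temporary II plus the state to feed into the following cycle, mirroring the description above.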

One observation related to 3041: having what is effectively VLSM makes it orders of magnitude more difficult to identify the location of the network/host split in mixed traffic sniffed at random locations on the Internet.

Fixed /64 size makes finding the split point (network vs II) tautologically trivial - it's always after bit 64.

If sniffing/tracking is targeted, it becomes considerably easier to identify sources, e.g. if the initial value of the EUI-64 is known: the IIs form a deterministic sequence, and once one II value is found, subsequent sniffing is equally trivial.

Your characterization of the privacy address mechanism is incorrect. I assert that given an II you CANNOT determine the next II. That is where the privacy part comes in.

Oops, you're right. I stand corrected (on the deterministic point).

However, there is another issue, which might be described as "implicit steganography".

If the overall addressing scheme falls into the general class of "variable-length subnet mask" (VLSM), then arbitrary addresses seen at random cannot be deterministically grouped into sets on the same network or subnet without intimate knowledge of the subnetting used on the more-specific prefixes of the globally visible netblock.

On the other hand, if all subnets are /64, it is trivial to group them.
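To illustrate (a hypothetical snippet using Python's standard ipaddress module):

import ipaddress
from collections import defaultdict

def group_by_slash64(addrs):
    # Cluster observed IPv6 address strings by their /64 prefix; this is
    # trivial precisely because the network/host split is always at bit 64.
    groups = defaultdict(list)
    for a in addrs:
        groups[ipaddress.ip_network(a + "/64", strict=False)].append(a)
    return groups

Under VLSM there is no single prefix length to plug in, so this kind of blind grouping fails without knowledge of the actual subnetting.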

So, disambiguating traffic where the II is randomized is, on a statistical basis, much harder in the VLSM case. The randomness generates "fuzz" on the boundary between prefix and host, since the bits will vary over the full range of the II - but only if the II randomization happens over all the bits available for hosts on the prefix.

This is especially true on sparsely populated address blocks. In a more closely grouped set of subnets with higher per-net density, the collective chaos of the address pool is likely to make the boundaries of the subnets difficult to ascertain.

And knowing the subnetting scheme is a means to correlate the appearance of new addresses with the disappearance of old ones, and thus implicitly track the sequence of IIs used by individual hosts.

Which defeats the intended purpose of the RFC.

So, VLSM is something that strengthens the technique, notwithstanding the fact that it would require slight modification to the underlying algorithms.

Hope this clarifies things.
Yep, sorry for mischaracterizing things.
Brian
Cheers
Suresh



