I was attempting to get to the bottom of what is, to me, a mystery:

Why does IPv6 use EUI-64 for Interface Identifiers, instead of MAC-48?

The previous version of the RFCs used MAC-48. There seems to have been,
at some point, some discussion regarding the difference between EUI-48 and
MAC-48, as they relate to their use in EUI-64.

And the mailing list archives only go back to some time in 2003, while the
RFCs date from 1998 (current) and 1996 (replaced by the 1998 versions).

As far as I can tell, synthesis of IIDs from a MAC-48 address relies on
inserting a fixed 16-bit value (0xFFFE) into the middle of the address.
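
For concreteness, here is a minimal sketch of that construction as I read
it from RFC 2464 (Python; the function name and the example MAC are mine,
purely illustrative):

    # Modified EUI-64 IID from a MAC-48, as I understand RFC 2464:
    # insert the fixed 16-bit value 0xFFFE between the OUI and the
    # NIC-specific octets, then invert the universal/local bit.
    def mac48_to_modified_eui64(mac: str) -> str:
        octets = [int(b, 16) for b in mac.split(":")]
        assert len(octets) == 6
        octets[0] ^= 0x02                            # flip the u/l bit
        iid = octets[:3] + [0xFF, 0xFE] + octets[3:] # fixed 0xFFFE filler
        return ":".join(f"{iid[i]:02x}{iid[i+1]:02x}"
                        for i in range(0, 8, 2))

    # e.g. 00:90:27:17:fc:0f -> 0290:27ff:fe17:fc0f
    print(mac48_to_modified_eui64("00:90:27:17:fc:0f"))

Note that the 0xFFFE in the middle is a constant: it carries no
information, which is exactly the 16 bits I'm asking about.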

Did we collectively just throw away 16 bits of usable space from our 128,
for no good reason?

That's what it looks like to me...

If someone can explain the rationale, that'd be great.

(I suppose it's a bit late in the game to go back to MAC-48 as II?)

BTW - this isn't just a pedantic question: it bears on the minimum usable
prefix size for autoconfiguration (a 64-bit IID forces /64 subnets, where
a 48-bit IID would allow /80), which in turn relates to the address
allocation policies of the RIRs.

Thanks in advance, to anyone who can shed light on this...

Brian Dickson

P.S. 48 bits of MAC (2^48, roughly 2.8 * 10^14 addresses) is more than
enough, I believe, for a few more centuries of network growth and
device/node assignments... so jumping the gun this soon seems rather odd
to me.

