> On Dec 20, 2015, at 08:57 , Mike Hammett <na...@ics-il.net> wrote:
> 
> There's nothing that can really be done about it now and I certainly wasn't 
> able to participate when these things were decided. 
> 
> However, keeping back 64 bits for the host was a stupid move from the 
> beginning. We're reserving 64 bits for what's currently a 48 bit number. You 
can use every single MAC address whereas IPs are lost to subnetting and other 
> such things. I could have seen maybe holding back 56 bits for the host if for 
> some reason we need to replace the current system of MAC addresses at some 
> point before IPv6 is replaced. 

That’s not what happened. What happened was that we added 64 bits to the 
address space (the original thought was a 64 bit address space) in order to 
allow for simplified host autoconf based on EUI-64 addresses. It did seem like 
a good idea at the time.

At the time, IEEE had realized that they were running out of EUI-48 addresses 
and had decided that the next generation would be EUI-64 and in fact, if you 
look at newer interfaces (e.g. firewire) you will see that they do, in fact, 
ship with EUI-64 addresses baked in. Given that IEEE had already decided on 
EUI-64 as the way forward for “MAC” addresses, it seems to me that 64 bits 
makes more sense than 56.

> There may be address space to support it, but is there nimble boundary space 
> for it?

I think you mean nibble-boundary space for it and the answer is yes.
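For the curious: nibble boundaries matter because each hex digit of an IPv6 address covers 4 bits, so a prefix whose length is a multiple of 4 lines up exactly with labels in ip6.arpa reverse DNS. A quick sketch (the prefix here is just a documentation example):

```python
import ipaddress

# A /48 falls on a nibble (4-bit) boundary: 48 / 4 = 12 hex digits,
# so its reverse-DNS zone is exactly 12 labels under ip6.arpa.
net = ipaddress.ip_network("2001:db8:1234::/48")
assert net.prefixlen % 4 == 0                 # nibble-aligned

# Reverse zone name: the first 12 nibbles of the address, reversed.
nibbles = net.network_address.exploded.replace(":", "")[:net.prefixlen // 4]
zone = ".".join(reversed(nibbles)) + ".ip6.arpa"
print(zone)  # 4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa
```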

> The idea that there's a possible need for more than 4 bits worth of subnets 
> in a home is simply ludicrous and we have people advocating 16 bits worth of 
> subnets. How does that compare to the entire IPv4 Internet? 

I have more than 16 subnets in my house, so I can cite at least one house with 
need for more than 4 bits just in a hand-coded network.

Considering the future possibilities for automated topological hierarchies 
using DHCP-PD with dynamic joining and pruning routers, I think 8 bits is 
simply not enough to allow for the kind of flexibility we’d like to give to 
developers, so 16 bits seems like a reasonable compromise.
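To make that concrete, here is a minimal sketch (prefix and tier sizes are hypothetical, not from the thread) of how a delegated /48 could be carved up hierarchically, one nibble per tier of DHCP-PD routers:

```python
import ipaddress

# Hypothetical hierarchy: the CPE holds a /48 and delegates one-nibble-
# smaller prefixes at each tier of downstream routers.
site = ipaddress.ip_network("2001:db8:1234::/48")

tier1 = list(site.subnets(new_prefix=52))      # 16 /52s for downstream routers
tier2 = list(tier1[0].subnets(new_prefix=56))  # each /52 yields 16 /56s
leaf  = list(tier2[0].subnets(new_prefix=64))  # each /56 yields 256 /64 LANs

print(len(tier1), len(tier2), len(leaf))  # 16 16 256
```

Even this toy hierarchy burns 16 bits of subnet space without trying hard, which is the point about 8 bits being too tight.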

> There is little that can be done about much of this now, but at least we can 
> label some of these past decisions as ridiculous and hopefully a lesson for 
> next time. 


TL;DR version: Below is a detailed explanation of why giving a /48 to every 
residence is harmless and just makes sense.

If you find that adequate, stop here. If you are still skeptical, read on…

Except that the decisions weren’t ridiculous. They not only made sense then; 
for the most part, if you consider a bigger picture and a longer-term view than 
just what we are experiencing today, they make even more sense.

First, unlike the 100 gallon or 10,000 gallon fuel tank analogy, extra bits 
added to the address space come at a near zero cost, so adding them if there’s 
any potential use is what I would classify as a no-brainer. At the time IPv6 
was developed, 64-bit processors were beginning to be deployed and there was no 
expectation that we’d see 128-bit processors. As such, 128-bit addresses were 
cheap and easily implementable in anticipated hardware and feasible in existing 
hardware, so 128 bits made a lot of sense from that perspective.

From the 64-bits we were considering, adding another 64 bits so that we could 
do EUI-based addressing also made a lot of sense. 48-bits didn’t make much 
sense because we already knew that IEEE was looking at moving from 48-bits to 
64-bits for EUI addresses. A very simple mechanism for translating an EUI-48 
into a valid, unique EUI-64 was already documented: insert ff:fe between the 
OUI portion and the ESI portion (IPv6’s modified EUI-64 format, per RFC 4291, 
additionally inverts the universal/local bit of the first octet). As such, a 
locally generated 02:a9:3e:8c:7f:1d address becomes 00:a9:3e:ff:fe:8c:7f:1d, 
while a registered address ac:87:a3:23:45:67 becomes ae:87:a3:ff:fe:23:45:67.
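A minimal sketch of that translation in Python (modified EUI-64 as used for IPv6 interface identifiers; the function name is mine):

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Convert an EUI-48 (MAC) into a modified EUI-64 interface ID
    per RFC 4291: insert ff:fe between the OUI and the ESI, and
    invert the universal/local bit of the first octet."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                        # invert the u/l bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{b:02x}" for b in eui64)

print(mac_to_modified_eui64("ac:87:a3:23:45:67"))  # ae:87:a3:ff:fe:23:45:67
print(mac_to_modified_eui64("02:a9:3e:8c:7f:1d"))  # 00:a9:3e:ff:fe:8c:7f:1d
```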

The justification for 16 bits of subnetting is a little more pie-in-the-sky, 
I’ll grant you, but given a 64-bit network numbering space, there’s really no 
disadvantage to giving out /48s and very little (or no) advantage to giving out 
smaller chunks to end-sites, regardless of their residential or commercial 
nature.

Let’s assume that ISPs come in essentially 3 flavors: MEGA (the Verizons, 
AT&Ts, Comcasts, etc. of the world) having more than 5 million customers, 
LARGE (having between 100,000 and 5 million customers), and SMALL (having 
fewer than 100,000 customers).

Let’s assume the worst possible splits and add 1 nibble to the minimum needed 
for each ISP and another nibble for overhead.

Further, let’s assume that 7 billion people on earth all live in individual 
households and that each of them runs their own small business bringing the 
total customer base worldwide to 14 billion.

If everyone subscribes to a MEGA and each MEGA serves 5 million customers, we 
need 2,800 MEGA ISPs. Each of those will need 5,000,000 /48s which would 
require a /24. Let’s give each of those an additional 8 bits for overhead and 
bad splits and say each of them gets a /16. That’s 2,800 out of
65,536 /16s and we’ve served every customer on the planet with a lot of extra 
overhead, using approximately 4% of the address space.

Now, let’s make another copy of earth and serve everyone on a LARGE ISP with 
only 100,000 customers each. This requires 140,000 LARGE ISPs, each of whom 
will need a /28 (100,000 /48s doesn’t fit in a /32, so we bump them up to /28). 
Adding in bad splits and overhead at a nibble each, we give each of them a /20. 
Taking 140,000 /20s out of the 1,048,576 total, of which we already used 
44,800 for the MEGA ISPs, leaves us with 863,776 /20s still available. We’ve 
now managed to burn 
approximately 18% of the total address space and we’ve served the entire world 
twice.

Finally, let us serve every customer in the world using a small ISP. Let’s 
assume that each small ISP only serves about 5,000 customers. For 5,000 
customers, we would need a /32. Backing that off two nibbles for bad splits and 
overhead, we give each one a /24.

This will require 2,800,000 /24s. (I realize lots of ISPs serve fewer than 
5,000 customers, but those ISPs also don’t serve a total of 14 billion end 
sites, so in terms of averages, I think this is not an unreasonable place to 
throw the dart.)

There are 16,777,216 /24s in total, but we’ve already used 2,956,800 for the 
MEGA and LARGE ISPs, bringing our total utilization to 5,756,800 /24s.

We have now built three complete copies of the internet with some really huge 
assumptions about number of households and businesses added in and we still 
have only used roughly 34% of the total address space, including nibble 
boundary round-ups and everything else.
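For anyone who wants to check the arithmetic, the three scenarios above reduce to a few lines (my own re-derivation of the figures in the text):

```python
# Re-derivation of the worked example above: 7 billion households plus
# 7 billion small businesses = 14 billion end sites, each getting a /48.
CUSTOMERS = 14_000_000_000

mega  = CUSTOMERS // 5_000_000   # 2,800 MEGA ISPs, a /16 each
large = CUSTOMERS // 100_000     # 140,000 LARGE ISPs, a /20 each
small = CUSTOMERS // 5_000       # 2,800,000 SMALL ISPs, a /24 each

# Normalize everything to /24s: a /16 holds 256 /24s, a /20 holds 16.
used_24s  = mega * 256 + large * 16 + small
total_24s = 2 ** 24              # number of /24s in the entire 128-bit space

print(used_24s)                        # 5756800
print(f"{used_24s / total_24s:.1%}")   # 34.3%
```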

I propose the following: Let’s give out /48s for now. If we manage to hit 
either of the following two conditions in less than 50 years, I will happily 
(assuming I am still alive when it happens) assist in efforts to shift to more 
restrictive allocations.

        Condition 1: If any RIR fully allocates more than 3 /12s worth of 
address space total
        Condition 2: If we somehow manage to completely allocate all of 2000::/3

I realize that Condition 2 is almost impossible without meeting condition 1 
much much earlier, but I put it there just in case.

If we reach a point where EITHER of those conditions becomes true, I will be 
happy to support more restrictive allocation policy. In the worst case, we have 
roughly 3/4 of the address space still unallocated when we switch to more 
restrictive policies. In the case of condition 1, we have a whole lot more: at 
most we’ve used roughly 15[1] of the 512 /12s in 2000::/3, or less than 0.4% 
of the total address space.
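The worst-case figure for Condition 1 is easy to compute (my own arithmetic: 3 fully allocated /12s at each of the 5 RIRs, measured against the full 128-bit space):

```python
used_12s = 5 * 3        # 3 /12s fully allocated at each of 5 RIRs
total_12s = 2 ** 12     # 4,096 /12s in the entire address space
share = used_12s / total_12s
print(f"{share:.2%}")   # 0.37%
```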

My bet is that we can completely roll out IPv6 to everyone with every end-site 
getting a /48 and still not burn more than 0.4% of the total address space.

If anyone can prove me wrong, then I’ll help to push for more restrictive 
policies. Until then, let’s just give out /48s and stop hand wringing about how 
wasteful it is. Addresses that sit in the free pool beyond the end of the 
useful life of a protocol are also wasted.


Owen



[1] This figure could go up if we add more RIRs. However, even if we double it, 
we move from roughly 0.4% to roughly 0.7% utilization risk with 10 RIRs.


> 
> 
> 
> 
> ----- 
> Mike Hammett 
> Intelligent Computing Solutions 
> http://www.ics-il.com 
> 
> ----- Original Message -----
> 
> From: "Daniel Corbe" <co...@corbe.net> 
> To: "Mike Hammett" <na...@ics-il.net> 
> Cc: "Mark Andrews" <ma...@isc.org>, "North American Network Operators' Group" 
> <nanog@nanog.org> 
> Sent: Saturday, December 19, 2015 10:55:03 AM 
> Subject: Re: Nat 
> 
> Hi. 
> 
>> On Dec 19, 2015, at 11:41 AM, Mike Hammett <na...@ics-il.net> wrote: 
>> 
>> "A single /64 has never been enough and it is time to grind that 
>> myth into the ground. ISP's that say a single /64 is enough are 
>> clueless." 
>> 
>> 
>> 
>> LLLLOOOOOOLLLLL 
>> 
>> 
>> A 100 gallon fuel tank is fine for most forms of transportation most people 
>> think of. For some reason we built IPv6 like a fighter jet requiring 
>> everyone have 10,000 gallon fuel tanks... for what purpose remains to be 
>> seen, if ever. 
>> 
>> 
> 
> You’re being deliberately flippant. 
> 
> There are technical reasons why a single /64 is not enough for an end user. A 
> lot of it has to do with the way auto configuration works. The lower 64 bits 
> of the IP address are essentially host entropy. EUI-64 (for example) is a 64 
> bit number derived from the mac address of the NIC. 
> 
> The requirement for the host portion of the address to be 64 bits long isn’t 
> likely to change. Which means a /64 is the smallest possible prefix that can 
> be assigned to an end user and it limits said end user to a single subnet. 
> 
> Handing out a /56 or a /48 allows the customer premise equipment to have 
> multiple networks behind it. It’s a good practice and there’s certainly 
> enough address space available to support it. 
> 
> 
> 
