>Has anyone done some analysis of what this might look like?  Especially
with growth etc.

Sure, probably lots of people lots of times.
Off the top of my head, using some current/common allocation sizes:
        Current "Global Unicast" space --> 2000::/3
        An "average" RIR --> /12
        An "average" ISP --> /32
        An "average" enterprise --> /48
        An "average" home user --> /56

So, "the current IPv6 world" (2000::/3) can support 512 standard RIR-sized
allocations.
Each standard RIR can support ~1M standard ISPs.
Each standard ISP can support 64K enterprises or 16M standard home users, or
some combination thereof.
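
A quick sanity check on those multipliers, as a Python sketch (the prefix
lengths are just the common sizes from the list above, nothing more):

    # How many child prefixes of a given length fit in a parent prefix?
    # Just 2 to the power of the difference in prefix lengths.
    def subnets(parent_len: int, child_len: int) -> int:
        return 2 ** (child_len - parent_len)

    print(subnets(3, 12))   # RIRs in 2000::/3    -> 512
    print(subnets(12, 32))  # ISPs per RIR        -> 1048576 (~1M)
    print(subnets(32, 48))  # enterprises per ISP -> 65536 (64K)
    print(subnets(32, 56))  # home users per ISP  -> 16777216 (16M)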

So -    How much do we want held in reserve?
        How "flexibly" (ref RFC3531; see the sketch below) are we allocating our addresses?
        How many total (enterprise | home) clients do we want to support?
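
For the second question, here is a minimal sketch of the "leftmost"
ordering RFC3531 describes (a 4-bit field is assumed purely for
illustration, and the function name is mine):

    # RFC3531 leftmost allocation: assign values by counting up and
    # reversing the bits, so assignments spread out from the high-order
    # (left) end of the field and the right end stays free the longest.
    def leftmost_order(bits: int):
        for i in range(2 ** bits):
            yield int(format(i, "0{}b".format(bits))[::-1], 2)

    print(list(leftmost_order(4)))
    # -> [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]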

Off the cuff, let's say we use left-most (sparse) allocation and only hit
50% efficiency (keeping the right-most bit totally in reserve!) ... If I am
an ISP and I have 300M home users (/56s), I just need a /26, and that
actually gives me a lot of room for more clients (roughly 237M more at that
same 50%).  So - what was the problem again?
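
And the /26 claim, worked out the same way (the names and the efficiency
knob are mine; this just redoes the arithmetic above):

    import math

    # Smallest prefix that covers `clients` assignments of length
    # `assign_len` at a given utilization efficiency.
    def needed_prefix(assign_len: int, clients: int, efficiency: float) -> int:
        return assign_len - math.ceil(math.log2(clients / efficiency))

    print(needed_prefix(56, 300_000_000, 0.5))      # -> 26
    # Headroom left in that /26 at the same 50% efficiency:
    print(int(2 ** (56 - 26) * 0.5) - 300_000_000)  # -> 236870912 (~237M)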

Let's make it even more interesting - let's say I am an ISP, I am allocating
/48s, and I need to support - say - 6B assignments (one for every person in
the world) + 2B more (one for every organization in the world; #s chosen
arbitrarily, feel free to add another bit if it makes you feel better).
Bearing in mind that this means every single person and organization has 64K
subnets, each of which contains "as many hosts as is appropriate", and all
of these are globally routable ... I "just" need a /15 to cover this
absolute worst case.  Heck, let's make it a /14 for good measure.  So now
each standard RIR can "only" support 4 service providers of this size, but
we still have 512 RIR-sized allocations.  If the individuals got /56s
instead, these numbers get even bigger ...  So - what was the problem again?
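
Same arithmetic for that worst case, for anyone who wants to poke at the
numbers themselves:

    import math

    clients = 6_000_000_000 + 2_000_000_000  # one /48 each
    bits = math.ceil(math.log2(clients))     # 33 bits needed
    print(48 - bits)                         # -> 15, i.e. a /15
    print(2 ** (14 - 12))                    # -> 4 such /14s per /12 RIR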


Oh, and this is just from the 2000::/3 range ... next up, 4000::/3 ...
6000::/3, 8000::/3, a000::/3, c000::/3.  
And if we feel like we burned through 2000::/3 too fast at some point in the
future, maybe we revisit the rules around the time we start thinking about
allocating from 4000::/3?  (Or "skip one", and start the new rules with
6000::/3 ... I am not picky.)


Note, I am _NOT_ saying we should be careless or cavalier about address
allocation; I am just saying we don't live in a constrained situation.
And if there is a choice to be made between
scalability/flexibility/summarization'ability (is that a word?) and strict
efficiency ... efficiency loses.



/TJ
PS - Yes, 4.3B (IPv4's 2^32) seemed really big at one point ... but
seriously, do the above numbers not _really_ sound big enough?

