RE: 128 bits should be enough for everyone, was: IPv6 vs. Stupid NAT tricks: false dichotomy? (Was: Re: Stupid NAT tricks and how to stop them.)
> From: Tim Chown [mailto:[EMAIL PROTECTED]]
> I noticed that by default MS Vista doesn't use autoconf as per 2462;
> rather it uses a 3041-like random address. See:
> http://www.microsoft.com/technet/itsolutions/network/evaluate/new_network.mspx

This should hardly be a surprise. The inability of the IETF to accept real-world security concerns as legitimate at the time 2462 was written was as notorious as its insistence that certain irrelevant concerns were absolute.

In the real world I am genuinely concerned if someone is able to determine my MAC address. It opens up a significant amount of information about my internal network that I have no intention of sharing. During the 1990s many people, myself included, mistook applying cryptography for security.

In the real world, what network managers want to do, and will insist on having tools to accomplish, is to guard the boundary between the internal and external network as closely as possible, and to prevent any piece of unnecessary information from crossing that barrier in either direction. It is certainly a mistake to consider this practice a sufficient condition for security, but anyone who does not understand that it is a necessary part of a security strategy has little to contribute to today's security architecture.

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf
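The MAC-exposure concern above is easy to see concretely. Below is a sketch of how a 2464-style modified EUI-64 interface identifier embeds the MAC address, and why anyone who sees the IPv6 address can read the MAC straight back out; the 3041-style random identifiers Vista uses avoid exactly this. The example MAC address is made up.

```python
# Sketch: a modified EUI-64 interface identifier (RFC 2464) embeds the
# 48-bit MAC address in the low 64 bits of the IPv6 address, so the MAC
# is trivially recoverable by any observer of the address.

def mac_to_eui64_iid(mac: str) -> bytes:
    """Build the modified EUI-64 interface identifier from a 48-bit MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                      # flip the universal/local bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert FFFE in the middle

def iid_to_mac(iid: bytes) -> str:
    """Recover the MAC from a modified EUI-64 interface identifier."""
    b = bytearray(iid[:3] + iid[5:])  # drop the FFFE filler
    b[0] ^= 0x02                      # flip the u/l bit back
    return ":".join(f"{x:02x}" for x in b)

mac = "00:1b:21:3c:4d:5e"             # hypothetical MAC address
iid = mac_to_eui64_iid(mac)
print(iid.hex())                      # 021b21fffe3c4d5e
assert iid_to_mac(iid) == mac         # the MAC leaks straight out of the address
```

A random 64-bit identifier carries no such structure, which is the whole point of the 3041 approach.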
Thus spake "Anthony G. Atkielski" <[EMAIL PROTECTED]>
> Iljitsch van Beijnum writes:
>> However, since that time I've learned to appreciate stateless autoconfiguration and the potential usefulness of having the lower 64 bits of the IPv6 address as a place to carry some limited security information (see SEND and shim6 HBA).
> Once it's carrying information, it's no longer just an address, so counting it as pure address space is dangerous.

An IPv4/6 address is both a routing locator and an interface identifier. Unfortunately, the v6 architects decided not to separate these into separate address spaces, so an address _must_ contain routing information until that problem is fixed. It doesn't seem likely we'll do so without having to replace IPv6 and/or BGP4+, and there's no motion on either front, so we're stuck with the locator/identifier problem for quite a while.

> Building in space means not allocating it--not even _planning_ to allocate it. Nobody has any idea what the Internet might be like a hundred years from now, so why are so many people hellbent on "planning" for something they can't even imagine?

That's why 85% of the address space is reserved. The /3 we are using (and even then only a tiny fraction thereof) will last a long, long time even with the most pessimistic projections. If it turns out we're still wrong about that, we can come up with a different policy for the next /3 we use. Or we could change the policy for the existing /3(s) to avoid needing to consume new ones.

If IPv6 is supposed to last 100 years, that means we have ~12.5 years to burn through each /3, most likely using progressively stricter policies. It's been a decade since we started and we're nowhere near using up the first /3 yet, so it appears we're in no danger at this point. Will we be in 50 years? None of us know, which is why we've reserved space for the folks running the Internet then to make use of -- provided IPv6 hasn't been replaced by then, making this whole debate moot.
>> Unfortunately, at the time IPv6 was created variable length addresses weren't considered viable.
> Variable-length addresses are the only permanent solution, unless IP addresses are assigned serially (meaning that all routing information has to be removed). Variable-length addresses work very well for the telephone system, and they'd work just as well for the Internet, if only someone had taken the time to work it out.

Variable-length addresses only work if there is no maximum length. E.164 has a maximum of 15 digits, meaning there are at most 10^15 numbers. Here in +1 we only use eleven-digit numbers, meaning we're burning them 10^4 times as fast as we could. That's not a great endorsement. Also, telephone numbers have the same locator/identifier problem that IPv4/6 addresses do. In fact, IPv6's original addressing model looked strikingly similar to the country codes and area/city codes (aka TLAs and NLAs) that you're apparently fond of. Even OSI's "variable length" addresses had a maximum length, and most deployments used the maximum length; they degenerated into fixed-length addresses almost immediately.

>> The only thing I'm not too happy about is the current one address / one subnet / /48 trichotomy. Ignoring the single address for a moment, the choice between one subnet and 65536 isn't a great one, as many things require a number of subnets that's greater than one, but not by much.
> It's a good example of waste that results from short-sightedness.

It happened in IPv4, too. The difference is that in IPv6, it's merely a convention, and implementors are explicitly told that they must not assume the above boundaries. In IPv4, it was hardcoded into the protocol, and every implementation had to be replaced to move to VLSM and CIDR. Conventions are for human benefit, but they can be dropped when it becomes necessary.
Folks who use RFC 1918 space almost always assign /24s to each subnet regardless of the number of hosts; folks using public addresses used to do the same, but now determine the minimum subnet size that meets their needs. Hopefully the conventions in IPv6 won't be under attack for a long time, but if they need to go one day we can drop them easily enough.

>> The thing that is good about IPv6 is that once you get yourself a /64, you can subdivide it yourself and still have four billion times the IPv4 address space.
> It sounds like NAT.

Not at all. You'd still have one address per host; you'd just move the subnet boundary over a few bits as needed. With the apparent move to random IIDs, there's no reason to stick to /64s for subnets -- we could go to /96s for subnets without any noticeable problems (including NAT). The RIRs could also change policies so that /32 and /48 are not the default allocation and assignment sizes, respectively. That is another convention that we could easily dispense with, but it saves us a lot of paperwork to abide by it.
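The /64-versus-/96 point above is simple prefix arithmetic; a sketch (the helper function name is mine):

```python
# Sketch of the subnet-boundary arithmetic discussed above: with the /64
# convention a /48 site gets 2^16 subnets; sliding the boundary to /96
# (feasible once interface IDs are random rather than EUI-64) gives the
# same site 2^48 subnets, each still holding 2^32 hosts -- an entire
# IPv4 address space per subnet.

def subnets_in_site(site_prefix_len: int, subnet_prefix_len: int) -> int:
    """Number of subnets of the given length that fit in one site prefix."""
    return 2 ** (subnet_prefix_len - site_prefix_len)

print(subnets_in_site(48, 64))   # 65536 subnets under the /64 convention
print(subnets_in_site(48, 96))   # 281474976710656 subnets with /96
print(2 ** (128 - 96))           # 4294967296 hosts per /96 subnet
```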
Thus spake "Anthony G. Atkielski" <[EMAIL PROTECTED]>
> Iljitsch van Beijnum writes:
>> So how big would you like addresses to be, then?
> It's not how big they are, it's how they are allocated. And they are allocated very poorly, even recklessly, which is why they run out so quickly. It's true that engineers always underestimate required capacity, but 128-bit addresses would be enough for anything ... IF they were fully allocated. But I know they won't be, and so the address space will be exhausted soon enough.

I once read that engineers are generally incapable of designing anything that will last (without significant redesign) beyond their lifespan. Consider the original NANP and how it ran out of area codes and exchanges around 40 years after its design -- roughly the same timeframe as the expected death of its designers. Will IPv6 last even that long? IMHO we'll find a reason to replace it long before we run out of addresses, even at the current wasteful allocation rates.

>> We currently have 1/8th of the IPv6 address space set aside for global unicast purposes ...
> Do you know how many addresses that is? One eighth of 128 bits is a 125-bit address space, or 42,535,295,865,117,307,932,921,825,928,971,026,432 addresses. That's enough to assign 735 IP addresses to every cubic centimetre in the currently observable universe (yes, I calculated it). Am I the only person who sees the absurdity of wasting addresses this way? It doesn't matter how many bits you put in an address, if you assign them this carelessly.

That's one way of looking at it. The other is that even with just the currently allocated space, we can have 35,184,372,088,832 sites of 65,536 subnets of 18,446,744,073,709,551,616 hosts. Is this wasteful? Sure. Is it even conceivable to someone alive today how we could possibly run out of addresses? No. Will someone 25 years from now reach the same conclusion? Perhaps, perhaps not.
That's why we're leaving the majority of the address space reserved for them to use in light of future requirements.

>> ... with the idea that ISPs give their customers /48 blocks.
> Thank you for illustrating the classic engineer's mistake. Stop thinking in terms of _bits_, and think in terms of the _actual number of addresses_ available. Or better still, start thinking in terms of the _number of addresses you throw away_ each time you set aside entire bit spans in the address for any predetermined purpose. Remember, trying to encode information in the address (which is what you are doing when you reserve bit spans) results in exponential (read incomprehensibly huge) reductions in the number of available addresses. It's trivially easy to exhaust the entire address space this way. If you want exponential capacity from an address space, you have to assign the addresses consecutively and serially out of that address space. You cannot encode information in the address. You cannot divide the address in a linear way based on the bits it contains and still claim to have the benefits of the exponential number of addresses that it supposedly provides. Why is this so difficult for people to understand?

And sequential assignments become pointless even with 32-bit addresses because our routing infrastructure can't possibly handle the demands of such an allocation policy. The IETF has made the decision to leave the current routing infrastructure in place, and that necessitates a bitwise allocation model. Railing against this decision is pointless unless you have a new routing paradigm ready to deploy that can handle the demands of a non-bitwise allocation model. Why is this so difficult for you to understand?

>> That gives us 45 bits worth of address space to use up.
> You're doing it again. It's not 45 bits; it's a factor of 35,184,372,088,832. But rest assured, they'll be gone in the blink of an eye if the address space continues to be mismanaged in this way.
I take it you mean "the blink of an eye" to mean a span of decades? That is not the common understanding of the term, yet that's how long we've been using the current system, and it shows absolutely no signs of strain.

>> It's generally accepted that an HD ratio of 80% should be reachable without trouble, which means we get to waste 20% of those bits in aggregation hierarchies.
> No. It's not 20% of the bits, it's 99.9756% of your address space that you are wasting. Do engineers really study math?

To achieve bitwise aggregation, you necessarily cannot achieve better than 50% use at each delegation boundary. There are currently three boundaries (RIR, LIR, site), so better than 12.5% address usage is a lofty goal. Again, if you want something better than this, you need to come up with a better routing model than what we have today. (And then throw in the /64 per subnet and you're effectively wasting 100% of the address space anyway, so none of this matters until that's gone.)

>> This gives us 36 bits = 6
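The numbers being traded in this exchange are easy to check; below is a quick sketch. The decomposition of the current /3 and the HD-ratio definition (from RFC 3194) come from the messages above; the variable and function names are mine.

```python
# Checking the arithmetic in the exchange above. One eighth of the
# 128-bit space is 2^125 addresses, which under the /48-per-site and
# /64-per-subnet conventions decomposes into 2^45 sites x 2^16 subnets
# x 2^64 hosts.

import math

unicast_eighth = 2 ** 125
sites   = 2 ** 45          # /3 carved into /48 site assignments
subnets = 2 ** 16          # /48 carved into /64 subnets
hosts   = 2 ** 64          # interface identifiers per /64
assert sites * subnets * hosts == unicast_eighth
print(unicast_eighth)      # 42535295865117307932921825928971026432

def hd_ratio(allocated: int, capacity: int) -> float:
    """Host-density ratio (RFC 3194): log(allocated) / log(capacity)."""
    return math.log(allocated) / math.log(capacity)

# An HD ratio of 0.8 over the 45 bits of site space means about
# 2^(0.8 * 45) = 2^36 usable site assignments.
print(round(hd_ratio(2 ** 36, sites), 2))   # 0.8
print(2 ** 36)                              # 68719476736

# The 50%-per-boundary point: three delegation levels (RIR, LIR, site),
# each at best 50% efficient, give at best 12.5% utilization overall.
print(0.5 ** 3)                             # 0.125
```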
On Thu, 30 Mar 2006 20:43:14 -0600, "Stephen Sprunk" <[EMAIL PROTECTED]> wrote:
> That's why 85% of the address space is reserved. The /3 we are using (and even then only a tiny fraction thereof) will last a long, long time even with the most pessimistic projections. If it turns out we're still wrong about that, we can come up with a different policy for the next /3 we use. Or we could change the policy for the existing /3(s) to avoid needing to consume new ones.

I really shouldn't waste my time on this thread; I really do know better.

You're absolutely right about the /3 business -- this was a very deliberate design decision. So, by the way, was the decision to use 128-bit, fixed-length addresses -- we really did think about this stuff, way back when.

When the IPng directorate was designing/selecting what's now IPv6, there was a variable-length address candidate on the table: CLNP. It was strongly favored by some because of the flexibility; others pointed out how slow that would be, especially in hardware.

There was another proposal, one that was almost adopted, for something very much like today's IPv6 but with 64/128/192/256-bit addresses, controlled by the high-order two bits. That looked fast enough in hardware, albeit with the destination address coming first in the packet. OTOH, that would have slowed down source address checking (think BCP 38), so maybe it wasn't a great idea.

There was enough opposition to that scheme that a compromise was reached -- those who favored the 64/128/192/256 scheme would accept fixed-length addresses if the length was changed to 128 bits from 64, partially for future-proofing and partially for flexibility in usage. That decision was derided because it seemed to be too much address space to some, space we'd never use.

I'm carefully not saying which option I supported. I now think, though, that 128 bits has worked well.

--Steven M.
Bellovin, http://www.cs.columbia.edu/~smb
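The 64/128/192/256 proposal described above, with the length controlled by the high-order two bits, is simple to sketch. This is a reconstruction from the description in the email, not the actual wire format of that historical proposal:

```python
# Sketch: selecting one of four address lengths (64/128/192/256 bits)
# from the high-order two bits of the first byte, as in the proposal
# described above. Mapping of bit patterns to lengths is an assumption.

def addr_len_bits(first_byte: int) -> int:
    """Address length in bits implied by the top two bits of the first byte."""
    return 64 * ((first_byte >> 6) + 1)   # 00->64, 01->128, 10->192, 11->256

print(addr_len_bits(0b00000000))  # 64
print(addr_len_bits(0b01000000))  # 128
print(addr_len_bits(0b11000000))  # 256
```

A forwarding engine could read one byte and know exactly how many more bytes of destination address to fetch, which is why the scheme "looked fast enough in hardware."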
Stephen Sprunk writes:
> And sequential assignments become pointless even with 32-bit addresses because our routing infrastructure can't possibly handle the demands of such an allocation policy.

They are pointless for the reasons you state, but they are also the only way to get 2^128 addresses out of 128 bits. Anything else encodes information in the address and reduces the usable address space exponentially.

> Railing against this decision is pointless unless you have a new routing paradigm ready to deploy that can handle the demands of a non-bitwise allocation model.

The bitwise allocations I'm hearing about are not based on routing.

> I take it you mean "the blink of an eye" to mean a span of decades?

At best.

> That is not the common understanding of the term, yet that's how long we've been using the current system and it shows absolutely no signs of strain.

So IPv6 is not needed?

> To achieve bitwise aggregation, you necessarily cannot achieve better than 50% use at each delegation boundary. There are currently three boundaries (RIR, LIR, site), so better than 12.5% address usage is a lofty goal. Again, if you want something better than this, you need to come up with a better routing model than what we have today.

I did, but nobody was interested.

> Again, the current identifier/location conflation combined with the routing paradigm leaves us no choice but to encode information into the IP address.

In that case, any predictions of longevity for the system based on the address space providing 2^n addresses for n bits are invalid. Strangely, such predictions seem to be almost exclusively based on this, and thus are necessarily wrong.
Stephen Sprunk writes:
> An IPv4/6 address is both a routing locator and an interface identifier.

And so engineers should stop saying that n bits of addressing provides 2^n addresses, because that is never true if any information is encoded into the address. In fact, as soon as any information is placed into the address itself, the total address space shrinks exponentially.

> Unfortunately, the v6 architects decided not to separate these into separate address spaces, so an address _must_ contain routing information until that problem is fixed. It doesn't seem likely we'll do so without having to replace IPv6 and/or BGP4+, and there's no motion on either front, so we're stuck with the locator/identifier problem for quite a while.

Then we need to make predictions for the longevity of the scheme based on the exponentially reduced address space imposed by encoding information into the address. In other words, 128 bits does _not_ provide 2^128 addresses; it does not even come close. Ultimately, it will barely provide anything more than what IPv4 provides, if current trends continue.

> That's why 85% of the address space is reserved. The /3 we are using (and even then only a tiny fraction thereof) will last a long, long time even with the most pessimistic projections. If it turns out we're still wrong about that, we can come up with a different policy for the next /3 we use. Or we could change the policy for the existing /3(s) to avoid needing to consume new ones.

Or simply stop trying to define policies for an unknown future, and thereby avoid all these problems to begin with.

> It's been a decade since we started and we're nowhere near using up the first /3 yet, so it appears we're in no danger at this point.

As soon as you chop off 64 bits for another field, you've lost just under 100% of it.

> Variable-length addresses only work if there is no maximum length.

Ultimately, yes. But there is no reason why a maximum length must be imposed.
> E.164 has a maximum of 15 digits, meaning there are at most 10^15 numbers. Here in +1 we only use eleven-digit numbers, meaning we're burning them 10^4 times as fast as we could. That's not a great endorsement.

Telephone engineers make the same mistakes as anyone else; no natural physical law imposes E.164, however.

> Also, telephone numbers have the same locator/identifier problem that IPv4/6 addresses do. In fact, IPv6's original addressing model looked strikingly similar to the country codes and area/city codes (aka TLAs and NLAs) that you're apparently fond of.

Maybe the problem is in trying to make addresses do both. Nobody tries to identify General Electric by its street address, and nobody tries to obtain a street address from the identifier "General Electric" alone.

> The difference is that in IPv6, it's merely a convention ...

Conventions cripple society in many cases, so "merely a convention" may be almost an oxymoron.

> The folks who designed IPv4 definitely suffered from that problem. The folks who designed IPv6 might also have suffered from it, but at least they were aware of that chance and did their best to mitigate it. Could they have done better? It's always possible to second-guess someone ten years later. There's also plenty of time to fix it if we develop consensus there's a problem.

Sometimes the most important design criterion is ignorance. In other words, the best thing an engineer can say to himself in certain aspects of design is "I don't know."
On 31-mrt-2006, at 6:11, Steven M. Bellovin wrote:
> You're absolutely right about the /3 business -- this was a very deliberate design decision. So, by the way, was the decision to use 128-bit, fixed-length addresses -- we really did think about this stuff, way back when.

I reviewed some old IPng mail archives last year, and it was very illuminating to see people worry both about stuff that is completely a non-issue today and about stuff that's still as big a problem as ever. However, a lot has changed in over a decade, and even if fixed-length addresses were the right answer then (which I'm not necessarily conceding), that doesn't necessarily make them the right answer today.

> When the IPng directorate was designing/selecting what's now IPv6, there was a variable-length address candidate on the table: CLNP.

I'm no OSI expert, but what I gather is that within a domain, all addresses must be the same length, so variable-length addressing doesn't really work out in practice.

> It was strongly favored by some because of the flexibility; others pointed out how slow that would be, especially in hardware.

I guess that argument can be made for the traditional "this address is X bits and here are enough bytes to hold them" type of variable-length address encoding that we also use in BGP, for example. But there are other ways to do this that are more hardware-friendly:

> There was another proposal, one that was almost adopted, for something very much like today's IPv6 but with 64/128/192/256-bit addresses, controlled by the high-order two bits. That looked fast enough in hardware, albeit with the destination address coming first in the packet. OTOH, that would have slowed down source address checking (think BCP 38), so maybe it wasn't a great idea.

On the other hand, having a protocol chain in IPv6 makes checking TCP/UDP ports a nightmare, so there's more than enough precedent for that.
That's one lesson we can learn from the OSI guys: the port number should really be part of the address.

A way to encode variable-length addresses that would presumably be even easier to implement in hardware is to split the address into several fields, and then have some bits that indicate the presence or absence of these fields. For instance, the IPv6 address could be eight 16-bit values. The address 3ffe:2500:0:310::1 would be transformed into 3ffe-2500-0310-0001 (64 bits), with the control bits 11010001 indicating that the first, second, fourth, and eighth 16-bit values are present but the third and fifth through seventh aren't. It should be fairly simple to shift the zero bits in and out in hardware so the full maximum-length version of the address can be available in places where that's convenient.

And in reaction to other posts: there is no need to make the maximum address length unlimited, just as long as it's pretty big, such as ~256 bits. The point is not to make the longest possible addresses, but to use shorter addresses without shooting ourselves in the foot later when more address space is needed. For instance, I have a /48 at home and one for my colocated server. For that server, I could use the /48 as the actual address for the server, or add a very small number of bits. At home, stateless autoconf is useful, so 94 bits would be sufficient (/48 + 46-bit MAC address), maybe plus a couple of bits for future subnetting. So the server address would be 7 bytes (with the length field) rather than 16, and the laptop address 13, saving 12 bytes per packet between the two over today's IPv6...

> I'm carefully not saying which option I supported. I now think, though, that 128 bits has worked well.

It would be rather disastrous if 128 bits didn't work well at this stage. :-)
At 04:43 31/03/2006, Stephen Sprunk wrote:
> If IPv6 is supposed to last 100 years, that means we have ~12.5 years to burn through each /3, most likely using progressively stricter policies.

I suppose you mean 16.66 years (with only five more /3s available). This is one way of seeing things. It means there are still 4 years to go before the next /3 starts being used. It seems a good forecast, in line with observation and demand. But it means that decisions are to be taken now.

> There's also plenty of time to fix it if we develop consensus there's a problem.

- don't you think it is clear by now that there is a rough market consensus?
- what makes you believe that the IETF is the proper place to take care of such a "fix"?

I love competition when it makes sense. I would certainly favor having the IETF and the ITU compete for an IPv6 service that Internet users dearly miss, before we get other grassroots solutions (are you sure that NATs are the only one?).

There are obviously two schools of thought about IPv6 numbering:
- "what exists is nearly perfect and we need to implement it to prove it, but we do not know how to get it implemented."
- "what exists is wrong, and this is the reason why it is not implemented."

IMHO both schools should be given an equal chance to show they are right, and probably to address different types of problems.

BTW, to deploy IPv6 I suggest that every new ICANN TLD should only accept registrations with IPv6 addresses (why new TLDs if not for a new network?). The day ".xxx" is accepted, the network would turn IPv6.

jfc
I agree with Steve here; we have plenty of tools at our disposal and eight tries to get it right. Variable-length addresses would be much more expensive to support, and there really is no reason to expect 128 bits to be insufficient unless the allocation mechanism is completely broken, something that more bits will not cure.

If variable-length addressing had been proposed when IPv4 was being designed, it might well have avoided the need for IPv6, or at least the need for IPv6 to affect the end user to the extent it does. At this point the IPv6 address space is a decision that has already been made and would take over a decade to change. IPv4 space runs into exhaustion first, so that is not an acceptable option.
Iljitsch van Beijnum writes:
> And in reaction to other posts: there is no need to make the maximum address length unlimited, just as long as it's pretty big, such as ~256 bits.

But there isn't much reason not to make it unlimited, as the overhead is very small, and specific implementations can still limit the actual address length to a compromise between infinity and the real-world network that the implementation is expected to support.

> The point is not to make the longest possible addresses, but to use shorter addresses without shooting ourselves in the foot later when more address space is needed.

Use unlimited-length addresses that can expand at _either_ end, and the problem is solved. When more addresses are needed in one location, you add bits to the addresses on the right; when networks are combined and must have unique addresses, you add bits on the left.