On 20 Oct 2003, at 6:23, Dan Lanciani wrote:

|I don't see the upgrade costs for regular users. Users are by now
|used to upgrading monthly (if not more often) to plug the latest and
|greatest security holes, so a software upgrade to install IPv6
|functionality somewhere in the next three years or so isn't a huge
|hardship.

I strongly doubt that IPv6 will be available as a software-only upgrade for any but the latest equipment.

This is indeed a problem for some types of equipment, such as multilayer switches. Those typically have IPv4 hardware on board which isn't easily upgradable. However, this isn't very relevant to end-users who spend most of their money on general-purpose computers that can be upgraded with new software. But all the main operating systems support IPv6 in their latest versions today anyway.


There is just too little incentive for vendors (especially ones who have gone out of business :) to support "legacy"
hardware (where "legacy" seems to mean over six months old).

Yes, that's why I have to buy a new GSM phone to get new features rather than simply update the software. Small stuff such as residential "routers" could be upgraded with new firmware but it's more likely that the vendors want to sell new boxes. Given the prices for this stuff I don't see a huge problem here. For the expensive stuff such a vendor policy wouldn't be accepted.


|Functionality won't disappear unless people turn off IPv4, which I
|don't expect them to do.

Sure, but that's basically saying that IPv6 + IPv4+NAT can replace IPv4+NAT, which isn't very interesting. :)

Why not? This way you get to have the best of both worlds: run existing NAT-compatible v4 apps without having to wait for v6 upgrades, but also run stuff that won't work over NAT.


|(I even get strange looks from people in IPv6
|circles when I say I want to run IPv6-only hosts for test purposes.)

That's a rather telling response...

Telling what about who?


|But I think
|we can and should add all the missing stuff to IPv6.

Naturally I agree, but it seems like one of the key features (address
independence) is a stumbling block.

Have a look at what's happening with locator/identifier separation in the multi6 working group. A batch of new drafts is on its way.


|If we're going to be doing NAT anyway why bother with IPv6?

I've been wondering that myself lately.

However, note that if we want to get NAT right (i.e., support incoming connections and referential integrity) we need just as many painful changes as we need for adopting IPv6. The difference is that IPv6 is here today, even if some desired features are still missing.


Possibly IPv6 will be useful in circumstances where the provider can control the user's environment via restricted firmware, legal means, etc. (The canonical example seems to be smart phones.)

??? Why would IPv6 be more useful here?


|> Unfortunately, being "superior" in some abstract sense is not
|> sufficient for something as utilitarian as a networking protocol
|> suite.  We have to examine actual usage requirements.

|Yes. But note that this doesn't have to be the actual end-user.

I understand that actual end-user requirements are not a major consideration for IPv6 development. This has been discussed before and I've commented that I'm not thrilled with the decision to concentrate on provider requirements,

The way something is developed and the way it is subsequently used are two very different things. Obviously ignoring user requirements during development would be a huge mistake. But once the product is there, there are many ways in which it may be adopted by the market place. End-user demand is one, lower operational costs somewhere in the chain is another one.


Eventually you have to convince the end-users to use the service or it will fail. End-users have evolved a bit over the years.

End-users care about applications and services, not whether the first 4 bits in IP packets are "0100" or "0110". If something needs IPv6 in order to work, and that application or service manages to install IPv6 connectivity without bothering the user (by running Teredo on the user's box or 6to4 in the NAT gateway), the user won't mind.
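To illustrate how little the user needs to be involved: 6to4 (RFC 3056) derives an entire /48 prefix for a site purely from the gateway's public IPv4 address, so a NAT gateway can hand out IPv6 space with no provider cooperation at all. A minimal sketch (the address is an example from documentation space):

```python
import ipaddress

def to_6to4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 2002::/16 6to4 prefix (RFC 3056) that a gateway
    with the given public IPv4 address can use for its site."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # The 32-bit IPv4 address occupies bits 16..47 of the /48 prefix.
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

# A gateway with public address 192.0.2.1 owns 2002:c000:201::/48
print(to_6to4_prefix("192.0.2.1"))
```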


I see three possible paths:

-We do something to satisfy the users' requirements without making them resort to NAT, possibly upsetting providers who will need to adjust their business models.

Since the IPv6 business model includes far fewer worms (because it's pretty much impossible to infect boxes by randomly trying addresses in IPv6) I think this will work out.
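The arithmetic behind "pretty much impossible" is simple: a single standard /64 subnet holds 2^32 times as many addresses as the whole IPv4 internet. A back-of-the-envelope calculation (the probe rate is an assumption, and a generous one):

```python
# How long would a worm need to sweep one /64 IPv6 subnet versus the
# entire IPv4 address space, at an assumed one million probes/second?
probes_per_second = 1_000_000

ipv4_space = 2 ** 32
ipv6_subnet = 2 ** 64          # one standard /64 subnet

ipv4_hours = ipv4_space / probes_per_second / 3600
ipv6_years = ipv6_subnet / probes_per_second / 3600 / 24 / 365

print(f"all of IPv4:  {ipv4_hours:.1f} hours")      # about 1.2 hours
print(f"one IPv6 /64: {ipv6_years:,.0f} years")     # roughly 585,000 years
```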


-We do nothing special and the market supplies IPv6 NAT. Things stay pretty much as they are.

Even if IPv6 NAT becomes available, I don't think very many people will use it.


-We do something radical to prevent users from employing NAT-like solutions, possibly failing by succeeding if those users reject the protocol.

Why would we do this?


|It is becoming quite apparent that having a
|stable internal network and an ephemeral externally reachable network
|provide seamless functionality can only be achieved using very dirty
|hacks.

It is not at all apparent to me.

The trouble is that if you have stable address A and you want to connect to someone who has stable address B, you have no way of knowing whether the infrastructure in the middle supports routing packets A->B and B->A. So either you always prefer the stable addresses and you end up waiting for timeouts often, or you always prefer PA addresses and the advantages of stable addressing are largely lost.
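The timeout trade-off can be sketched as a naive sequential fallback: whichever address family you prefer, a broken path for it costs a full connect timeout before the next candidate is tried. (Illustrative only; the helper and candidate ordering are hypothetical, and real stacks use RFC 3484-style address selection.)

```python
import socket

def connect_with_fallback(candidates, port, timeout=3.0):
    """Try each (address, family) candidate in order, moving on to
    the next one after `timeout` seconds. Preferring the stable
    address means paying this timeout whenever the path for it is
    broken; preferring the PA address means the stable one is
    rarely exercised."""
    last_error = None
    for addr, family in candidates:
        s = socket.socket(family, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((addr, port))
            return s                  # first reachable candidate wins
        except OSError as e:
            last_error = e
            s.close()
    raise last_error

# The ordering *is* the policy question: stable first, or PA first?
# candidates = [("2001:db8:1::1", socket.AF_INET6),   # stable address
#               ("2001:db8:2::1", socket.AF_INET6)]   # PA address
```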


There are many ways to achieve that goal including PI space handled directly at the routing level, various forms of source routing including locator/identifier separation, and overlay networks.

Yes, hopefully we can get this to work.


|The internet supported address portability until 10 years
|ago. This was a poor design because it doesn't scale, regardless of
|the usefulness of the functionality.

This is a common but confusing generalization. Portable address usage grows linearly in the actual number of users. It is very hard to do better than this without NAT-like hacks.

CIDR has done relatively well. Could have done much better without the v4 address squeeze.
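What "done relatively well" means in practice: contiguous customer blocks drawn from one provider's allocation collapse into a single announcement, so the global table tracks providers rather than customers. A quick sketch using documentation address space:

```python
import ipaddress

# Four contiguous customer /26s from one provider aggregate into a
# single /24 announcement in the global routing table.
customer_blocks = [
    ipaddress.ip_network("198.51.100.0/26"),
    ipaddress.ip_network("198.51.100.64/26"),
    ipaddress.ip_network("198.51.100.128/26"),
    ipaddress.ip_network("198.51.100.192/26"),
]
aggregate = list(ipaddress.collapse_addresses(customer_blocks))
print(aggregate)   # [IPv4Network('198.51.100.0/24')]
```

Portable (provider-independent) blocks are exactly the ones that refuse to collapse this way.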


Even those merely divide by a constant without changing
the order of growth. On the other hand, hierarchical provider-based allocation consumes addresses exponentially in the number of provider levels,

Not sure about this being exponential, but yes, there is more waste in overhead. However, the waste is in the addresses used, while the problem is the number of distinct blocks.

assuring that there will always be an address shortage at the lowest level(s).

How do you figure? If ISPs want they can request more addresses from the RIRs as long as they provide documentation of how those addresses are going to be used.
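One way to read the "exponential in provider levels" claim: if each level of the hierarchy only fills some fraction of every block it hands down, utilization compounds per level. The 50% fill rate below is an assumption chosen purely for illustration:

```python
# If every level of a provider hierarchy fills, say, half of each
# block it hands out (rounding up to power-of-two boundaries), the
# end-to-end utilization is that fraction compounded per level:
fill = 0.5   # assumed per-level fill rate
for levels in (1, 2, 3, 4):
    print(f"{levels} level(s): {fill ** levels:.4f} of addresses used")
# 4 levels -> 6.25% utilization. Note this waste is in addresses
# consumed, not in the number of distinct routing table entries.
```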


Increasing hardware capability has taken the wind out of the memory size argument, so the current fashion seems to be to consider the computational load of building the table in the first place, again constraining the process to work exactly as it does today. This is a harder problem to overcome by brute force, but it still has nothing to do with the scalability of portable addresses themselves.

Why not?


It just means that we can't necessarily support an arbitrary number
of portable addresses *and* still constrain the routing process to work exactly as it does today.

In what way exactly would you like to change routing?


Sure, there is lots of optimization to be had (I talked about encoding reachability for multihomers in bitmaps at one point) but you can't get around the fact that when a link goes down, you must send an update and every ISP has to process it. And the bigger your routing table is, the longer the processing takes... This scales worse than linearly.
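A toy model of that scaling argument: if the update rate grows with the number of prefixes, and each update is processed against the full table, total work grows quadratically. All the constants here are made up; only the shape of the curve matters.

```python
# Toy model: every prefix flaps at some fixed rate, and each update
# must be processed against the whole table at every ISP.
def daily_work(prefixes, flaps_per_prefix_per_day=0.01, cost_per_lookup=1):
    updates = prefixes * flaps_per_prefix_per_day
    return updates * prefixes * cost_per_lookup   # quadratic in table size

for n in (100_000, 200_000, 400_000):
    print(f"{n:>7} prefixes -> {daily_work(n):,.0f} units of work")
# Doubling the table quadruples the work under this model.
```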

One obvious approach (if we did want to support portable addresses) is to return to a dumb network/smart host model and push the route computation to the edges of the network where the available resources grow with the consumers.

Sounds nice, but how is this going to work in practice?


Routers could go back to simply forwarding packets as fast as possible rather than spending lots of cycles enforcing economic policies by applying complicated filters to AS paths to determine whether this or that network really deserves transit service.

You need this regardless but it's moot anyway as route processing and packet forwarding are decoupled in fast routers today.


It looks like you snipped a question. Here it is in context:

Others already talked about the right vs wrong analogy, no need to repeat it.


Iljitsch


--------------------------------------------------------------------
IETF IPv6 working group mailing list
[EMAIL PROTECTED]
Administrative Requests: https://www1.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------
