(Randy removed from CC as I only had him in CC for the comments on the RIPE meeting)

>>> Most end-user network managers I deal with require these characteristics
>>> of their public network address allocations:
>>>
>>> 1) uniqueness (sometimes expressed as "autonomy")
>> Wait. This is interesting. From what people here were saying before, I
>> drew the conclusion that end-users wanted non-uniqueness, aka site-locals,
>> to hide their topology. You are saying something else?

> Please note the keyword "public" in the statement above. This applies (in
> the cases of most of my clients) to address spaces which are advertised to
> the public networks (AKA The Internet).
> [snip]

> Given the choice, most (in fact, nearly all) of my clients would prefer to
> run their internal networks on registered, unique, globally routable
> address space; this greatly simplifies the task of providing access to
> resources on the public network, and of providing access from the public
> networks to resources which the external customers of the business desire
> to use, usually with the result of generating revenue for the business.
> Furthermore, the use of unique, globally routable address space vastly
> simplifies the task of establishing connections to networks operated by
> business partners (eg, vendors and larger customers), whether via the
> public network or over private links. However, my clients are wholly
> unwilling to run even the slightest risk of a forced renumbering on their
> internal networks. Full stop. No exceptions, and no equivocations.
I still like what I read.

> If unique and stable globally routable space is not available for use in
> their internal networks, my clients see non-unique, globally non-routable
> space, coupled with NAT, as a feasible (but not desirable) alternative: at
> least they have a reasonable expectation that such addressing will be
> stable, and that a forced renumbering is unlikely. For IPv6, the
> site-local space meets the requirement of address stability for internal
> networks. That the SL (or, for IPv4, PNN) space is globally non-routable
Well, SLs are more or less RFC1918 all over. (at least in my view)
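
(As a rough sketch of that analogy: under the assumption that you treat
fec0::/10 the way you treat the RFC 1918 blocks, an "is this an internal
address?" check looks the same for both families. The example addresses
below are arbitrary, and Python's ipaddress module is just used for
illustration.)

    import ipaddress

    # RFC 1918 private IPv4 blocks and the IPv6 site-local prefix (fec0::/10).
    RFC1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]
    SITE_LOCAL = ipaddress.ip_network("fec0::/10")

    def is_internal(addr_str):
        """True if the address falls in RFC 1918 (IPv4) or site-local (IPv6) space."""
        addr = ipaddress.ip_address(addr_str)
        if addr.version == 4:
            return any(addr in net for net in RFC1918)
        return addr in SITE_LOCAL

    print(is_internal("192.168.1.10"))   # True  - RFC 1918
    print(is_internal("fec0::1"))        # True  - site-local
    print(is_internal("2001:db8::1"))    # False - neither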

>>> 3) stability
>> Do you mean as a derivative of portability, or for some other reason?

> No. The stability requirement is quite independent of portability. My
> clients desire to avoid renumbering at any cost short of summary hanging;
> where it is not possible to avoid renumbering, they wish to renumber as
> few systems as possible, and they would much rather change a static
> translation mapping than reconfigure a host. Where these clients are
I would still claim that this is a side effect of portable address space.
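
(A minimal sketch of the "change a mapping, not the host" point, using purely
hypothetical addresses from the documentation ranges: if the only place
provider-assigned space appears is a static translation table, a forced
renumbering touches that table and nothing behind it.)

    # Hypothetical static one-to-one NAT table:
    # stable internal (RFC 1918) address -> provider-assigned public address.
    static_nat = {
        "10.1.1.10": "192.0.2.10",   # web server
        "10.1.1.11": "192.0.2.11",   # mail server
    }

    def renumber_public_side(nat_table, old_prefix, new_prefix):
        """Rewrite only the public half of each mapping when the provider-assigned
        prefix changes; the internal hosts keep their addresses."""
        return {inside: outside.replace(old_prefix, new_prefix, 1)
                for inside, outside in nat_table.items()}

    # Provider-assigned block changes from 192.0.2.0/24 to 198.51.100.0/24.
    static_nat = renumber_public_side(static_nat, "192.0.2.", "198.51.100.")
    print(static_nat)
    # {'10.1.1.10': '198.51.100.10', '10.1.1.11': '198.51.100.11'}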

>>> Most of the end-user-network managers among my clients now multihome, and
>>> will continue to require multihomed service in future. In every case
>>> where the user's network is multihomed, the multiple independent
>>> connections are seen as crucial for maintenance of high availability of
>> I find this funny. A number of studies have shown that if this is what
>> you are after, multihoming and BGP are the wrong way to go - but never
>> mind.

> Your comment may be true, but my clients are nonetheless unwilling to risk
> the possibility of an extended network outage on a single ISP (while not
> frequent, these events are far from unprecedented) rendering their online
> customer-support environment unavailable for several hours, much less for
> a day. Shorter outages (on the order of minutes in the single digits) are
> tolerated, provided that such outages are infrequent.
Well, there are a number of ways to minimize such effects as well, and a
number of ways you could engineer around this with methods other than
globally routable PI space.
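
(For what it is worth, the availability argument is easy to put rough numbers
on. A back-of-the-envelope sketch, assuming the two upstream links fail
independently and using a made-up 99.5% per-link availability:)

    # Back-of-the-envelope downtime arithmetic under an independence assumption.
    # The 99.5% per-link availability figure is purely illustrative.
    per_link_availability = 0.995

    single_homed_downtime = 1 - per_link_availability
    dual_homed_downtime = (1 - per_link_availability) ** 2  # both links down at once

    hours_per_year = 365 * 24
    print(f"single-homed: ~{single_homed_downtime * hours_per_year:.1f} hours/year down")
    print(f"dual-homed:   ~{dual_homed_downtime * hours_per_year:.2f} hours/year down")

The independence assumption is, of course, the weak point in practice (shared
infrastructure, correlated failures, convergence time), which is presumably
part of what those studies are getting at.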

Regards,

- kurtis -

--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page: http://playground.sun.com/ipng
FTP archive: ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------
