Re: What is multihoming was (design of a real routing v. endpoint id seperation)

2005-10-25 Thread william(at)elan.net



On Mon, 24 Oct 2005, Owen DeLong wrote:


Yes... The network is still multihomed, but, instead of using routing to
handle the source/dest addr. selection, it is managed at each end host
independently of the routers.  The routers function sort of as if the
network were single-homed.  It's very convoluted.


That is putting it mildly. Offices that want to be multihomed would want
to do it once for all of their computers with one device, as they can do
now with a router. Web farms would similarly want to do it once for all
of their servers, again as they do now with one router or load-balancer,
etc. Managing this when multihoming is entirely host-based would be hard
(I note that for office multihoming you could potentially build one router
that speaks shim6 on its outside interfaces and does NAT between that and
its inside network - but we don't want NAT for IPv6, if I understand the
IETF and IAB direction).
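
(For illustration, a rough Python sketch - with made-up prefixes, and far
simpler than real shim6 or RFC 3484 selection - of what "managed at each end
host" ends up meaning: every host carries one address per upstream and picks
its own source/locator, with no help from the site router.)

import ipaddress

# Made-up example: one global address per upstream provider prefix.
LOCAL_ADDRS = [
    ipaddress.IPv6Address("2001:db8:a::10"),   # address out of ISP A's prefix
    ipaddress.IPv6Address("2001:db8:b::10"),   # address out of ISP B's prefix
]

def common_prefix_len(a, b):
    # Number of leading bits two IPv6 addresses share.
    return 128 - (int(a) ^ int(b)).bit_length()

def pick_source(dest):
    # Crude stand-in for RFC 3484 rule 8: prefer the local address with the
    # longest common prefix with the destination.
    return max(LOCAL_ADDRS, key=lambda src: common_prefix_len(src, dest))

# Every host repeats this for every new flow -- the per-host management
# burden complained about above.
print(pick_source(ipaddress.IPv6Address("2001:db8:b:42::1")))   # 2001:db8:b::10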


So while I really do think we need some kind of multi6 design that works
for small multihomed networks without requiring them to have an ASN and
carry their routes in the global BGP table (leaving that primarily to NSPs
with /32 and larger allocations, as the IETF envisioned), the current
shim6 design does not seem well suited to that audience. As somebody noted
yesterday, it would instead be great for multi-DSL users, especially
gamers and p2p.


Now, if we resurrected A6 with its ability to enter the host and network
parts of an IP address separately at the DNS level, then we'd be at least
part of the way done as far as setting up multi6 multihoming in DNS for an
entire network at once. But I still don't see an easy way to do it for
device management, and yet another new protocol would probably be needed for
automatic assignment of locators and secondary IPv6 addresses. (BTW - did I
hear right that there is going to be a new WG related to MIP6 to work out the
issues of assigning and using multiple IPv6 addresses and interfaces? The
IETF seems to be doing a lot of things in parallel at this potential L3.5
layer that could be done a lot better together as part of a proper TCP/IP
redesign.)
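
(For those who don't remember A6 (RFC 2874): your zone carried only the
low-order host bits plus a reference to whoever owned the prefix, so
renumbering or adding an upstream meant touching one record. A rough,
hypothetical Python illustration of that prefix+suffix composition - the
names and prefixes below are invented:)

import ipaddress

# Made-up data.  The provider zones publish the prefixes; the site zone
# stores only the host suffix plus a pointer to the prefix owner, so
# renumbering the whole site means changing the prefix in one place.
PROVIDER_PREFIX = {
    "isp-a.example.": "2001:db8:a:1::",
    "isp-b.example.": "2001:db8:b:1::",
}
SITE_A6 = {
    "www.site.example.": [("::10", 64, "isp-a.example."),
                          ("::10", 64, "isp-b.example.")],
}

def resolve_a6(name):
    # Compose full addresses: high-order bits from the referenced prefix,
    # low-order bits from the suffix stored with the name.
    answers = []
    for suffix, prefix_len, ref in SITE_A6[name]:
        prefix = int(ipaddress.IPv6Address(PROVIDER_PREFIX[ref]))
        host = int(ipaddress.IPv6Address(suffix)) & ((1 << (128 - prefix_len)) - 1)
        answers.append(ipaddress.IPv6Address(prefix | host))
    return answers

print(resolve_a6("www.site.example."))
# [IPv6Address('2001:db8:a:1::10'), IPv6Address('2001:db8:b:1::10')]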


---
William Leibzon
Elan Networks
[EMAIL PROTECTED]


Re: What is multihoming was (design of a real routing v. endpoint id seperation)

2005-10-25 Thread Pekka Savola


On Mon, 24 Oct 2005 [EMAIL PROTECTED] wrote:

A single tier-2 ISP who uses BGP multihoming with several
tier 1 ISPs can provide multihoming to its customers
without BGP. For instance, if this tier-2 has two PoPs
in a city, peering links exist at both PoPs, and they
sell a resilient access service where the customer has
two links, one to each PoP, then it is possible to route
around many failures. This is probably sufficient for most
people, and if the tier-2 provider takes this service seriously
they can engineer things to make total network collapse extremely
unlikely.


By RFC 3582's definitions (see below), this is not multihoming.  The above
is referred to as multi-connecting or multi-attaching (also see
RFC 4116).


I agree, this is sufficient for many sites.  Especially in the academic
world, many universities are just multi-connected, trusting the
stability of their NREN's backbone and transit providers.  Lots of
commercial sites do it too, but some are wary due to events like
L3/Cogent, L3 backbone downtime, etc.


...

A multihomed site is one with more than one transit provider.
Site-multihoming is the practice of arranging a site to be
multihomed.

and:

A transit provider operates a site that directly provides
connectivity to the Internet to one or more external sites.  The
connectivity provided extends beyond the transit provider's own site.
A transit provider's site is directly connected to the sites for
which it provides transit.

--
Pekka Savola                 "You each name yourselves king, yet the
Netcore Oy                    kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings


Re: ICANN and Verisign settle over SiteFinder

2005-10-25 Thread william(at)elan.net



On Tue, 25 Oct 2005, Florian Weimer wrote:


http://www.businessweek.com/ap/financialnews/D8DEL2TO7.htm?campaign_id=apn_tech_down&chan=tc


I don't understand what VeriSign receives in return for their kowtow
(under the agreement, they basically waive any right to criticize
ICANN's role).


They get to continue to be the .COM registry essentially forever, as the new
agreement would extend to 2012 and then be automatically extended further
without a formal process, as happened recently for .NET. They are also going
to be able to increase registry fees for .COM by 7% per year, which, to put
it in perspective, could potentially be a $2 increase four years from now.
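
(Back-of-the-envelope check, assuming a then-current .COM registry fee of
roughly $6/year - the $6 figure is my assumption, not something from the
agreement itself:)

base_fee = 6.00                      # assumed then-current .COM fee, USD/year
for year in range(1, 5):
    fee = base_fee * 1.07 ** year    # 7% compounding increase per year
    print(f"year {year}: ${fee:.2f} (+${fee - base_fee:.2f})")
# year 4 comes out around $7.86, i.e. an increase of just under $2.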


Two possible explanations:


2+2=5, right? :)


 * ICANN signalled a positive outcome of a future Sitefinder review
   under the new process.

 * ICANN promised to grant VeriSign the DNSSEC root and .ARPA
   maintenance without tender (the Root Server Management Transition
   Agreement goes in that direction; actually, the .ARPA stuff is
   the interesting one).

 * VeriSign has recognized that they couldn't win in court, and
   suddenly want to play nice.



--
William Leibzon
Elan Networks
[EMAIL PROTECTED]


Re: What is multihoming was (design of a real routing v. endpoint id seperation)

2005-10-25 Thread Robert Bonomi

 From [EMAIL PROTECTED]  Mon Oct 24 15:33:02 2005
 Date: Mon, 24 Oct 2005 13:31:17 -0700
 Subject: Re: What is multihoming was (design of a real routing v. endpoint id
  seperation)

 Stephen Sprunk wrote:
 [snip]

  Other people use this term in very different ways. To some people
  it means having multiple IP addresses bound to a single
  network interface. To others it means multiple websites on one
  server.
  
  
  That is virtual hosting in a NANOG context.  Some undereducated MCSEs 
  might call it multihoming, but let's not endorse that here.

 Unfortunately, this is a common and standards-blessed way to refer to
 any host with multiple interfaces/addresses (real or virtual). For example,
 the Terminology section (1.1.3) of RFC1122, "Requirements for
 Internet Hosts -- Communication Layers", says,

   Multihomed
A host is said to be multihomed if it has multiple IP
addresses.  For a discussion of multihoming, see Section
3.3.4 below.


*sigh*  Multi-homing simply means 'having external connections to more than
one network' -- be it a network with multiple, disjoint ingress/egress paths,
or a host with interfaces (real or virtual) on distinct LAN subnets (even if
those subnets are aggregated into a single net somewhere upstream).

A host with multiple addresses within the _same_ netblock/netmask _should_
_not_ be called multi-homed (because there is only one path to that host);
it is simply a single-homed host with multiple identities.  It might be
called poly-ip-any or some such.  <grin>





Re: What is multihoming was (design of a real routing v. endpoint id seperation)

2005-10-25 Thread Joe Abley



On 25-Oct-2005, at 05:56, Robert Bonomi wrote:


*sigh*  Multi-homing simply means [...]


As became clear when we wrote the draft that became RFC 3582,
apparently simple terms such as "transit provider" and "multi-homing"
mean surprisingly different things to different people.


The important thing is not who is right, and not which definition is  
the best, but that everybody uses the same definitions so that they  
can talk to each other without running around in circles.



Joe



Re: ICANN and Verisign settle over SiteFinder

2005-10-25 Thread Florian Weimer

* william elan net:

 They get to continue to be the .COM registry essentially forever, as the
 new agreement would extend to 2012 and then be automatically extended
 further without a formal process, as happened recently for .NET. They are
 also going to be able to increase registry fees for .COM by 7% per year,
 which, to put it in perspective, could potentially be a $2 increase four
 years from now.

So the deal does indeed make sense from a business perspective.  Thanks.

 Two possible explanations:

 2+2=5, right? :)

Oops. 8-)


Re: Scalability issues in the Internet routing system

2005-10-25 Thread Christopher L. Morrow

On Mon, 24 Oct 2005, Blaine Christian wrote:


 
 
  As of the last time that I looked at it (admittedly quite awhile
  ago), something like 80% of the forwarding table had at least one
  hit per minute.  This may well have changed given the number of
  traffic engineering prefixes that are circulating.
 
  Tony
 

 Yea, but that's just me pinging everything and google and yahoo
 fighting over who has the most complete list of x rated sites.

and this probably depends greatly on the network, user-population,
business involved. Is it even a metric worth tracking?


Re: Scalability issues in the Internet routing system

2005-10-25 Thread Valdis . Kletnieks
On Tue, 25 Oct 2005 16:28:05 -, Christopher L. Morrow said:
 On Mon, 24 Oct 2005, Blaine Christian wrote:
  Yea, but that's just me pinging everything and google and yahoo
  fighting over who has the most complete list of x rated sites.
 
 and this probably depends greatly on the network, user-population,
 business involved. Is it even a metric worth tracking?

It's a fight for eyeballs, isn't it?  Routing table hits caused by spidering
from search engines will give a good indication of what percentage of the
address space the spiders are covering.  Of course, you need views from
a number of places, and some adjusting for the fact that the webservers are
usually clumped in very small pockets of address space.

On the other hand, if it can be established that 80% of the routing table
is hit every N minutes (which would tend to argue against caching a very
small subset), but that the vast majority of those hits are just spiders,
then a cache miss may not be as important as we thought...

Anybody got actual measured numbers on how much of the hits are just spiders
and Microsoft malware scanning for vulnerable hosts?





IRR Coordination mailing list

2005-10-25 Thread Larry Blunk



 This IRR Coordination mailing list was mentioned this morning during the
BGP Filtering talk.  We'd like to invite anyone interested in improving the
trust, consistency, and coordination of IRRs to join.  The archive and
subscription details can be found at

http://www.merit.edu/mail.archives/irrc/

  -Larry Blunk
   Merit



Re: Scalability issues in the Internet routing system

2005-10-25 Thread Christopher L. Morrow

On Tue, 25 Oct 2005 [EMAIL PROTECTED] wrote:

 On Tue, 25 Oct 2005 16:28:05 -, Christopher L. Morrow said:
  On Mon, 24 Oct 2005, Blaine Christian wrote:
   Yea, but that's just me pinging everything and google and yahoo
   fighting over who has the most complete list of x rated sites.
 
  and this probably depends greatly on the network, user-population,
  business involved. Is it even a metric worth tracking?

 It's a fight for eyeballs, isn't it?  Routing table hits caused by spidering
 from search engines will give a good indication of what percent of the

oops, I should not have replied to Blaine/Tony but directly to Tony's
message :( The real question was:

If the percentage of hits is dependent on 'user population', 'business',
and 'network', is it even worth measuring for the purpose of designing the
device/protocol?

Unless of course you want a 'sport' and 'offroad' switch on your router :)


Re: What is multihoming was (design of a real routing v. endpoint id seperation)

2005-10-25 Thread Crist Clark


Robert Bonomi wrote:

From [EMAIL PROTECTED]  Mon Oct 24 15:33:02 2005
Date: Mon, 24 Oct 2005 13:31:17 -0700
Subject: Re: What is multihoming was (design of a real routing v. endpoint id
seperation)

Stephen Sprunk wrote:
[snip]



Other people use this term in very different ways. To some people
it means having multiple IP addresses bound to a single
network interface. To others it means multiple websites on one
server.



That is virtual hosting in a NANOG context.  Some undereducated MCSEs 
might call it multihoming, but let's not endorse that here.


Unfortunately, this is a common and standards-blessed way to refer to
any host with multiple interfaces/addresses (real or virtual). For example,
the Terminology section (1.1.3) of RFC1122, "Requirements for
Internet Hosts -- Communication Layers", says,

 Multihomed
  A host is said to be multihomed if it has multiple IP
  addresses.  For a discussion of multihoming, see Section
  3.3.4 below.




*sigh*  Multi-homing simply means 'having external connections to more than
one network' -- be it a network with multiple, disjoint ingress/egress paths,
or a host with interfaces (real or virtual) on distinct LAN subnets (even if
those subnets are aggregated into a single net somewhere upstream).

A host with multiple addresses within the _same_ netblock/netmask _should_
_not_ be called multi-homed (because there is only one path to that host);
it is simply a single-homed host with multiple identities.  It might be
called poly-ip-any or some such.  <grin>


Depends who you ask. Again, RFC1122 says (section 1.1.1),

 A host is generally said to be multihomed if it has more than
 one interface to the same or to different networks.

And also section 3.3.4.1,

A multihomed host has multiple IP addresses, which we may
think of as logical interfaces.  These logical interfaces
may be associated with one or more physical interfaces, and
these physical interfaces may be connected to the same or
different networks.

As far as a multihomed host is concerned, RFC1122 sure seems to call
anything with multiple IPs multihomed.  Being multihomed is a trait of the
host, independent of any network topology around it.

But whatever. It just means people need to be clear about what they are
talking about when they say multihomed. As this thread shows, there is no
clear agreement on the precise meaning.
--
Crist J. Clark                    [EMAIL PROTECTED]
Globalstar Communications         (408) 933-4387



Re: ICANN and Verisign settle over SiteFinder

2005-10-25 Thread Todd Vierling

On Tue, 25 Oct 2005, Florian Weimer wrote:

  Two possible explanations:
 
  2+2=5, right? :)

 Oops. 8-)

<tongue location=cheek>

No, you got it right.  The [third] option at the end, "play nice", has only
a passing association with the realm of possibility.

</tongue>

-- 
-- Todd Vierling [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED]


Re: ICANN and Verisign settle over SiteFinder

2005-10-25 Thread John Levine

I don't understand what VeriSign receives in return for their kowtow
(under the agreement, they basically waive any right to criticize
ICANN's role).

As someone else noted, a perpetual cash cow in .COM with a 7%/year
escalator clause.

  * ICANN signalled a positive outcome of a future Sitefinder review
under the new process.

Nope, there's a complex process with outside experts to review any
new proposed Sitefinder-like thing.

  * ICANN promised to grant VeriSign the DNSSEC root and .ARPA
maintenance without tender (the Root Server Management Transition
Agreement goes in that direction; actually, the .ARPA stuff is
the interesting one).

My reading is the opposite: ICANN will create the root zone now.

  * VeriSign has recognized that they couldn't win in court, and
suddenly want to play nice.

Quite possibly, and don't be silly, respectively.  More concretely, they
probably decided they were unlikely to win more than this agreement gives them.

R's,
John




fleet.navy.mil DNS / network ops contact please

2005-10-25 Thread Suresh Ramasubramanian

Could a DNS ops contact for any of these hosts please email me off-list
to help troubleshoot a network reachability / DNS lookup issue from our
servers?

thanks
-srs

ccsg3.navy.mil. 30M IN NS   ns2.fleet.navy.mil.
ccsg3.navy.mil. 30M IN NS   dnsmail.uar.navy.mil.
ccsg3.navy.mil. 30M IN NS   ns1.fleet.navy.mil.

--
Suresh Ramasubramanian ([EMAIL PROTECTED])


Re: Scalability issues in the Internet routing system

2005-10-25 Thread Rubens Kuhl Jr.

Assume you have determined that a percentage (20%, 80%, whatever) of
the routing table is really used over a given time period. If you
design a forwarding system that can only sustain its rated packets per second
for those most-used routes, all you need to DDoS it is a zombie network
that sends packets to all the other destinations... rate-limiting and
dampening would probably come into play, and a new arms race would
start, killing operators' ability to quickly renumber sites or entire
networks and creating new troubleshooting issues for network operators.

Isn't it just simpler to forward at line rate?  IP lookups are fast
nowadays, thanks to algorithmic and architectural improvements... even
packet classification (the n-tuple version of the IP lookup
problem) is not that hard anymore. Algorithms can be updated on
software-based routers, and performance gains far exceed Moore's Law
and projected prefix growth rates... and routers that cannot cope with
that can always be relegated to handling IGP-only routes with a default
gateway pointing to a router that can keep up with full routing.
(Actually, hardware-based routers built around limited-size CAMs are more
vulnerable to obsolescence from routing table growth than software ones.)
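
(To make the "lookups are fast" point concrete, a toy Python sketch of
longest-prefix match on a binary trie - prefixes and next hops are invented,
and real routers use compressed/multibit tries or TCAMs - but note the
lookup cost is bounded by the address length, not by the number of prefixes
installed:)

import ipaddress

class Fib:
    # Toy longest-prefix-match table: a binary trie keyed on address bits.
    def __init__(self):
        self.root = {}

    def add(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[:net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["nh"] = next_hop

    def lookup(self, addr):
        bits = format(int(ipaddress.ip_address(addr)), "032b")
        node, best = self.root, self.root.get("nh")
        for b in bits:
            node = node.get(b)
            if node is None:
                break
            best = node.get("nh", best)   # remember the longest match so far
        return best

fib = Fib()
fib.add("0.0.0.0/0", "default-gw")        # invented prefixes and next hops
fib.add("192.0.2.0/24", "if-peer-a")
fib.add("192.0.2.128/25", "if-peer-b")
print(fib.lookup("192.0.2.200"))          # if-peer-b (longest match wins)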

Let's celebrate the death of ip route-cache, not resurrect this fragility.


Rubens





On 10/24/05, Alexei Roudnev [EMAIL PROTECTED] wrote:

 One question - what percentage of the routing table of any particular
 router is REALLY used during, say, one week?

 I have a strong impression that the answer will not be more than 20% even
 in the biggest backbones, and will more likely be below 1% in the rest of
 the world.  Which leaves a huge space for optimization.


 - Original Message -
 From: Daniel Senie [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, October 18, 2005 9:50 AM
 Subject: Re: Scalability issues in the Internet routing system


 
  At 11:30 AM 10/18/2005, Andre Oppermann wrote:
 
  I guess it's time to have a look at the actual scalability issues we
  face in the Internet routing system.  Maybe the area of action becomes
  a bit more clear with such an assessment.
  
  In the current Internet routing system we face two distinctive
 scalability
  issues:
  
  1. The number of prefixes*paths in the routing table and interdomain
  routing system (BGP)
  
  This problem scales with the number of prefixes and available paths
  to a particular router/network in addition to constant churn in the
  reachability state.  The required capacity for a router's control
  plane is:
  
capacity = prefix * path * churnfactor / second
  
  I think it is safe, even with projected AS and IP uptake, to assume
  Moore's law can cope with this.
 
  Moore will keep up reasonably with both the CPU needed to keep BGP
  perking, and with memory requirements for the RIB, as well as other
  non-data-path functions of routers.
 
 
 
  2. The number of longest match prefixes in the forwarding table
  
  This problem scales with the number of prefixes and the number of
  packets per second the router has to process under full or expected
  load.  The required capacity for a router's forwarding plane is:
  
capacity = prefixes * packets / second
  
  This one is much harder to cope with as the number of prefixes and
  the link speeds are rising.  Thus the problem is multiplicative to
  quadratic.
  
  Here I think Moore's law doesn't cope with the increase in projected
  growth in longest prefix match prefixes and link speed.  Doing longest
  prefix matches in hardware is relatively complex.  Even more so for
  the additional bits in IPv6.  Doing perfect matches in hardware is
  much easier though...
 
  Several items regarding FIB lookup:
 
  1) The design of the FIB need not be the same as the RIB. There is
  plenty of room for creativity in router design in this space.
  Specifically, the FIB could be dramatically reduced in size via
  aggregation. The number of egress points (real or virtual) and/or
  policies within a router is likely FAR smaller than the total number
  of routes. It's unclear if any significant effort has been put into this.
 
  2) Nothing says the design of the FIB lookup hardware has to be
  longest match. Other designs are quite possible. Again, some
  creativity in design could go a long way. The end result must match
  that which would be provided by longest-match lookup, but that
  doesn't mean the ASIC/FPGA or general purpose CPUs on the line card
  actually have to implement the mechanism in that fashion.
 
  3) Don't discount novel uses of commodity components. There are fast
  CPU chips available today that may be appropriate to embed on line
  cards with a bit of firmware, and may be a lot more cost effective
  and sufficiently fast compared to custom ASICs of a few years ago.
  The definition of what's hardware and what's software on line cards
  need not be entirely defined by whether the design is executed
  entirely by a hardware engineer or a software engineer.
 
  Finally, don't 
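
(On the FIB-aggregation idea in item 1 of the message quoted above, a
minimal, hypothetical Python sketch of the kind of reduction meant: sibling
prefixes that share a next hop can be folded into their parent before the
table is pushed to the line card. Prefixes and next hops are invented.)

import ipaddress

rib = {                                   # invented prefixes and next hops
    "192.0.2.0/25": "if-peer-a",
    "192.0.2.128/25": "if-peer-a",        # same egress as its sibling
    "198.51.100.0/25": "if-peer-a",
    "198.51.100.128/25": "if-peer-b",     # different egress: cannot merge
}

def aggregate(table):
    # Fold sibling prefixes with identical next hops into their parent.
    # Only valid when no other more-specific route overrides part of the
    # parent -- a real implementation has to check for that.
    entries = {ipaddress.ip_network(p): nh for p, nh in table.items()}
    changed = True
    while changed:
        changed = False
        for net in list(entries):
            parent = net.supernet()
            sibling = next(s for s in parent.subnets() if s != net)
            if sibling in entries and entries[sibling] == entries[net]:
                nh = entries.pop(net)
                del entries[sibling]
                entries[parent] = nh
                changed = True
                break
    return {str(n): nh for n, nh in entries.items()}

print(aggregate(rib))
# {'198.51.100.0/25': 'if-peer-a', '198.51.100.128/25': 'if-peer-b',
#  '192.0.2.0/24': 'if-peer-a'}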

Re: Scalability issues in the Internet routing system

2005-10-25 Thread Alexei Roudnev

Vice versa. A DDoS attack will never work this way, because such a router
will (de facto) prioritize long-established streams over new and random ones,
so it will not notice the DDoS attack at all - just some DDoS packets will be
delayed or lost.

You do not need to forward 100% of packets at line-card rate; forwarding 95%
of packets at card rate and pushing the rest (with possible delays) through
the central CPU can work well enough.

It is all about tricks and optimizations - fast routing is a well-understood
problem and can be optimized in many ways.
Until now it has not been necessary; when it becomes necessary, it will be
done within half a year.
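
(A toy Python sketch of the fast-path/slow-path split being described: a
small per-line-card cache of recently used destination prefixes answers at
card rate, and everything else is punted to a slower full lookup. Cache
size, prefixes, and next hops are invented.)

from collections import OrderedDict

FULL_TABLE = {"192.0.2.0/24": "if-a", "198.51.100.0/24": "if-b"}   # invented

class LineCard:
    def __init__(self, cache_size=4):
        self.cache = OrderedDict()          # destination /24 -> next hop (LRU)
        self.cache_size = cache_size

    def slow_lookup(self, dst):
        # Stand-in for a full longest-prefix match done by the central CPU.
        prefix = dst.rsplit(".", 1)[0] + ".0/24"
        return FULL_TABLE.get(prefix, "default-gw")

    def forward(self, dst):
        prefix = dst.rsplit(".", 1)[0] + ".0/24"
        if prefix in self.cache:            # fast path: established destination
            self.cache.move_to_end(prefix)
            return self.cache[prefix]
        nh = self.slow_lookup(dst)          # slow path: new/random destination
        self.cache[prefix] = nh
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return nh

lc = LineCard()
print(lc.forward("192.0.2.7"))    # miss: goes to the slow path, then cached
print(lc.forward("192.0.2.9"))    # hit on the cached /24: fast path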


- Original Message - 
From: Rubens Kuhl Jr. [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 25, 2005 9:21 PM
Subject: Re: Scalability issues in the Internet routing system



Assume you have determined that a percentage (20%, 80%, whatever) of
the routing table is really used over a given time period. If you
design a forwarding system that can only sustain its rated packets per second
for those most-used routes, all you need to DDoS it is a zombie network
that sends packets to all the other destinations... rate-limiting and
dampening would probably come into play, and a new arms race would
start, killing operators' ability to quickly renumber sites or entire
networks and creating new troubleshooting issues for network operators.

Isn't it just simpler to forward at line rate?  IP lookups are fast
nowadays, thanks to algorithmic and architectural improvements... even
packet classification (the n-tuple version of the IP lookup
problem) is not that hard anymore. Algorithms can be updated on
software-based routers, and performance gains far exceed Moore's Law
and projected prefix growth rates... and routers that cannot cope with
that can always be relegated to handling IGP-only routes with a default
gateway pointing to a router that can keep up with full routing.
(Actually, hardware-based routers built around limited-size CAMs are more
vulnerable to obsolescence from routing table growth than software ones.)

Let's celebrate the death of ip route-cache, not resurrect this fragility.


Rubens





On 10/24/05, Alexei Roudnev [EMAIL PROTECTED] wrote:

 One question - what percentage of the routing table of any particular
 router is REALLY used during, say, one week?

 I have a strong impression that the answer will not be more than 20% even
 in the biggest backbones, and will more likely be below 1% in the rest of
 the world.  Which leaves a huge space for optimization.


 - Original Message -
 From: Daniel Senie [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, October 18, 2005 9:50 AM
 Subject: Re: Scalability issues in the Internet routing system


 
  At 11:30 AM 10/18/2005, Andre Oppermann wrote:
 
  I guess it's time to have a look at the actual scalability issues we
  face in the Internet routing system.  Maybe the area of action becomes
  a bit more clear with such an assessment.
  
  In the current Internet routing system we face two distinctive
 scalability
  issues:
  
  1. The number of prefixes*paths in the routing table and interdomain
  routing system (BGP)
  
  This problem scales with the number of prefixes and available paths
  to a particular router/network in addition to constant churn in the
  reachability state.  The required capacity for a router's control
  plane is:
  
capacity = prefix * path * churnfactor / second
  
  I think it is safe, even with projected AS and IP uptake, to assume
  Moore's law can cope with this.
 
  Moore will keep up reasonably with both the CPU needed to keep BGP
  perking, and with memory requirements for the RIB, as well as other
  non-data-path functions of routers.
 
 
 
  2. The number of longest match prefixes in the forwarding table
  
  This problem scales with the number of prefixes and the number of
  packets per second the router has to process under full or expected
  load.  The required capacity for a router's forwarding plane is:
  
capacity = prefixes * packets / second
  
  This one is much harder to cope with as the number of prefixes and
  the link speeds are rising.  Thus the problem is multiplicative to
  quadratic.
  
  Here I think Moore's law doesn't cope with the increase in projected
  growth in longest prefix match prefixes and link speed.  Doing longest
  prefix matches in hardware is relatively complex.  Even more so for
  the additional bits in IPv6.  Doing perfect matches in hardware is
  much easier though...
 
  Several items regarding FIB lookup:
 
  1) The design of the FIB need not be the same as the RIB. There is
  plenty of room for creativity in router design in this space.
  Specifically, the FIB could be dramatically reduced in size via
  aggregation. The number of egress points (real or virtual) and/or
  policies within a router is likely FAR smaller than the total number
  of routes. It's unclear if any significant effort has been put into
this.
 
  2) Nothing says the design of the FIB lookup hardware has to be