RE: PI space (was: Stupid NAT tricks and how to stop them)

2006-03-30 Thread Noel Chiappa
> From: "Michel Py" <[EMAIL PROTECTED]>

>> Needless to say, the real-time taken for this process to complete
>> - i.e. for routes to a particular destination to stabilize, after a
>> topology change which affects some subset of them - is dominated by
>> the speed-of-light transmission delays across the Internet fabric. You
>> can make the speed of your processors infinite and it won't make
>> much of a difference.

> The past stability issues in BGP have little to do with latency and
> everything to do with processing power and bandwidth available to
> propagate updates.

The past stability issues had a number of causes, including protocol
implementation issues, IIRC.

In any event, I was speaking of the present/future, not the past. Yes, *in
the past*, processing power and bandwidth limits were an *additional* issue.
*Now*, however, the principal term in stabilization time is propagation
delay.

> In other words, it does not make any difference in the real world if
> you're using a 150ms oceanic cable or a 800ms geosynchronous satlink as
> long as the pipe is big enough and there are enough horses under the
> hood.

If you think there aren't still stability issues, why don't you try getting
rid of all the BGP dampening stuff, then? Have any major ISPs out there done
that?
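For readers who haven't touched it: the "dampening stuff" Noel refers to is route-flap dampening (RFC 2439), which penalizes a route on each flap, decays the penalty exponentially, and suppresses the route while the penalty sits above a threshold. A minimal sketch, using illustrative parameters rather than any vendor's defaults:

```python
import math

# Route-flap dampening sketch (RFC 2439 style). The parameter values
# below are illustrative assumptions, not vendor defaults.
FLAP_PENALTY = 1000      # penalty added per route flap
SUPPRESS_LIMIT = 2000    # suppress the route above this penalty
REUSE_LIMIT = 750        # re-advertise once penalty decays below this
HALF_LIFE = 900.0        # seconds for the penalty to halve

def decayed(penalty: float, elapsed: float) -> float:
    """Exponentially decay a dampening penalty over `elapsed` seconds."""
    return penalty * 0.5 ** (elapsed / HALF_LIFE)

# A route that flaps three times, one minute apart, gets suppressed...
penalty = 0.0
for _ in range(3):
    penalty = decayed(penalty, 60.0) + FLAP_PENALTY
suppressed = penalty > SUPPRESS_LIMIT

# ...and stays unusable until the penalty decays back to the reuse limit.
wait = HALF_LIFE * math.log2(penalty / REUSE_LIMIT)
print(suppressed, round(penalty), round(wait))
```

The point of the mechanism is exactly the one Noel is making: operators deploy it because instability is still real enough to be worth trading away some reachability for.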

Noel

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


RE: PI space (was: Stupid NAT tricks and how to stop them)

2006-03-30 Thread Michel Py
> Noel Chiappa wrote:
> Needless to say, the real-time taken for this process to complete
> - i.e. for routes to a particular destination to stabilize, after
> a topology change which affects some subset of them - is dominated
> by the speed-of-light transmission delays across the Internet
> fabric. You can make the speed of your processors infinite and it
> won't make much of a difference.

This is total bull. The past stability issues in BGP have little to do
with latency and everything to do with processing power and bandwidth
available to propagate updates. In other words, it does not make any
difference in the real world if you're using a 150ms oceanic cable or a
800ms geosynchronous satlink as long as the pipe is big enough and there
are enough horses under the hood.

Only if we were shooting for sub-second global BGP convergence would the
speed of light matter.
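A back-of-the-envelope comparison makes this concrete. The AS-hop count and per-hop advertisement interval below are illustrative assumptions (the 30-second MRAI is the classic eBGP default), not measurements:

```python
# Raw propagation delay vs. BGP's per-hop advertisement timers
# (illustrative numbers, not measurements).
C_FIBER_KM_S = 200_000      # light in fiber travels at roughly 2/3 of c
HALF_EARTH_KM = 20_000      # antipodal great-circle distance

propagation_s = HALF_EARTH_KM / C_FIBER_KM_S   # ~0.1 s across the globe

AS_HOPS = 5                 # assumed AS-path length
MRAI_S = 30                 # classic eBGP MinRouteAdvertisementInterval
timer_s = AS_HOPS * MRAI_S  # worst-case wait imposed by timers alone

print(propagation_s, timer_s)   # 0.1 vs 150: timers dwarf light-speed delay
```

On these assumptions, protocol timers and processing account for minutes of convergence time before the ~100 ms of light-speed delay even registers.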


> Stephen Sprunk wrote:
> The IPv4 core is running around 180k routes today, and even
> the chicken littles aren't complaining the sky is falling.

I was about to make the same point. Ever heard whining about a 7500 with
RSP2s not being able to handle it? Yes. Ever heard about a decently
configured GSR not being able to handle it? No. Heard whining about
receiving a full table over a T1? Yes. Heard whining about receiving a
full table over an OC-48? No.
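To put rough numbers on the T1 vs. OC-48 comparison (the per-prefix byte count is an assumed average, since real UPDATE messages pack many prefixes into one message):

```python
# Rough transfer-time estimate for a full BGP table dump
# (illustrative sizes, not measured wire traces).
ROUTES = 180_000            # IPv4 core table size cited in this thread
BYTES_PER_ROUTE = 60        # assumed average on-the-wire cost per prefix
table_bits = ROUTES * BYTES_PER_ROUTE * 8

T1_BPS = 1_544_000          # T1 line rate
OC48_BPS = 2_488_320_000    # OC-48 line rate

print(table_bits / T1_BPS, table_bits / OC48_BPS)
# roughly a minute over a T1 vs. tens of milliseconds over an OC-48
```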

Anybody still filtering at /20 like everybody did a few years back?


> and the vendors can easily raise those limits if customers demand
> it (though they'd much prefer charging $1000 for $1 worth of RAM
> that's too old to work in a modern PC).

You're slightly exaggerating here. I remember paying $1,900 for 32MB of
RAM worth $50 on the street :-D

Michel.




Re: PI space (was: Stupid NAT tricks and how to stop them)

2006-03-29 Thread Stephen Sprunk

Thus spake "Noel Chiappa" <[EMAIL PROTECTED]>

> From: "Michel Py" <[EMAIL PROTECTED]>

>> We aren't *ever* going to give everyone PI space (at least, PI space
>> in whatever namespace the routers use to forward packets) ...
>> Routing (i.e. path-finding) algorithms simply cannot cope with
>> tracking 10^9 individual destinations (see prior message).

> I think you're dead wrong on this. This reasoning was valid with
> 10^8 Hz processors and 10^8 bytes of memory; it's no longer true
> with 10^11 or 10^12 Hz processors and memory (we're almost at
> 10^10 cheap ones).

> The last time I heard, the speed of light was still a constant. And the
> current routing architecture is based on distributed computation.
>
> I.e. router A does some computing, passes partial results to router B,
> which does some more computing, and in turn passes the partial
> results to router C.  After some amount of this back and forth across
> the network, the route is eventually computed and installed.
>
> Needless to say, the real-time taken for this process to complete - i.e.
> for routes to a particular destination to stabilize, after a topology
> change which affects some subset of them - is dominated by the
> speed-of-light transmission delays across the Internet fabric. You can
> make the speed of your processors infinite and it won't make much
> of a difference.


Nothing has changed here.  The propagation of an individual route is limited
by the speed of light (in fiber or copper), yes, but faster CPUs and bigger
memories mean that more of those routes can be propagating at the same time
with the same or less effect than a few years ago.


The IPv4 core is running around 180k routes today, and even the chicken 
littles aren't complaining the sky is falling.  Compare to how many routes 
were around pre-CIDR and the resulting chaos.  Routers have gotten much, 
much better since then, and in most cases they're using technology 5+ years 
behind the PC market (200MHz CPUs, SDRAM, etc.).  We'd have to seriously 
screw up to run afoul of today's limits, and the vendors can easily raise 
those limits if customers demand it (though they'd much prefer charging 
$1000 for $1 worth of RAM that's too old to work in a modern PC).


S

Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723             people.  Smart people surround themselves with
K5SSS                  smart people who disagree with them."  --Aaron Sorkin





PI space (was: Stupid NAT tricks and how to stop them)

2006-03-29 Thread Noel Chiappa
> From: "Michel Py" <[EMAIL PROTECTED]>

>> We aren't *ever* going to give everyone PI space (at least, PI space
>> in whatever namespace the routers use to forward packets) ...
>> Routing (i.e. path-finding) algorithms simply cannot cope with
>> tracking 10^9 individual destinations (see prior message).

> I think you're dead wrong on this. This reasoning was valid with 10^8
> Hz processors and 10^8 bytes of memory; it's no longer true with 10^11
> or 10^12 Hz processors and memory (we're almost at 10^10 cheap ones).

The last time I heard, the speed of light was still a constant. And the
current routing architecture is based on distributed computation.

I.e. router A does some computing, passes partial results to router B, which
does some more computing, and in turn passes the partial results to router C.
After some amount of this back and forth across the network, the route is
eventually computed and installed.
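The distributed computation described above can be sketched as a toy path-vector walk down a line of routers; this is a simplification for illustration, not a BGP implementation:

```python
# Toy path-vector propagation over a line of routers A -> B -> C,
# each of which records the partial result it received, prepends
# itself, and forwards the extended path to the next hop.
def propagate(origin: str, routers: list[str]) -> dict[str, list[str]]:
    """Return the path each router has learned toward `origin`."""
    routes: dict[str, list[str]] = {}
    path = [origin]
    for router in routers:
        routes[router] = list(path)   # partial result received so far
        path = [router] + path        # extend the path before forwarding
    return routes

routes = propagate("D", ["A", "B", "C"])
print(routes)
# {'A': ['D'], 'B': ['A', 'D'], 'C': ['B', 'A', 'D']}
```

Each step in the loop stands in for a store-and-forward hop across the network, which is exactly where the per-hop transmission delay accumulates.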

Needless to say, the real-time taken for this process to complete - i.e. for
routes to a particular destination to stabilize, after a topology change
which affects some subset of them - is dominated by the speed-of-light
transmission delays across the Internet fabric. You can make the speed of
your processors infinite and it won't make much of a difference.

Noel
