Rather than dragging this on any further, I'll just restate my basic
point, and then drop it, which is that optimization must be viewed
holistically, at a system-wide (and network-wide) level.
The fact is that the folks implementing the most widely-used protocol
code choose their optimizations carefully. There is a cost to every
optimization, in the time and effort (and money) to do it, and in the
additional complexity that inevitably comes with it, which impacts
stability and maintainability.
There are obvious, easy optimizations (don't run SPF when a leaf node
changes, don't run SPF if you know that the LSA contents didn't
change, etc.) that everybody does. However, when you get into the
more arcane stuff (partial SPFs and the like), where the code is
decidedly nontrivial and the systemic benefits are marginal, it
becomes much more murky. In the case of large routers, where the
forwarding engine is typically somewhat decoupled from the route
engine, the value of such optimizations is marginal indeed.
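The "don't run SPF if the LSA contents didn't change" optimization
mentioned above can be sketched roughly as follows. This is a
hypothetical illustration, not any vendor's actual code: the field
names (`age`, `seq`, `checksum`, `ls_id`, `adv_router`) and the
dict-based database are assumptions for the sake of the example. The
idea is simply to compare the new LSA against the stored copy while
ignoring the fields that change on every periodic refresh:

```python
# Hypothetical sketch: skip SPF when a received LSA is only a
# periodic age refresh. Field names and the dict-keyed database
# are illustrative assumptions, not a real implementation.

def lsa_body_changed(old_lsa, new_lsa):
    """Compare two copies of an LSA, ignoring the fields that are
    expected to differ on every refresh (age, sequence, checksum)."""
    volatile = {"age", "seq", "checksum"}
    old_body = {k: v for k, v in old_lsa.items() if k not in volatile}
    new_body = {k: v for k, v in new_lsa.items() if k not in volatile}
    return old_body != new_body

def should_schedule_spf(db, new_lsa):
    """Install the LSA and report whether it warrants an SPF run."""
    key = (new_lsa["type"], new_lsa["ls_id"], new_lsa["adv_router"])
    old_lsa = db.get(key)
    db[key] = new_lsa
    # A brand-new LSA always triggers SPF; a refresh whose contents
    # are identical does not.
    return old_lsa is None or lsa_body_changed(old_lsa, new_lsa)
```

Since most received LSAs are refreshes, this one cheap comparison
avoids the vast majority of SPF runs.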
I'm happier, and my customers are happier, when I can give them a box
that works and is stable and doesn't break when the code gets
changed. Could I shave cycles off of the process? Absolutely. Do I
care? No, because I'm not in any danger of running out of cycles,
and the benefit of shaving those cycles is low, and I worry about
optimizing the cases that really *do* matter. This is what's known
as "engineering."
--Dave
On Aug 11, 2006, at 2:40 PM, Erblichs wrote:
Curtis Villamizar and Mr Katz,
First, let me state, as I did earlier, that I do feel more
resources dealing with the handling of Update packets containing
refresh-age LSAs WOULD yield a higher benefit. And since we are not
able to speed up a REMOTE router's execution of various tasks, we
need to concentrate on our own local tasks to allow US to LOCALLY
converge ASAP.
But we are in this forwarding-table vs. OSPF-routing-table
cost/benefit analysis.
Let's see if we agree on these things.
1) Most LSAs received are just age-refresh LSAs; thus they
have no effect on SPF calcs.
2) Most SPF calcs do not result in a lower-cost route or,
in some implementations, an additional equal-cost route.
3) Even if an OSPF routing-table calculation yields a better-cost
route than before, that newer route may still not be used,
due to administrative distances.
Then the forwarding table mods are rarely even done!
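Point 3 above can be made concrete with a small sketch. This is a
hypothetical illustration: the distance values are the commonly cited
defaults (static 1, OSPF 110), and the route dicts and function name
are assumptions for the example. A better OSPF cost still loses to a
route learned from a protocol with a lower administrative distance,
so no forwarding-table modification happens:

```python
# Hypothetical sketch of point 3: administrative distance is compared
# before cost, so a cheaper OSPF route need not displace an installed
# route. Distance values are common defaults, used illustratively.

ADMIN_DISTANCE = {"static": 1, "ospf": 110}

def fib_update_needed(current, candidate):
    """Return True only if candidate should replace the installed
    route (current may be None if the prefix is not installed)."""
    if current is None:
        return True
    cand_d = ADMIN_DISTANCE[candidate["proto"]]
    curr_d = ADMIN_DISTANCE[current["proto"]]
    if cand_d != curr_d:
        return cand_d < curr_d   # lower admin distance wins outright
    return candidate["cost"] < current["cost"]
```

With a static route installed, even a dramatically improved OSPF cost
leaves the forwarding table untouched.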
So, why would you want to spend time to decrease the
latency of an operation on such an infrequent event?
If sub-second hellos or an equivalent operation were used, I would
first verify that neighbor lookups are efficient.
The most frequent/repeated operations, if sped up, should
yield the best results.
So, if we aggregate the number of repeated operations, we
need to do one lookup per received LSA within our routing table.
Most of these will be age refreshes. So, first, the
LSA lookup is stressed the most.
Then, if we had extremely fast SPF-type calcs based on different
LSA types / triggers, we COULD re-run repeated calcs without forcing
a delay. A delay is standard in our industry. However, if we agree
with #2 and #3, then the only reason for the delay is slower
SPF calcs. I do agree that some minimal delay, enough to take in
all new LSAs within a single Update (given that a full adjacency has
already been established), should be implemented.
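The minimal-delay idea above is what is usually implemented as an SPF
hold-down timer: changes are coalesced, and one SPF runs after a short
delay. The sketch below is a hypothetical illustration; the class
name, the 50 ms delay, and the use of `threading.Timer` are
assumptions chosen for brevity, not a description of any real router:

```python
# Hypothetical sketch of an SPF delay (hold-down) timer: every
# content-changing LSA calls lsa_changed(), but all LSAs arriving
# within the delay window are batched into a single SPF run.
# The 50 ms default is an illustrative value, not a standard.
import threading

class SpfScheduler:
    def __init__(self, run_spf, delay=0.05):
        self._run_spf = run_spf
        self._delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def lsa_changed(self):
        """Called once per content-changing LSA; coalesces triggers."""
        with self._lock:
            if self._timer is None:
                self._timer = threading.Timer(self._delay, self._fire)
                self._timer.start()

    def _fire(self):
        with self._lock:
            self._timer = None
        self._run_spf()
```

Ten triggers inside the window produce one SPF run, which is exactly
the batching the paragraph argues for.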
Lastly, bringing back up the forwarding table. The table COULD be
considered somewhat like a cache. Specific atomic operations are
normally implemented because changes are to single entries. An
operation could simply invalidate an entry. This is a single small
amount of data that needs to be transferred, and even slow PCI buses
are capable of handling infrequent data with minimal latency. Most of
this is done in hardware. Sorry, but it is a don't-care whether there
are multiple readers for an entry if that entry needs to be
invalidated or updated. We don't wait for the readers to finish; we
need to pre-empt them NOW.
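The cache-like invalidation described above can be sketched in a few
lines. This is a hypothetical software model of what the paragraph
says happens in hardware: the `FibEntry` structure, the `valid` flag,
and the function names are assumptions for illustration only:

```python
# Hypothetical sketch of treating a forwarding-table entry like a
# cache line: one small write flips a valid flag, and readers are
# never waited on. Structure and field names are illustrative.

class FibEntry:
    __slots__ = ("prefix", "next_hop", "valid")

    def __init__(self, prefix, next_hop):
        self.prefix = prefix
        self.next_hop = next_hop
        self.valid = True

def invalidate(entry):
    # A single small write. Readers that already fetched next_hop
    # finish with stale data; new lookups see the entry as invalid
    # immediately -- no waiting, no reader coordination.
    entry.valid = False

def lookup(fib, prefix):
    entry = fib.get(prefix)
    if entry is None or not entry.valid:
        return None   # punt to the slow path / route engine
    return entry.next_hop
```

The invalidation is one flag flip, which matches the claim that even a
slow bus can carry such an infrequent, tiny update with minimal
latency.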
Thus, given appropriate resources, I would create a set of
lmbench-style micro-benchmarks against the routing table and identify
which items are given higher priorities. I would not be surprised if
these items had already been given quite a bit of fine tuning.
The only major issues are the best-case versus worst-case numbers,
and whether those best-case numbers can be improved while not
increasing the worst-case numbers.
Lastly, there is the assumption of whether BGP is in the environment.
If it is, IMO TCP would be the limiting factor. TCP will not allow a
large burst of data on a periodically idle connection. Most
implementations will fall into a slow-start methodology. This is the
same problem with LDP, even when a path has been established with a
specified bandwidth. TCP will most likely get in the way initially,
and only after a sustained number of segments have been transmitted
will it allow the window to increase by one segment per round-trip
time (RTT). This is known as the congestion avoidance (CA) phase.
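The window growth described above can be modeled with back-of-the-
envelope arithmetic. This is a deliberately simplified sketch (no
loss, delayed ACKs, or restart-window details): exponential growth in
slow start up to a threshold, then linear growth of one segment per
RTT in congestion avoidance. The `ssthresh` and initial-window values
are illustrative assumptions:

```python
# Hypothetical model of TCP congestion-window growth after an idle
# period: doubling per RTT in slow start, +1 segment per RTT in
# congestion avoidance. Parameter values are illustrative only.

def cwnd_after(rtts, ssthresh=16, initial=1):
    """Congestion window (in segments) after a number of RTTs."""
    cwnd = initial
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2    # slow start: window doubles each RTT
        else:
            cwnd += 1    # congestion avoidance: +1 segment per RTT
    return cwnd
```

Under these assumptions a connection needs several RTTs just to reach
the threshold, and past that the pipe reopens only one segment per
RTT, which is why a burst of BGP or LDP data after idle time is
throttled regardless of the provisioned path bandwidth.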
Mitchell Erblich
-------------------
Curtis Villamizar wrote:
In message <[EMAIL PROTECTED]>
Erblichs writes:
The biggest reason for this is because the forwarding
tables need faster memory. If we assume that forwarding
table memory is 4x faster than routing table memory,
Nice try but ...
The forwarding memory is on another card in a reasonably big router,
and therefore the information has to be transferred from one processor
to another and then installed in the forwarding memory.
Also the forwarding memory is primarily used for forwarding and in a
big busy router not too many cycles are left over for the writes that
update the forwarding memory due to the reads that are occurring
concurrently. In some architectures this is minimized. For example,
some may use memory caching techniques, others dual memory banks, etc.
But perhaps the biggest factor is that the few hundred or few thousand
IGP routes are the basis for BGP next hops, so after they are computed
the few hundred thousand BGP routes and forwarding entries are mapped
onto the SPF results. Since there are usually more than 100 times as
many BGP routes as IGP routes, doubling the speed of the SPF has no
effect.
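The arithmetic behind that claim is worth spelling out. The sketch
below is illustrative: the per-route costs are made-up equal weights
chosen only to expose the ratio, since the real constants depend on
the implementation:

```python
# Illustrative Amdahl-style arithmetic: when mapping ~100x as many
# BGP routes dominates total work, halving SPF time barely matters.
# The unit costs below are made-up numbers for the sake of the ratio.

def total_work(igp_routes, bgp_routes, spf_cost=1.0, map_cost=1.0):
    """Total convergence work: SPF over IGP routes plus mapping
    every BGP route onto the SPF result."""
    return igp_routes * spf_cost + bgp_routes * map_cost

base = total_work(1_000, 100_000)                 # 101,000 units
faster_spf = total_work(1_000, 100_000, spf_cost=0.5)  # 100,500 units
# Doubling SPF speed improves the total by about half a percent.
```

Under these assumptions, even an infinitely fast SPF could shave at
most about 1% off the total, which is the sense in which doubling SPF
speed "has no effect."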
In any case, Dave is right. The SPF typically takes far less time
than the things that happen as a result of the SPF.
Curtis
ps - OTOH the speed of the MPLS/TE CSPF is very significant.
_______________________________________________
OSPF mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/ospf