A simple end-to-end solution is to send multiple copies of the data
over multiple paths. Mission-critical applications over SONET already
do this, and it is said that, even if a path fails, not a single bit
is lost.
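A minimal sketch of that 1+1 idea at the transport layer (all
addresses and ports below are made up, and the socket setup is only
illustrative, not anyone's deployed mechanism): the sender writes
every datagram to two sockets bound to different upstream paths, and
the receiver keeps the first copy of each sequence number, so losing
either path loses no data.

```python
import socket
import struct

# Hypothetical local addresses, one bound per upstream path.
PATH_A = ("192.0.2.10", 0)     # egress via provider A (assumed)
PATH_B = ("198.51.100.10", 0)  # egress via provider B (assumed)
DEST = ("203.0.113.5", 9000)   # hypothetical receiver

def make_senders():
    socks = []
    for addr in (PATH_A, PATH_B):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(addr)  # force this socket's traffic onto one path
        socks.append(s)
    return socks

def send_duplicated(socks, seq, payload):
    # 1+1 protection: the identical datagram goes down both paths.
    packet = struct.pack("!I", seq) + payload
    for s in socks:
        s.sendto(packet, DEST)

def receive_deduplicated(sock, seen):
    # The receiver delivers the first copy of each sequence number
    # and silently drops the duplicate arriving on the other path.
    packet, _ = sock.recvfrom(65535)
    seq = struct.unpack("!I", packet[:4])[0]
    if seq in seen:
        return None
    seen.add(seq)
    return packet[4:]
```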
No.
- kurtis -
Kurt Erik Lindqvist wrote:
This I thought was more or less standard. I was talking about
less than 100ms convergence.
Dude, this requires a keepalive or hello at 10ms intervals and a 25~30
ms RTT. You might need to talk to a guy named Albert Einstein; he wrote
interesting RFCs about the speed of light.
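To make the arithmetic behind that concrete, here is a
back-of-the-envelope sketch using the figures above (the
three-missed-hellos dead interval is an assumed, typical multiplier,
not a number from the thread):

```python
# Failure-detection budget for sub-100 ms convergence,
# using the figures quoted in the message above.
hello_interval_ms = 10   # hello sent every 10 ms
missed_hellos = 3        # assumed dead-interval multiplier
rtt_ms = 27.5            # midpoint of the quoted 25~30 ms RTT

one_way_ms = rtt_ms / 2  # propagation delay, bounded by the speed of light
detection_ms = missed_hellos * hello_interval_ms + one_way_ms
print(f"failure detected after ~{detection_ms:.0f} ms")  # ~44 ms

# That leaves well under 60 ms of a 100 ms budget for the actual
# reroute and convergence work, which is why sub-100 ms end-to-end
# convergence over wide-area RTTs is so hard.
```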
There is no technical reason why a single service provider network can
do better than a similar network that consists of several smaller ones.
See Abha and Craig's paper on BGP convergence. Personally I would go
for a large provider with multiple connections.
Based on this paper? What I see is
Folks,
I am on multi6 and mostly just listen, sending clarification questions
to various participants. But a short comment here.
Has the end-to-end principle failed to teach us anything?
Reliability begins and ends in the end hosts. If each host is
connected over two service providers, there are multiple independent
paths between any two hosts, and the hosts themselves can fail over.
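One way to picture that end-to-end failover (a sketch only, with
hypothetical addresses, not a description of any deployed mechanism):
a host that knows one destination address per provider can simply try
each in turn, so a single provider failure costs one connection
attempt rather than reachability.

```python
import socket

# Hypothetical addresses for the same peer, one per upstream provider.
CANDIDATES = [("203.0.113.5", 443),   # reached via provider A (assumed)
              ("198.51.100.7", 443)]  # reached via provider B (assumed)

def connect_with_failover(candidates, timeout=0.2):
    # End-to-end failover: the host, not the routing system, probes
    # each path and uses the first one that answers.
    last_err = None
    for addr in candidates:
        try:
            return socket.create_connection(addr, timeout=timeout)
        except OSError as err:
            last_err = err  # this provider's path is down; try the next
    raise last_err

# Usage: conn = connect_with_failover(CANDIDATES)
```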
I'll take one particular issue, and Cc multi6, as I believe it is a
very important thing to consider.
On Fri, 14 Feb 2003, Alan E. Beard wrote:
Most of the end-user-network managers among my clients now multihome,
and will continue to require multihomed service in future. In every
case where
In a perfect world I'm sure I'd agree with you. In real life, however,
the fact of the matter is that customers want multihoming, and it
doesn't matter to the customers if that is a problematic approach that
doesn't scale for the SPs. It doesn't even matter if it's technically
the best solution for them.