Hello NANOG!
Is anyone having routing issues or packet loss with MCI/UUNet today? I have
an AS701 connection at my organization, and we've had thousands of customer
calls starting at about 2:13 AM CDT. We've shut down 701 as a peer because
traceroutes seem to expose some packet loss and
Well, Corning had to do something with all that extra fiber they
couldn't sell, so they made a gigantic spool and turned it into a light buffer.
On Thu, 3 Oct 2002, Marshall Eubanks wrote:
>
> Where are they diverting it to, the Moon (1.5 light-seconds away)?
>
> Really - I have seen some multisecond latencies on network links we were
> testing, and I always wondered how these could come to be.
On Sat, 5 Oct 2002 18:29:38 +0200 (CEST)
Iljitsch van Beijnum <[EMAIL PROTECTED]> wrote:
>
> On Sat, 5 Oct 2002, Rafi Sadowsky wrote:
>
> > IvB> Obviously "some" packet loss and jitter are normal. But how much is
> > IvB> normal? Even at a few tenths of a percent packet loss hurts TCP
> > IvB> performance. The only way to keep jitter really low without dropping
> > IvB> large numbers of packets is
>
> IIRC the maximum theoretical TCP session bandwidth under these conditions
> is less than 1 Mb/sec (for a 600 ms RTT)
>
873.8 kbps of payload; add headers, with an assumed 1500-byte MTU, and you'll
have 897.8 kbps.
This assumes zero latency on the hosts reacting to the packets.
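For anyone who wants to check the arithmetic, a quick Python sketch (assuming
a 64 KB window, a 600 ms RTT, and 20-byte IPv4 and TCP headers with no
options):

    WINDOW_BYTES = 64 * 1024   # default maximum window without scaling
    RTT_SECONDS = 0.6
    MTU = 1500
    HEADER_BYTES = 40          # 20 bytes IPv4 + 20 bytes TCP

    # One full window per round trip is the best a single session can do.
    payload_bps = WINDOW_BYTES * 8 / RTT_SECONDS
    wire_bps = payload_bps * MTU / (MTU - HEADER_BYTES)

    print(f"payload: {payload_bps / 1000:.1f} kbps")  # 873.8 kbps
    print(f"on-wire: {wire_bps / 1000:.1f} kbps")     # 897.8 kbps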
Pete
On Sat, 5 Oct 2002, Rafi Sadowsky wrote:
> IvB> Obviously "some" packet loss and jitter are normal. But how much is
> IvB> normal? Even at a few tenths of a percent packet loss hurts TCP
> IvB> performance. The only way to keep jitter really low without dropping large
> IvB> numbers of packets is
## On 2002-10-04 23:50 +0200 Iljitsch van Beijnum typed:
IvB>
IvB> Obviously "some" packet loss and jitter are normal. But how much is
IvB> normal? Even at a few tenths of a percent packet loss hurts TCP
IvB> performance. The only way to keep jitter really low without dropping large
IvB> numbers of packets is
On Fri, 4 Oct 2002, Petri Helenius wrote:
> >Kind of an arms race between the routers and the hosts to see which can
> >buffer more data.
> You usually end up with a 64k window on modern systems anyway. Hardly
> anything actively uses the window-scaling bits.
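For what it's worth, the only way a host ends up with more than a 64k window
is if the application asks for a big buffer before the handshake, so the
kernel can advertise a scale factor in the SYN. A minimal sketch (host and
port are placeholders; exact behavior varies by OS and peer support):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The scale factor is only negotiated during the SYN exchange, so the
    # buffer has to be enlarged BEFORE connect().
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # ask for 1 MB
    s.connect(("www.example.com", 80))
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))    # what we got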
I also see ~17k a lot. I guess most ap
>OK. I'll bite - is it feasible if you're a caspian engineer? ;)
Obviously, as most of the audience knows, it's a function of the speed you want
to achieve, the number of flows you expect to be interested in, and what you
want to do with the flows. Getting traffic split up into a few million flows
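For those of us not building such boxes: "splitting traffic into flows" here
just means bucketing packets by their 5-tuple. A toy sketch with made-up
packet records (not any vendor's actual data structures):

    from collections import defaultdict

    def flow_key(pkt):
        # The classic 5-tuple: one bucket per transport session.
        return (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])

    packets = [  # made-up sample traffic
        {"src": "10.0.0.1", "dst": "192.0.2.1", "proto": 6, "sport": 1025, "dport": 80},
        {"src": "10.0.0.1", "dst": "192.0.2.1", "proto": 6, "sport": 1025, "dport": 80},
        {"src": "10.0.0.2", "dst": "192.0.2.1", "proto": 17, "sport": 53, "dport": 53},
    ]

    flows = defaultdict(list)
    for pkt in packets:
        flows[flow_key(pkt)].append(pkt)

    print(len(flows), "flows")  # 2 distinct flows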
On Fri, 04 Oct 2002 22:28:01 +0300, Petri Helenius said:
> You usually end up with a 64k window on modern systems anyway. Hardly
> anything actively uses the window-scaling bits. Obviously, by dropping select
> packets you can keep the window at a more moderate size. Doing this
> effectively would req
>Curious. Then the objective of buffering would be to absorb the entire
>window for each TCP flow. Is this a good thing to do? That will only add
>more delay, so TCP will use larger windows and you need more buffering...
>Kind of an arms race between the routers and the hosts to see which can
>buffer more data.
>
> In my experience, TCP deals better with packet loss than a jittery RTT
> (caused by huge buffering capability on linecards)
>
Unfortunately, most people writing up SLAs have RTT measured as a very long
average (so a little bouncing around does not matter) but have quite low
packet-loss targets
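A toy illustration of why a long average hides the bouncing, with made-up RTT
samples:

    steady = [50.0] * 100        # ms: flat RTT
    bouncy = [20.0, 80.0] * 50   # ms: same 50 ms mean, wild swings

    def mean(xs):
        return sum(xs) / len(xs)

    def jitter(xs):
        # Mean absolute difference between consecutive samples.
        return mean([abs(b - a) for a, b in zip(xs, xs[1:])])

    for name, xs in (("steady", steady), ("bouncy", bouncy)):
        print(f"{name}: mean={mean(xs):.0f} ms, jitter={jitter(xs):.0f} ms")

Both links pass an "average RTT of 50 ms" SLA; only the jitter number tells
them apart.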
On Fri, 4 Oct 2002, Petri Helenius wrote:
> Vendor C sells packet memory up to 256M each way for a line card. Whether
> this makes any sense depends obviously on your interfaces.
Hm, even at 10 Gbps 256M would add up to a delay of something like 200 ms.
I doubt this is something customers like.
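The arithmetic behind that, and the 32 MB / 155 Mbps figure quoted below
(assuming "256M" means 256 MB of packet memory):

    def buffer_delay_ms(buffer_bytes, link_bps):
        # Worst case: a completely full buffer draining at line rate.
        return buffer_bytes * 8 / link_bps * 1000

    print(buffer_delay_ms(256 * 2**20, 10e9))   # ~215 ms at 10 Gbps
    print(buffer_delay_ms(32 * 2**20, 155e6))   # ~1732 ms at 155 Mbps (OC-3)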
Thus spake "Iljitsch van Beijnum" <[EMAIL PROTECTED]>
> At 155 Mbps you need 32 MB worth of buffer space to arrive at a delay like
> this. I wouldn't put it past ATM vendors to think of this kind of
> over-enthusiastic buffering as a feature rather than a bug.
Traditionally, it's ATM switches th
> At 155 Mbps you need 32 MB worth of buffer space to arrive at a delay like
> this. I wouldn't put it past ATM vendors to think of this kind of
> over-enthusiastic buffering as a feature rather than a bug.
>
Vendor C sells packet memory up to 256M each way for a line card. Whether
this makes any sense depends obviously on your interfaces.
On Thu, 3 Oct 2002, Marshall Eubanks wrote:
> Where are they diverting it to, the Moon (1.5 light-seconds away)?
> Really - I have seen some multisecond latencies on network links we were
> testing, and I always wondered how these could come to be.
Good question. Cisco routers use a default q
The routers appear to be Junipers (judging by the interface naming scheme);
they tend to have incredible buffering capability compared to their
predecessors of the time. This allows a full link to buffer packets over a
period of time rather than drop them.
This obviously has r
Where are they diverting it to, the Moon (1.5 light-seconds away)?
Really - I have seen some multisecond latencies on network links we were
testing, and I always wondered how these could come to be.
--
Regards
Marshall Eubanks
The only thing I've noticed is high latency between UUNet and Sprint
(around 2 seconds) in at least one traffic exchange point between
them, maybe more. Probably because of the diversion of traffic on UUNet's
network.
At 04:30 PM 10/3/2002 -0400, Matt Levine wrote:
>On Thursday, October 3, 2002, at 04:07 PM, Chris Adams wrote:
On Thursday, October 3, 2002, at 04:07 PM, Chris Adams wrote:
>
> Once upon a time, [EMAIL PROTECTED] <[EMAIL PROTECTED]> said:
>> There still seem to be problems. Earlier today CHI->ATL was 2000ms.
>> Now
>> it's improved to 1000ms.
>>
>> 9 0.so-5-0-0.XL2.CHI13.ALTER.NET (152.63.73.21) 24.466 ms 24.311 ms 24.382 ms
Once upon a time, [EMAIL PROTECTED] <[EMAIL PROTECTED]> said:
> There still seem to be problems. Earlier today CHI->ATL was 2000ms. Now
> it's improved to 1000ms.
>
> 9 0.so-5-0-0.XL2.CHI13.ALTER.NET (152.63.73.21) 24.466 ms 24.311 ms 24.382 ms
> 10 0.so-0-0-0.TL2.CHI2.ALTER.NET (152.63.6
> I'm tempted to call in and see if I can get
> a grasp of the scope and nature of the problem.
> But maybe it would be best if someone simply
> posted a brief summary of what is publicly
> known about the issue, to be followed by
> reasonable speculation peppered with some
> wild speculation.
To: Manolo Hernandez; Nanog; [EMAIL PROTECTED]
Subject: Re: UUNET Routing issues
For T-1 customers, the master Ticket Number is 651744
For customers with DS/OC gear, that master ticket number is 651751.
I came to this information after calling their NOC and asking. :)
-Eric
Manolo Hernandez
To: Nanog <[EMAIL PROTECTED]>
Subject: UUNET Routing issues
Anyone know the cause of today's routing problem at UUNET? It looks like
an IBGP loop, but I could be wrong.
--
Manolo Hernandez - Network Administrator
Dialtone Internet - Extremely Fast Linux Web Servers
phone://954-581-0097 fax://954-581-7629
mailto:[EMAIL PROTECTED] http://www.dialtone.com
"