On Sat, 6 Nov 2010, Saqib Ilyas wrote:
A friend of mine is doing some testing where he wishes to emulate a
cellular-like interface with random drops and all, out of an Ethernet
interface. Since we have plenty of network and system ops on the list, I
I would say that a cellular interface
On 11/6/2010 1:53 AM, Saqib Ilyas wrote:
Greetings NANOGers
A friend of mine is doing some testing where he wishes to emulate a
cellular-like interface with random drops and all, out of an Ethernet
interface. Since we have plenty of network and system ops on the list, I
thought we might have
lol Andrew. Mikael, yeah, it's something like that we're trying to emulate.
On Sat, Nov 6, 2010 at 11:13 AM, Andrew Kirch trel...@trelane.net wrote:
On 11/6/2010 1:53 AM, Saqib Ilyas wrote:
Greetings NANOGers
A friend of mine is doing some testing where he wishes to emulate a
Subject: RINA - scott whaps at the nanog hornets nest :-)
Date: Fri, Nov 05, 2010 at 03:32:30PM -0700
Quoting Scott Weeks (sur...@mauigateway.com):
It's really quiet in here. So, for some Friday fun let me whap at the
hornets nest and see what happens... ;-)
On Nov 5, 2010, at 3:19 PM, Santino Codispoti wrote:
Does anyone have an up to date list of the carriers that are within
the NAP of the Capital Region?
Abovenet
AT&T
Level3
Verizon
TATA
Cogent (I am being told it is joining or has just joined).
Terremark's transit network
I actually have a
On Fri, 2010-11-05 at 21:50 -0500, Tony Varriale wrote:
somebody said:
They could make it out of the box, but this is why Dylan made his statement.
His statement is far-fetched at best. Unless, of course, he's speaking of
100-million-line ACLs.
Can I just ask out of technical curiosity:
Q:
On 11/1/10 9:42 PM, Nathan Eisenberg wrote:
My guess is that the millions of residential users will be less and
less enthused with (pure) PA each time they change service providers...
Hi, almost every time I open my laptop it gets a different IP address;
sometimes I'm home and it gets that same
On 6 Nov 2010, at 05:53, Saqib Ilyas wrote:
A friend of mine is doing some testing where he wishes to emulate a
cellular-like interface with random drops and all, out of an Ethernet
interface. Since we have plenty of network and system ops on the list, I
thought we might have luck posting
- Original Message -
From: gordon b slater gordsla...@ieee.org
To: Tony Varriale tvarri...@comcast.net
Cc: nanog@nanog.org
Sent: Saturday, November 06, 2010 4:38 AM
Subject: Re: BGP support on ASA5585-X
On Fri, 2010-11-05 at 21:50 -0500, Tony Varriale wrote:
somebody said:
They
On Fri, 5 Nov 2010 21:40:30 -0400
Marshall Eubanks t...@americafree.tv wrote:
On Nov 5, 2010, at 7:26 PM, Mark Smith wrote:
On Fri, 5 Nov 2010 15:32:30 -0700
Scott Weeks sur...@mauigateway.com wrote:
It's really quiet in here. So, for some Friday fun let me whap at the
On 11/5/2010 5:32 PM, Scott Weeks wrote:
It's really quiet in here. So, for some Friday fun let me whap at the hornets
nest and see what happens...;-)
http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
SCTP is a great protocol. It has already been implemented in a number of
stacks. With
Sent: Saturday, November 06, 2010 9:45 AM
To: nanog@nanog.org
Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
On 11/5/2010 5:32 PM, Scott Weeks wrote:
It's really quiet in here. So, for some Friday fun let me whap at
the hornets nest and see what happens...;-)
Thank you
On Sat, Nov 6, 2010 at 3:12 AM, Mehmet Akcin meh...@akcin.net wrote:
On Nov 5, 2010, at 3:19 PM, Santino Codispoti wrote:
Does anyone have an up to date list of the carriers that are within
the NAP of the Capital Region?
Abovenet
AT&T
Level3
Verizon
TATA
Cogent (I am being
On Saturday, 06 November 2010 at 12:15 -0700, George Bonser wrote:
Sent: Saturday, November 06, 2010 9:45 AM
To: nanog@nanog.org
Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
On 11/5/2010 5:32 PM, Scott Weeks wrote:
It's really quiet in here. So, for some Friday
I doubt that 1500 is (still) widely used in our Internet... Might be,
though, that most of us don't go all the way to 9k.
mh
Last week I asked the operators of fairly major public peering points if they
supported anything larger than 1500 MTU. The answer was no.
On Sat, Nov 6, 2010 at 12:32 PM, George Bonser gbon...@seven.com wrote:
I doubt that 1500 is (still) widely used in our Internet... Might be,
though, that most of us don't go all the way to 9k.
mh
Last week I asked the operators of fairly major public peering points if they
supported
There's still a metric buttload of SONET interfaces in the core that
won't go above 4470.
So, you might conceivably get 4k MTU at some point in the future, but
it's really, *really* unlikely you'll get to 9k MTU any time in the next
decade.
Matt
Agreed. But even 4470 is better than
1500 was fine for 10G
I meant, of course, 10M Ethernet.
Last week I asked the operators of fairly major public peering points
if they supported anything larger than 1500 MTU. The answer was no.
There's still a metric buttload of SONET interfaces in the core that
won't go above 4470.
So, you might conceivably get 4k MTU at some point in
On Sat, Nov 06, 2010 at 12:32:55PM -0700, George Bonser wrote:
I doubt that 1500 is (still) widely used in our Internet... Might be,
though, that most of us don't go all the way to 9k.
Last week I asked the operators of fairly major public peering points
if they supported anything larger
Completely agree with you on that point. I'd love to see Equinix, AMSIX,
LINX, DECIX, and the rest of the large exchange points put out statements
indicating their ability to transparently support jumbo frames through
their fabrics, or at least indicate a roadmap and a timeline to when
On 11/6/2010 3:36 PM, Richard A Steenbergen wrote:
#2. The major vendors can't even agree on how they represent MTU sizes,
so entering the same # into routers from two different vendors can
easily result in incompatible MTUs. For example, on Juniper when you
type mtu 9192, this is INCLUSIVE of
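(To make that concrete, and assuming I have the vendors' accounting right:
Juniper's "mtu 9192" includes the 14-byte Ethernet header, leaving 9178
bytes for the IP packet, while an IOS-style "mtu 9178" counts only the IP
packet, so those two apparently different numbers would actually match on
the wire.)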
On Saturday, 06 November 2010 at 13:01 -0700, Matthew Petach wrote:
On Sat, Nov 6, 2010 at 12:32 PM, George Bonser gbon...@seven.com wrote:
I doubt that 1500 is (still) widely used in our Internet... Might be,
though, that most of us don't go all the way to 9k.
mh
Last week I asked the
It's perfectly safe to have the L2 networks in the middle support the
largest MTU values possible (other than maybe triggering an obscure
Force10 bug or something :P), so they could roll that out today and you
probably wouldn't notice. The real issue is with the L3 networks on
either end of
On Saturday, 06 November 2010 at 13:29 -0700, Matthew Petach wrote:
On Sat, Nov 6, 2010 at 1:22 PM, George Bonser gbon...@seven.com wrote:
Last week I asked the operators of fairly major public peering points
if they supported anything larger than 1500 MTU. The answer was no.
On Sat, Nov 6, 2010 at 2:21 PM, George Bonser gbon...@seven.com wrote:
...
As for the configuration differences between units, how does that change
from the way things are now? A person configuring a Juniper for 1500-byte
packets must already know the difference, as that quirk of including
Completely agree with you on that point. I'd love to see Equinix, AMSIX,
LINX, DECIX, and the rest of the large exchange points put out statements
indicating their ability to transparently support jumbo frames through
their fabrics, or at least indicate a roadmap and a timeline to when
RFC 4821 PMTUD is that negotiation that is lacking. It is there.
It is deployed. It actually works. No more relying on someone sending
the ICMP packets through in order for PMTUD to work!
For some value of "works". There are way too many places filtering
ICMP for PMTUD to work consistently.
-Original Message-
From: sth...@nethelp.no [mailto:sth...@nethelp.no]
Sent: Saturday, November 06, 2010 2:40 PM
To: George Bonser
Cc: r...@e-gerbil.net; nanog@nanog.org
Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
RFC 4821 PMTUD is that negotiation that is
On 11/6/2010 4:40 PM, sth...@nethelp.no wrote:
For some value of "works". There are way too many places filtering
ICMP for PMTUD to work consistently. PMTUD is *not* the solution,
unfortunately.
He was referring to the updated RFC 4821.
In the absence of ICMP messages, the proper MTU is
While I think 9k for exchange points is an excellent target, I'll
reiterate
that there's a *lot* of SONET interfaces out there that won't be going
away any time soon, so practically speaking, you won't really get more
than 4400 end-to-end, even if you set your hosts to 9k as well.
Agreed.
He was referring to the updated RFC 4821.
In the absence of ICMP messages, the proper MTU is determined by starting
with small packets and probing with successively larger packets. The bulk
of the algorithm is implemented above IP, in the transport layer
(e.g., TCP) or
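(Purely to make the quoted text concrete, here is a toy sketch of the
probing idea in Python; plpmtud_search and probe_fits are made-up names,
and the real RFC 4821 algorithm is considerably more careful about telling
an MTU black hole apart from ordinary loss.)

    # probe_fits(n) stands in for "send an n-byte probe segment and see
    # whether it is acknowledged before a timeout".
    def plpmtud_search(probe_fits, floor=512, ceiling=9000):
        """Binary-search the largest packet size the path will carry."""
        good, bad = floor, ceiling + 1
        while bad - good > 1:
            mid = (good + bad) // 2
            if probe_fits(mid):
                good = mid   # probe got through: path carries this size
            else:
                bad = mid    # probe lost: assume it was too big
        return good

    print(plpmtud_search(lambda n: n <= 4352))  # -> 4352, e.g. a FDDI-limited path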
RFC 4821 PMTUD is that negotiation that is lacking. It is there.
It is deployed. It actually works. No more relying on someone sending
the ICMP packets through in order for PMTUD to work!
For some value of "works". There are way too many places filtering
ICMP for PMTUD to work
On 11/6/2010 4:52 PM, George Bonser wrote:
That is also somewhat mitigated in that it operates in two modes. The
first mode is what I would call passive mode and only comes into play
once a black hole is detected. It does not change the operation of TCP
until a packet disappears. The second
On 06/11/10 15:56 -0500, Jack Bates wrote:
On 11/6/2010 3:36 PM, Richard A Steenbergen wrote:
#2. The major vendors can't even agree on how they represent MTU sizes,
so entering the same # into routers from two different vendors can
easily result in incompatible MTUs. For example, on Juniper
As long as the implementations are few and far between:
https://www.psc.edu/~mathis/MTU/
http://www.ietf.org/mail-archive/web/rrg/current/msg05816.html
the traditional ICMP-based PMTUD is what most of us face today.
Steinar Haug, Nethelp consulting, sth...@nethelp.no
It is
While it reads well, what implementations are actually in use? As with
most protocols, it is useless if it doesn't have a high penetration.
Jack
Solaris 10, in use and on by default. Available on Windows for a very
long time as blackhole router detection; it was off by default originally,
on
On Sat, Nov 06, 2010 at 02:21:51PM -0700, George Bonser wrote:
That is not a new problem. That is also true to today with last
mile links (e.g. dialup) that support 1500 byte MTU. What is
different today is RFC 4821 PMTU discovery which deals with the black
holes.
RFC 4821 PMTUD is
On 11/6/2010 3:14 PM, George Bonser wrote:
It ships with Microsoft Windows as Blackhole Router Detection and has
been on by default since Windows 2003 SP2.
The first item returned on a blekko search is the following article
which indicates that it is on by default in Windows
The only thing this adds is a trial-and-error probing mechanism per flow,
to try and recover from the infinite blackholing that would occur in
classic PMTUD if your ICMP is blocked. If this actually happened at any
scale, it would create a performance and overhead penalty that is far
worse
and that verified that the problem was an MTU black hole. A little
reading revealed why Solaris wasn't having the problem but Linux was.
Setting the Linux ip_no_pmtu_disc sysctl to 1 resulted in the Linux
behavior matching the Solaris behavior.
Oops, meant tcp_mtu_probing
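(For anyone who wants to flip it on under Linux, this is the whole of it;
equivalent to "sysctl -w net.ipv4.tcp_mtu_probing=1" and needs root:)

    # 0 = off, 1 = probe only after a black hole is detected, 2 = always probe
    with open("/proc/sys/net/ipv4/tcp_mtu_probing", "w") as f:
        f.write("1")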
Or Linux Netem
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
Suresh
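(A minimal netem sketch for the original question, wrapped in Python only
so the whole thing is copy-pasteable; assumes Linux, root, and an
interface named eth0, and adds 100ms +/- 20ms of delay plus 1% random
loss to everything leaving eth0:)

    import subprocess

    subprocess.check_call(
        ["tc", "qdisc", "add", "dev", "eth0", "root",
         "netem", "delay", "100ms", "20ms", "loss", "1%"])
    # undo later with: tc qdisc del dev eth0 root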
On Sat, Nov 6, 2010 at 6:50 AM, Andy Davidson a...@nosignal.org wrote:
On 6 Nov 2010, at 05:53, Saqib Ilyas wrote:
A friend of mine is doing some testing where he wishes to emulate a
cellular-like
Re: large MTU
One place where this has the potential to greatly improve performance is
in transfers of large amounts of data, such as vendors supporting the
downloading of movies, cloud storage vendors, and movement of other
large content and streaming. The *first* step in being able to realize
On Sat, Nov 06, 2010 at 03:49:19PM -0700, George Bonser wrote:
When the TCP/IP connection is opened between the routers for a routing
session, they should each send the other an MSS value that says how
large a packet they can accept. You already have that information
available. TCP
On Nov 6, 2010, at 10:38 AM, Mark Smith wrote:
On Fri, 5 Nov 2010 21:40:30 -0400
Marshall Eubanks t...@americafree.tv wrote:
On Nov 5, 2010, at 7:26 PM, Mark Smith wrote:
On Fri, 5 Nov 2010 15:32:30 -0700
Scott Weeks sur...@mauigateway.com wrote:
It's really quiet in here. So,
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 00:30 CET]:
Re: large MTU
One place where this has the potential to greatly improve
performance is in transfers of large amounts of data such as vendors
supporting the downloading of movies, cloud storage vendors, and
movement of other
On the contrary. You're proposing to fuck around with the one place
on the whole Internet that has pretty clear and well adhered-to rules
and expectations about MTU size supported by participants, and
basically re-live the problems from MAE-East and other shared
Ethernet/FDDI platforms
So if you consider a 5x performance boost to be minimal, yeah, I guess.
Or being able to operate at today's transfer rates in the face of 36x
more packet loss to be a minimal improvement, I suppose.
And those improvements in performance get larger the longer the latency
of the connection. For
On Sat, Nov 06, 2010, Andy Davidson wrote:
Notwithstanding Mikael's comments that it shouldn't be lossy, at times when
you want to simulate lossy (and jittery, and shaped, and ...) conditions,
the best way I have found to do this is FreeBSD's dummynet :
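(A rough dummynet sketch along those lines, again in Python for
consistency; assumes FreeBSD with root and the dummynet module loaded,
and pushes all IP traffic through a pipe shaped to 1 Mbit/s with 100 ms
of delay and a 1% packet loss rate:)

    import subprocess

    subprocess.check_call(
        ["ipfw", "add", "100", "pipe", "1", "ip", "from", "any", "to", "any"])
    subprocess.check_call(
        ["ipfw", "pipe", "1", "config",
         "bw", "1Mbit/s", "delay", "100", "plr", "0.01"])  # delay in ms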
On Sat, 06 Nov 2010 11:45:01 -0500
Jack Bates jba...@brightok.net wrote:
On 11/5/2010 5:32 PM, Scott Weeks wrote:
It's really quiet in here. So, for some Friday fun let me whap at the
hornets nest and see what happens...;-)
http://www.ionary.com/PSOC-MovingBeyondTCP.pdf
SCTP
On 11/6/2010 7:21 PM, George Bonser wrote:
(quote)
Let's take an example: New York to Los Angeles. Round Trip Time (rtt) is
about 40 msec, and let's say packet loss is 0.1% (0.001). With an MTU of
1500 bytes (MSS of 1460), TCP throughput will have an upper bound of
about 6.5 Mbps! And no, that
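(For anyone who wants to check the arithmetic: that upper bound is the
Mathis et al. formula, throughput <= (MSS/RTT) * (C/sqrt(p)), and with
the usual back-of-envelope constant C ~= 0.7 the quoted 6.5 Mbps drops
right out. A quick sketch:)

    from math import sqrt

    def mathis_bps(mss_bytes, rtt_s, loss, c=0.7):
        """Upper bound on TCP throughput, in bits per second."""
        return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss))

    print(mathis_bps(1460, 0.040, 0.001) / 1e6)  # ~6.5 Mbit/s at 1500 MTU
    print(mathis_bps(8960, 0.040, 0.001) / 1e6)  # ~39.7 Mbit/s at 9000 MTU

Since the bound scales linearly with MSS, and loss tolerance with MSS
squared, this is also roughly where the several-fold throughput and ~36x
loss figures upthread come from.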
Hi all,
do you know if I will be able to use two different vendors to execute
these tests? For example, let's say that I have a JDSU unit on side A
and an EXFO unit on side B. Will these tests work?
If not, is there a way to execute these tests with two different vendors?
Thanks
On Sat, Nov 6, 2010 at 5:21 PM, George Bonser gbon...@seven.com wrote:
...
(quote)
Let's take an example: New York to Los Angeles. Round Trip Time (rtt) is
about 40 msec, and let's say packet loss is 0.1% (0.001). With an MTU of
1500 bytes (MSS of 1460), TCP throughput will have an upper bound
I prefer much less packet loss in a majority of my transmissions, which
in turn brings those numbers closer together.
Jack
True, though the fact that it greatly reduces the number of packets in
flight for a given amount of data gives a lot of benefit, particularly
over high-latency connections.
I'd like to order a dozen of those 40ms RTT LA to NYC wavelengths,
please.
If you could just arrange a suitable demonstration of packet-level
delivery time of 40ms from Los Angeles to New York and back, I'm sure
there would be a *long* line of people behind me, checks in hand. ^_^
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
It just seems a shame that two servers with FDDI interfaces using SONET
Earth to George Bonser: IT IS NOT 1998 ANYMORE.
-- Niels.
On 11/6/2010 10:31 PM, Niels Bakker wrote:
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
It just seems a shame that two servers with FDDI interfaces using SONET
Earth to George Bonser: IT IS NOT 1998 ANYMORE.
We don't fly SR-71s or use bigger MTU interfaces. Get with the
-Original Message-
From: Niels Bakker [mailto:niels=na...@bakker.net]
Sent: Saturday, November 06, 2010 8:32 PM
To: nanog@nanog.org
Subject: Re: RINA - scott whaps at the nanog hornets nest :-)
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
It just seems a
* gbon...@seven.com (George Bonser) [Sun 07 Nov 2010, 04:27 CET]:
It just seems a shame that two servers with FDDI interfaces using
SONET
Earth to George Bonser: IT IS NOT 1998 ANYMORE.
Exactly my point. Why should we adopt newer technology while using
configuration parameters
I won't speak to the wrong solution for the wrong market, but as far as
large ACLs go, I would agree with Tony.
I've seen hundreds of different ASA configurations for a variety of
customers in a variety of markets, and generally once you start
reaching the limits of the box you start losing sight of
I'm seeing DNS lookup failures for us.af.mil, usmc.mil, us.army.mil, and
navy.mil. Possibly more .mil domains are affected. This is getting way too
frequent. Anybody got a good out-of-band (not .mil) contact for reporting
this?
Antonio Querubin
808-545-5282 x3003
e-mail/xmpp: t...@lava.net