Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-29 Thread Jack Bates
Temkin, David wrote:

We've noticed that one of our upstreams (Global Crossing) has introduced 
ICMP rate limiting 4/5 days ago.  This means that any traceroutes/pings 
through them look awful (up to 60% apparent packet loss).  After 
contacting their NOC, they said that the directive to install the ICMP 
rate limiting was from the Homeland Security folks and that they would not 
remove them or change the rate at which they limit in the foreseeable 
future.

rant
Are people idiots, or do they just not possess equipment capable of 
trashing 92-byte ICMP traffic while letting the small amount of normal 
traffic through unhindered? This is raising freakin' complaints from 
users who think the Microsoft ICMP tracert command is the end-all, 
be-all, and whose output is of course completely WRONG with 
rate-limiting in effect.
/rant
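For what it's worth, the match being described is trivially expressible. A minimal sketch in Python of the classification rule, assuming (as the Cisco advisory of the day described) that the worm probes are ICMP echo requests whose IP total length is exactly 92 bytes (20-byte IP header + 8-byte ICMP header + 64-byte payload); the function name is illustrative, not from any vendor's CLI:

```python
PROTO_ICMP = 1         # IP protocol number for ICMP
ICMP_ECHO_REQUEST = 8  # ICMP type for echo request

def is_worm_probe(ip_total_len: int, proto: int, icmp_type: int) -> bool:
    """True only for the worm's signature probe: an ICMP echo request
    whose IP total length is exactly 92 bytes; everything else passes."""
    return (proto == PROTO_ICMP
            and icmp_type == ICMP_ECHO_REQUEST
            and ip_total_len == 92)
```

A policer built on a predicate like this drops only the 92-byte echo requests and leaves ordinary ping and traceroute untouched, which is exactly the complaint: the gear can do it.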

-Jack



Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-29 Thread alex

 Once upon a time, Jack Bates [EMAIL PROTECTED] said:
  Are people idiots or do they just not possess equipment capable of 
  trashing 92 byte icmp traffic and letting the small amount of normal 
  traffic through unhindered?
 
 Well, when we used the policy routing example from the Cisco advisory to
 drop just 92 byte ICMP traffic, we had other random types of traffic
 dropped as well (possibly an IOS bug, but who knows).

It is cisco. There are no bugs. They are unknown features. When Cisco does
figure out what those packets are, they will document it.

Alex



Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Jared Mauch

On Thu, Aug 28, 2003 at 01:23:40PM +0100, [EMAIL PROTECTED] wrote:
 
 On Wed, 27 Aug 2003, [EMAIL PROTECTED] wrote:
 
  We have a similarly sized connection to MFN/AboveNet, which I won't
  recommend at this time due to some very questionable null routing they're
  doing (propagating routes to destinations, then bitbucketing traffic sent
  to them) which is causing complaints from some of our customers and
  forcing us to make routing adjustments as the customers notice
  MFN/AboveNet has broken our connectivity to these destinations.
 
 We've noticed that one of our upstreams (Global Crossing) has introduced 
 ICMP rate limiting 4/5 days ago.  This means that any traceroutes/pings 
 through them look awful (up to 60% apparent packet loss).  After 
 contacting their NOC, they said that the directive to install the ICMP 
 rate limiting was from the Homeland Security folks and that they would not 
 remove them or change the rate at which they limit in the foreseeable 
 future.

I guess this depends on the type of interconnect you have with
them.  If you're speaking across a public IX or a private (or even
paid) peering link, it doesn't seem unreasonable that they would
limit traffic to a particular percentage across that circuit.

I think the key is to determine what is 'normal' and what obviously
constitutes an out-of-the-ordinary amount of ICMP traffic.

If you're a customer, there's not really a good reason to rate-limit
your ICMP traffic.  Customers tend to notice and gripe.  They expect
a bit of loss when transiting a peering circuit or public fabric,
and if the loss is only of ICMP they tend not to care.  This is why,
when I receive escalated tickets, I check using non-ICMP-based tools
as well as ICMP-based tools.
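A sketch of what such a non-ICMP check might look like, timing a TCP connect instead of an echo request; the host and port are placeholders rather than any particular tool's interface:

```python
import socket
import time

def tcp_probe(host: str, port: int, timeout: float = 2.0):
    """Time a TCP three-way handshake to the target.  Returns the
    connect latency in seconds, or None if the connection fails or
    times out -- unaffected by any ICMP rate-limiting on the path."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None
```

If this succeeds while ping shows 60% loss, the loss is an artifact of the rate limiter, not a real forwarding problem.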

 What are other transit providers doing about this or is it just GLBX?

Here's one of many I've posted in the past; note it's also
related to securing machines.

http://www.ultraviolet.org/mail-archives/nanog.2002/0168.html

I recommend everyone apply such ICMP rate-limits on their
peering circuits and public exchange fabrics, tuned to what is a 'normal'
traffic flow on your network.  The above message from the archives
is from Jan 2002; if these were a problem then and still are now,
perhaps people should either 1) accept that this is part of normal
internet operations, or 2) decide that this is enough and it's time
to seriously do something about these things.

- Jared

-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


RE: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Temkin, David

Not that Yipes is necessarily a transit provider by any means, but they have
done the same thing within the core of their network.  I was
troubleshooting an issue yesterday that pointed to them for 15-20%
packet loss; when I called them, they stated that they had started rate
limiting ICMP last weekend, but only on a temporary basis.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 28, 2003 8:24 AM
To: [EMAIL PROTECTED]
Subject: GLBX ICMP rate limiting (was RE: Tier-1 without their own
backbone?)



On Wed, 27 Aug 2003, [EMAIL PROTECTED] wrote:

 We have a similarly sized connection to MFN/AboveNet, which I won't 
 recommend at this time due to some very questionable null routing 
 they're doing (propagating routes to destinations, then bitbucketing 
 traffic sent to them) which is causing complaints from some of our 
 customers and forcing us to make routing adjustments as the customers 
 notice MFN/AboveNet has broken our connectivity to these destinations.

We've noticed that one of our upstreams (Global Crossing) has introduced 
ICMP rate limiting 4/5 days ago.  This means that any traceroutes/pings 
through them look awful (up to 60% apparent packet loss).  After 
contacting their NOC, they said that the directive to install the ICMP 
rate limiting was from the Homeland Security folks and that they would not 
remove them or change the rate at which they limit in the foreseeable 
future.

What are other transit providers doing about this or is it just GLBX?

Cheers,

Rich


Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Wayne E. Bouchard
On Thu, Aug 28, 2003 at 08:48:50AM -0400, Jared Mauch wrote:
 they [customers] expect a bit of loss when transiting a peering
 circuit or public fabric, and if the loss is only of icmp they
 tend to not care. 

Um, since when? My customers expect perfection and if they don't get
it, they're gonna gripe. Even if it's just the appearance of a problem
(through traceroute and ICMP echo or similar), I'm going to hear about
it. Personally, I tolerate a little loss. But I'm an engineer. I'm
not a customer who has little or no concept of how the internet works
and who doesn't really want to. The customer just wants it to work and
when it doesn't they expect me to fix it, not explain to them that
there really isn't a problem and that it's all in their head.

  What are other transit providers doing about this or is it just GLBX?
 
 here's one of many i've posted in the past, note it's also
 related to securing machines.
 
 http://www.ultraviolet.org/mail-archives/nanog.2002/0168.html
 
   I recommend everyone do such icmp rate-limits on their
 peering circuits and public exchange fabrics to what is a 'normal'
 traffic flow on your network.  The above message from the archives
 is from Jan 2002, if these were a problem then and still are now,
 perhaps people should either 1) accept that this is part of normal
 internet operations, or 2) decide that this is enough and it's time
 to seriously do something about these things.

While rate limiting ICMP can be a good thing, it has to be done
carefully and probably can't be uniform across the backbone. (Think of
a common site that gets pinged whenever someone wants to test whether
their connection went down or is just loaded. Limit ICMP into
them improperly and lots of folks notice.) Such limiting also has to
undergo periodic tuning as traffic levels increase, traffic patterns
shift, and so forth.

If a provider is willing to put in the effort to do it right, I'm
all for it. If they're just gonna arbitrarily decide that the
allowable flow rate is 200k across an OC48 and never touch it again,
then that policy is going to cause problems.
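To put a number on that tuning point, here's a toy calculation (the rates are made up, not anyone's actual configuration) of how a policer sized once and never revisited behaves as load grows:

```python
def drop_fraction(offered_bps: float, limit_bps: float) -> float:
    """Fraction of offered traffic a simple one-rate policer drops
    once the offered rate exceeds the configured limit."""
    if offered_bps <= limit_bps:
        return 0.0
    return (offered_bps - limit_bps) / offered_bps

# A 200 kb/s limit looks harmless while ICMP runs at 150 kb/s,
# but once normal load grows to 500 kb/s it drops 60% of it.
```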

---

Wayne Bouchard
[EMAIL PROTECTED]
Network Dude
http://www.typo.org/~web/




Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Robert Boyle
At 09:26 AM 8/28/2003, you wrote:
It takes some education to the customers, but after they understand why,
most are receptive.
Especially when they get DOS'ed.
We have been rate limiting ICMP for a long time; however, it is only 
recently that the percentage limit has been reached and people have started 
to see packet loss as a result. Still, the fact that customers stay up 
and are unaffected by the latest DoS attacks, and that real traffic makes 
it to the proper destination, makes a slight increase in support calls well 
worth it.

-Robert

Tellurian Networks - The Ultimate Internet Connection
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
Good will, like a good name, is got by many actions, and lost by one. - 
Francis Jeffrey



Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Steve Carter

* [EMAIL PROTECTED] said:
 
 On Wed, 27 Aug 2003, [EMAIL PROTECTED] wrote:
 
  We have a similarly sized connection to MFN/AboveNet, which I won't
  recommend at this time due to some very questionable null routing they're
  doing (propagating routes to destinations, then bitbucketing traffic sent
  to them) which is causing complaints from some of our customers and
  forcing us to make routing adjustments as the customers notice
  MFN/AboveNet has broken our connectivity to these destinations.
 
 We've noticed that one of our upstreams (Global Crossing) has introduced 
 ICMP rate limiting 4/5 days ago.  This means that any traceroutes/pings 
 through them look awful (up to 60% apparent packet loss).  After 
 contacting their NOC, they said that the directive to install the ICMP 
 rate limiting was from the Homeland Security folks and that they would not 
 remove them or change the rate at which they limit in the foreseeable 
 future.

Homeland Security recommended the filtering of ports 137-139 but has not,
to my knowledge, recommended rate limiting ICMP.

I speak for Global Crossing when I say that ICMP rate limiting has existed
on the Global Crossing network, inbound from peers, for a long time ... we
learned our lesson from the Yahoo DDoS attack (when they were one of our
customers) back in the day and it was shortly thereafter that we
implemented the rate limiters.  Over the past 24 hours we've performed
some experimentation showing that outbound rate limiters are also of
value, and we're looking at the specifics of differentiating between
happy ICMP and naughty 92-byte-packet ICMP and treating the latter with
very strict rules ... as in, we would dump it on the floor.  This, I
believe, will stomp on the bad traffic but allow the happy traffic to
pass unmolested.

The rate-limiters have become more interesting recently, meaning they've
actually started dropping packets (quite a lot in some cases) because of
the widespread exploitation of unpatched windows machines.

Our results show that were we to raise the size of the queues, the
quantity of ICMP is such that it would just fill them up, and if we
permitted all ICMP to pass unfettered, some peering circuits would
become congested.  Our customers would not appreciate the latter either.

-Steve


Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Jared Mauch

On Thu, Aug 28, 2003 at 03:55:26PM +, Christopher L. Morrow wrote:
 On Thu, 28 Aug 2003, Wayne E. Bouchard wrote:
 
 
  While rate limiting ICMP can be a good thing, it has to be done
  carefully and probably can't be uniform across the backbone. (think of
  a common site that gets pinged whenever someone wants to test to see
  if their connection went down or if it's just loaded.. Limit ICMP into
  them improperly and lots of folks notice.) Such limiting also has to
  undergo periodic tuning as traffic levels increase, traffic patterns
  shift, and so forth.
 
 Along these lines, how does this limiting affect akamai or other 'ping for
 distance' type localization services? I'd think their data would get
 somewhat skewed, right?

Perhaps they'll come up with a more advanced system of
monitoring?

Probably the best way to do that is to track the download speed,
either with cookies (with subnet info) or by subnet only, to determine
the best localization.

With an imperfect system of tracking localization, you will
get imperfect results.
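One way that subnet-keyed measurement might be sketched (the /24 grouping, sample format, and PoP names are illustrative assumptions, not Akamai's actual method):

```python
from collections import defaultdict
from statistics import mean

def best_pop_by_subnet(samples):
    """samples: iterable of (client_ip, pop_name, bits_per_second)
    download measurements.  Groups clients by their /24 and returns,
    per subnet, the PoP with the highest mean observed throughput."""
    by_subnet = defaultdict(lambda: defaultdict(list))
    for ip, pop, bps in samples:
        subnet = ".".join(ip.split(".")[:3]) + ".0/24"
        by_subnet[subnet][pop].append(bps)
    return {
        subnet: max(pops, key=lambda p: mean(pops[p]))
        for subnet, pops in by_subnet.items()
    }
```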

- jared

-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Robert Boyle
At 12:39 PM 8/28/2003, you wrote:
 Along these lines, how does this limiting affect akamai or other 'ping for
 distance' type localization services? I'd think their data would get
 somewhat skewed, right?
Perhaps they'll come up with a more advanced system of
monitoring?
Probably the best way to do that is to track the download speed
either with cookies (with subnet info) or by subnet only to determine
the best localization.
With an imperfect system of tracking localization, you will
get imperfect results.
I'm not sure about other implementations, but our Akamai boxes in our 
datacenter receive all requests which originate from our address space, 
as predefined with Akamai. I believe they also factor in address space 
announcements originated via our AS, since they asked for our AS number 
when we originally started working with them.

-Robert

Tellurian Networks - The Ultimate Internet Connection
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
Good will, like a good name, is got by many actions, and lost by one. - 
Francis Jeffrey



Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-28 Thread Paul Vixie

  Along these lines, how does this limiting affect akamai or other 'ping
  for distance' type localization services? I'd think their data would
  get somewhat skewed, right?

using icmp to predict tcp performance has always been a silly idea; it
doesn't take any icmp rate limit policy changes to make it silly.  other
silly ways to try to predict tcp performance include aspath length
comparisons, stupid dns tricks, or geographic distance comparisons.

the only reliable way to know what tcp will do is execute it.  not just
the syn/synack as in some blast protocols i know of, but the whole session.
and the predictive value of the information you'll gain from this decays
rather quickly unless you have a lot of it for trending/aggregation.

gee, ping was faster to A but tcp was faster to B, do you s'pose there
could be a satellite link, or a 9600 baud modem, in the system somewhere?
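the trending/aggregation with decay mentioned above could be as simple as an exponentially weighted moving average over measured session throughput; a sketch, with the smoothing factor chosen arbitrarily:

```python
class ThroughputTrend:
    """Decaying per-target estimate of measured TCP throughput, so
    older sessions lose influence over time as their predictive
    value decays."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # weight given to the newest sample
        self.estimate = {}   # target -> smoothed bits/sec

    def observe(self, target: str, bps: float) -> float:
        prev = self.estimate.get(target, bps)  # seed with first sample
        self.estimate[target] = self.alpha * bps + (1 - self.alpha) * prev
        return self.estimate[target]
```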
-- 
Paul Vixie