Pekka,

In the case of logical (pt-to-pt) interfaces, actual per-source traffic
measurements may be readily available. In such cases, why would
an implementation use an aggregate estimator when it can easily
compute actual per-source values for rate limiting?

I vote for the "SHOULD" option for per-interface rate limiting
when per-source traffic measurements are available.
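
A rough, purely illustrative sketch of that idea (nothing below comes
from the draft, and the per-source limit value is made up): with
per-source accounting at hand, each source gets its own small budget
instead of sharing one aggregate estimator:

    import time
    from collections import defaultdict

    PER_SOURCE_LIMIT = 10   # hypothetical value: max ICMP errors per source per second

    # src_addr -> [window_start_time, errors_sent_in_current_window]
    window = defaultdict(lambda: [0.0, 0])

    def may_send_icmp_error(src_addr):
        now = time.monotonic()
        start, count = window[src_addr]
        if now - start >= 1.0:              # start a fresh one-second window
            window[src_addr] = [now, 1]
            return True
        if count < PER_SOURCE_LIMIT:        # still within this source's budget
            window[src_addr][1] = count + 1
            return True
        return False                        # this source has exhausted its budget

Whether it is a fixed window as above or a per-source token bucket is a
detail; the point is keying the limiter on the source.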

Fred
[EMAIL PROTECTED]

Pekka Savola wrote:

On Mon, 16 Aug 2004, Alex Conta wrote:


You may have an assumption that the rate-limiting would have to be a
percentage of the interface speed. That's (IMHO) a bad strategy, for
exactly the reason you describe: it doesn't handle fast/slow interfaces
appropriately.


This is a misinterpretation. Essentially, I am saying that identical rate-limiting parameters (N, B in the draft's terms) on slow/fast interfaces would not handle ICMP traffic appropriately.



Depends on what you want to achieve with the rate-limiting.

If you want to make sure that ICMP responses don't eat up all of a
very slow link (e.g., 64 kbit/s), a single parameter might not be
sufficient, yes.

But my point is that we shouldn't be overly concerned about this. Such very slow interfaces will need some kind of queuing, rate-limiting, etc. mechanisms to be usable in any case. *)

The main point is to create a reasonable upper bound on the number of
ICMP packets generated, so that you don't generate an ICMP error for
potentially every packet.

But please stop for a moment to consider the example default token
bucket values: N=10 would result in a maximum of 4 kbit/s of
generated ICMP traffic (double this when bursting).  Even if you used
N=100 (which would be sufficient even with 10 Gbit/s interfaces), you
would have a maximum of 40 kbit/s of ICMP traffic.
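
A back-of-the-envelope check of those figures, in Python; the ~50-byte
average ICMP error size is only an assumption chosen to reproduce them
and is not stated anywhere in this thread or the draft:

    # Sustained ICMP generation rate for a bucket refilling at N errors/s,
    # assuming ~50-byte average error messages (an assumption, see above).
    AVG_ERROR_BYTES = 50

    def generated_kbit_per_s(n_per_second, avg_bytes=AVG_ERROR_BYTES):
        return n_per_second * avg_bytes * 8 / 1000.0

    print(generated_kbit_per_s(10))    # -> 4.0  kbit/s for N=10
    print(generated_kbit_per_s(100))   # -> 40.0 kbit/s for N=100
    # A burst of up to B back-to-back errors can roughly double this momentarily.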

Is this *REALLY* too much message generation?  Something that makes it
worth it to *specify* interface-specific values, make implementations
check which interface the packets would go out of, implement some logic
for virtual interfaces, etc.?  It doesn't look like it to me.  Seems
like unnecessary complexity to me.

If you really, really think this is needed, and the WG agrees with that, I'd suggest we put interface-specific values as a MAY -- definitely not a SHOULD.



(However, you could limit the upper bound for the token
bucket based on the interface speed, I guess.)



Rephrasing your statement using the draft's parameter terminology, that is, N = average rate of transmission and B = upper bound of transmission,
you're agreeing that Bx for a slow interface "X" should be different from Bz for a fast interface "Z".



No, I don't really agree with that. I think B shouldn't need to be varied based on the interface speed. Just picking a sufficiently large value, e.g., 10, 20, 50, or 100, should allow a sufficient amount of ICMP generation on fast interfaces while still not generating too much of it on slow interfaces.
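
To make that concrete, here is a minimal sketch of such an
interface-independent limiter, using the draft's N and B names; the
concrete values are just the examples from this thread, not normative
defaults:

    import time

    N = 10      # average ICMP error rate: token refill, in errors per second
    B = 100     # upper bound / burst size: bucket depth, in errors

    tokens = float(B)
    last = time.monotonic()

    def may_send_icmp_error():
        # One bucket for the whole node: the same N and B apply whether the
        # error would leave a 64 kbit/s link or a 10 Gbit/s one.
        global tokens, last
        now = time.monotonic()
        tokens = min(float(B), tokens + (now - last) * N)
        last = now
        if tokens >= 1.0:
            tokens -= 1.0
            return True
        return False    # suppress this ICMP error; the triggering packet is unaffected

Since N caps errors per second rather than a share of the link
bandwidth, the same values work without per-interface tuning.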


Please remember that to be usable, very slow interfaces need more
generic rate-limiters as well (compare your reference to particular
implementations, e.g., Linux, earlier in this thread).  That's
orthogonal to this limiter.  Hence, we shouldn't need to build a
"perfect" ICMP generation limiter that would be sufficient on its
own even in the fringe cases, just a "good enough" ICMP generation
limiter.  (See the paragraph above marked with *.)






--------------------------------------------------------------------
IETF IPv6 working group mailing list
[EMAIL PROTECTED]
Administrative Requests: https://www1.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------
