Pekka Savola wrote:
On Wed, 18 Aug 2004 [EMAIL PROTECTED] wrote:

I think everyone agrees that per-interface configuration
would be a perfect solution and would provide fine-grained
control to the user.  Is there anyone who disagrees with
this?  (Pekka ??)


My objection to this stems from the fact that an implementation which
would like to do something like that is likely doing something wrong
in the first place

Traffic management is by its very nature linked to interfaces: its functional components are dispersed among the other protocol functional components in a router's functional topology, which starts at an "ingress" interface and ends at an "egress" interface. There is plenty of documentation on this in the IETF and elsewhere.

A separation of ICMP rate limiting, which is a form of traffic management, from interfaces goes against this very nature.

(or something which is implementation-specific in
any case, and doesn't need "IETF blessing" for their approach in any
case), and I slightly disagree with per-interface configuration.


From a protocol-purist perspective - nothing wrong with that - the
suggestion is that ICMPv6 should bless or address only ICMP rate limiting that is implemented inside the ICMP protocol engine.

The reality is that the ICMP protocol does not have a rate-limiting
mechanism of its own - ICMP message headers do not have any rate-limiting
fields.

ICMP rate limiting is simply a set of operational requirements,
supported by "some" implementation mechanism: a token bucket is suggested.

Consequently, placing the "token bucket" anywhere in the processing graph, from policing detected errored packets (A) to traffic management of "egress" traffic on an interface (B), is valid and legal as far as the ICMPv6 spec is concerned, as long as it achieves the operational requirements.

One place versus the other is selected based on implementation constraints and operational purposes.

The latter - (B) above - is a choice for minimizing the number of vertices in the node's internal functional topology, combined with maximizing the operational capabilities of the node: it controls all "send" traffic, as well as "send" ICMP traffic (both "originated" and "in transit"). It is a common (cost-effective) choice in modern router implementations, and a possible choice in software (host) implementations (Linux is a known example, but there may be others).

As the configuration is the same (CLIs hide implementation details), and the effect is the desired ICMP rate limiting, the IETF spec should not be preferential in its blessing.
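For concreteness, below is a minimal token-bucket sketch in C; the names and defaults are illustrative only (B=10, N=10/s, matching the example defaults quoted further down), not taken from any particular implementation or from the spec. Whether it is invoked at the point where errored packets are detected (A) or at the egress interface (B) changes only its place in the processing graph, not the logic.

/*
 * Minimal token-bucket sketch (hypothetical names): average rate of
 * N messages per second, with a burst allowance of B messages.
 */
#include <stdbool.h>
#include <stdio.h>

struct token_bucket {
    double tokens;     /* current credit, in messages               */
    double rate;       /* refill rate N, in messages per second     */
    double burst;      /* bucket depth B, in messages               */
    double last_time;  /* time of the last refill, in seconds       */
};

/* Returns true if one ICMPv6 error message may be sent now. */
static bool tb_allow(struct token_bucket *tb, double now)
{
    tb->tokens += (now - tb->last_time) * tb->rate;
    if (tb->tokens > tb->burst)
        tb->tokens = tb->burst;
    tb->last_time = now;

    if (tb->tokens >= 1.0) {
        tb->tokens -= 1.0;
        return true;
    }
    return false;
}

int main(void)
{
    struct token_bucket tb = { .tokens = 10, .rate = 10,
                               .burst = 10, .last_time = 0.0 };
    int sent = 0;

    /* 50 error events arriving at the same instant: only the burst
     * allowance (10) gets through. */
    for (int i = 0; i < 50; i++)
        if (tb_allow(&tb, 0.0))
            sent++;

    printf("sent %d of 50\n", sent);   /* prints: sent 10 of 50 */
    return 0;
}

The same tb_allow() check, applied per interface, is what a per-interface configuration would parameterize: one bucket, and one (rate, burst) pair, per interface.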

[...]
Rather than first figuring out whether to use MAY or SHOULD, it might be better to try to figure out how to reword the section etc. to be more specific about the recommended method and where it applies.


Steve Deering and Bob Hinden, who last edited this part of the text in the past, did a wonderful job, approved by the WG and the IESG: in a very succinct portion of text, they were able to capture the essentials.

This could possibly be achieved by changing "send" to "originate" elsewhere in the spec, and rewording (f) to something like:


The judgment of how good a spec is should not be based on the ability
of the spec to serve/support one side of a debate - the lack of such support is what triggered the discussion of "send".

The judgment should be based on the abundant existence of architectures and (host and
router) implementations of the traffic management model (above), the abundant existence of hardware/software components to build it, and ultimately their use in operational networks. Based on that:

I am opposed to any rewording of the first 3 paragraphs of
Section 2.4 (f) of the ICMPv6 specification that would exclude "in transit" ICMP traffic from the ICMPv6 scope, because it can break
current implementations' compliance and their standing within the ICMPv6 scope,
with a particular effect on routers and on hardware/software
implementations in network processors, which are components of routers.
It can also break operational capabilities of routers, and the reliance of network operators on those capabilities, which are important for managing traffic in networks.

This includes opposition to the suggestion to use packets per second as
the ONLY metric for ICMPv6 rate limiting.

Packets/second is not an accurate metric: packets are not equal in size.
Different packet sizes have a different impact on processing resources,
and thus on implementations.

Bits/second is an accurate and widely recognized standard metric,
expressing link or interface bandwidth/speed. It can easily be
used as it is, or translated into internal resource metrics (for
instance, memory or internal bus bandwidth), and should be the recommended metric.

====
(f) Finally, an IPv6 node MUST limit the rate of ICMPv6 error
messages it originates in order to limit the processing at the
node and bandwidth and forwarding costs incurred on the network
by originating ICMPv6 error messages. This situation may occur
when a source sending a stream of erroneous packets fails to
heed the resulting ICMPv6 error messages.

Rate-limiting of forwarded ICMP messages is out of scope of this
specification.
A recommended method for implementing the rate-limiting function
is a token bucket, limiting the average rate of transmission to
N packets/second, but allowing up to B error messages
to be originated in a burst, as long as the long-term average
is not exceeded.
Rate-limiting mechanisms which cannot cope with bursty traffic
(e.g., traceroute) are not recommended; for example a simple
timer-based implementation, allowing an error message every T
milliseconds (even with low values for T), is not reasonable.
The rate-limiting parameters SHOULD be configurable. In the
case of a token-bucket implementation, the best defaults depend
on where the implementation is expected to be deployed (e.g., a
high-end router vs. an embedded host). For example, in a
small/mid-sized device, the possible defaults could be B=10,
N=10/s.
====

- make it clearer that the justification is also to limit the processing on the node
- add note of non-scope of generic rate-limiting (this is also so obvious that 2nd paragraph could possibly be removed now)
- just specify packets/second in the recommended token bucket
rate-limiter to avoid confusion with interface-specific issues. Note
that as this is just a recommended *EXAMPLE*, it's fully compliant
with the spec to provide another kind of rate-limiter as well, hence
bandwidth-based measurements don't belong here (and haven't even
really been implemented -- all the vendors I know of do pps + burst)!



In your opinion (no reasoning please), the per-interface
rate-limiting configuration in the ICMPv6 spec should be a

1) SHOULD
2) MAY
3) Any of them is fine for you.



My amended preference is:

If an addition is made to the current paragraph 4 of Section 2.4 (f):
1) SHOULD for routers, 2) MAY for hosts

If that addition is not made, leave the text unchanged.

Regards,
Alex


