At Wed, 06 May 2009 13:28:03 -0400,
Thomas Narten <nar...@us.ibm.com> wrote:

> > But then I'd like to further simplify the rule of processing
> > incoming NSes by discarding ones if the source address is not
> > on-link.
> 
> I would be concerned with doing this, because it would lead to failure
> cases in situations that I believe will happen in practice. That is,
> it would make ND less robust than it is today or needs to
> be. Specifically, such a rule would result in Node A not being able to
> communicate with Node B, if for some reason Node A thought B was
> on-link (and it really is), but Node B for some reason thinks it is
> not on-link (whatever it means for a node to think its own address is
> not on-link) and thus refuses to respond to a perfectly legitimate NS.
> 
> I don't see a good reason to ignore an NS from a neighbor asking about
> a target address that is assigned to my interface. Not responding to
> such a message could result in loss of connectivity. Sure, you can
> argue this is a misconfiguration, but it could also result when RAs
> aren't being delivered reliably, and one node has received a
> particular RA, but another node has not, so they have differing
> information. Or, node A may have been manually configured to assume
> node B as on-link (and it is in fact).

I believe I understand your position.  My baseline point is that such
failure modes should be extremely rare "in practice" (it looks like we
have different definitions of this term).  In fact, there is an
implementation that already fails in this situation, as I pointed out
in an off-list discussion.  (I don't think it's confidential, so I
copied the relevant part below.  Note: BSD used to work this way
because it considered any neighbor an on-link node, so I'm talking
about an implementation other than BSD here.)  Yet, as far as I know,
there has been no problem report about this failure for years.  This
strongly suggests that no one has ever needed such a case.
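To make the rule I'm proposing concrete, here is a minimal sketch (in
Python, purely illustrative; the function name and the prefix-list
argument are my own, not taken from any real stack) of discarding an
incoming NS whose source address is not on-link:

```python
import ipaddress

def should_discard_ns(src_addr, onlink_prefixes):
    """Sketch of the tighter filtering rule under discussion: drop an
    incoming NS unless its source address is on-link.  The prefix list
    stands in for whatever on-link information the node has (from RAs
    or manual configuration); all names here are illustrative."""
    src = ipaddress.IPv6Address(src_addr)
    # The unspecified address (::) is used as the source of DAD probes
    # and must still be accepted (RFC 4861, section 7.1.1).
    if src == ipaddress.IPv6Address("::"):
        return False
    # A link-local source is on-link by definition.
    if src.is_link_local:
        return False
    # Otherwise, discard unless the source falls in an on-link prefix.
    return not any(src in ipaddress.IPv6Network(p) for p in onlink_prefixes)
```

The failure mode being debated is exactly the last line: a neighbor
sourcing its NS from an address outside the receiver's on-link prefix
list would be ignored.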

One could still argue that we "may" need it "in the future" (and
that's probably why you said "will happen").  If we could implement it
at little or no cost, I might agree.  But I know this would make the
implementation I'm involved with (that is, BSD) too complicated.
Besides, it has already introduced the tighter filtering of incoming
NSes for the security reasons that triggered this discussion.  I'm
also pretty sure its developers would now reject a complicated patch
that cancels the filtering simply because the behavior is in the
specification and may be of use in the future.

I can imagine a counter-argument from a protocol architect that the
protocol definition shouldn't be shaped by a particular
implementation's convenience.  As a general rule, I see the point.
But at the same time, since I believe we won't lose anything "in
practice" even if we let this "failure mode" keep failing, keeping
some implementations simple can be justified when defining a protocol.
If, for example, we later encounter a realistic operational
requirement that needs this behavior, we can consider extending the
rule again at that point.

I hope I've clarified my point sufficiently.  You may still not agree
with me, and, in that case, I'm afraid we cannot resolve it through
further technical discussions.  If we can reach rough consensus in
any follow-up discussion, that would be great; if not, maybe we
should hum or vote to move forward.

p.s. I'll be mostly offline until May 20 for vacation (officially I'm
already off).  I'm sorry I won't be able to be more responsive.  I
hope we can make progress without me (hopefully toward the outcome I
wish:-).

---
JINMEI, Tatuya
Internet Systems Consortium, Inc.

> > The difficult part is how the responding node sends out the solicited NA (to
> > P2::B in the above scenario).  Since the NA is also a normal IPv6
> > packet originated from the node, one reasonable assumption would be
> > that it follows the Conceptual Sending Algorithm as described in
> > section 5.2 of RFC4861.
>
> It was never intended that NS/NA (or RA/RS messages) go through the
> normal IP processing rules. That ways lies madness. The spec may not
> actually say that, but I don't see how you can reasonably conclude
> that is a reasonable way to proceed.
>
> And, given that nobody (AFAIK) has implemented things this way, I'm
> surprised that we are having this conversation...

At least BSD does this.  Of course, it doesn't naively follow the
conceptual sending algorithm: it ensures the outgoing interface is the
same as the receiving interface of the corresponding NS.  But it
implements this restriction by creating a host route (a "destination
cache" entry in ND terminology) and then uses the general IPv6
output routine, which follows the conceptual sending algorithm.
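As a rough sketch (again illustrative Python, not actual BSD code),
the approach amounts to installing a destination-cache entry keyed by
the NS source address and letting the general output routine consult
it; all names below are my own:

```python
# Destination cache: destination address -> outgoing interface index.
# In a real stack this would be a host route in the routing table.
destination_cache = {}

def note_ns_received(ns_src, rcv_ifindex):
    # On receiving an NS, remember which interface it came in on, so
    # the solicited NA is pinned to that interface.
    destination_cache[ns_src] = rcv_ifindex

def output_na(dst):
    # General output path: the destination cache, not a separate
    # on-link check, decides the outgoing interface for the NA.
    ifindex = destination_cache.get(dst)
    if ifindex is None:
        raise LookupError("no route to destination")
    return ifindex  # the NA would be queued on this interface
```

The point of the sketch is that the NA reply reuses the ordinary
forwarding machinery rather than a special-purpose ND send path.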

Solaris also seems (according to the source code) to call a general
IPv6 output routine to send ND packets (specifically, NA in response
to NS).  It uses a more explicit approach to specify the outgoing
interface.  However, the Solaris box I used to test couldn't actually
send out the NA to P2::B in the above scenario, even though it could
create a neighbor cache for P2::B as specified in the RFC.

The case of a DHCP-PD delegating router that I mentioned on the dhc
list (http://www.ietf.org/mail-archive/web/dhcwg/current/msg09092.html)
is another example of how implementing this part can be tricky.  Note
that this was about an actual product, not just a logical case
analysis of an imaginary implementation (I don't know the origin of
the implementation, though, so it might be BSD- or Solaris-based).

All these cases indicate to me that the protocol author's intent was
not so obvious to implementors, and that implementing it as the
protocol author intended may not be that trivial.  That's my point in
this discussion.
--------------------------------------------------------------------
IETF IPv6 working group mailing list
ipv6@ietf.org
Administrative Requests: https://www.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------