Thomas Bastian - Sun Microsystems writes:
> > What is the motivation for having two separate ways to set this?  Why
> > not have this new feature _only_ at the stream level?  Are there usage
> > models that correspond to both levels?  The only one I see is
> > DLT_LINUX_SLL, which seems to imply stream-level (though I'm not
> > positive).
> You are right. From a feature point of view the streams level would be
> enough. Where I can see the benefit with the device-level setting is that
> we could avoid using mac_txloop() and therefore use the fast regular way
> to get packets out (since we don't need a loop copy). But maybe the
> speed benefit will be negligible after all. I have not made any
> measurements for this.

I'd rather not expose the details of performance optimizations to
uninvolved parties.  They change far too often for this to be a good
way to entangle the design.

In other words, if there is any performance to be gained here, then
the system should detect the special cases itself and set up the right
behavior.  Thus, if all of the streams either are non-promiscuous or
if all of the promiscuous streams elect not to have local copies, then
use the "fast" version.  Otherwise, don't.

(I really think the complexity involved with the pointer management
dwarfs any possible gain from avoiding a single, well-designed flag
check, and that the current design needs a rethink.  But that's
probably a different topic.)

> >   c.  All open streams are switched to loop-on mode, and, because this
> >       takes precedence over the stream level control, subsequent use
> >       of DL_PROMLOOP_STR_OFF does nothing.
> It's partly (c). I guess my proposal is not clear enough on this point.
> Let me try to rephrase it. The DL_PROMLOOP_DEV_ON enables loopback mode
> for the device, hence this setting is a pre-requisite for any stream to
> see loopback packets at all from this device. If the DL_PROMLOOP_DEV

So ... this means there are really *three* states for the device-level
flag.  It can be "forced on," "forced off," or "unset."  There's no
way to set that third mode with the new interface; the system starts
up that way by default, but if anyone ever sets either of the other
modes, it's a one-way trap door.  You can't get back (except, perhaps,
by unplumbing).

That's a bit confusing, and I'm not sure I see why it's necessary.
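
To make the point concrete (names invented for illustration, not the
proposed DLPI additions):

    /* Illustrative only; not actual DLPI definitions. */
    typedef enum {
            PROMLOOP_DEV_UNSET,     /* default state at plumb time */
            PROMLOOP_DEV_ON,        /* explicitly forced on */
            PROMLOOP_DEV_OFF        /* explicitly forced off */
    } promloop_dev_state_t;

The proposal gives consumers primitives to request the ON and OFF
states, but nothing ever takes the device back to UNSET.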

> > Does the proposal distinguish between looped-back traffic that
> > originates with the stream user and traffic that originates with other
> > streams?
> Not in the POC currently. This is an important point, and I am still
> unclear on what the best approach would be.

It seems to me that it's really key to the problem.

> > So, why not dispense with the knob entirely, and simply change the
> > definition?  Fix it so that promiscuous mode in DLPI does not itself
> > loop back traffic to the same stream that generated it.  I.e., only
> > cases that cause loopback in the non-promiscuous behavior would loop
> > back.  This would simplify the driver changes, the documentation, the
> > user interface, and the porting work required for applications.
> I am not sure this is possible. Agreed that it would be the simplest
> approach. I am not 100% positive, but I think it is a well-known
> "feature" of DLPI that in promiscuous mode, packets are looped back. I
> think this is the way it works on other systems (HP-UX, AIX, etc.) as
> well (to be confirmed). If there is such a requirement for DLPI in
> promiscuous mode, then we could not go down that route because we would
> break compatibility I suppose.

I don't think that's the important question.  I think this one is:

> > Is there any case in which seeing the unicast traffic that you
> > generated on your own promiscuous-mode stream is not a bug?

It seems to me that promiscuous DLPI streams are relatively rare.  In
most (nearly all) cases, they're used for snoop/ethereal/libpcap, and
those applications are read-only.

The narrow case where the current DLPI semantics break down for some
users is in the rarest of the rare: a promiscuous DLPI stream user who
also transmits unicast packets.  It seems fair to me to ask whether
the current behavior is something that anyone could ever have relied
on in any useful way, or whether it's merely a bug.  In other words,
do those applications _ever_ process those packets beyond just
detecting and discarding them?

I'd be strongly tempted to treat this as a bug, and change it in a
Minor release along with a suitable release note.  The only "tunable"
I might provide would be an intentionally undocumented variable (that
could be tweaked with /etc/system) to reenable the old behavior, just
in case there's some unknown application somewhere that's actually
harmed by the new behavior.
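
To illustrate (the module and variable names here are made up, not an
actual proposal), such a variable would be flipped with the usual
/etc/system syntax and a reboot:

    * Hypothetical: restore the old promiscuous-mode self-loopback behavior.
    set dld:dld_promisc_self_loop = 1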

The chance of that, though, seems quite remote to me, and the risk
looks reasonable for a Minor release, especially in comparison to the
complexity and risk of modifying multiple (and largely unknown!) DLPI
applications to take advantage of this new feature, and of adding
lasting complexity to Solaris for a mode-switch implementation that
could never really be removed.

(For a patch or micro release binding, the default may need to be the
other way.)

But, yes, I agree that verifying against the standards (which seem to
say nothing about the issue) and against other implementations is a
good idea.  I don't think, though, that if other implementations have
bugs, this necessarily means we must as well.

-- 
James Carlson, KISS Network                    <[EMAIL PROTECTED]>
Sun Microsystems / 1 Network Drive         71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677