On 08/20/14 12:25, Hans Petter Selasky wrote:
On 08/20/14 20:44, Navdeep Parhar wrote:
On 08/20/14 00:34, Hans Petter Selasky wrote:
Hi,

A month has passed since the last e-mail on this topic, and in the
meanwhile some new patches have been created and tested:

Basically the approach has been changed a little bit:

- The creation of hardware transmit rings has been made independent of
the TCP stack. This allows firewall applications to forward traffic into
hardware transmit rings as well, not only native TCP applications.
This should be one more reason to get the feature into the kernel.

- A hardware transmit ring can operate in one of two modes: FIXED-RATE or
AUTOMATIC-RATE. In fixed-rate mode all traffic is sent at a fixed rate in
bytes per second. In automatic mode you configure a time after which the
TX queue must be empty, and the hardware driver uses this to compute the
actual rate. In automatic mode you can also set upper and lower transmit
rate limits.

- The MBUF has got a new field in the packet header: "txringid"

- IOCTLs for TCP v4 and v6 sockets have been updated to allow setting of
the "txringid" field in the mbuf (see the usage sketch after this list).

The current patch [see attachment] should be much simpler and less
intrusive than the previous one.

Any comments ?


Here are some thoughts.  The first two bullets cover relatively
minor issues, the rest are more important.

- All of the mbuf pkthdr fields today have the same meaning no matter
   what the context.  It is not clear what txringid's global meaning is.
   Is it even possible for driver foo to interpret it the same way as
   driver bar?  What if the numbers of rings differ, or if the ring
   at the particular index for foo is set up differently than the ring at
   that same index for bar?  You are attempting to influence the driver's
   txq selection and traditionally the mbuf's flowid has been used for
   this purpose.  Have you considered allowing the user to set the flowid
   directly, and marking it as such via a new rsstype so the kernel will
   leave it alone?
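
   A minimal sketch of that suggestion, assuming a new rsstype value
   (M_HASHTYPE_USER below is hypothetical; M_HASHTYPE_SET() is the
   existing mbuf accessor):

#include <sys/param.h>
#include <sys/mbuf.h>

#define	M_HASHTYPE_USER	15	/* hypothetical new rsstype value */

static void
mbuf_set_user_flowid(struct mbuf *m, uint32_t flowid)
{
	/*
	 * Caller-chosen flowid; the new rsstype tells the rest of
	 * the stack not to overwrite it.
	 */
	m->m_pkthdr.flowid = flowid;
	M_HASHTYPE_SET(m, M_HASHTYPE_USER);
}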

Hi,

At work, so to speak, we have tried to take a simple approach that will
not break existing code, without yet trying to optimise the possibilities
or reduce the memory footprint.


- uint32_t -> m_flowid_t is plain gratuitous.  Now we need to include
   mbuf.h in more places just to get this definition.  What's the
   advantage of this?  style(9) isn't too fond of typedefs either.  Also,
   drivers *do* need to know the width of the flowid.  At least lagg(4)
   looks at the high bits of the flowid (see flowid_shift in lagg).  How
   high it can go depends on the width of the flowid.
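
   To make the width concern concrete, a lagg-style consumer derives a
   port index from the high bits of the 32-bit flowid, roughly as in the
   illustrative fragment below (the parameter names are not lagg(4)'s
   actual fields):

#include <sys/param.h>
#include <sys/mbuf.h>

static u_int
flowid_to_port(const struct mbuf *m, u_int flowid_shift, u_int nports)
{
	/*
	 * If flowid were narrower or wider than 32 bits, the valid
	 * range of flowid_shift would change with it.
	 */
	return ((m->m_pkthdr.flowid >> flowid_shift) % nports);
}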

The flowid should be typedef'ed. Otherwise, how can you know its type when
passing the flowid through function arguments and so on?

It's just a simple 32-bit unsigned int and all drivers know exactly what
it is.  I don't think we need type checking for trivial stuff like this.
We trust code to do the right thing and that's the correct tradeoff
here, in my opinion.  Or else we'd end up with errno_t, fd_t, etc. and
programming in C would not be fun anymore.  Here's a hyperbolic example:

errno_t socket(domain_t domain, socktype_t type, protocol_t protocol);

(oops, it returns an int, a descriptor or -1, so errno_t is not strictly
correct, but you get my point).



- Interfaces can come and go, routes can change, and so the relationship
   between an inpcb and txringid is not stable at all.  What happens when
   the outbound route for an inpcb changes?

This is managed separately by a daemon or similar. The problem with the
"inpcb" approach you are suggesting is that it limits the rate control
feature to traffic bound to sockets. Can your way of doing rate control
be useful to non-socket based firewall applications, for example?

You also assume a 1:1 mapping between "inpcb" and the flowID, right?
What about M:N mappings, where multiple streams should share the same
flowID because it makes more sense?

You're right that an inpcb based scheme won't work for a non-socket based
firewall.

inpcb represents an endpoint, almost always with an associated socket,
and it mostly has a 1:1 relation with an n-tuple (SO_LISTEN and UDP
sockets with no default destination are notable exceptions).  If you're
talking of non-socket based firewalls, then where is the inpcb coming
from?  Firewalls typically keep their own state for the n-tuples that
they are interested in.  It almost seems like you need an n-tuple ->
rate_limit mapping scheme instead of inpcb -> rate_limit.
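
A hedged sketch of such a mapping (every name below is illustrative,
nothing here is from the patch): a firewall keeping its own flow state
could attach a limit to each n-tuple it tracks.

#include <sys/types.h>
#include <netinet/in.h>

struct flow_tuple {
	struct in_addr	src, dst;	/* IPv4 endpoints */
	uint16_t	sport, dport;	/* ports, network byte order */
	uint8_t		proto;		/* IPPROTO_TCP, IPPROTO_UDP, ... */
};

struct rate_limit {
	uint64_t	max_bps;	/* bytes/s, 0 = unlimited */
	uint64_t	max_pps;	/* packets/s, 0 = unlimited */
};

/* Lookup against the firewall's own flow table, not an inpcb. */
const struct rate_limit *ratectl_lookup(const struct flow_tuple *);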

Regards,
Navdeep



- The in_ratectlreq structure that you propose is inadequate in its
   current form.  For example, cxgbe's hardware can do rate limiting on a
   per-ring as well as per-connection basis, and it allows for pps,
   bandwidth, or min-max limits.  I think this is the critical piece that
   we NIC maintainers must agree on before any code hits the core kernel:
   how to express a rate-limit policy in a standard way and allow for
   hardware assistance opportunistically.  ipfw(4)'s dummynet is probably
   interested in this part too, so it's great that Luigi is paying
   attention to this thread.
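
   As a starting point for that discussion, a hedged sketch of a request
   structure that could express per-ring or per-connection scope and
   pps, bandwidth, or min-max limits (all identifiers below are
   hypothetical, not from the posted patch):

#include <sys/types.h>

#define	RATECTL_SCOPE_RING	1	/* limit a whole TX ring */
#define	RATECTL_SCOPE_CONN	2	/* limit a single connection */
#define	RATECTL_F_PPS		0x1	/* rates are in packets/s */
#define	RATECTL_F_BPS		0x2	/* rates are in bytes/s */

struct ratectl_req {
	uint32_t	scope;		/* RATECTL_SCOPE_* */
	uint32_t	index;		/* ring index or connection id */
	uint32_t	flags;		/* RATECTL_F_* */
	uint64_t	min_rate;	/* lower bound, 0 = none */
	uint64_t	max_rate;	/* upper bound, 0 = unlimited */
};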

My "in_ratectlreq" is a work in progress.


- The RATECTL ioctls deal with in_ratectlreq so we need to standardize
   the ratectlreq structure before these ioctls can be considered generic
   ifnet ioctls.  This is the reason cxgbetool (and not ifconfig) has a
   private ioctl to frob cxgbe's per-queue rate-limiters.  I did not want
   to add ifnet ioctls that in reality were cxgbe only.  Ditto for i2c
   ioctls.  Now we have multiple drivers with i2c and melifaro@ is doing
   the right thing by promoting these private ioctls to a standard ifnet
   ioctl.  Have you considered a private mlxtool as a stopgap measure?

It might end up that we need to create our own tool for this, with vendor
specific IOCTLs, if we cannot agree on how to do this in a general way.


To summarize my take on all of this: we need a standard ratectlreq
structure,

Agree.

a standard way to associate an inpcb with one,

Maybe.

and a standard
way to pass on this info to if_transmit.

Agree.

After all this is in place we
could even have a dummynet-ish software layer that implements rate
limiters when the underlying hardware offers no assistance.

Right.

--HPS
