On 07/09/14 18:31, Navdeep Parhar wrote:
> On Wed, Jul 09, 2014 at 04:36:53PM +0200, Hans Petter Selasky wrote:
>> On 07/08/14 21:17, Navdeep Parhar wrote:
>>> ...
>>>
>>> I think we need to design this to be as generic as possible.  I have
>>> quite a bit of code that does this stuff but I haven't pushed it
>>> upstream or even offered it for review (yet).


>> Hi,
>>
>> When will the non-hardware-related patches be available for review?
>> I understand there are multiple ways to reach the same goal, and I
>> think it would be great if we could agree on a common API for
>> applications.

> Here is the kernel side of the patch:
> http://people.freebsd.org/~np/flow_pacing_kern.diff
>
> The registration parameters and the throttling parameters are probably
> cxgbe-centric, because that's what it was written for.  We'll need to
> tidy up those structs certainly.  And I'd like to add pps constraints to
> the throttling parameters (all it does is bandwidth right now).

Hi Navdeep,

After reviewing your patch, we've concluded that we can't use your flow-ID APIs as-is with the mlxen hardware. We are working on a new patch proposal based on the feedback received here; since it is vacation time and not everyone is available, we will probably post something in August.
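
To give an idea of the direction we are thinking about, the generic
throttling parameters might end up looking roughly like the sketch
below. This is only a strawman for discussion, the struct and field
names are invented, and it puts the pps constraint you mentioned
next to the existing bandwidth limit:

	#include <sys/types.h>

	/*
	 * Hypothetical throttling parameters; not the structs from
	 * the posted patch.  A value of zero means "no limit".
	 */
	struct flow_throttle_params {
		uint64_t ftp_max_bps;	/* bandwidth limit, bits per second */
		uint64_t ftp_max_pps;	/* packet rate limit, packets per second */
		uint32_t ftp_burst;	/* allowed burst size, in bytes */
		uint32_t ftp_flags;	/* room for driver-specific knobs */
	};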

Also, we are worried that the m_tags cause unneeded overhead: they invoke malloc() for every packet header duplication on transmit.
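
A minimal sketch of where that cost comes from, using hypothetical
tag names: m_tag_alloc() goes through malloc(9) on every call, and
m_tag_copy_chain(), which m_dup_pkthdr() calls, allocates a fresh
copy of each tag every time a packet header is duplicated.

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/malloc.h>
	#include <sys/mbuf.h>

	#define	MTAG_FLOWID_COOKIE	0x464c4f57	/* hypothetical ABI cookie */
	#define	MTAG_FLOWID_TYPE	1		/* hypothetical tag type */

	struct flow_tag {
		uint32_t	ft_flow_id;	/* hardware queue / flow id */
	};

	int
	flow_tag_attach(struct mbuf *m, uint32_t flow_id)
	{
		struct m_tag *mt;

		/* One malloc(9) per packet happens right here. */
		mt = m_tag_alloc(MTAG_FLOWID_COOKIE, MTAG_FLOWID_TYPE,
		    sizeof(struct flow_tag), M_NOWAIT);
		if (mt == NULL)
			return (ENOMEM);
		((struct flow_tag *)(mt + 1))->ft_flow_id = flow_id;
		m_tag_prepend(m, mt);
		return (0);
	}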

To give you some background: one of FreeBSD's main deployment targets is firewalls and routers. We think a flow ID / hardware queue feature should also be usable from the firewall, and not be limited to TCP/UDP. That means you could create queues for multiple connections and then rate-limit them through a firewall rule; see the sketch below. The firewall is not our main target right now, but we see that with minor modifications to the APIs this feature becomes very easy to implement.
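
As a purely hypothetical illustration of that use case (everything
below is invented for discussion): a queue is created once with its
rate limits, independent of any socket, and a firewall rule action
then maps any matched traffic onto it, reusing the flow_tag_attach()
sketch above.

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/mbuf.h>

	/* From the previous sketch; equally hypothetical. */
	int flow_tag_attach(struct mbuf *m, uint32_t flow_id);

	/*
	 * Created once, e.g. via an ioctl, with no socket involved,
	 * so it is not tied to TCP or UDP.
	 */
	struct flow_queue_conf {
		u_int		fqc_ifindex;	/* interface owning the hw queue */
		uint64_t	fqc_max_bps;	/* bandwidth limit, bits/s, 0 = none */
		uint64_t	fqc_max_pps;	/* packet limit, pkts/s, 0 = none */
		uint32_t	fqc_queue_id;	/* out: id referenced by fw rules */
	};

	/*
	 * Firewall rule action: stamp the matched packet with the
	 * queue id so the driver places it on the rate-limited
	 * hardware queue.
	 */
	static int
	fw_action_queue(struct mbuf *m, uint32_t queue_id)
	{
		return (flow_tag_attach(m, queue_id));
	}

Conceptually this is similar to how ipfw rules reference dummynet
pipes today, except that the actual shaping would be done by the NIC
instead of in software.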

--HPS