On Wed, Aug 12, 2009 at 8:34 PM, Nicolas Droux<[email protected]> wrote:
>
>
> Ross Walker wrote:
>>
>> On Wed, Aug 12, 2009 at 7:23 PM, Ross Walker<[email protected]> wrote:
>>>
>>> On Wed, Aug 12, 2009 at 5:56 PM, Ross Walker<[email protected]> wrote:
>>>>
>>>> On Wed, Aug 12, 2009 at 5:44 PM, Peter
>>>> Memishian<[email protected]> wrote:
>>>>>
>>>>>  > set dld:dld_opt = 2
>>>>>
>>>>> Yikes!  Perhaps you're looking for the no-poll or no-softring
>>>>> capabilities
>>>>> in /kernel/drv/dld.conf?  (Not to say those will address your original
>>>>> goal, but they're better than twiddling implementation artifacts via
>>>>> /etc/system.)
>>>>
>>>> Thanks for the pointer.
>>>>
>>>> Don't know why I didn't think of looking there first, guess it's a bad
>>>> case of Google Monkey see, Google Monkey do...
>>>
>>> Ok, I set no-poll=1 and no-softring=1 in dld.conf and it made no
>>> noticeable difference.
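>>>
>>> For reference, the entries looked something like this (standard
>>> driver.conf "name=value;" syntax, with the property names as Peter
>>> gave them):
>>>
>>>   # /kernel/drv/dld.conf
>>>   no-poll=1;
>>>   no-softring=1;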
>
> The data path we introduced as part of Crossbow lets packets go through
> in the same context, for best latency, as long as there's no backlog.
> These dld properties were never meant to be public interfaces for tuning.
>
>>>
>>> I'm thinking the igb driver is doing some interrupt coalescing here,
>>> but I can't find any options to disable or tune it.
>>
>> On a hunch I ran the igb module through strings to see if there was
>> anything interesting, and found these:
>>
>> intr_throttling
>> rx_limit_per_intr
>> intr_force
>>
>> I don't have any docs on these, so it's going to be interesting to see
>> how they work.
>>
>> I think I'll try the intr_throttling option first as it looks most
>> promising.
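>>
>> In case anyone wants to repeat the exercise, the command was just
>> something like this (the module path is the 64-bit x86 one; adjust
>> for your platform):
>>
>>   # strings /kernel/drv/amd64/igb | grep -i intr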
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/igb/igb_main.c
>
> has references to a throttling property configurable via igb.conf. It's
> also not a public interface, but if you want to play, the source is
> there :-)
>
> In general the idea is for the drivers to use aggressive blanking/throttling
> out of the box for best latency, and to let the polling in the stack kick
> in as the load increases, dynamically disabling interrupts and scheduling
> incoming packets according to the load. Some drivers might not be tuned
> for the best out-of-the-box latency.
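>
> If you do want to experiment, it's set in igb.conf like any other
> driver.conf property, e.g. (value purely illustrative; check
> igb_main.c for the exact names, valid range, and units):
>
>   # /kernel/drv/igb.conf
>   intr_throttling=200;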

Thanks for the explanation.

Actually, in testing I found the out-of-the-box settings performed as
well as the "tuned" settings for my application, so I reverted to the
defaults.

I'm going to look at the storage and process layers some more now to
see if I can get any better performance out of those.

Thanks,

Ross
_______________________________________________
networking-discuss mailing list
[email protected]
