On 5 April 2012 at 19:12, Arnaud Lacombe <lacom...@gmail.com> wrote:
> Hi,
>
> [Sorry for the delay, I got a bit sidetracked...]
>
> 2012/2/17 Alexander Motin <m...@freebsd.org>:
>> On 17.02.2012 18:53, Arnaud Lacombe wrote:
>>>
>>> On Fri, Feb 17, 2012 at 11:29 AM, Alexander Motin <m...@freebsd.org> wrote:
>>>>
>>>> On 02/15/12 21:54, Jeff Roberson wrote:
>>>>>
>>>>> On Wed, 15 Feb 2012, Alexander Motin wrote:
>>>>>>
>>>>>> I've decided to stop those cache black magic practices and focus on
>>>>>> things that really exist in this world -- SMT and CPU load. I've
>>>>>> dropped most of the cache-related things from the patch and made the
>>>>>> rest stricter and more predictable:
>>>>>> http://people.freebsd.org/~mav/sched.htt34.patch
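As a rough sketch of what "SMT and CPU load" can mean in practice (hypothetical names only -- NCPU, cpu_load[], smt_sibling[] and pick_cpu_smt_aware() are invented for illustration and are not code from sched.htt34.patch): a CPU picker can weight each logical CPU by its own run-queue length and add a small penalty when its SMT sibling is busy, so that an idle physical core wins over a core shared with a running thread.

    /*
     * Illustrative sketch only -- hypothetical names, not FreeBSD's
     * sched_ule.c.  Weight each logical CPU by its own load and add a
     * small penalty when its SMT sibling is busy, so idle physical
     * cores are preferred over shared ones.
     */
    #include <limits.h>
    #include <stdio.h>

    #define NCPU 8

    static int cpu_load[NCPU]    = { 2, 0, 1, 0, 0, 0, 0, 1 }; /* sample data */
    static int smt_sibling[NCPU] = { 1, 0, 3, 2, 5, 4, 7, 6 }; /* HTT pairs   */

    static int
    pick_cpu_smt_aware(void)
    {
        int best = 0, best_cost = INT_MAX;

        for (int cpu = 0; cpu < NCPU; cpu++) {
            /* Own load dominates; a busy sibling adds a smaller penalty. */
            int cost = 2 * cpu_load[cpu];

            if (cpu_load[smt_sibling[cpu]] > 0)
                cost += 1;
            if (cost < best_cost) {
                best_cost = cost;
                best = cpu;
            }
        }
        return (best);
    }

    int
    main(void)
    {
        printf("would place the new thread on CPU %d\n", pick_cpu_smt_aware());
        return (0);
    }

With the sample loads above this picks CPU 4, whose core is fully idle, rather than CPU 1 or 3, whose siblings are busy.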
>>>>>
>>>>>
>>>>> This looks great. I think there is value in considering the other
>>>>> approach further, but I would like to do this part first. It would
>>>>> also be nice to give priority a greater influence in the load
>>>>> balancing.
>>>>
>>>>
>>>> I haven't got a good idea yet about balancing priorities, but I've
>>>> rewritten the balancer itself. Since sched_lowest() / sched_highest()
>>>> are more intelligent now, they allowed topology traversal to be removed
>>>> from the balancer itself. That should fix the double-swapping problem,
>>>> allow some affinity to be kept while moving threads, and make balancing
>>>> more fair. I ran a number of tests with 4, 8, 9 and 16 CPU-bound
>>>> threads on 8 CPUs. With 4, 8 and 16 threads everything is stationary,
>>>> as it should be. With 9 threads I see regular and random load movement
>>>> between all 8 CPUs. Measurements over a 5-minute run show a deviation
>>>> of only about 5 seconds. That is the same deviation I see caused by
>>>> just scheduling 16 threads on 8 cores, with no balancing needed at all.
>>>> So I believe this code works as it should.
>>>>
>>>> Here is the patch: http://people.freebsd.org/~mav/sched.htt40.patch
>>>>
>>>> I plan this to be the final patch of this series (more to come :)) and,
>>>> if there are no problems or objections, I am going to commit it (except
>>>> some debugging KTRs) in about ten days. So now is a good time for
>>>> reviews and testing. :)
>>>>
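As a hedged, self-contained sketch of the balancing idea described above (load[], NCPU, busiest_cpu(), idlest_cpu() and balance_once() are invented names, not code from sched.htt40.patch): once the "find the least/most loaded CPU" primitives do the topology- and affinity-aware work, the periodic balancer itself can be as simple as moving one thread per pass from the busiest CPU to the idlest one, and only when that actually reduces the imbalance.

    /*
     * Hedged illustration, not FreeBSD's sched_ule.c.  All names here
     * are invented for this sketch.
     */
    #include <stdio.h>

    #define NCPU 8

    static int load[NCPU] = { 3, 1, 2, 1, 1, 1, 1, 1 }; /* 9 threads, 8 CPUs */

    static int
    busiest_cpu(void)
    {
        int best = 0;

        for (int cpu = 1; cpu < NCPU; cpu++)
            if (load[cpu] > load[best])
                best = cpu;
        return (best);
    }

    static int
    idlest_cpu(void)
    {
        int best = 0;

        for (int cpu = 1; cpu < NCPU; cpu++)
            if (load[cpu] < load[best])
                best = cpu;
        return (best);
    }

    /*
     * Periodic balancer: move at most one thread per pass, and only when
     * it actually reduces the imbalance.  The CPU-picking helpers do the
     * clever part, so no topology walk is needed here.
     */
    static void
    balance_once(void)
    {
        int from = busiest_cpu(), to = idlest_cpu();

        if (load[from] - load[to] > 1) {
            load[from]--;   /* stands in for migrating one runnable thread */
            load[to]++;
        }
    }

    int
    main(void)
    {
        for (int pass = 0; pass < 4; pass++)
            balance_once();
        for (int cpu = 0; cpu < NCPU; cpu++)
            printf("cpu%d: %d thread(s)\n", cpu, load[cpu]);
        return (0);
    }

With 9 threads on 8 CPUs this toy version settles into one CPU carrying two threads, whereas the description above says the real scheduler keeps that surplus load moving between all 8 CPUs, which is what gives the small deviation over a 5-minute run.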
>>> Is there a place where all the patches are available?
>>
>>
>> All my scheduler patches are cumulative, so all you need is the last one
>> mentioned here, sched.htt40.patch.
>>
> You may want to have a look at the results I collected in the
> `runs/freebsd-experiments' branch of:
>
> https://github.com/lacombar/hackbench/
>
> and compare them with the vanilla FreeBSD 9.0 and -CURRENT results
> available in `runs/freebsd'. On the dual-package platform, your patch
> is not a definite win.
>
>> But in some cases, especially on multi-socket systems, to let it show its
>> best you may want to apply an additional patch from avg@ that improves CPU
>> topology detection:
>> https://gitorious.org/~avg/freebsd/avgbsd/commit/6bca4a2e4854ea3fc275946a023db65c483cb9dd
>>
> The tests I conducted specifically for this patch did not show much
> improvement...

Can you please clarify this point?
Did the tests you ran compare cases where the topology was detected badly
against cases where a patched kernel detected it correctly (and you still
didn't see a performance improvement), in terms of cache line sharing?

Attilio


-- 
Peace can only be achieved by understanding - A. Einstein