On 04/10/12 21:46, Arnaud Lacombe wrote:
On Tue, Apr 10, 2012 at 1:53 PM, Alexander Motin <m...@freebsd.org> wrote:
On 04/10/12 20:18, Alexander Motin wrote:
On 04/10/12 19:58, Arnaud Lacombe wrote:
2012/4/9 Alexander Motin <m...@freebsd.org>:
I have a strong feeling that while this test may be interesting for
profiling, its results in the first place depend not on how fast the
scheduler is, but on pipe capacity and other things of that kind. Can
somebody hint me at what, besides pipe capacity and the context switch
to an unblocked receiver, prevents the sender from sending all of its
data in one batch and the receiver from then receiving it all in one
batch? If different OSes have different policies there, I think the
results could be incomparable.
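
To make concrete what I mean, here is a minimal sketch of the sender/receiver pattern as I understand it from the benchmark; the message size and loop count are placeholders, not taken from the real hackbench source:

/*
 * Minimal sketch of a hackbench-style sender/receiver pair over one pipe.
 * The message size and loop count here are placeholders.  The point: once
 * the pipe buffer fills, write() blocks until the receiver is scheduled
 * and drains it, so how much data moves per context switch depends on the
 * pipe capacity and on the scheduler's preemption policy.
 */
#include <err.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSGSIZE	100			/* placeholder message size */

static void
sender(int wfd, int loops)
{
	char buf[MSGSIZE];

	memset(buf, 0, sizeof(buf));
	for (int i = 0; i < loops; i++)
		if (write(wfd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			err(1, "write");	/* blocks while the pipe is full */
}

static void
receiver(int rfd, int loops)
{
	char buf[MSGSIZE];

	for (int i = 0; i < loops; i++)
		if (read(rfd, buf, sizeof(buf)) <= 0)
			err(1, "read");		/* blocks while the pipe is empty */
}

int
main(void)
{
	int fds[2], loops = 100000;

	if (pipe(fds) == -1)
		err(1, "pipe");
	switch (fork()) {
	case -1:
		err(1, "fork");
	case 0:				/* child: receiver */
		close(fds[1]);
		receiver(fds[0], loops);
		_exit(0);
	default:			/* parent: sender */
		close(fds[0]);
		sender(fds[1], loops);
		wait(NULL);
	}
	return (0);
}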

Let me disagree with your conclusion. If OS A does a task in X seconds,
and OS B does the same task in Y seconds, and Y > X, then OS B is simply
not performing well enough. Differences in the internal implementation
of the task cannot be waved around as an excuse to dismiss the
comparability of the results.


Sure, numbers are always numbers, but the question is what they are
showing. Understanding the test results is even more important for
purely synthetic tests like this one, especially when one test run gives
25 seconds while another gives 50. This test is not completely clear to
me, and that is what I said.

A small illustration of my point: simple scheduler tuning affects the
thread preemption policy and changes this test's results by a factor of
three:

mav@test:/test/hackbench# ./hackbench 30 process 1000
Running with 30*40 (== 1200) tasks.
Time: 9.568

mav@test:/test/hackbench# sysctl kern.sched.interact=0
kern.sched.interact: 30 -> 0
mav@test:/test/hackbench# ./hackbench 30 process 1000
Running with 30*40 (== 1200) tasks.
Time: 5.163

mav@test:/test/hackbench# sysctl kern.sched.interact=100
kern.sched.interact: 0 -> 100
mav@test:/test/hackbench# ./hackbench 30 process 1000
Running with 30*40 (== 1200) tasks.
Time: 3.190

I think it affects the balance between pipe latency and bandwidth, while
the test measures only the latter. It is clear that the conclusion drawn
from these numbers depends on what we want to have.
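
To make the distinction concrete, here is a rough standalone sketch of my own (not part of hackbench; the buffer size and iteration counts are arbitrary) that measures the two quantities separately over pipes: round-trip latency via a 1-byte ping-pong, and bulk throughput by streaming large writes while a child drains them:

/*
 * Rough illustration (not part of hackbench) of the two quantities being
 * traded off: per-message latency, measured by a 1-byte ping-pong, and
 * bulk throughput, measured by streaming large writes.  All sizes and
 * iteration counts are arbitrary.
 */
#include <err.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static double
now(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (tv.tv_sec + tv.tv_usec / 1e6);
}

int
main(void)
{
	static char buf[65536];
	int ping[2], pong[2], bulk[2];
	const int rounds = 20000, chunks = 20000;
	char c = 0;
	double t;

	if (pipe(ping) == -1 || pipe(pong) == -1 || pipe(bulk) == -1)
		err(1, "pipe");

	switch (fork()) {
	case -1:
		err(1, "fork");
	case 0:
		/* Child: echo the ping-pong byte, then drain the bulk pipe. */
		close(ping[1]);
		close(pong[0]);
		close(bulk[1]);
		for (int i = 0; i < rounds; i++)
			if (read(ping[0], &c, 1) != 1 ||
			    write(pong[1], &c, 1) != 1)
				err(1, "pingpong");
		while (read(bulk[0], buf, sizeof(buf)) > 0)
			;
		_exit(0);
	}
	close(ping[0]);
	close(pong[1]);
	close(bulk[0]);

	/* Latency: every round trip forces a switch to the other side. */
	t = now();
	for (int i = 0; i < rounds; i++)
		if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
			err(1, "pingpong");
	printf("round-trip latency: %.2f us\n", (now() - t) / rounds * 1e6);

	/* Bandwidth: the sender runs ahead until the pipe buffer fills. */
	t = now();
	for (int i = 0; i < chunks; i++) {
		size_t off = 0;
		while (off < sizeof(buf)) {
			ssize_t n = write(bulk[1], buf + off,
			    sizeof(buf) - off);
			if (n <= 0)
				err(1, "stream");
			off += (size_t)n;
		}
	}
	close(bulk[1]);
	wait(NULL);
	printf("throughput: %.1f MB/s\n",
	    (double)chunks * sizeof(buf) / (now() - t) / 1e6);
	return (0);
}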

I don't really care about this point; I'm only testing default values,
or more precisely, whatever the developers thought would be good default
values.

Btw, you are testing 3 different configurations, so different results
are expected. What worries me more is rather the huge instability on the
*same* configuration, say on a pipe/thread/70 groups/600 iterations run,
where results range from 2.7s[0] to 7.4s, or a socket/thread/20
groups/1400 iterations run, where results range from 2.4s to 4.5s.
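
To quantify that spread, a trivial hypothetical driver like the one below is enough; the command string mirrors one of the configurations above, and it measures wall-clock time of the whole run rather than hackbench's own "Time:" output:

/*
 * Hypothetical driver: run the same hackbench configuration repeatedly
 * and report the spread of wall-clock times, to expose run-to-run
 * instability.  The command line is just one configuration from this
 * thread; the run count is arbitrary.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(void)
{
	const char *cmd = "./hackbench 20 thread 1400 > /dev/null";
	const int runs = 10;
	double t, best = 1e9, worst = 0, total = 0;
	struct timespec a, b;

	for (int i = 0; i < runs; i++) {
		clock_gettime(CLOCK_MONOTONIC, &a);
		if (system(cmd) != 0)
			return (1);
		clock_gettime(CLOCK_MONOTONIC, &b);
		t = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
		total += t;
		if (t < best)
			best = t;
		if (t > worst)
			worst = t;
	}
	printf("min %.2f  mean %.2f  max %.2f (seconds)\n",
	    best, total / runs, worst);
	return (0);
}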

For the reason I pointed out in my first message, this test is _extremely_ sensitive to the context switch interval. The more aggressively the scheduler switches threads, the smaller the pipe latency will be, but the smaller the bandwidth will be as well. During the test run the scheduler constantly recalculates the interactivity index for each thread, trying to balance between latency and switching overhead. With hundreds of threads running simultaneously and interfering with each other, that is a rather unpredictable process.
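
For reference, here is a simplified paraphrase of how I understand that interactivity index to be computed (illustration only; the real logic lives in sys/kern/sched_ule.c and works on decayed per-thread run/sleep histories in ticks). The score grows with the ratio of run time to sleep time, and threads scoring below kern.sched.interact are treated as interactive and may preempt, which is the knob tuned in the runs above:

/*
 * Simplified paraphrase of ULE's interactivity scoring, for illustration
 * only; not the actual sched_ule.c code.  Mostly-sleeping threads score
 * near 0, mostly-running threads near SCHED_INTERACT_MAX.  Threads whose
 * score falls below the kern.sched.interact threshold are classified as
 * interactive and queued so that they can preempt batch threads.
 */
#include <stdio.h>

#define SCHED_INTERACT_MAX	100
#define SCHED_INTERACT_HALF	(SCHED_INTERACT_MAX / 2)

static int
interact_score(unsigned long runtime, unsigned long slptime)
{
	unsigned long div;

	if (runtime > slptime) {
		div = runtime / SCHED_INTERACT_HALF;
		if (div == 0)
			div = 1;
		return (SCHED_INTERACT_HALF +
		    (SCHED_INTERACT_HALF - slptime / div));
	}
	if (slptime > runtime) {
		div = slptime / SCHED_INTERACT_HALF;
		if (div == 0)
			div = 1;
		return (runtime / div);
	}
	return (runtime != 0 ? SCHED_INTERACT_HALF : 0);
}

int
main(void)
{
	/* A thread that mostly sleeps vs. one that mostly runs. */
	printf("sleepy: %d\n", interact_score(10, 90));
	printf("hog:    %d\n", interact_score(90, 10));
	return (0);
}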

--
Alexander Motin