Hello!

I spent some time investigating why switching runtime* tasks to real-time
scheduling policies increases response-time dispersion, while the opposite
would be expected.
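
(By "switching to a real-time policy" I mean the usual sched_setscheduler()
call; a minimal sketch, with an illustrative priority value:)

    #include <sched.h>
    #include <stdio.h>

    /* Put the calling thread under SCHED_FIFO.  The priority
     * value is illustrative; this needs CAP_SYS_NICE or root. */
    static int switch_to_fifo(void)
    {
            struct sched_param sp = { .sched_priority = 10 };

            if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
                    perror("sched_setscheduler");
                    return -1;
            }
            return 0;
    }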

The main reason is hyper-threading. The rt scheduler only tries to keep all
logical CPUs loaded, selecting the topologically closest one when the current
CPU is busy. With hyper-threading enabled, this strategy is counter-productive:
tasks suffer on busy HT siblings while there are plenty of idle physical cores.

Also, the rt scheduler doesn't try to balance rt load between physical CPUs.
This is significant because of turbo-boost and frequency-scaling technologies:
per-core performance depends on the number of idle cores in the same physical
package.
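
To make the heuristic I'm arguing for concrete, here is a rough sketch
(a user-space pseudo-implementation, not actual kernel code; the cpu_info
layout is made up for illustration): only pick CPUs whose HT sibling is
also idle, and among those prefer the package with the most idle cores.

    struct cpu_info {
            int sibling;    /* logical id of the HT sibling */
            int package;    /* physical package id */
            int busy;       /* 1 if an rt task is running here */
    };

    /* Return a logical CPU whose whole physical core is idle,
     * preferring the package with the most idle cores (more
     * turbo headroom); -1 if every idle CPU shares a busy core,
     * in which case the caller falls back to any idle CPU. */
    static int pick_cpu(const struct cpu_info *cpu, int ncpus,
                        const int *idle_cores_in_pkg)
    {
            int best = -1, best_idle = -1;

            for (int i = 0; i < ncpus; i++) {
                    if (cpu[i].busy || cpu[cpu[i].sibling].busy)
                            continue;       /* core is at least half busy */
                    if (idle_cores_in_pkg[cpu[i].package] > best_idle) {
                            best_idle = idle_cores_in_pkg[cpu[i].package];
                            best = i;
                    }
            }
            return best;
    }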


Are there any known solutions to this problem, apart from disabling
hyper-threading and frequency scaling altogether?

Are there any current plans to enhance the load-balancing algorithm in the
rt scheduler?

Does anyone use the rt scheduler for runtime-like CPU-bound tasks?


Why not just use CFS? :-)
The rt scheduler with modified load balancing shows much better results.
I have a prototype (still incomplete, with many dirty hacks) that shows a
10-15% performance increase in our production.


(*) A simplified model can be described as follows:
there is one process per machine, with one thread that receives requests from
the network and puts them into a queue, and n (n ~ NCPU + 1) worker threads
that take requests from the queue and handle them.
The load is CPU-bound, tens of milliseconds per request. Typical CPU load is
between 40% and 70%.
A typical system has two physical x86-64 CPUs with 8-16 physical cores each
(x2 with hyper-threading).
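
For completeness, a compilable toy version of this model (the queue and
the "work" are dummies; in production the receiver thread reads from the
network and a request takes tens of milliseconds):

    #include <pthread.h>
    #include <unistd.h>

    #define QSIZE 1024

    static int queue[QSIZE];
    static int qhead, qtail, qlen;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t qcond = PTHREAD_COND_INITIALIZER;

    static void queue_push(int req)
    {
            pthread_mutex_lock(&qlock);
            if (qlen < QSIZE) {             /* drop requests when full */
                    queue[qtail] = req;
                    qtail = (qtail + 1) % QSIZE;
                    qlen++;
                    pthread_cond_signal(&qcond);
            }
            pthread_mutex_unlock(&qlock);
    }

    static int queue_pop(void)
    {
            pthread_mutex_lock(&qlock);
            while (qlen == 0)
                    pthread_cond_wait(&qcond, &qlock);
            int req = queue[qhead];
            qhead = (qhead + 1) % QSIZE;
            qlen--;
            pthread_mutex_unlock(&qlock);
            return req;
    }

    static void *worker(void *arg)
    {
            (void)arg;
            for (;;) {
                    int req = queue_pop();
                    /* stand-in for cpu-bound work, ~tens of ms */
                    for (volatile long i = 0; i < 50000000L; i++)
                            ;
                    (void)req;
            }
            return NULL;
    }

    int main(void)
    {
            long nworkers = sysconf(_SC_NPROCESSORS_ONLN) + 1; /* n ~ NCPU + 1 */
            pthread_t tid;

            for (long i = 0; i < nworkers; i++)
                    pthread_create(&tid, NULL, worker, NULL);

            /* receiver: here main() just generates dummy requests */
            for (int req = 0; ; req++) {
                    queue_push(req);
                    usleep(10000);
            }
    }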


Thanks,
Roman