On 9/8/18 12:13 AM, John Hubbard wrote:
> I'm interested in the first 3 of those 4 topics, so if it doesn't conflict
> with HMM topics or fix-gup-with-dma topics, I'd like to attend.

Great, we'll add your name to the list.

> GPUs generally need to access large chunks of memory, and that includes
> migrating (dma-copying) pages around.

> So for example a multi-threaded migration of huge pages between normal RAM
> and GPU memory is an intriguing direction (and I realize that it's a
> well-known topic, already). Doing that properly (how many threads to use?)
> seems like it requires scheduler interaction.

Yes. In past discussions of multithreading kernel work, there's been talk of a
scheduler API that could answer "are there idle CPUs we could use to
multithread?".

Instead of adding an interface, though, we could just let the scheduler do 
something it already knows how to do: prioritize.

Additional threads used to parallelize kernel work could run at the lowest 
priority (i.e. MAX_NICE).  If the machine is heavily loaded, these extra 
threads simply won't run and other workloads on the system will be unaffected.
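
As a rough sketch of that idea (not from any actual patch; the chunk_work and
start_helper names are made up for illustration), each chunk of the job could
be handed to a kthread that is demoted to MAX_NICE before it is woken:

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/sched/prio.h>
#include <linux/types.h>

struct chunk_work {
	void *start;		/* first byte of this helper's chunk */
	size_t len;		/* bytes to copy/migrate */
};

static int chunk_thread_fn(void *arg)
{
	struct chunk_work *cw = arg;

	/* ... copy or migrate the pages in [start, start + len) ... */
	(void)cw;
	return 0;
}

static struct task_struct *start_helper(struct chunk_work *cw, int id)
{
	struct task_struct *t;

	t = kthread_create(chunk_thread_fn, cw, "migrate_helper/%d", id);
	if (IS_ERR(t))
		return t;

	/* Lowest priority: the helper only runs on otherwise-idle CPUs. */
	set_user_nice(t, MAX_NICE);
	wake_up_process(t);
	return t;
}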

There's a priority inversion issue if one or more of those extra threads get
started and are then preempted by normal-priority tasks midway through the job.
But the main thread doing the job can simply will its priority to each worker
in turn once it has finished its own part of the work, so at most one of the
job's threads will be active on a heavily loaded system, again leaving other
workloads on the system undisturbed.
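
Sketching that hand-off (again purely illustrative; the completion-based
bookkeeping is assumed, and the helper task_structs are assumed to be pinned
with get_task_struct() while we wait on them): once the main thread has done
its own chunk, it boosts one helper at a time to its own nice level and waits
for it before touching the next, so no more than one helper ever runs above
MAX_NICE:

#include <linux/completion.h>
#include <linux/sched.h>

struct helper {
	struct task_struct *task;
	struct completion done;	/* completed by the helper when its chunk is done */
};

static void finish_job(struct helper *helpers, int nr_helpers)
{
	int nice = task_nice(current);
	int i;

	for (i = 0; i < nr_helpers; i++) {
		/*
		 * Will the main thread's priority to the next outstanding
		 * helper so a busy system can't strand the job at MAX_NICE,
		 * then wait for that helper before boosting another one.
		 */
		set_user_nice(helpers[i].task, nice);
		wait_for_completion(&helpers[i].done);
	}
}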
