zturner added a comment.

The only real suggestion / question I have is a design one.

With this implementation we can't take advantage of the system thread pool. 
That was the point of using `std::async` in the first place, but we found that 
it doesn't always limit the number of threads.  Maybe there's a way to get the 
best of both worlds.

What if, instead of storing a `std::queue<std::function<void()>>`, you stored 
a `std::queue<std::packaged_task<void()>>`?  Then the only problem that remains 
is how to guarantee that no more than `std::thread::hardware_concurrency()` of 
these `packaged_task`s are running at any given time.  You could do this by 
taking the `std::function` that someone gives you and wrapping it in a 
`packaged_task` which first runs the function and then signals a condition 
variable after the function completes.  A single "dispatch" thread (for lack of 
a better word) could wake on this condition variable, pull a new 
`packaged_task` off the queue, and execute it asynchronously.  You'd also need 
to signal that same condition variable when a new item is added to the queue, 
so that the dispatch thread can decide whether to run it immediately (if the 
pool is under-scheduled) or wait (if it's full).
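
Roughly what I have in mind, as a sketch only (`DispatchPool` and `AddTask` are 
placeholder names, not anything in the patch):

```
#include <algorithm>
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class DispatchPool {
public:
  DispatchPool()
      : m_max_running(std::max(1u, std::thread::hardware_concurrency())),
        m_dispatcher([this] { Dispatch(); }) {}

  ~DispatchPool() {
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      m_done = true;
    }
    m_cv.notify_all();
    m_dispatcher.join();        // Dispatcher drains the queue before exiting.
    for (auto &f : m_inflight)  // Wait for everything std::async launched.
      f.wait();
  }

  // Wrap the caller's function in a packaged_task that signals the condition
  // variable when it finishes, so the dispatch thread can schedule the next
  // queued task.  (For brevity this assumes fn() doesn't throw; a real
  // implementation would release the slot in a scope guard.)
  std::future<void> AddTask(std::function<void()> fn) {
    std::packaged_task<void()> task([this, fn] {
      fn();
      {
        std::lock_guard<std::mutex> guard(m_mutex);
        --m_running;
      }
      m_cv.notify_one();
    });
    std::future<void> result = task.get_future();
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      m_queue.push(std::move(task));
    }
    m_cv.notify_one();          // Also wake the dispatcher for new work.
    return result;
  }

private:
  void Dispatch() {
    std::unique_lock<std::mutex> lock(m_mutex);
    while (true) {
      m_cv.wait(lock, [this] {
        return (m_done && m_queue.empty()) ||
               (!m_queue.empty() && m_running < m_max_running);
      });
      if (m_done && m_queue.empty())
        return;
      std::packaged_task<void()> task = std::move(m_queue.front());
      m_queue.pop();
      ++m_running;
      // Hand the task to std::async so the platform's own pool (if any)
      // actually runs it.  A real implementation would prune finished
      // futures instead of letting m_inflight grow.
      m_inflight.push_back(std::async(std::launch::async, std::move(task)));
    }
  }

  std::mutex m_mutex;
  std::condition_variable m_cv;
  std::queue<std::packaged_task<void()>> m_queue;
  std::vector<std::future<void>> m_inflight;
  unsigned m_running = 0;
  const unsigned m_max_running;
  bool m_done = false;
  std::thread m_dispatcher;   // Declared last so the other members exist
                              // before the dispatch thread starts running.
};
```

Since `AddTask` hands back the `packaged_task`'s future, callers can still wait 
on individual results, and the only thread the pool itself ever creates is the 
dispatcher; everything else goes through `std::async`.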

This would probably also make the implementation quite a bit simpler, while 
still taking advantage of any deep optimizations a platform has in its own 
thread pool implementation (if any).


http://reviews.llvm.org/D13727


