clayborg added a comment.

In http://reviews.llvm.org/D13727#268053, @zturner wrote:

> The main thought I had was just that it would make the code simpler and 
> delegate more of the synchronization stuff to the library.  But if we don't 
> get any of that benefit then I guess there's no point in doing that.
>
> That said, I really don't want to see 48 threads running inside the LLDB 
> process at all times (I have a 48-core machine).  It makes it really annoying 
> to debug.  So some way to tear down threads would be a big help.  But that's 
> even more complexity.
>
> If this is only going to be used occasionally (like each time some DWARF is 
> being parsed), then what about making `TaskPool` non-static?  Just create a 
> new TaskPool each time you want to do work in parallel and tear it down when 
> you're done doing the work?  If someone creates 2 task pools they get 2 
> different sets of threads.  This is kind of why I prefer delegating to the 
> standard library whenever possible, because it should already handle all of 
> these cases.


We really need a global thread pool so we don't create too many threads. Many 
GUI components have 3 or 4 threads, we have a private process state thread that 
sometimes does work and responds to the dynamic loader breakpoints, the dynamic 
loaders often do work, and when shared libraries get loaded we have 
LanguageRuntime and other plug-ins that look for a symbol inside one or more 
modules. So this definitely needs to be a global queue. If you call 
task_runner.WaitForAllTasks(), the implementation could actually run one task 
on the current thread instead of just parking and waiting, so you would 
probably get performance as good as before even if all the threads from the 
thread pool are busy. So this needs to stay global so we only spawn up to N 
threads per process when using the thread pool. We can add a setting to LLDB 
that can be changed via "settings set" to adjust the concurrency if needed. If 
you spawn too many threads you can bring a machine to a halt quite quickly.

> Another option is to go back to the original one line implementation using 
> std::async, but have each asynchronous thread work on `num_compile_units / 
> hardware_concurrency()` compile units inside of a loop.  That way you never 
> get more than `hardware_concurrency()` threads.


Again, this means you won't get more than "hardware_concurrency()" threads for 
each invocation of std::async, right?


http://reviews.llvm.org/D13727



_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits