dsimcha wrote:
For now, parallelFuture was designed with a single-producer, multiple-worker model. Absolutely no attempt was made to allow tasks running in the task pool to themselves submit jobs to the same task pool, because it would have made things more complicated and I couldn't think of any use cases.
Just like recursion, where a function needs to be able to call another function, a thread needs to be able to spawn threads, and a task should be able to create new tasks. Making newly spawned tasks stay on the same thread is a good optimization. This shouldn't need a specific use case.
I designed the lib with the types of use cases I encounter in my work in mind (mathy, pure throughput-oriented computing on large, embarrassingly parallel problems). If someone comes up with a compelling use case, though, I'd certainly consider adding such abilities, provided they don't interfere with performance or API simplicity for the more common cases.

To make this discussion simple, let's define F1 as a future/task submitted by the main producer thread, and F2 as a task/future submitted by F1. The queue is (for now) strictly FIFO, except that if you have a pointer to the Task/Future object you can steal a job. When F1 submits F2 to the queue, F2 goes to the back of the queue like anything else. This means that when F1 waits on F2, it is possible to have a cyclic dependency: F1 waiting on F2, and F2 waiting for a worker thread, which may be occupied by F1. This is mitigated by work stealing (F1 may just steal F2 and run it in its own thread).
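The cyclic dependency described above is easy to reproduce in any fixed-size pool that lacks work stealing. Here's a sketch using Python's `concurrent.futures` rather than parallelFuture (so the names and semantics are only an analogy); F1's wait is given a timeout purely so the starvation is observable instead of a hang:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def nested_wait(pool):
    """F1: a task that submits F2 to the same pool and blocks on it."""
    fut2 = pool.submit(lambda: 42)       # F2 goes to the back of the FIFO queue
    try:
        return fut2.result(timeout=0.5)  # F1 blocks waiting on F2
    except FutureTimeout:
        return None                      # F2 never got a worker: the cyclic dependency

# One worker and no work stealing: F1 occupies the only thread while
# F2 sits in the queue, so F1's wait times out.
with ThreadPoolExecutor(max_workers=1) as pool:
    starved = pool.submit(nested_wait, pool).result()

# A second worker breaks the cycle: F2 runs on the idle thread.
with ThreadPoolExecutor(max_workers=2) as pool:
    completed = pool.submit(nested_wait, pool).result()

print(starved, completed)  # None 42
```

Work stealing, where supported, resolves the single-worker case without the timeout: F1 pulls F2 out of the queue and runs it on its own thread instead of blocking.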
I don't like that ^ way of simplifying the discussion, with F1 and F2 scattered all over the place. I hope this video can help visualize some of the ideas: http://channel9.msdn.com/pdc2008/TL26/
In parallel map and foreach, I should probably document this, but for now it's undefined behavior for the mapping function or the parallel foreach loop body to submit jobs to the task pool and wait on them; in practice this will likely result in deadlock.
You want to document that as undefined behavior? It can be made to work.
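One way it can be made to work is the "keep newly spawned tasks on the same thread" idea from earlier in the thread: if a submission comes from a thread that is already a pool worker, run it inline instead of queueing it, so waiting on it can never deadlock. A minimal sketch, again in Python's `concurrent.futures` rather than parallelFuture; `ReentrantPool` and its methods are hypothetical names, not part of any real API:

```python
import threading
from concurrent.futures import Future, ThreadPoolExecutor

class ReentrantPool:
    """Hypothetical wrapper: submissions made from inside a worker thread
    execute inline on that thread, so a task (or a parallel foreach body)
    can wait on its own subtasks without deadlocking the pool."""

    def __init__(self, nworkers):
        self._pool = ThreadPoolExecutor(max_workers=nworkers)
        self._local = threading.local()

    def _run(self, fn, args):
        self._local.in_worker = True        # mark this thread as a pool worker
        return fn(*args)

    def submit(self, fn, *args):
        if getattr(self._local, "in_worker", False):
            # Already on a worker: run eagerly on the same thread.
            f = Future()
            try:
                f.set_result(fn(*args))
            except BaseException as e:
                f.set_exception(e)
            return f
        return self._pool.submit(self._run, fn, args)

    def shutdown(self):
        self._pool.shutdown()

pool = ReentrantPool(1)  # a single worker would deadlock a naive FIFO pool

def outer():
    # Nested submit-and-wait from inside a task: runs inline, no deadlock.
    return pool.submit(lambda: 2).result() + 1

result = pool.submit(outer).result()
pool.shutdown()
print(result)  # 3
```

Running inline gives up some parallelism compared to true work stealing (the subtask can't migrate to an idle worker), but it keeps the API deadlock-free for nested waits.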