Hi,

> First of all, if you haven't started with the FSF assignment paperwork,
> please do so, it takes a while.  See http://gcc.gnu.org/contribute.html

I've already started it. Thanks.

> For #pragma omp parallel and tied tasks you just want user-level ==
> kernel-level thread as implemented in libgomp, with affinity only
> done when requested by the user (GOMP_CPU_AFFINITY resp. on gomp-3_1-branch
> also the to be implemented OMP_PROC_BIND env vars).

In my opinion, even tied tasks need user-level threads for scheduling.
I've read several papers on task implementations, and many task
schedulers give each user-level thread its own private queue.  Letting
user-level threads keep private queues is a good way to reduce
contention on the task queue.
(I got this idea mostly from Section 3 of "Evaluation of OpenMP Task
Scheduling Strategies" by the Nanos Group.)
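
To make the idea concrete, here is a minimal sketch of the kind of
per-worker data structure I have in mind.  All names here (task,
task_queue, worker) are hypothetical and are not existing libgomp
identifiers; I use plain pthread types just to keep the sketch
self-contained.

#include <pthread.h>

/* Sketch only -- hypothetical names, not libgomp's.  */
struct task
{
  void (*fn) (void *);       /* task body */
  void *data;                /* its argument */
  struct task *next;         /* link in the owning queue */
};

struct task_queue
{
  pthread_mutex_t lock;      /* protects this queue only, not all tasks */
  struct task *head;
  struct task *tail;
};

struct worker                /* one user-level thread */
{
  struct task_queue queue;   /* private ready queue */
  int id;
};

The point is that each worker normally touches only its own queue, so
the lock is almost uncontended.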

Also, it could be difficult to implement untied tasks without
user-level threads.  So implementing user-level threads for tied tasks
now will keep the task scheduler simple once libgomp gains an untied
task implementation in the future.


> IMHO you don't want to rewrite the task support, just primarily investigate
> various scheduling algorithms and attempt to implement some of them and
> benchmark.

Sorry, I could not quite catch what you would like me to do in the
GSoC project.  I'm planning to rewrite most of the task scheduler in
libgomp.

My goal for the project is to make a faster tied task implementation
and, if I have enough time, untied tasks.
The single global task queue which libgomp currently uses would be one
of the biggest bottlenecks.  So I would first like to build new data
structures: user-level threads, each with its own private queue.
If the current global task queue in libgomp is replaced by these new
data structures, most accesses to the task queue have to be rewritten,
and therefore most of the code related to the task scheduler as well.
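
For example, the queue accesses might end up looking roughly like the
following.  Again this is only a sketch under the hypothetical types
above, not a proposal for the actual libgomp interfaces; a worker
pushes and pops at its own queue and only falls back to other workers'
queues when it runs out of work.

#include <stddef.h>

static void
queue_push (struct task_queue *q, struct task *t)
{
  pthread_mutex_lock (&q->lock);
  t->next = NULL;
  if (q->tail)
    q->tail->next = t;
  else
    q->head = t;
  q->tail = t;
  pthread_mutex_unlock (&q->lock);
}

static struct task *
queue_pop (struct task_queue *q)
{
  struct task *t;
  pthread_mutex_lock (&q->lock);
  t = q->head;
  if (t)
    {
      q->head = t->next;
      if (!q->head)
        q->tail = NULL;
    }
  pthread_mutex_unlock (&q->lock);
  return t;
}

static struct task *
worker_next_task (struct worker *self, struct worker *workers, int nworkers)
{
  struct task *t = queue_pop (&self->queue);
  int i;
  if (t)
    return t;
  /* Own queue is empty: try to take work from another worker.  */
  for (i = 0; i < nworkers; i++)
    if (i != self->id && (t = queue_pop (&workers[i].queue)) != NULL)
      return t;
  return NULL;
}

A real work-stealing scheduler would probably steal from the opposite
end of the victim's queue and use a cheaper synchronization scheme, but
this shows why the current single-global-queue code paths would all
need to change.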

Please feel free to comment on this point, since it is a very
important part of my project ;-)


Thanks,
--
Sho Nakatani
