On Thu, Sep 01, 2022 at 11:39:42PM +0800, Chung-Lin Tang wrote:
> our work on SPEChpc2021 benchmarks shows that, after the fix for PR99555
> was committed:
> [libgomp, nvptx] Fix hang in gomp_team_barrier_wait_end
> https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=5ed77fb3ed1ee0289a0ec9499ef52b99b39421f1
> 
> while that patch fixed the hang, there were quite severe performance 
> regressions caused
> by this new barrier code. Under OpenMP target offload mode, Minisweep 
> regressed by about 350%,
> while HPGMG-FV was about 2x slower.
> 
> So the problem was presumably the new barriers, which replaced erroneous
> but fast bar.sync instructions with correct but really heavy-weight
> futex_wait/wake operations on the GPU.
> This is probably required for preserving correct task vs. barrier behavior.
> 
> However, the observation is that when task-related functionality is not
> used at all by the team inside an OpenMP target region, and a barrier is
> just a place to wait for all threads to rejoin (no problem of invoking
> waiting tasks to re-start), a barrier can in that case be implemented with
> simple bar.sync and bar.arrive PTX instructions. That should be able to
> recover most performance in the cases that usually matter, e.g. 'omp
> parallel for' inside 'omp target'.
> 
> So the plan is to mark cases where 'tasks are never used'. This patch adds
> a 'task_never_used' flag inside struct gomp_team, initialized to true, and
> set to false when tasks are added to the team. The nvptx-specific
> gomp_team_barrier_wait_end routines can then use the simple barrier when
> team->task_never_used remains true at the barrier.

I'll defer the nvptx-specific changes to Tom because I'm not familiar enough
with NVPTX.  But I'll certainly object to any changes for this outside of
nvptx.  We don't need or want the task_never_used field and its maintenance,
nor the GOMP_task_set_used entrypoint, in the host libgomp.so nor for NVPTX.
As you use it for many other constructs (master/masked/critical/single -
does omp_set_lock etc. count too?  only one thread acquires the lock, the
others don't), it looks very much misnamed; perhaps it would be better to
talk about thread divergence, or whatever the PTX term for that is.
Anyway, there is no point in tracking all of this on the host or for amdgcn
or xeon phi offloading; nothing will ever use that info, so it is just
wasted memory and CPU cycles.
I don't understand how it can safely work, because if it needs to fall back
to the fixed behavior for master or single, why isn't a user just writing
  if (omp_get_thread_num () == 0)
    {
      // whatever
    }
etc. problematic too?
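I.e., as far as thread divergence before the next barrier goes, I don't see
how e.g. (just a sketch)
  #pragma omp parallel
  {
    #pragma omp master
      {
        // whatever
      }
    #pragma omp barrier
  }
behaves any differently from the plain if above.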
If it can for some reason work safely, then instead of adding
GOMP_task_set_used calls, add some ifn call that is folded to nothing after
IPA everywhere but on NVPTX, and only have those calls on NVPTX; on the
library side, add some macros for the team->task_never_used tweaks, defined
to nothing except in config/nvptx/*.h, and limit the changes to the PTX
libgomp.a then.
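Something along these lines (untested sketch, the macro name is just made up
here):
  /* In a generic libgomp header, expands to nothing: */
  #define gomp_team_mark_task_used(team) do {} while (0)

  /* Overridden only in config/nvptx/*.h: */
  #define gomp_team_mark_task_used(team) \
    ((team)->task_never_used = false)
and then invoke that macro from the places where tasks are added to the team
instead of adding a new entrypoint.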
But I'm afraid a lot of code creates some asymmetric loads.  Even just a
work-sharing loop, if the number of iterations isn't divisible by the number
of threads, means some threads do less work than others, or with dynamic
etc. schedules, ...
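For example (f is just a placeholder):
  #pragma omp parallel for num_threads(8)
  for (int i = 0; i < 10; i++)
    f (i);  /* with a plain static schedule some threads get 2 iterations
               and others only 1, so they reach the implicit barrier at
               the end having done different amounts of work */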

        Jakub
