Hello Jakub,

On Fri, 12 Nov 2021, Jakub Jelinek via Gcc-patches wrote:

> On Fri, Nov 12, 2021 at 02:27:16PM +0100, Jakub Jelinek via Gcc-patches wrote:
> > On Fri, Nov 12, 2021 at 02:20:23PM +0100, Jakub Jelinek via Gcc-patches 
> > wrote:
> > > This patch assumes that .shared variables are initialized to 0,
> > > https://docs.nvidia.com/cuda/parallel-thread-execution/index.html lists
> > > in Table 7. .shared as non-initializable.  If that isn't the case,
> > > we need to initialize it somewhere for the case of #pragma omp target
> > > without #pragma omp teams in it, maybe in libgcc/config/nvptx/crt0.c ?
> > 
> > A quick look at libgcc/config/nvptx/crt0.c shows the target supports
> > __attribute__((shared)), so perhaps either following instead, or, if
> > .shared isn't preinitialized to zero, defining the variable in
> > libgcc/config/nvptx/crt0.c , adding there __gomp_team_num = 0;
> > and adding extern keyword before int __gomp_team_num 
> > __attribute__((shared));
> > in libgomp/config/nvptx/target.c.
> 
> And finally here is a third version, which fixes a typo in the previous
> patch (in instead of int) and actually initializes the shared var because
> PTX documentation doesn't say anything about how the shared vars are
> initialized.
> 
> Tested on x86_64-linux with nvptx-none offloading, ok for trunk?

I suspect there may be a misunderstanding here, or maybe your explanation is
incomplete. I don't think the intention of the standard was to force such
complexity. You can launch as many blocks on the GPU as you like, limited only
by the bitwidth of the indexing registers used in hardware; NVIDIA guarantees
at least INT_MAX blocks (in fact almost 1<<63 blocks if you launch a
three-dimensional grid of INT_MAX x 65535 x 65535 blocks).

The hardware schedules blocks automatically (for example, if the hardware can
run 40 blocks simultaneously and you launch 100, it may start blocks 0 to 39,
and then, as each of those finishes, launch block 40, block 41, and so on).

So isn't the solution simply to adjust the logic around
nvptx_adjust_launch_bounds in GOMP_OFFLOAD_run, i.e. if a lower bound on the
number of teams is specified, use it instead of the max_blocks value that
nvptx_adjust_launch_bounds computes?

Yours,
Alexander
