Hi Jakub,

On 16.05.23 13:00, Jakub Jelinek wrote:
On Tue, May 16, 2023 at 11:45:16AM +0200, Frederik Harwath wrote:
The place where different compilers implement the loop transformations
was discussed in an OpenMP loop transformation meeting last year. Two
compilers (another one and GCC with this patch series) transformed the
loops in the middle end after the handling of data sharing, and one
planned to do so. Yet another vendor had not yet decided where it will
be implemented. Clang currently does everything in the front end, but it
was mentioned that this might change in the future, e.g. for code
sharing with Flang. Implementing the loop transformations late could
potentially complicate the implementation of transformations which
require adjustments of the data sharing clauses, but this is known and
consequently, no such
Given that we already determine in the FE how many canonical loops a
particular loop transformation creates, I think the primary change I'd like
to see is really to have OMP_UNROLL/OMP_TILE GENERIC statements (see below)
and to consider where the best spot is to lower them. I believe for data
sharing it is best done during gimplification, before the containing loops
are handled; that code is already shared among all the FEs, which I think
will make it easier to handle data sharing right, and gimplification is also
where doacross processing is done. While there is a restriction that the
ordered clause is incompatible with loops generated from the tile construct,
there isn't one for unroll (unless
"The ordered clause must not appear on a worksharing-loop directive if the
associated loops include the generated loops of a tile directive."
means unroll partial implicitly, because partial unroll tiles the loop, but
it doesn't say it acts as if it were a tile construct), so we'd have to
handle
  #pragma omp for ordered(2)
  for (int i = 0; i < 64; i++)
    #pragma omp unroll partial(4)
    for (int j = 0; j < 64; j++)
      {
        #pragma omp ordered depend (sink: i - 1, j - 2)
        #pragma omp ordered depend (source)
      }
and I think handling it after gimplification is going to be increasingly
hard. Of course, another possibility is to ask the language committee to
clarify it, unless it has been clarified already in 6.0 (but in TR11 it is
not).
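
To illustrate the concern, here is a rough sketch (not actual compiler
output) of how "unroll partial(4)" conceptually tiles the j loop in the
example above, before the inner loop is fully unrolled:

  #pragma omp for ordered(2)
  for (int i = 0; i < 64; i++)
    /* generated loop: iterates over tiles of 4 iterations */
    for (int j = 0; j < 64; j += 4)
      /* this loop is what gets fully unrolled, i.e. the body replicated */
      for (int jj = j; jj < j + 4; jj++)
        {
          /* depend (sink: i - 1, j - 2) still refers to iterations of the
             original j loop, which no longer correspond one-to-one to the
             iterations of the generated loop, hence the question whether
             ordered(2) remains meaningful here.  */
        }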

I do not really expect that we will have to handle this. Questions concerning
the correctness of code after applying loop transformations have come up
several times since I have been following the design meetings, and the result
was always either that nothing will be changed, because the loop
transformations are not expected to ensure the correctness of enclosing
directives, or that the use of the problematic construct in conjunction with
loop transformations will be forbidden. Concerning the use of "ordered" on
transformed loops, the latter approach was suggested for all transformations,
cf. issue #3494 in the private OpenMP spec repository. I see that you have
already asked for clarification on unroll. I suppose this could also be fixed
after gimplification with reasonable effort. But let's just wait for the
result of that discussion before we continue worrying about this.

Also, I think creating temporaries is easier to do during
gimplification than later.

This has not caused problems with the current approach.

Another option is, as you implemented, a separate pre-omp-lowering pass,
and another one would be to do it in the omplower pass, which actually has
several subpasses internally: do it in the scan phase. A disadvantage of
a completely separate pass is that we have to walk the whole IL again,
while doing it in the scan phase means we avoid that cost. We already
do similar transformations there: scan_omp_simd transforms simd constructs
into if (...) simd else simt and then we process what we've created with
the normal scan_omp_for. So, if you insist on doing it after gimplification,
perhaps for compatibility with other non-LLVM compilers, I'd prefer to
do it there rather than in a completely separate pass.

I see. This would be possible. My current approach is indeed rather
wasteful because the pass is not restricted to functions that actually
use loop transformations. I could add an attribute to such functions
that could be used to avoid the execution of the pass and hence
the gimple walk on functions that do not use transformations.
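
Just to sketch the idea (the attribute name is made up and the pass class
boilerplate is omitted), the gate of such a pass could then simply check
for that attribute:

  /* Hypothetical gate for the loop-transformation pass; "omp loop transform"
     is an assumed attribute that the front ends would set on functions
     containing transformation directives.  */
  bool
  gate (function *fun) final override
  {
    return flag_openmp
           && lookup_attribute ("omp loop transform",
                                DECL_ATTRIBUTES (fun->decl)) != NULL_TREE;
  }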

This is necessary to represent the loop nest that is affected by the
loop transformations by a single OMP_FOR to meet the expectations
of all later OpenMP code transformations. This is also the major
reason why the loop transformations are represented by clauses
instead of representing them as "OMP_UNROLL/OMP_TILE as
GENERIC constructs like OMP_FOR" as you suggest below. Since the
I really don't see why. We try to represent the OpenMP constructs that we
see in the source as those constructs. We already have a precedent
with composite loop constructs, where for the combined constructs which
aren't innermost we temporarily use NULL OMP_FOR_{INIT,COND,INCR,ORIG_DECLS}
vectors to stand for "this will be some loop, but the details for it aren't
known yet", to be filled in later. So, why can't we similarly represent
  #pragma omp for collapse(3)
  #pragma omp tile sizes (4, 2, 2)
  #pragma omp tile sizes (4, 8, 16)
  for (int i = 0; i < 64; ++i)
    for (int j = 0; j < 64; ++j)
      for (int k = 0; k < 64; ++k)
        body;
as an OMP_FOR with NULL OMP_FOR_{INIT,COND,INCR,ORIG_DECLS} vectors
and the appropriate clauses on it, containing an
OMP_TILE (again with the right clauses and NULL OMP_FOR_{INIT,COND,INCR,ORIG_DECLS})
and another OMP_TILE, this time with all the vectors filled in, in GENERIC?

  #pragma omp for collapse(2)
  for (int i = 0; i < 64; ++i)
    #pragma omp tile sizes (4)
    for (int j = 0; j < 64; ++j)
would be represented by non-NULL vectors which would have all the inner
entries NULL (the outer loop is not a generated loop, the inner one is
generated), with the OMP_TILE inside of it.

Then, depending on where the loop transformation is actually performed,
we'd either need to preserve such a shape from gimplification until the
loop transformations are applied, or it would be done solely on GENERIC and
GIMPLE would already contain the transformed loops.

Thanks for the explanation! I think now I understand how you would do this.
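
If I understand correctly, the second example would then informally have
the following GENERIC shape (this is not an actual tree dump, and OMP_TILE
with its accessors is of course still hypothetical):

  OMP_FOR     clauses: collapse(2)
              OMP_FOR_INIT/COND/INCR/ORIG_DECLS: TREE_VECs of length 2,
              entry 0 = the i loop, entry 1 = NULL (generated loop)
    OMP_TILE  clauses: sizes(4)
              OMP_FOR_INIT/COND/INCR/ORIG_DECLS: TREE_VECs of length 1,
              fully filled in with the j loop
      body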

Clauses e.g. have the disadvantage that generally they aren't ordered.
If these are separate statements, it is e.g. easier to print them correctly
in the original dump, so that people can compare the loops before the
transformation and after it.

You mean that the clauses are not ordered at the level of the specification?
In the implementation they are of course ordered, and the order has
proved to be sufficiently stable. But perhaps you mean that you would
like to avoid introducing code that relies on the ordering of the clauses?
In this case, I could move the transformations to a separate chain which
could be accessed e.g. by OMP_FOR_TRANSFORMS and by
gimple_omp_for_transforms per level. This would also allow printing the
transformations in the pretty-printing functions at the corresponding levels
of the loop nest. This would also be possible somehow with the present
representation. But right now the transformations are just printed together
with the other clauses on the directive. I considered this to be acceptable
because I suppose the dumps will mostly be read by GCC developers.
There are also other clauses that are only used internally.
You suggest implementing the loop transformations during gimplification.
I am not sure if gimplification is actually well-suited to implement the
depth-first evaluation of the loop transformations. I also believe that
Why not? The loop transformation constructs can't be deeply nested in the
bodies, they need to be close.
gimplify_omp_for already searches the body for the case of composite
constructs - if (OMP_FOR_INIT (for_stmt) == NULL_TREE) early in it.
So, this would just mean that, if that condition is true, we also look
for loop transformation constructs (and if they are found, pass the
containing OMP_{FOR,SIMD,LOOP,DISTRIBUTE,TASKLOOP}, if any, to a routine
that handles the transformation, such that it can update the containing
looping construct, if any, during the transformation).
That alone would handle the case where the looping construct should work
solely with the generated loop. It would need to do the same thing
also if OMP_FOR_INIT (for_stmt) is non-NULL but
TREE_VEC_ELT (OMP_FOR_INIT (for_stmt), TREE_VEC_LENGTH (OMP_FOR_INIT (for_stmt)) - 1)
is NULL, to handle the case where the generated loops are just some of the
inner ones.
And then, when gimplify_omp_for encounters an OMP_TILE/OMP_UNROLL
loop on its own (i.e. not nested inside some other loop), it would similarly
find further transform constructs in it like the above, but then would just
do the loop transformation normally, with NULL_TREE for the containing loop,
meaning it is sequential stuff.
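
In pseudo-code, I understand the proposed flow in gimplify_omp_for roughly
as follows (a sketch only; the OMP_FOR_INIT check exists today, while
find_loop_transform_construct and gimplify_omp_loop_xform are made-up names
for the routine you describe):

  static enum gimplify_status
  gimplify_omp_for (tree *expr_p, gimple_seq *pre_p)
  {
    tree for_stmt = *expr_p;
    /* Existing check for combined constructs whose loops are not known yet,
       extended to also cover generated loops.  */
    if (OMP_FOR_INIT (for_stmt) == NULL_TREE
        /* ... or the innermost OMP_FOR_INIT entry is NULL ... */)
      {
        tree xform = find_loop_transform_construct (OMP_FOR_BODY (for_stmt));
        if (xform)
          /* Expand OMP_TILE/OMP_UNROLL depth-first; for_stmt is passed so
             that the containing looping construct can be updated, or
             NULL_TREE when the transformation construct stands alone.  */
          gimplify_omp_loop_xform (xform, for_stmt);
      }
    /* ... existing gimplification of the now fully materialized loops ... */
  }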

Thanks for the explanation. But actually doing this would require a
complete rewrite, which would almost certainly mean that mainline GCC
would not support the loop transformations for a long time.


Best regards,

Frederik
